I’ve seen a few related items in science education research recently that together paint a fairly coherent picture, and that show the value of evidence-based education research. Live demos and educational videos considered harmful? Read on:
Science Teaching is about replacing, not adding
The paper title here says it all: Scientific knowledge suppresses but does not supplant earlier intuitions. A consistent theme across the research described in this post is that when you are explaining science to pupils, you are not adding totally new knowledge, in the way that you might when explaining a lesser-known historical event. When you explain forces to someone, they will already have an idea about the way the world works (drop something, and it falls to the ground), so you are trying to adjust and correct their existing understanding (falling is actually due to gravity), not start from scratch. The paper suggests that the old knowledge is generally not replaced, but merely suppressed, meaning people carry their original misconceptions with them forever after.
Demonstrations don’t help
This idea that old bad knowledge hangs around cropped up again in the work of Eric Mazur (further reading on Mazur here and here). Mazur looked at the effects of live demonstrations of science, long thought to be beneficial due to their engaging nature. Mazur and colleagues found that live demonstrations generally lowered students’ marks! The students were better off with no demonstration.
The reason for this was that students thought they knew what was going to happen in the demo, and afterwards they would actually misremember what happened! If the students thought the liquid would turn blue, but actually it turned red, when interviewed later, they would (wrongly) say that it turned blue, just as they had (wrongly) expected.
The solution seems to be to get the students to make a prediction before the demo. Making this prediction emphasises the disagreement between their expectation and what actually happens in the demo, and hopefully makes them remember that their original idea was wrong. Judging by the first paper I mentioned, they will still carry around their original understanding, but hopefully their failed prediction will add a “but that was wrong” tag in their memory.
Videos don’t help
This notion of students wrongly confirming their knowledge crops up again with respect to videos. Derek Muller did a PhD looking at the effectiveness of science education videos, like those on the Khan Academy. He has summarised his findings excellently in an 8-minute YouTube video (why don’t all PhDs do this!), but here it is in text. He found that students would watch science education videos and, just like with the live demos, would think that the video confirmed their original wrong understanding. So not only did students get no benefit in their test scores, they became more confident in their wrong understanding. Another negative effect on the students! (First, do no harm!)
Prediction is mentioned in the thesis as a possible solution, but Muller’s tested and working solution was to explicitly feature common misconceptions in his videos, and then explain why they were wrong. Again, by drawing out and making explicit the disagreement between students’ misconceptions and the correct explanation, students’ scores could actually be improved.
Another theme that came up in Mazur’s and Muller’s work was that students who had the wrong understanding were more likely to report that they were not confused — presumably because they thought their original conceptions were correct. Students who had acquired the correct understanding (and were presumably struggling to use this to replace their original misconception) reported that they were confused, but were actually likely to be right. So a lack of confusion is probably a bad thing!
I find all this interesting for two reasons. Firstly, it shows that several educational techniques widely thought to be a good idea are in fact harmful. Educational videos (perfect for the flipped classroom!) and live demos reinforce students’ wrong understanding unless deliberate steps are taken to counter this, which shows the value of empirical educational research in making sure that we are actually having a positive effect. Secondly, I wonder if any of this transfers to computing education. I’ve already written about the relevance of Mazur’s work (and see Mark Guzdial’s comment in response). Is computing close enough to science for the results to transfer? In computing, are we correcting misconceptions and existing iffy mental models, as in science and maths? If so, we need to look at these issues in computing, too.