ResearchEd 2013 took place today. It was a (UK) conference intended to bring together teachers and education academics, to foster better collaboration between the two. One of the most pleasing things was the appetite for the conference: 500 attendees got themselves to London on a Saturday, with 400 more on the waiting list. And this for a conference that had never been run before, advertised mainly through Twitter and blogs. I went; here are some of my thoughts.
Coe on Evidence
Robert Coe’s talk was my favourite of the day (slides here, PPT). He mentioned the Education Endowment Foundation, and their work on trying to summarise useful educational research (a bit like the Cochrane Review?) — something I want to look into further.
Coe sounded some notes of caution on transferring research into practice: using his example, “Assessment for Learning” apparently comes out very well in trials, but this did not translate into a massive effect in practice when the government pushed it. Understanding why would be useful for future efforts to transfer research into practice.
Coe also pointed out that some practices continue to be used despite a lack of evidence for their effectiveness. Tom Bennett previously documented most of the obviously barmy ones (in his book Teacher Proof, my review here), but as a less obvious example, Coe questioned where the evidence is that classroom observation (teachers observing their peers) improves teaching.
The Effect Size Debate
Coe also cropped up in another interesting session: a debate between Coe and Ollie Orange about whether effect size is a good measure. Coe was for effect size, Orange against. I think Coe’s argument boiled down to: it’s not ideal, but it is a useful, if slightly crude, heuristic in several circumstances (comparing incompatible measures of the same outcome, performing meta-analyses).
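For anyone who hasn’t met the measure the debate was about: the commonest form of effect size, Cohen’s d, is simply the difference between two group means divided by a pooled standard deviation, which is what lets you compare results measured on different scales. A minimal sketch in Python, with made-up test scores (the numbers are purely illustrative):

```python
import math
import statistics

def cohens_d(treatment, control):
    """Cohen's d: difference in group means divided by the pooled standard deviation."""
    n1, n2 = len(treatment), len(control)
    # Pool the two sample variances, weighted by their degrees of freedom
    pooled_sd = math.sqrt(
        ((n1 - 1) * statistics.variance(treatment) +
         (n2 - 1) * statistics.variance(control)) / (n1 + n2 - 2)
    )
    return (statistics.mean(treatment) - statistics.mean(control)) / pooled_sd

# Made-up scores for a class taught with an intervention vs. a control class
treatment = [65, 70, 72, 68, 75, 71]
control = [60, 64, 66, 62, 68, 63]
print(round(cohens_d(treatment, control), 2))  # → 2.01
```

Because the raw score units divide out, the same calculation works whether the outcome was a reading-age test or an exam percentage, which is exactly why meta-analysts find it convenient (and why its critics worry it papers over real differences between measures).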
Orange’s argument was not as convincing. A large part of it was that the proponents and inventors of effect size do not hold maths or statistics degrees (he actually listed them, and their degrees, out loud) and that mathematicians do not use effect sizes. On the first part: I agree that a lack of training could be a warning sign, but it is not in itself an argument against effect size. Science and rationalism are about reasoned arguments, not about who said what. Countering the second point, Coe asked: why would a pure mathematician use an effect size? It’s a pragmatic measure used by empirical researchers (in education, psychology, medicine and so forth). It seemed a shame that Orange did not dispense with all this and spend more time critiquing the mathematical properties of effect size instead. (Not all of the audience might have followed it, but it seems to me that debating effect size requires getting into the mathematics at least a little.)
In the comments afterwards, discussion inevitably moved to Hattie, who based his large meta-meta-analysis on effect sizes. Coe said that he thought Hattie’s work was “riddled with errors” (which roughly agrees with my assessment of the book). Still, I think it’s important not to use inappropriate applications of a statistic to argue against all uses of that statistic. The mean is a bad measure for skewed data (like salaries), but that does not imply we should all stop using the mean as a statistical measure.
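The salaries point is quick to demonstrate with made-up figures: one outlier drags the mean far away from anything anyone actually earns, while the median stays put.

```python
import statistics

# Nine modest salaries plus one very large one: a skewed distribution
salaries = [20_000] * 9 + [1_000_000]

print(statistics.mean(salaries))    # 118000 — dragged up by the single outlier
print(statistics.median(salaries))  # 20000.0 — a better "typical" salary here
```

The mean is misleading for this distribution, but that is a mismatch between statistic and data, not a flaw in the mean itself; the same distinction applies to effect size.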
Pick and Mix
A few leftover bits. Ben Goldacre’s keynote was good, and he did admirably well at surviving the nightmare hitch of not being able to display his slides. Useful point: if we start properly assessing the claims of new education initiatives and products, this will encourage their proponents to make smaller, more reasonable claims. Amanda Spielman mentioned in her talk that she keeps a “drawer of debunking papers” ready to hand out to people who suggest a known-to-be-ineffective initiative to her; I liked that notion. Tom Bennett: good science seeks to disprove itself, not to confirm existing beliefs.
Overall, I enjoyed the conference and wished I could have gone to a couple more sessions (the conference had six parallel sessions!). However, I gather many of the sessions were recorded, so I should shortly get my wish. A good sign from my perspective was meeting Sue Sentance there, who now works for Computing At School, and who is keen on encouraging more research collaboration between computing teachers and academics in the UK. It’s clearly some kind of zeitgeist.