ResearchEd 2013

ResearchEd 2013 took place today. It was a (UK) conference intended to bring together teachers and education academics, to better support collaboration between the two. One of the most pleasing things was the appetite for the conference: 500 attendees got themselves to London on a Saturday, with 400 more on the waiting list. And this for a conference that had never been run before, advertised mainly through Twitter and blogs. I went — here are some of my thoughts.

Dulwich College, host to ResearchEd 2013

Coe on Evidence

Robert Coe’s talk was my favourite of the day (slides here, PPT). He mentioned the Education Endowment Foundation and its work on summarising useful educational research (a bit like the Cochrane Reviews in medicine?) — something I want to look into further.

Coe sounded some notes of caution on transferring research into practice: to use his example, “Assessment for Learning” apparently comes out very well in trials, but this did not translate into a massive effect in practice when the government pushed it. Understanding why would be useful for future efforts to move research into practice.

Coe also pointed out that some practices continue to be used despite a lack of evidence for their effectiveness. Tom Bennett previously documented most of the obviously barmy ones (in his book Teacher Proof, my review here), but as a less obvious example, Coe questioned where the evidence is that classroom observation (teachers observing their peers) improves teaching.

The Effect Size Debate

Coe also cropped up in another interesting session: a debate between Coe and Ollie Orange about whether effect size is a good measure. Coe was for effect size, Orange against. I think Coe’s argument boiled down to: it’s not ideal, but it is a useful, slightly crude, heuristic in several circumstances (comparing otherwise incompatible measures of the same outcome, and performing meta-analyses).
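
For anyone who hasn’t met the measure: the most common effect size, Cohen’s d, is simply the difference between two group means divided by a pooled standard deviation. The sketch below, using entirely made-up scores, is only meant to show the arithmetic; it is not taken from either speaker’s materials.

    from math import sqrt
    from statistics import mean, stdev

    def cohens_d(group_a, group_b):
        """Cohen's d: difference in group means, scaled by the pooled
        sample standard deviation of the two groups."""
        n_a, n_b = len(group_a), len(group_b)
        s_a, s_b = stdev(group_a), stdev(group_b)
        pooled = sqrt(((n_a - 1) * s_a ** 2 + (n_b - 1) * s_b ** 2)
                      / (n_a + n_b - 2))
        return (mean(group_a) - mean(group_b)) / pooled

    # Made-up test scores for an intervention group and a control group:
    print(cohens_d([68, 72, 75, 80, 77], [65, 70, 71, 74, 69]))  # roughly 1.15

Part of the appeal (and arguably the danger) is that this reduces any two-group comparison to a single unitless number, whatever the original measure was.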

Orange’s argument was not as convincing. A large part of it was that the proponents/inventors of effect size do not have maths/statistics degrees (he actually listed them, and their degrees, out loud) and that mathematicians do not use effect sizes. Dealing with the first part: I agree that a lack of training could be a warning sign, but it is not in itself an argument against effect size. Science and rationalism are about reasoned arguments, not about who said what. In counter-argument to the second point, Coe asked: why would a pure mathematician use an effect size? It’s a pragmatic measure used by empirical researchers (in education, psychology, medicine and so forth). It seemed a shame that Orange did not dispense with all this and spend more time critiquing the mathematical properties of effect size instead. (Not all of the audience might have followed it, but it seems to me that debating effect size requires getting into the mathematics at least a little.)

In the comments afterwards, discussion inevitably moved to Hattie, who based his large meta-meta-analysis on effect size. Coe said that he thought Hattie’s work was “riddled with errors” (which roughly agrees with my assessment of the book). I think it’s important not to use inappropriate applications of a statistic to argue against all uses of that statistic. The mean is a bad measure for skewed data (like salaries), but that does not imply that we should all stop using the mean as a statistical measure.
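
To make the salary example concrete, with invented figures (mean and median here are just Python’s standard library functions):

    from statistics import mean, median

    # Invented salaries: one high earner drags the mean upwards.
    salaries = [20_000, 25_000, 28_000, 30_000, 500_000]
    print(mean(salaries))    # 120600: a poor summary of a "typical" salary
    print(median(salaries))  # 28000: closer to what most of the group earn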

Pick and Mix

A few leftover bits. Ben Goldacre’s keynote was good, and he did admirably well at surviving the nightmare hitch of not being able to display his slides. Useful point: if we start properly assessing the claims of new education initiatives and products, this will encourage their proponents to make smaller, more reasonable claims. Amanda Spielman in her talk mentioned having a “drawer of debunking papers” ready to hand out to people who suggested a known-to-be-ineffective initiative to her — I liked that notion. Tom Bennett: good science seeks to disprove itself, not to confirm existing beliefs.

Overall, I enjoyed the conference and wished I could have gone to a couple more sessions (the conference had six parallel sessions!). However, I gather many of the sessions were recorded, so I should shortly get my wish. A good sign from my perspective was meeting Sue Sentance there, who now works for Computing At School, and who is keen on encouraging more research collaboration between computing teachers and academics in the UK. It’s clearly some kind of zeitgeist.


4 responses to “ResearchEd 2013”


  1. Hi Neil. Hope you enjoyed the debate as much as I did. Professor Coe was correct to say that ‘Pure’ Mathematicians wouldn’t be interested in the Effect Size, but of course Statisticians would be very interested if it were correct.

    I assumed that anybody attending the debate would have already read my blog, where I had talked a bit about the Maths of it all; maybe that was an error. I had six minutes to persuade a group of non-Mathematicians that there were problems with the Effect Size. The line of argument that I took was that there has been very little input or interest from Mathematicians and most of the people involved don’t have Maths degrees. By the end some of the people were starting to ask “Why is it only Education and Psychology that are using the Effect Size?”

    To be honest, the whole thing was worth it just to hear Professor Coe say that “Hattie’s book is riddled with errors”.

    • Hi Ollie. I agree it’s tricky to get too in-depth in six minutes, but I still think that pointing out that the people who proposed effect size are not mathematicians is not in itself a good enough argument: several developments in various disciplines have arisen from people coming in from outside, with a fresh perspective or because they have different needs. Those versed in the discipline should be able to say why it is wrong. And popular is not necessarily correct: for example, significance tests are problematic for various reasons (e.g. the 0.05 threshold is arbitrary, and statistical significance is not practical significance), but they are used in a huge range of disciplines because they are simple and people understand how to do them.

      Anyway, I’m pleased the debate took place, I enjoyed it (and am still happy with choosing to attend it over the five other fine-sounding sessions), and thank you for participating in it.
