Yesterday on Twitter, Jens Moenig had some kind words to say about our journal paper on Stride, and complained that its repeated rejection from other journals was a symptom of incorrect criteria for accepting computing education research papers (head to Twitter for the full thread):
The core issue was this: the original Stride journal paper was a long, detailed description of the design of our Stride editor and the decisions behind it, with very minimal evaluation. In general, should this be accepted as a paper?
The case against accepting
Computing education is full of tools. There are lots of block-based editors, beginners' IDEs, learning assistants, and so on. I'll admit that, even as I work on making new tools, when I come to review a paper with a new tool I do roll my eyes briefly and wonder whether yet another tool is needed. The problem for our field is that we have a lot of tool-makers, but few tool evaluators. There are far fewer researchers (like David Weintrop, for example) who perform detailed comparisons between tools that they did not write themselves. The field has a glut of unevaluated tools, which is surely not helpful for someone wondering which tool to use, and we can't be sure that any of the tools actually aid learning. In this light, rejecting our paper seems reasonable: yet another paper on a new tool with no evaluation.
The case for accepting
There are two main arguments I see for accepting the paper. One is that the design description itself can be of value. As someone who builds tools, I find it very useful to talk to other designers, like Jens and John Maloney, to find out why they made certain design decisions. I can use their tools (Scratch, Snap, GP) but that doesn't explain the full story behind the design choices. Jens' point is that he found it useful to read our decisions in order to improve the decisions made in his own tools. This type of exchange is beneficial for the field; the question, then, is whether these design descriptions should count as computing education research papers in their own right, or whether they should be put somewhere else (some kind of design journal? or something like a tools paper track?).
The other argument for accepting design by itself is the amount of work involved. Our design-only paper was 25-30 pages, which pushes the limit for most journals, and was the summation of three years' work. A full evaluation would add another year and another ten pages. Should this be one mega piece of work, or two separate pieces of work? It can only be two papers if the first, design-only paper can get accepted by itself. (The counter-argument is that there's no guarantee the second paper ever appears…)
I will say that there are differences in the quality of writing about design. A lot of papers I see on tools fall into the trap of describing technical details which do not generalise (e.g. we used web server X and hooked it up to cloud Y for storage) rather than discussing design decisions, trade-offs, and user considerations. They also tend, due to page limits, to have minimal descriptions and pictures of the system as a whole. I had to look quite far back in time when writing the related work section of the Stride paper, and many of those older tools are no longer around to try out; I can confirm that your paper will outlast your tool, so it needs to be useful to someone who does not have the tool itself. I've written in more detail about this issue in a previous post.
Summary
I’m not interested in grousing about one particular paper, but this is an issue that we run into repeatedly in our research. Our team has a lot of expertise in building tools but not much in evaluation. Should we be able to publish our designs as a computing education research paper, or should it always be coupled with an evaluation? It would be much easier for us if we were able to only publish designs, but I’m sympathetic to arguments on both sides.
At the ICER doctoral consortium a few years ago, Sally Fincher and Mark Guzdial ran an exercise asking the students and discussants what computing education research comprised. I said that it was investigations of student learning, and that our tools-only approach was on the periphery. What surprised me was that all the people doing such investigations said that computing education research was tool-building, and that their investigations were peripheral. I think perhaps this tension between tools and evaluation is inevitable in our field, but maybe it's also useful.