Programming from Memory

What is the mental process involved in programming? It is a difficult question to answer, but I think many of us have our own conception. Until recently, my off-the-cuff answer would have been something like: I think about what I want the code to do, and then I construct the algorithm that is needed by thinking about the control flow and the variables. A sort of programming by reason: I size up the problem and use logical reasoning skills to progressively construct a solution. I would have applied the same model to beginners and experts alike. But I’ve recently been thinking that this model might be wide of the mark.

Memory, not Thinking

In Daniel Willingham’s book, “Why Don’t Students Like School?”, he talks about how expert chess players play the game. My naive model of chess players would have been similar to my model of programmers — I would have assumed that they look at the current state, and reason about the ramifications of each possible move. Not so:

In fact, people draw on memory to solve problems more often than you might expect. For example, it appears that much of the difference among the world’s best chess players is not their ability to reason about the game or to plan the best move; rather, it is their memory for game positions. [Willingham, Why Don’t Students Like School?, p38]

Reading the book has led me to picture a different model of programming. Now I envisage programmers looking at a problem, finding a similar solution in memory, then adapting it as necessary. At an early level this will likely be basic programming constructs or small idioms: a programmer might know that to implement a choice, they will need an if statement, or they may know how to write a loop in C that goes through an array. This presumably moves up to the level of high-level idioms or design patterns: a more experienced programmer might know how to write code that produces an aggregate result from processing a list of elements, or how to use the observer pattern in designing their object-oriented system. Only when programmers are tackling a new, distinct problem do they need to seriously reason:

When tournament-level chess players select a move, they first size up the game, deciding which part of the board is the most critical, the location of weak spots in their defense and that of their opponents, and so on. This process relies on the player’s memory for similar board positions, and because it’s a memory process, it takes very little time, perhaps a few seconds. This assessment greatly narrows the possible moves the player might make. Only then does the player engage slower reasoning processes to select the best among several candidate moves. [Willingham, Why Don’t Students Like School?, p39]
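
To make the idiom level above a little more concrete, here is a minimal sketch in C of the kind of fragments I have in mind (the function names and details are my own illustration, not from Willingham’s book): an if statement to implement a choice, and a loop that produces an aggregate result from an array.

    #include <stdio.h>

    /* Basic idiom: implementing a choice with an if statement. */
    const char *classify(int mark)
    {
        if (mark >= 50)
            return "pass";
        return "fail";
    }

    /* Higher-level idiom: producing an aggregate result by looping
       over an array and accumulating a total. */
    int sum(const int values[], int length)
    {
        int total = 0;
        for (int i = 0; i < length; i++)
            total += values[i];
        return total;
    }

    int main(void)
    {
        int marks[] = {42, 67, 58};
        printf("%s, total %d\n", classify(marks[0]), sum(marks, 3));
        return 0;
    }

An experienced programmer can produce fragments like these almost without deliberation; a beginner has to reason each one out from scratch.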

This from-memory model of programming seems to clarify a few issues for me. Many computing teachers I’ve spoken to say that they get their students to write pseudo-code on paper before coding. I’ve always felt that this was a beginner’s technique, but with my previous conception of programming as largely reasoning-based, I had never been able to put my finger on why. If I really am reasoning about my code like the beginners are, surely paper would help me, too? But if I’m programming from memory, it is clearer why I don’t need to design on paper: I can literally look at the problem, and pretty much recall the answer from memory. It does chime with me that, like the chess players, most of the reasoning that I require is for comparing candidate solutions rather than struggling to construct a solution in the first place.

Novices vs Experts

One of the points that Willingham makes, to summarise his chapter 6, is:

Cognition early in training is fundamentally different from cognition late in training.

One of the primary differences that Willingham identifies between beginners and experts is that the latter have a lot more background knowledge to draw on. He points out that it’s largely useless to try to encourage beginners to think like experts, because they lack the experts’ advantages. When I look at a programming problem to solve, I typically know the solution straight off. When a beginner looks at the same problem, they may be totally lost. The skill of a teacher is of course to find the right steps to help the beginner, who is surely programming in a qualitatively different way from an expert.

Not From Google

So, we have this idea that experts program quickly because they can recall a solution (or something very close to it) from their long-term memory. There is a vast number of solutions to programming problems on the Internet; can we just get students to use those, effectively taking Google as a substitute for our own long-term memory? In her book “Seven Myths About Education”, Daisy Christodoulou discusses “You can always just look it up” as one of the myths. Christodoulou has two main objections to the idea of just googling it (which people often apply to things like times tables, spelling, historical facts and so on):

  1. If you look something up, what you find enters your working memory. Working memory has very limited capacity, and thus is very precious — when it gets full, you can often lose the thread of what you are trying to do. When you recall something you already know, you are drawing on long-term memory, which does not occupy space in working memory. Therefore recalling something you know is better than looking it up.
  2. Looking something up often requires existing knowledge to make sense of the answer. A dictionary definition will often explain a word in terms of other words — if you don’t know those other words either, you’re sunk! Therefore looking something up is more useful for experts than it is for beginners, because experts have the required knowledge to understand the answer.

This seems to map quite well to the problems with googling for code fragments. If beginners try to understand the full code fragment, they generally can’t manage it, so then they must work with code they don’t fully understand — this is liable to cause problems. Googling for code fragments is a useful skill for experts, but not necessarily for beginners, which fits with the previous point that experts work differently to beginners.

Summary

So I started out with an idea that we program primarily by reasoning, but now I’m wondering if reasoning is actually more of a beginner’s tool, and experts program primarily from memory (and thus faster). If so, perhaps that is why experts often find it hard to teach beginners, because they follow a different mental process? I’m finding it an interesting topic to think about as I try to make sense of several general education books in a computing context.

Of course, I am not the first to wonder about applying the kind of cognitive ideas that Willingham discusses in the context of programming education. Caspersen and Bennedsen have a paper on exactly this, from ICER 2007 (freely available here, also slides). They discuss using cognitive load theory (working memory and long-term memory) to inform the design of an object-oriented programming course, and make a few computing-specific suggestions for instruction. I recommend taking a look at the paper — it would also be interesting to see more work in this area.


3 responses to “Programming from Memory”

  1. Gurtaj

    Interesting idea. I rarely write down algorithms (pseudo-code or not) before actually doing the programming. However, when I do write something down, it’s because a lot of thinking is going on.

  2. Pingback: Goals, Plans and Code | Academic Computing

  3. Pingback: John Sweller on Cognitive Load Theory and Computer Science Education | Academic Computing
