Diversity in Computer Science, Revisited

The closing keynote of SIGCSE 2016 was given by Karen Ashcraft on the subject of diversity in Computer Science. It is a perennial topic which I’ve seen discussed many times, but I felt this talk offered some convincing explanations and ways forward. Here’s my understanding of the talk — you may also be interested in Pamela Fox’s more detailed notes.

Ashcraft’s central argument was that professions take on an identity through a figurative practitioner. That is, software development has an identity which comes from the commonly imagined programmer: white, male, quiet, socially awkward, interested in games, comics and sci-fi, etc. This idea that professions have an identity is different from the idea that the workers have an identity that is transferred to the profession. She gave the example of pilots, where the identity of the profession was deliberately re-sculpted from “debonair daredevil adventurer” to “fatherly naval officer” in the 1930s, because passengers were much more likely to want to fly with the latter than the former. The identity of a profession is constructed and can be manipulated independently of who is actually a member of the profession.

Ashcraft had particular criticism for the idea of fixing diversity by “leaning in”: the idea that women should adjust themselves to fit into the male world. (I’m partial to the description of lean-in as “victim blaming”.) She argued that any attempt to solve diversity which emphasises the divide is bound to lose. That is, if you tell women “women can program too”, you’re emphasising the idea that women differ from men when it comes to programming, and are not going to be successful in eliminating the gender division in programming.

Ashcraft’s suggestion was that to increase diversity in a profession, you should not focus on messages about “more women”, but rather emphasise the diversity already present in the profession which is not automatically tied to attributes like gender. For example, let’s take my imagined programmer. Rather than worry about “programmers don’t have to be white males”, we should start with “programmers don’t have to be quiet, or socially awkward, or interested in games”. Building this type of diversity will eventually encompass non-whites and non-males, if we can keep the identity of the profession broad and fluid, rather than pigeon-holed into something very specific and narrow. And this could benefit everyone.

It’s a well-known phenomenon in ergonomics and user-interface design that if you adjust your design for the extreme users, you often improve the design for core users. For example, kitchen tools were given extra-grippy larger handles to help the elderly but it also made holding them easier for everyone else, so it became the default for all tools. A few more examples are given here. (Closer to home, the interesting work of John Maloney, Jens Moenig and Yoshiki Ohshima on keyboard support in their GP blocks language arose from wanting to support blind users.)

I think that this design principle transfers into the diversity argument. Narrow stereotypes for professions harm everyone, not just those who are far outside them. There are already programmers who aren’t quiet, who aren’t socially awkward, who don’t like computer games, who play sports and rock-climb, who don’t chug coffee and code all night, who are not single, who have families. Just because you’re a white male doesn’t mean you aren’t impacted by everything else that comes with a stereotype, or that you can’t feel like something of an outsider when you meet few of the criteria associated with it. For example, it’s been suggested that for programmer types, World of Warcraft (or similar games) is the new golf: the social mechanism outside work where people meet and get ahead. Golf used to exclude women; World of Warcraft excludes non-gamers, or those who don’t have time to play because of family commitments. Exclusion can negatively affect any of us. There is often a shoulder-shrugging among programmers: yeah, our diversity sucks, but we don’t really care whether more women enter the profession or not. But diversity and stereotypes can pinch for anyone. It’s worth trying to fix for the benefit of everyone.

Is Teaching Programming Not Enough?

The Atlantic has an interesting take on the programming/computer science/computational thinking debate, with several points for disagreement. I’m going to pull out various quotes from the article, and interleave them with some of my own thoughts.

Who Needs Java or JavaScript Anyway?

“Educators and technology professionals are voicing concerns about the singular focus on coding—for all students, whether learning coding is enough to build computational thinking and knowledge, and for students of color in particular, whether the emphasis on knowing Java and JavaScript only puts them on the bottom rung of the tech workforce.”

That last bit’s a jaw-dropper. A mastery of two of the most popular programming languages in the world would surely not leave you on the bottom rung of the whole tech workforce! I think this quote is a misinformed summary of some of the later comments, and thus very unfortunate to repeat in the byline. Research like that from Sweller’s talk suggests generic cognitive skills like computational thinking cannot be directly taught. Even if they can be instilled by transferring learning from another domain, programming may well be the best place to transfer from.

The Self-Programming Computer

“The artificial-intelligence system will build the app… Coding might then be nearly obsolete, but computational thinking isn’t going away.”

I can foresee that increasingly powerful programming tools, higher-level languages and code reuse (e.g. microservices) could reduce the number of programmers needed, but the idea that AI will write the code seems incorrect. I don’t think any efforts to take the programming out of creating computer programs have ever really succeeded. It always ends up with programming again, just at a different level of abstraction.

The Relation to Mathematics

“To avert the risk of technical ghettos, all students must have access to an expansive computer-science education with a quality math program… It’s a myth to think that students can simply learn to code and flourish without a minimum level of mathematical sophistication.”

This is a perennial point of contention, as to whether mathematics knowledge is required for programming, and/or whether the skills in them correlate. The assumption that they are intrinsically related is a personal bugbear, and I’m not sure there’s strong evidence either way. Not that I’m arguing against the notion that all students should have access to a quality mathematical education!


This issue of privilege is interesting, though. Historically, many programmers have been self-taught. This means that programmers are primarily those with access to computers who are encouraged (or not discouraged) by their families, which means you get fewer women (who are less likely to be encouraged to try it) and fewer people from poorer backgrounds (who are more likely to need a part-time job, and thus have less time available for self-teaching). The UK’s gambit of teaching it to everybody has the potential to partially correct this artificial selection. However, it is a very touchy subject to suggest that self-taught programmers are automatically inferior, which is what one interviewee implies:

Further, [Bobb] recommends combining the endless array of out-of-school programs for coding and hackathons with learning opportunities in schools. Brown echoes this point, adding that coding to the uninformed can take many forms, but “you simply cannot learn the analytical skills… without a formal and rigorous education.”

This chimes with the potential disagreement with Sweller’s talk. Is computing special (because you can learn from the computer’s feedback), such that self-teaching can work more effectively than in other disciplines? The legions of self-taught programmers in the workforce surely show that self-teaching must be possible. It may not be the most efficient way to learn, but claiming that only formal education can teach the necessary analytical skills for programming is surely incorrect.

John Sweller on Cognitive Load Theory and Computer Science Education

The opening keynote at SIGCSE 2016 was given by John Sweller, who invented cognitive load theory and works on issues such as the relation between human memory and effective instructional design. The keynote was purportedly about “Cognitive Load Theory and Computer Science Education”. As one or two people pointed out, the “Computer Science Education” part was a bit lacking, which I’ll come back to. (I should also point out this is a bit hazy as there was a full conference day before I had a chance to write this — happy to take corrections in the comments.)

Knowledge and Memory

Sweller started by discussing biologically primary versus biologically secondary knowledge. Biologically primary knowledge is that which we have evolved to acquire: how to walk, how to talk, how to recognise faces, etc. None of this is taught; it is all learnt automatically. In contrast, biologically secondary knowledge is the rest, which he argues must be explicitly taught. Algebra, history, science: his rough summary was that everything that is taught in school is biologically secondary.

I won’t go deeply into working memory and long-term memory here, but the relevant parts were that working memory is small and transient (i.e. disappears quickly), whereas long-term memory is almost unlimited and can be retained permanently. Novices can get overwhelmed because everything novel must live in working memory, whereas experts have knowledge stored in long-term memory and so use different processes to tackle the same tasks. One thing I did not know before was that we have two working memories: one auditory and one visual, which means you can potentially achieve benefits by presenting information in a mixed modality, provided they mesh well together.


One issue that came up in the questions was about algorithm visualisation. Algorithm visualisation is something that many people are convinced is useful for understanding programming, but which has rarely, if ever, been shown to be effective for learning. Sweller’s suggestion was that if comparing two states (before/after) is important and non-trivial then it is better to provide two static diagrams for careful comparison, rather than a video which animates the transition from one to the other. My take-away message from this is that visualisations need to be comics, not cartoons.

Experts Use Memory More Than Reasoning

Sweller made the point that experts tend to operate through pattern-matching. Although we may think of programming as carefully constructing logical code paths to fit a given problem, we are often just recognising a problem as similar to one we have solved before, and adjusting our template to fit. More expert programmers simply know more patterns (and “patterns” here matches well to the idea of design patterns). The difficult part of programming is thus only when we stray outside our pattern catalogue. This was covered in work in the 1980s which I’ve previously discussed here (and see also this post).

What May Make Programming Special

The issue of how knowledge is transmitted was contentious. Sweller is against constructivism: he believes the idea that knowledge is best gained through discovery is incorrect, and explicit instruction is always superior. This is where the Computer Science domain becomes important. I can see that for something like algebra, you must be taught the rules, and taught the processes by which to rearrange an equation. You can’t just mess around with equations and hope to learn how they work — because you have no feedback.

But the computer is an interesting beast. It provides an environment which gives you feedback. If you know the basics of how to write a program, you can potentially learn the semantics of a language solely by exploring and discovering. Can is not the same as should, though: explicit instruction may still be faster and more effective. But I think it went unacknowledged in the talk that programming is somewhat different to many other learning experiences, because you have an environment that offers precise, immediate, automatic feedback based on your actions, even if no other humans are involved.

Final Notes

There’s a bunch more content that I haven’t included here for space reasons or because I didn’t remember it clearly enough (didn’t reach long-term memory, I guess!). Terms to google if you want to know more: cognitive load theory, randomness as genesis principle, expertise reversal effect, worked-example effect, split-attention effect.

I liked Sweller’s talk, and I believe that understanding what he called human cognitive architecture is important for education. I think the main issue with the talk is that Sweller is a psychologist, not a computer scientist, and thus there was little consideration given to the potential ways in which computer science may be different. How does having syntax and semantics in code play with working memory for novices? What should explicit instruction look like in programming education? Do different notional machines cause different interactions with our memory? How can we best organise our teaching, and our design of programming languages and tools to minimise cognitive load? Some answers to that arise in Briana Morrison’s work (including the paper later in the day yesterday, not yet listed there), but there is clearly a lot more potential work that could be done in this area.

Book Review: Learner-Centered Design of Computing Education

Mark Guzdial works at Georgia Tech and writes the most prolific and most read blog in computer science education. Thus I was intrigued to read his new book, “Learner-Centered Design of Computing Education: Research on Computing for Everyone”.

The book is a short and accessible read that summarises huge chunks of research, especially the parts about teaching computing to everyone (i.e. non-specialists). One of Guzdial’s most well-known interests is media computation, which is used at Georgia Tech to teach non-majors (i.e. those taking non-CS courses) about programming, and it’s interesting to see how much positive impact it has had. But there’s no favouritism here: the book always has the research to back up its claims, and this is one of its great strengths. The reference section has some 300 references, making it a great place to start getting to grips with research in the area. If I have any complaint, it is that the book could do with a bit of copy-editing to remove some typos, but this is not a serious issue.

Computational Thinking

The recent buzz area of computational thinking is covered in the book. One of the key ideas around computational thinking is that of teaching computing in order to teach general problem solving skills. Again backed by research, Guzdial dismisses this idea as unsupported: there never seems to be any transfer from programming into more general problem-solving skills. Programming’s utility would seem to be two-fold: teaching programming can transfer into other programming (i.e. between programming languages), and teaching programming does give an idea of how computers operate and execute. Beyond that, much of the rest is wishful thinking. But in an increasingly computer-centric and computing-reliant world, I believe this is still sufficient to argue for teaching computing to all.

Identity and Community

Several parts of the book remind me of this article I read recently on computing as a pop culture. One interesting aspect of programming is how programming languages live and die. It’s tempting to think that it’s solely to do with suitability for purpose. However, there are all sorts of other factors, such as tool support, library availability, and popularity. Languages do clearly move in fads, and this is just as true in computing education. Pascal is not necessarily any worse today for teaching programming than it was twenty or thirty years ago, but it is dead in classrooms. Guzdial relates this to communities of practice: students want to believe they are learning something which leads them towards a community of professionals. For physicists, this might be MATLAB; for programmers this might be Java. This can be frustrating as a teacher: why should a teaching language be discounted just because students (perhaps wrongly) come to see it as a toy language? But this seems to be a danger for languages used in classrooms, especially at university.

The issue of identity and belonging crops up repeatedly. Guzdial makes the point that for people to become programmers, they need to feel like they belong. If they don’t see people like themselves doing it then they are likely to be put off. This may be minorities not becoming full-time software developers, but it can also affect non-specialists. There’s an anecdote from one of Brian Dorn’s studies where a graphic designer who writes scripts in Photoshop tells of being told that he’s not a real programmer, he’s just a scripter. Technically, programming is programming, but it’s clear that issues of identity and community are often more important than the technical aspects. Programmers reject those they see as non-programmers; non-programmers reject those they see as programmers.

One really interesting case study on belonging as a programmer is that of the Glitch project, which engaged African American students in programming by using game testing as a way in. Several of the participants learned programming and did extra work because they were so engaged — but they would not admit this to anyone else. They constructed cover stories about how they did the extra work for extra money, or that they were only doing it to play games or to earn money, disguising the fact that they were learning computer science.

The issue of identity also resurfaces when Guzdial talks about computing teachers. As in the UK, many US computing teachers come from other subjects: business, maths, science, etc. Many felt that they were not a computing teacher and (in the US) lacked sight of a community of computing teachers for them to join, which demotivated them. I also appreciated the section on how computing teachers’ needs differ from those of software developers. “Teachers need to be able to read and comment on code… We don’t see successful CS teachers writing much code.” I wonder to what extent this is reflected in CS teacher training. The point is also made that teachers need to “know what students are typically going to get wrong”, which is something my colleague Amjad Altadmri and I have done some work looking into.


I’d recommend the book to any computing teacher who is interested in learning more about what the research tells us about teaching computing. I could imagine lots of secondary school computing teachers in the UK being interested in this, and a fair few computing university academics who could benefit from reading it. Although there may be little chance of widespread impact, as the book itself describes: “Several studies have supported the hypothesis that CS teachers are particularly unlikely to adopt new teaching approaches” and “Despite being researchers themselves, the CS faculty we spoke to for the most part did not believe that results from educational studies were credible reasons to try out teaching practice.” (I wonder how many of those believe in evidence-based medicine, etc, but not evidence-based teaching.) The book would also be particularly useful to new computing education PhD students who want a head start in getting to grips with previous work in the area. If any of those describe you, I’d recommend picking up a copy.

Greenfoot and Blackbox at SIGCSE 2016

This year, the SIGCSE (Special Interest Group in Computer Science Education) conference is in Memphis, Tennessee, at the beginning of March. With the early registration deadline of February 2nd fast approaching, I thought it was worth quickly highlighting our events.

On Friday evening, we’ll be delivering a workshop on Greenfoot, using our new Stride language. Stride is a halfway house between blocks-based programming like in Scratch, Snap! & co, and full text programming in languages like Java or Python. We officially released Stride in Greenfoot 3.0 towards the end of last year, so we’re looking forward to having the opportunity to now deliver workshops using Stride. The workshop is at 7pm on Friday night. The committee like to see sign-ups in advance, so if you are thinking of attending, it would help us if you sign up now rather than once you arrive.

Stride Editor

Also related to our work on Stride, I’ll be taking part in a panel discussion (proposal available here) on Friday morning at 10:45, entitled “Future Directions of Block-based Programming”. I’m very excited by the panel we’ve put together, where I’ll be joining: Jens Moenig, creator of Snap! and now working on GP, a new block-based language; David Anthony Bau, one of the leads of Pencil Code, an impressive blocks-based editor; and David Weintrop, who has done some excellent work looking at block-based programming (see 2015 section here). I’m looking forward to talking about the important details of designing and implementing block-based and block-like editors, and how the design affects the way they are used in teaching.

Our other event is not focused on Greenfoot, but instead on our Blackbox data collection project. We’re holding a BoF (Birds of a Feather) event on Thursday evening at 5:30pm where we hope to get together with other researchers interested in educational data collection and analysis to talk about the data set and data collection platform that we provide in BlueJ.

I’m also looking forward to attending and seeing all the other interesting sessions. Glancing at the program, the paper sessions on CS Ed Research and Program Design look interesting for me. If you have any other suggestions on what to attend, feel free to put them in the comments.

Adventures in Graphing

One of my favourite parts of research, probably because I don’t get to do it often, is trying to come up with a graph or similar visualisation to display some results. You often have a lot of choice in how you could visualise some data; the art is in trying to find the best way to convey the interesting aspect of the results so that it’s obvious to the reader’s gaze. I thought it might be interesting to demonstrate this through an example. In one strand of our work, we look at the frequency of different programming mistakes in our Blackbox data set. We’ve recently added a location API for the data, so as a descriptive statistic I wanted to see if the frequency differed by geography. To that end, I produced a table of mistake (A-R) versus continent:

The data is already normalised by total number of mistakes for each continent, so each column will sum to 1. One option would be simply to use the table to portray the data. This is often the best option for single-column data, or for small tables (e.g. 4×4). But it’s too difficult to get any sense of the pattern from that table. So we need a graph of some sort. The particular challenge with this data is that although it’s only a two-dimensional table, both dimensions are categorical, with no obvious ordering available.
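The normalisation described above is simple, but worth making concrete. Here is a minimal Python sketch with invented counts — the mistake labels and numbers are made up for illustration, and the real table has eighteen mistakes (A-R) across six continents:

```python
# Invented raw frequencies: mistakes (rows) per continent (columns).
raw = {
    "Europe":        {"A": 120, "B": 45, "C": 30},
    "North America": {"A": 200, "B": 80, "C": 55},
}

def normalise_columns(table):
    """Divide each continent's counts by its total, so each column sums to 1."""
    result = {}
    for continent, counts in table.items():
        total = sum(counts.values())
        result[continent] = {m: n / total for m, n in counts.items()}
    return result

norm = normalise_columns(raw)

# Each continent's proportions now sum to 1 (up to floating-point error),
# which is what makes the proportions comparable across continents despite
# the very different amounts of data per continent.
for continent, props in norm.items():
    assert abs(sum(props.values()) - 1.0) < 1e-9
```

This per-column normalisation is what makes the stacked and clustered charts below-the-fold meaningful: without it, the continents with the most data would dominate every comparison.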

Since each column sums to one, we could use a stacked bar chart:

I don’t like this option, though, for several reasons. 18 different stacked bars is too visually noisy, and we still need to add labels, too. Stacked bar charts also hamper comparison: is there much of a difference in the proportion of C (the light blue bar, third from bottom) between areas? It’s too hard to tell when stacked.

If stacked doesn’t work, it’s tempting to try a clustered bar chart, with the bars for each area alongside each other instead. That looks like this:

This time, you can see the difference between the proportion of C between areas. But… it’s still too noisy. It’s too difficult to take in at a glance, and annoyingly fiddly to look at each cluster in turn and check the proportions. You actually get a reasonable sense of difference between mistakes (A-R) rather than between areas. You could thus try to cluster in the other dimension, giving you six clusters of eighteen rather than eighteen clusters of six:

A first obvious problem is that the colours repeat. It sounds like a simple fix to just pick more colours, but picking eighteen distinct colours would be very difficult. And even then, comparing one set of eighteen bars to another to look for differences is just too fiddly.

So maybe we should forget the bars. An alternative is to use line graphs. Now, technically, you shouldn’t use line graphs for this data because a line typically indicates continuous data rather than categorical. But I think this should be considered a strong guideline rather than an absolute rule; sometimes with difficult data, it’s about finding the most reasonable (or “least worst”) approach when nothing is ideal. So here’s a line graph with mistakes on the X and proportions on the Y:

Not really very clear; any differences at a given point are obscured by the lines rapidly merging again before and after. Also, I don’t like the mistakes on the X axis like this, as it does strongly suggest some sort of relation between adjacent items. Maybe it would help to make it radial:

Ahem. Let’s just forget I considered making it radial, alright? Looking back at the previous line graph, I said one issue was that the lines jump around too much, merging after a difference. We could try to alleviate this by sorting the mistakes by overall frequency (note the changed X axis):

Once the labels get added, it’s probably the one I like most so far. You at least get a sense, for the most frequent mistakes, of some difference in the pattern between the continents (one line per continent).
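The re-ordering step itself is trivial, but for completeness here is a Python sketch with invented proportions (three mistakes, three continents): sum each mistake’s proportions across continents, then sort the mistakes from most to least frequent overall to get the new X-axis ordering.

```python
# Invented proportions: one value per continent for each mistake.
proportions = {
    "A": [0.30, 0.25, 0.28],
    "B": [0.05, 0.10, 0.07],
    "C": [0.20, 0.22, 0.18],
}

# Sort mistakes by their total frequency across all continents, descending.
order = sorted(proportions, key=lambda m: sum(proportions[m]), reverse=True)
# "order" is the new X-axis ordering: most frequent mistake first.
```

With the most frequent (and typically most interesting) mistakes grouped at one end, the lines diverge where the reader is looking rather than jumping around along the whole axis.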

At this point, we’ve exhausted the bar and line graph possibilities, with either mistake or continent on the X axis, a different bar/line for the other variable, and proportion always on the Y axis. A final possibility I’m going to consider is a scatter graph. In this case, we can use continent as the X axis, and use the mistake label as the point itself on the scatter graph, with the proportion again on the Y:

At first glance, it’s a bit of a mess. Several of the letters overlap. But as it happens, I’m less interested in these. Minor differences in the low frequency mistakes are probably not that interesting for this particular data. The difference between the most frequent mistakes is, I think, more clearly displayed in this graph than in any of the others.

There is one finishing touch to add. One issue with normalising the data by total frequency for that continent is that it obscures the fact that we have quite different amounts of data for each continent. Happily, this scatter graph gives us the opportunity to scale the size of the points to match how much data we have for that continent. (There are two ways to do this: scale the height of each character by amount of data, or the area of each character by amount of data. Having tried both, the latter seemed to be a better choice.) That leaves us with this:

Easy to see where most of our data comes from! I may yet add some colour, but really the final decision is: is this interesting enough to include in a paper, or should I just scrap my work on this? Such is research.
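As a footnote on that final scaling decision: a glyph’s area grows with the square of its linear size, so scaling the area by data amount means scaling the height by the square root of it. Here’s a Python sketch of the two choices, using invented data counts and a made-up base glyph height:

```python
import math

# Invented data counts per continent and an arbitrary base glyph height.
counts = {"Europe": 90000, "Africa": 1000}
base_height = 10.0  # height (in points) for the smallest continent
smallest = min(counts.values())

def height_scaled(n):
    # Height directly proportional to data amount.
    return base_height * n / smallest

def area_scaled(n):
    # Area proportional to data amount, so height grows with the square root.
    return base_height * math.sqrt(n / smallest)

# With a 90x difference in data, height-scaling gives a glyph 90x taller,
# while area-scaling gives a more readable glyph only ~9.5x taller.
```

That compressed range is why area-scaling seemed the better choice: the biggest continents stand out without the letters for the smallest ones shrinking into illegibility.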

Frame-Based Editing: Easing the Transition from Blocks to Text-Based Programming

The Workshop in Primary and Secondary Computing Education (WiPSCE) 2015 was held this week at King’s College London. My colleagues Michael Kölling and Amjad Altadmri presented our paper, entitled “Frame-Based Editing: Easing the Transition from Blocks to Text-Based Programming”. The paper is now available freely online.

The gist of the paper is as follows. Blocks-based programming is usually used to teach programming to younger age groups, but text-based programming is used for older age groups. This creates a necessary transition period in between, wherein learners must transition from blocks- to text-based programming, but teachers’ experience shows that this can be a difficult transition. The paper first enumerates all the issues involved in making this transition, and then goes on to discuss how frame-based editing (as now available in the latest Greenfoot release) divides these issues into two smaller sets, and thus may make a suitable stepping stone in the gap between blocks and text — see the diagram below, or the paper for a larger version and all the details. If you’re interested, take a read and/or try the software.

Greenfoot: Frames, Pi, and Events

Lots of positive Greenfoot developments recently, which seemed worth summarising. Firstly, we released preview2 of Greenfoot 3.0.0, our new version of Greenfoot which includes our new frame-based editor (a hybrid of block- and text-based programming). The download of 3.0.0-preview2 is now available. Our intention is that this is a feature-complete preview, and we’re now just fixing bugs and optimising ahead of the final release (likely to be November or so).

Our other big development is that BlueJ and Greenfoot are now included in the default image of the official Raspbian Linux distribution for Raspberry Pi. A few weeks ago we made it into the main repository, which means that on any Raspbian, you can now run: sudo apt-get update && sudo apt-get install bluej greenfoot and have both tools available in the Programming menu in the GUI. But being included in the default image means that anyone installing Raspbian from now on will automatically have BlueJ and Greenfoot installed. We’re really pleased to have our software so easily available, of course. Thanks must go to the Raspberry Pi folks who supported this move, and my colleague Fabio Hedayioglu who worked on some optimisations to help our performance on the Raspberry Pi. BlueJ in particular comes bundled with libraries for interfacing with hardware via the GPIO pins.

We also have a set of events coming up which packs all my travel for the second half of the year into three weeks. I will be attending the Blocks and Beyond workshop on 22 October in Atlanta, which leads into a trip to the JavaOne conference in San Francisco where we will be teaching Greenfoot at a kids event (open to the public) on 24 October and talking about Greenfoot 3 on 26 October in the main conference.

After that there is the CAS Scotland conference on 7 November in Dundee where I’ll be doing a session on Greenfoot 3, and we will also be at the WiPSCE conference in London on 9–11 November, presenting our paper “Frame-Based Editing: Easing the Transition from Blocks to Text-Based Programming” on the Tuesday — when there is a cheaper day rate available for teachers. If you’re a UK teacher, it’s well worth considering attending if you can get out of school that day.

Improving Blocks with Keyboard Support

The Blocks and Beyond workshop will take place in Atlanta on October 22. It’s shaping up to be an interesting workshop, based around discussion of the submitted papers. Our own contribution is a position paper, entitled “Lack of Keyboard Support Cripples Block-Based Programming”. It’s a brief summary of the design philosophy which has informed our design of Greenfoot 3 (which we’re busy working on): the mouse-centric nature of existing block-based programming systems (Scratch, Snap!, etc) hampers their ability to scale up to larger programs, which in turn puts a ceiling on their use. Our gambit is that if we can remove this restriction, there might no longer be any reason to switch back to using text-based programming languages.

The Greenfoot 3 Editor — public preview available

The paper (accepted version, copyright IEEE) is freely available, and only 3 pages, so rather than reproduce the full argument here, I suggest you take a look at the paper — and if you want to discuss it, or see similar work, then why not attend Blocks and Beyond?

Better Mathematics Through Types

I’m a huge fan of strong static types; it’s one of the reasons I like Haskell so much. Recently I’ve been toying with some F#, which has a typing feature I’m really enjoying: units of measure. I want to explain a few examples of where units of measure can prevent bugs. To do this, let’s start off with some simple F# code for a car moving around the screen. It has a direction and a speed, and you can steer and speed up/slow down using the keys. Here’s some basic code to do this:

type Key = Up | Down | Left | Right
let isKeyDown (key : Key) : bool = ...

type Position = {x : float32; y : float32}

let mutable pos = {x = 0.0f; y = 0.0f}
let mutable direction = 0.0f
let mutable curSpeed = 1.0f
let acc = 0.5f
let turnSpeed = 4.0f

let update (frameTime : float32) =
  if isKeyDown Up then
    curSpeed <- curSpeed + acc
  elif isKeyDown Down then
    curSpeed <- max 0.0f (curSpeed - acc)

  if isKeyDown Left then
    direction <- direction - turnSpeed
  elif isKeyDown Right then
    direction <- direction + turnSpeed

  pos <- {x = pos.x + cos(direction) * curSpeed * frameTime;
          y = pos.y + direction * curSpeed * frameTime}

This code looks fairly reasonable, and it compiles and runs. It’s also riddled with mistakes, which we can eliminate through the use of units of measure. To start with, we need to define a few units:

type [<Measure>] second
type [<Measure>] pixel
type [<Measure>] radian

Units of measure allow easy tagging of numeric types with units. The units are really just arbitrary names: the compiler doesn’t know what second means, it just knows that second is distinct from radian. I think most people use single letters for brevity, but I prefer full names for clarity. You can use them to type a variable like so:

let mutable direction : float32<radian> = 0.0f

But actually this is insufficient; the compiler will complain that you are assigning 0.0f (a literal with no units) to direction, which has units. Much easier is to annotate only the literal and let type inference do the rest:

let mutable direction = 0.0f<radian>
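As a quick check that the units really are opaque tags, here is a tiny sketch (with a made-up metre unit, not part of the car example) showing the compiler keeping distinct measures apart while letting them combine through division:

```fsharp
type [<Measure>] second
type [<Measure>] metre

let t = 3.0f<second>
let d = 12.0f<metre>

// let oops = d + t          // compile error: metre and second don't match
let speed = d / t            // units divide through: float32<metre/second>
```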

Typing our initial variables is pretty straightforward. Positions are in pixels, and we know from school that speed is distance over time, acceleration is distance over time squared:

type Position = { x : float32<pixel>; y : float32<pixel>}

let mutable pos = {x = 0.0f<pixel>; y = 0.0f<pixel>}
let mutable direction = 0.0f<radian>
let mutable curSpeed = 1.0f<pixel/second>
let acc = 0.5f<pixel/second^2>
let turnSpeed = 4.0f<radian/second>
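These declarations compose under arithmetic: the compiler multiplies and cancels units for you. A small sketch (with made-up values) of the kind of inference you get for free:

```fsharp
type [<Measure>] second
type [<Measure>] pixel

let speed = 5.0f<pixel/second>   // pixels per second
let frameTime = 0.5f<second>     // duration of one frame

// second cancels against 1/second, leaving plain pixels:
let distance : float32<pixel> = speed * frameTime
```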

This alone is enough to show up all the errors in our code. Let’s start with:

  if isKeyDown Up then
    curSpeed <- curSpeed + acc

The units of curSpeed are pixel/second, and we’re trying to add acc[eleration], which has units pixel/second^2. This is a subtle bug. To explain: as you may know, frames in a game or simulation don’t run at completely fixed rates in practice. They vary between machines, because different machines can sustain different maximum frame rates, and they even vary on the same machine while the program is running, because other processes may interrupt to use the CPU. If I add acc to my speed every frame, then on a desktop doing roughly 60 frames per second (FPS) I’ll have a car that accelerates twice as fast as on a laptop doing 30 FPS, which would make for quite a different game. The units-of-measure system has pointed this bug out: we shouldn’t just add the acceleration, we need to multiply the acceleration by the frame duration:

  if isKeyDown Up then
    curSpeed <- curSpeed + acc * frameTime

The same type of bug occurs in a few other places, such as here:

  if isKeyDown Left then
    direction <- direction - turnSpeed

Similar to above, turnSpeed is in radians per second, but direction is in radians, so we must again multiply by frameTime.
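With that change the steering code type-checks, because radian/second times second leaves plain radians. A self-contained sketch of the fix (the key handling is stubbed into boolean parameters for brevity):

```fsharp
type [<Measure>] second
type [<Measure>] radian

let mutable direction = 0.0f<radian>
let turnSpeed = 4.0f<radian/second>

// radian/second * second = radian, so these assignments now type-check
let steer (frameTime : float32<second>) (leftDown : bool) (rightDown : bool) =
    if leftDown then
        direction <- direction - turnSpeed * frameTime
    elif rightDown then
        direction <- direction + turnSpeed * frameTime
```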

Standard Functions

Here’s another bug, where I have missed out a call to the sin function:

  pos <- {x = pos.x + cos(direction) * curSpeed * frameTime;
          y = pos.y + direction * curSpeed * frameTime}

Previously, the compiler was happy because all the variables were plain float32. Now, I’m trying to add pos.y [pixel] to direction * curSpeed * frameTime [radian*pixel]. Looking at it, the problem is the missing sin call. So we can add that in:

  pos <- {x = pos.x + cos(direction) * curSpeed * frameTime;
          y = pos.y + sin(direction) * curSpeed * frameTime}

However, we still get an error on these lines, complaining that we’ve attempted to pass float32<radian> to a function which takes plain float32. Hmmm. The problem here is that the standard library functions don’t have any units specified on their parameters or return values, so you cannot pass a quantity with units to any standard library function. This is a bit of an ugly effect of units of measure arriving after the language’s initial release. One fix is that each time you call sin/cos/tan/atan2/sqrt/oh-my-god-it-goes-on-and-on, you have to cast away the units, like so:

  pos <- {x = pos.x + cos(float32 direction) * curSpeed * frameTime;
          y = pos.y + sin(float32 direction) * curSpeed * frameTime}

Eugh. Firstly: this is annoying to have to do every time, especially if you use a particular standard library function a lot. Secondly: this is losing type safety. What if direction wasn’t in radians — which should be an error — but we’re masking the error by casting away the units? Much better is to add your own typed thin wrappers around the functions:

let cosr (t : float32<radian>) : float32 = cos (float32 t)
let sinr (t : float32<radian>) : float32 = sin (float32 t)

(Is there already an F# library that does this for all the standard numeric functions? Seems like it would be a sensible thing to do.) Now we can fix our code in a type-safe manner:

  pos <- {x = pos.x + cosr(direction) * curSpeed * frameTime;
          y = pos.y + sinr(direction) * curSpeed * frameTime}
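For completeness, here is one way the whole program might look with every fix applied. This is a sketch: isKeyDown is stubbed out so that it stands alone, and note that the 0.0f in the max call also needs units now:

```fsharp
type [<Measure>] second
type [<Measure>] pixel
type [<Measure>] radian

type Key = Up | Down | Left | Right
let isKeyDown (key : Key) : bool = false   // stub for this sketch

type Position = {x : float32<pixel>; y : float32<pixel>}

let mutable pos = {x = 0.0f<pixel>; y = 0.0f<pixel>}
let mutable direction = 0.0f<radian>
let mutable curSpeed = 1.0f<pixel/second>
let acc = 0.5f<pixel/second^2>
let turnSpeed = 4.0f<radian/second>

// Thin typed wrappers over the unit-unaware standard library functions
let cosr (t : float32<radian>) : float32 = cos (float32 t)
let sinr (t : float32<radian>) : float32 = sin (float32 t)

let update (frameTime : float32<second>) =
  if isKeyDown Up then
    curSpeed <- curSpeed + acc * frameTime
  elif isKeyDown Down then
    curSpeed <- max 0.0f<pixel/second> (curSpeed - acc * frameTime)

  if isKeyDown Left then
    direction <- direction - turnSpeed * frameTime
  elif isKeyDown Right then
    direction <- direction + turnSpeed * frameTime

  pos <- {x = pos.x + cosr(direction) * curSpeed * frameTime;
          y = pos.y + sinr(direction) * curSpeed * frameTime}
```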

You may question why you can’t pass a float32 to a function expecting float32<radian>. My understanding is that the designers chose float32 to represent a unitless quantity, which is not the same as unit-indifferent. Think atheist vs agnostic: float32 is not unit-agnostic, it’s unit-atheist. The output of my cosr function really is unitless, so it is specifically [unitless] float32, not [I was just too lazy to give a unit] float32. Even if you are using units of measure in your code, you will still have functions and variables that are unitless, and using units in their place is an error. For example, an acosr (i.e. inverse cosine) function would take a (unitless) float32 and give back a float32<radian>. It should, and would, be an error to pass in a float32<radian> as the parameter.
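A hypothetical acosr wrapper (my own naming, following the cosr/sinr pattern, not a library function) would multiply the unitless result of acos by 1.0f<radian> to reattach the unit:

```fsharp
type [<Measure>] radian

// Inverse cosine: takes a unitless ratio, returns an angle in radians.
// Passing a float32<radian> as the argument would (rightly) not compile.
let acosr (x : float32) : float32<radian> = acos x * 1.0f<radian>
```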

This also means units of measure don’t behave like subtypes: much of their advantage comes from the fact that you can’t interchange a plain float32 and a float32<whatever> without an explicit cast. They’re more like an easy way to clone numeric types.

Units of Measure: Summary

I really like units of measure. I’ve used types for a similar purpose in Haskell in the past, but the way that units of measure effortlessly wrap the existing numeric types makes them very easy to use in F#. If you are doing anything with arithmetic, not just geometry, units of measure can spot a lot of bugs and provide a lot of automatic safety. They really come into their own because they track units through arithmetic expressions. Units of measure are not just about avoiding confusing feet with metres; they’re more about not accidentally adding an office-space rent cost per day (pounds/metre^2/day) to a cost per day (pounds/day) without remembering to multiply the first one by the desired floor area. Think how much easier your secondary school maths/physics (and later calculus) would have been if you had had an automated system like this checking the units in all your equations.
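That rent example can be sketched directly (with made-up numbers): trying to add the rate to the daily cost without multiplying by a floor area simply won’t compile.

```fsharp
type [<Measure>] pound
type [<Measure>] metre
type [<Measure>] day

let rentRate = 2.0f<pound/metre^2/day>
let utilities = 15.0f<pound/day>
let floorArea = 40.0f<metre^2>

// let wrong = rentRate + utilities        // compile error: pound/metre^2/day vs pound/day
let dailyCost = rentRate * floorArea + utilities   // float32<pound/day>
```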

One final note: this post is more computing than education, but you may be interested in this previous post (and its comments) on the role of types in programming education.