Pedagogy of Programming Tools

If you want to teach programming, you have several decisions you need to make. You need to choose:

  • a programming language, such as Java, Python, Javascript, or Ruby,
  • a programming environment, which may be something like Notepad + command-line, or a full-blown IDE like Visual Studio,
  • a context, such as making games, media computation, website creation, robotics, and
  • a pedagogical approach, such as what you will teach, in what order, and using which activities.

Not everyone thinks about that last item in terms of explicitly deciding on a pedagogical approach. But as soon as you start making decisions such as: “what task do I start with?”, you are implicitly deciding. Do you start with “What is a variable?” or “Here’s how to print ‘Hello World'” or “This is the syntax of a function call”? Do you teach automated testing? Do you start with a blank program or start by modifying an existing program? You have always chosen a pedagogical approach, whether you realise it or not.

What’s interesting about the four items above is that they all interact with each other. The top three clearly so: you can’t write Java in IDLE, for example, and you may find your robot of choice doesn’t support Javascript. But the tool and the language you choose will affect the available pedagogy and vice versa. Programming tools are not pedagogy-neutral. Your tool determines which programming-related activities are easy and which are hard, which in turn will affect how you use the tool to teach.

Code tracing is a useful skill, but doing it in an environment with a debugger that shows variable values step-by-step is much easier than in Notepad + command-line. Parsons problems (where you drag bits of pre-written code into order) are easier in Scratch than in a text editor. BlueJ lets you call methods on objects via the context menu without writing any code, whereas an IDE like IntelliJ does not. It’s useful to understand which pedagogies your tool supports and which it makes difficult when making a choice.

In our latest Greenfoot Live video, my colleague Hamza and I sat down for half an hour to do some Greenfoot programming and talk about pedagogical strategies in Greenfoot: ways you can use it to teach, and what pedagogical approaches we have in mind when designing the tool. I’m quite pleased with how it turned out, and I think it’s worth watching:

Whether you agree with our particular pedagogical philosophies or not, next time you choose a programming language and tool, be aware of its impact on what teaching approaches and activities it can support well, and which activities it will make hard for you to engage in.

Frame-Based Editing: The Paper

Frame-based editing is our work which combines blocks and text-based programming into a single method of editing. Our frame-based language, Stride, is available for use right now in the public releases of both Greenfoot and BlueJ.

Now, our large paper on frame-based editing has been published. You can freely download the individual paper, or the whole special issue with several other interesting-looking papers on block-based programming.

This paper is the canonical description of our frame-based editing work, describing its features and our design choices. I’m also quite pleased with section 13, which tries to explain why structured editing failed to take off while block-based programming became a great success, despite the two being very similar concepts.

Thanks are due to John Maloney and Jens Moenig who have been very supportive of our work, as has the editor Franklyn Turbak, who did a very thorough job of editing the paper.

Greenfoot Scenarios Back Online

Many years ago, we created the ability for users to upload Greenfoot scenarios to our website, and play them from the website without requiring Greenfoot to be installed locally. This was a great selling point. If you were a learner creating games at home or at school, it allowed you to share the game with friends and family by just giving them a link to the online game.

However, the implementation used Java applets, which have turned out not to be secure. Over time, browsers have one by one dropped support for applets, and slowly it became harder and harder to run Greenfoot scenarios in a typical web browser. The only technology guaranteed to work in every browser is Javascript. Despite the names, Java and Javascript are not really related, so converting Java to Javascript is a very difficult technical challenge. But… my colleague Davin has been working on this, and in a surprisingly short time has got a Java to Javascript converter working for Greenfoot scenarios. It’s currently in beta testing, and details of how to run it are available on our forum.

The Marbles scenario is a good one to try. Remember that you need to press the Run button before playing. (Enlarging this button and making it easier to start playing is one of the items still on our todo list.) Then drag from the gold ball to fire it.

On desktops and laptops, this really just restores functionality that we had several years ago with applets. However, the new advantage of being in Javascript is that the scenarios on the website can now run on Android and iOS devices. You can’t develop the games on such devices, but once published to the website, they will be playable there (essentially, touch works like a mouse as far as the scenario is concerned). This should allow Greenfoot games to be shared widely with friends and family. Once it is finished, this will be available by default, but for now you need to follow our instructions to enable it.

Greenfoot Live

There are many aspects to making a learners’ programming tool successful. You obviously need the tool to be working and useful, but you also need material available on how to use it. Over the past year or two, our team has been very busy with a lot of technical implementation work: having made our Stride editor available, we’re now neck-deep in a rewrite of BlueJ’s GUI from Swing to JavaFX. So we have fallen a bit behind on our efforts to communicate how to teach with Greenfoot, and to engage with teachers who are using it. But this Monday we started a new initiative to try to rectify that: Greenfoot Live. Our plan is that every two weeks, on a Monday at 17:00 UK time (which is UTC+1 over the summer), we’re going to do a live stream of about 30-40 minutes talking about Greenfoot. If you can make it live, great; but if you can’t, you can still watch the recording afterwards.

In our first show, Michael and I cover how to display text on the screen in Greenfoot using the showText method, but we also discuss a few software design choices and encounter an exception along the way:
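For those who would rather skim than watch, here is a minimal sketch of the kind of call we demonstrate: Greenfoot’s showText method on World draws a string at the given cell coordinates. The class and the score-keeping example below are my own illustration, not the code from the video:

```java
import greenfoot.*;

// A minimal sketch (not the code from the video) of using World.showText:
// it draws the given string centred at the given cell coordinates,
// replacing any text previously shown at that position.
public class ScoreWorld extends World
{
    private int score = 0;

    public ScoreWorld()
    {
        super(600, 400, 1);   // 600x400 world with 1-pixel cells
        showText("Score: " + score, 80, 20);
    }

    public void addScore(int points)
    {
        score += points;
        showText("Score: " + score, 80, 20);   // redraw the label with the new value
    }
}
```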

The next live stream will be on Monday 22nd May at 16:00 UTC, available via our Greenfoot Youtube channel.

What Do Novice Programmers Write Literally?

Recently I was asked about use of hex and floating point literals (especially “E” notation) in the Blackbox data set: do beginners use them? I was intrigued enough to knock up a simple program to find out. My method is quite straightforward: I take the latest version of each source file which successfully compiled, run it through a Java lexer and pick out the literals. This gives us about 40 million source files to look at.
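For the curious, the gist of it looks something like the sketch below. This is not the actual analysis code: the real thing ran a proper Java lexer over the Blackbox data, whereas this crude regex version (all names are my own) just illustrates the idea of picking out number-like literals and counting them:

```java
import java.nio.file.*;
import java.util.*;
import java.util.regex.*;

public class LiteralCount
{
    // Crude pattern: hex literals, or decimal literals with an optional
    // fraction and exponent. A real lexer would also skip comments,
    // strings and identifiers, which this does not.
    private static final Pattern LITERAL = Pattern.compile(
        "0[xX][0-9a-fA-F_]+|\\d[\\d_]*(?:\\.[\\d_]*)?(?:[eE][+-]?\\d+)?");

    public static void main(String[] args) throws Exception
    {
        Map<String, Integer> freq = new HashMap<>();
        for (String fileName : args)
        {
            String source = new String(Files.readAllBytes(Paths.get(fileName)));
            Matcher m = LITERAL.matcher(source);
            while (m.find())
                freq.merge(m.group(), 1, Integer::sum);
        }
        // Print the twenty most frequent literals, most frequent first:
        freq.entrySet().stream()
            .sorted(Map.Entry.<String, Integer>comparingByValue().reversed())
            .limit(20)
            .forEach(e -> System.out.println(e.getKey() + ": " + e.getValue()));
    }
}
```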

Before we get into the results, here are some predictions I made beforehand about our data (where most users are assumed to be programming novices):

  • Very few users use hex literals
  • Most hex literals are 0xFF or similar bitmasks
  • Almost no-one uses underscores (Java lets you write numbers with underscores, e.g. 1_000_000)
  • Almost no-one uses E notation (and when they do, mainly 1e-6 for epsilon values in floating point comparison)
  • Most floating point values are between 0 and 1

Hexadecimal Integers

Let’s start with hexadecimal integer literals. There were 814,920 hex integer literals, compared to 29,044,559 decimal integer literals, so about 2.7% of integer literals were written in hex. (I didn’t bother going into octal, but there were a handful of uses; I suspect many of those were an accident.) That is a bit higher than I was expecting, admittedly. In terms of their value, here are the top five:

  • 0xFF: frequency 89,663
  • 0x0: frequency 52,732
  • 0x30: frequency 16,742
  • 0xF: frequency 16,009
  • 0x1: frequency 13,799

There are two F bitmask values there as predicted. I was a bit surprised by how many zeroes and ones were in there: why write them as hex (0x0) and not just decimal (0)? My guess is that they are working with bitmasks nearby, and out of habit/consistency write the values as hex.
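That habit makes sense if you picture the classic byte-extraction idiom; the example below is my own illustration, not code from the data set:

```java
public class BitmaskExample
{
    public static void main(String[] args)
    {
        int pixel = 0xFF00FF;               // a packed RGB colour value
        int red   = (pixel >> 16) & 0xFF;   // 0xFF keeps only the low 8 bits
        int green = (pixel >> 8) & 0xFF;
        int blue  = pixel & 0xFF;
        System.out.printf("r=%d g=%d b=%d%n", red, green, blue);   // r=255 g=0 b=255
    }
}
```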

Decimal Integers

There’s not too much to say about decimal integer literals, but I will mention the most frequent items. It’s a sequence that runs as you might expect (zero being most frequent, and increasing numbers being less frequent), punctuated by some numbers which testify to computing’s love of powers of two. Most frequent first:

0, 1, 2, 255, 128, 3, 4, 5, 256, 8, 10, 7, 6, 100, 127, 16, 20, 1000, 9, 50

The frequency is a decreasing power law (1 is half that of 0, 2 is half that of 1, then the tail begins to flatten out).

Underscores

Underscores are a relatively recent addition to Java (added in Java 7) and little-known. Indeed, only 692 decimal literals had underscores: 0.002% of all decimal literals. Oddly, 737 hex literals had underscores, which as a proportion is much higher: 0.09%. I suspect this is because underscores and hex literals are both used by more advanced users. Generally, though, our users are clearly not making much use of this underscore feature.
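For reference, the feature looks like this (my own example); the hex form hints at why bitmask-writing users might be the ones reaching for it:

```java
public class UnderscoreLiterals
{
    public static void main(String[] args)
    {
        int million = 1_000_000;        // same value as 1000000, just easier to read
        long mask   = 0xFF_FF_00_00L;   // underscores group the hex digits into bytes
        System.out.println(million + " " + mask);   // 1000000 4294901760
    }
}
```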

Decimal Floating Point

There were 1,791,915 floating point decimal literals. Of these, only 3,002 used the “E” notation (e.g. 1.15E12): 0.16%. Clearly not a widely used feature. As for their values, the top five were: 1e-3, 1e-8, 1e-6, 1e6, 1e-20. I’d say my prediction about the use for epsilon values was borne out.
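Those values fit the usual tolerance-based comparison idiom for doubles; the sketch below is my own, not code from the data:

```java
public class EpsilonCompare
{
    // The kind of code that produces literals like 1e-6: comparing doubles
    // for "equality" within a small tolerance rather than with ==.
    private static final double EPSILON = 1e-6;

    public static boolean approxEqual(double a, double b)
    {
        return Math.abs(a - b) < EPSILON;
    }

    public static void main(String[] args)
    {
        System.out.println(approxEqual(0.1 + 0.2, 0.3));  // true, even though 0.1+0.2 != 0.3 exactly
    }
}
```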

Regardless of notation, across all floating point decimal literals, the most frequent values were: 1.0, 0.0, 100.0, 3.0, 2.0. Technically, my prediction that most values were between zero and one was almost correct: 47% of values were between zero and one. But really, this is only because 23% of them were exactly zero or one. As a last side note on these literals: 7,130 (0.40%) started with a dot (e.g. “.5”), something we disallowed in Stride due to the awkwardness of parsing it in expressions. But actually we could have banned E notation (also a pain) with less immediate impact.

Hexadecimal Floating Point

If you even knew that hexadecimal floating point notation was a thing in Java, then give yourself a pat on the back. Added in Java 5, these literals look like “0x1.fe2p5”, where p takes the place of the usual “E” because E is of course a valid hex character. I only know about this because we have a parser in BlueJ which accepts these. I found precisely four uses of this notation, which is probably more than you might expect.
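To unpack that example value: the digits after the point are hexadecimal, and p introduces a power of two rather than a power of ten.

```java
public class HexFloat
{
    public static void main(String[] args)
    {
        // 0x1.fe2p5 means (1 + 0xFE2/0x1000) * 2^5 = 1.99267578125 * 32
        double d = 0x1.fe2p5;
        System.out.println(d);   // prints 63.765625
    }
}
```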

Limitations

This is a pretty cursory look at literals with a fairly crude methodology. Note that although we only looked at the latest version of each source file, source files in Blackbox are not independent of each other (e.g. if a teacher gives out a project with a floating point literal, that will show up identically in each student’s copy). For example, the four hex floating point literals all had the same value, suggesting they are not independent. And on a related note, I’ve only looked at source files regardless of whether they come from the same user or not, so we’re only measuring source occurrences here, not the number of users who use a particular notation. But I think our N is high enough that individual users cannot tilt the statistics.

Naming abstractions

Naming things is important: names permit more precise and concise communication. We don’t say “the movable clickable pointer controller box”, we say “mouse”. In computing we face the challenge of naming a lot of things, both physical and virtual. Some names are intended to be useful analogies: files or documents are like their physical equivalents. This can quickly get quite tenuous: scrolling is named after the action needed to read a historical scroll, even though most people have never scrolled a physical roll of paper, and the mouse was named for its tail.

Computing has been surprisingly effective at digging up and re-using words which pre-date computing, and vastly increasing their use:

(What was delete used for before computing, I wonder? Ledgers?)

Naming difficulties

Sometimes we can find useful words to refer to computing concepts, even as the concepts become more abstract. A sequence of items is a list. A group without duplicates is a set. A list/set is a collection. That’s not too bad. It gets harder when you need a name for a data structure which associates one value with another. We usually talk about keys and values, and variously call this collection an “associative array”, “dictionary” or “map”. It’s clear that there is no useful existing word to borrow which carries the right meaning, so we must just pick one and collectively learn what it means.

A list is a type, as is a string or integer. But a list can have different inner types, so we want a different name for that: a polymorphic type. But a list (which has one inner type) is different from a map (which has two), and so we need a name to talk about different types of types. For this, type theory refers to kinds: the arity of types, where arity is the name for the number of parameters that something takes. If your brain is creaking at this point: is that because type, kind and arity are poorly chosen words, or just because the concepts themselves are abstract and difficult? Not to mention all the other abstractly-named programming terms: class, interface, protocol, metaclass, polymorphism — the list goes on.
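In Java terms (an example of my own), the difference in arity is simply the number of type parameters a generic type takes:

```java
import java.util.*;

public class TypeArity
{
    public static void main(String[] args)
    {
        // List takes one type parameter (one "inner type")...
        List<String> names = new ArrayList<>();
        names.add("Ada");

        // ...while Map takes two: one for keys and one for values.
        Map<String, Integer> ages = new HashMap<>();
        ages.put("Ada", 36);

        System.out.println(names + " " + ages);   // [Ada] {Ada=36}
    }
}
```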

I think people struggling to learn something with a non-obvious name (i.e. most programming concepts!) often make a false assumption: that the concept is difficult to learn because the name is unhelpful. (And thus if it was just named better, it would be easier to understand.)

Names can be helpful if they are descriptive (e.g. a network star topology) or transfer intuition (e.g. memory in computers). But intuition is not always available for abstract concepts, such as classes in object-oriented programming. As we get progressively more abstract, names aren’t going to save you from the difficulty of understanding an abstraction: the name isn’t the problem, the concept is.

Living Computing

SIGCSE this year was held in Seattle. It turns out there is a computer museum in Seattle, not too far south of the downtown area, called the Living Computer Museum. The key distinguishing feature is that almost all of the computers there are actually running, and usable by visitors to the museum. Sue Sentance, Ian Utting and I headed off to take a look.

The ground floor is filled with modern gadgets and sensors which were all too familiar after spending the week at a computing education conference: a real busman’s holiday. But on the first floor were all the vintage computers, which were much more interesting. There is an oscilloscope which is running Tennis for Two, considered to be one of the first computer games:

This typifies what is great about the museum: it uses original hardware, not emulation, and is usable by visitors: I was playing the game with Ian while recording the video.

Historic Computers

If there’s one lesson I’ve learned from computing museums, it’s that you either die by age 25, or live long enough to see your childhood computers in a museum. I was amused to find an Amstrad in the museum, given that it’s a British manufacturer. The Amstrad was a fairly cheap machine, and the company’s head Alan Sugar has always had a bit of a reputation as a producer of low-quality goods. Hence my amusement at this:

After a moment or two I worked out how to reset the machine:

And then I found Sim City in that box of 5.25″ floppy disks. I’m not sure I expected to ever use a floppy disk again! I first accidentally ejected the floppy with the OS on it, but unlike USB sticks, which complain immediately if you yank them out, older PCs didn’t even notice, as long as you didn’t do anything to access the disk while it was out. So I managed to load Sim City:

To explain: older games used to prevent copying by having special codes in the manual which you had to enter on load. To avoid being defeated by photocopies, they were often printed in yellow-on-white or black-on-black, and/or spread throughout the manual so you’d have to copy the whole manual. I did google on my phone, but at the time I couldn’t find the relevant info (now found, for the curious).

Interface Evolution

There is a card-punching machine in the museum on which you can punch your own cards. I’m a bit too young to have done any card-punching myself, but it’s quite an experience. You can’t see the last 1-2 characters you typed so it’s easy to get lost punching even simple sequences. There’s no backspace/undo, of course. And the keyboard itself is a marvel of bizarre design:

Punch cards are quaint curiosities now, but what a bloody awful way to program. 10x programmers may not exist, but we are all 100x programmers compared to the punch card era.

I also found a computer running Windows 1.0, which I’d never used before (3.1 is where I came in). The basic ideas of GUIs are there, but it’s horribly slow and clunky:

Summary

If you’re ever in Seattle, I highly recommend visiting the Living Computer Museum. It’s not a huge place, but it is my favourite of the computing museums I’ve been to, for the simple fact of being able to use the computers. It’s something I’ve mentioned before when discussing computing research: I don’t think you can truly understand an interface without using it. You can’t appreciate how clunky Windows 1.0 was until you drag the horrible ball mouse across a mouse mat, have to hold and drag to access menus, and mess around with monochrome Paint.

Finally, I’ll end with this, as a thought on how computing has evolved (the scale isn’t apparent here, but it’s about a metre tall and a couple of metres wide):