Frame-Based Editing: Easing the Transition from Blocks to Text-Based Programming

The Workshop in Primary and Secondary Computing Education (WiPSCE) 2015 was held this week at King’s College London. My colleagues Michael Kölling and Amjad Altadmri presented our paper, entitled “Frame-Based Editing: Easing the Transition from Blocks to Text-Based Programming”. The paper is now available freely online.

The gist of the paper is as follows. Blocks-based programming is usually used to teach programming to younger age groups, while text-based programming is used for older age groups. This creates a necessary transition period in between, during which learners must move from blocks- to text-based programming, and teachers’ experience shows that this transition can be difficult. The paper first enumerates the issues involved in making this transition, and then discusses how frame-based editing (as now available in the latest Greenfoot release) divides these issues into two smaller sets, and thus may make a suitable stepping stone between blocks and text — see the diagram below, or the paper for a larger version and all the details. If you’re interested, take a read and/or try the software.


Filed under Stride

Greenfoot: Frames, Pi, and Events

Lots of positive Greenfoot developments recently, which seemed worth summarising. Firstly, we released preview2 of Greenfoot 3.0.0, the new version of Greenfoot that includes our frame-based editor (a hybrid of block- and text-based programming). The download of 3.0.0-preview2 is now available. Our intention is that this is a feature-complete preview; we’re now just fixing bugs and optimising ahead of the final release (likely in November or so).

Our other big development is that BlueJ and Greenfoot are now included in the default image of the official Raspbian Linux distribution for Raspberry Pi. A few weeks ago we made it into the main repository, which means that on any Raspbian install you can run: sudo apt-get update && sudo apt-get install bluej greenfoot and have both tools available in the Programming menu in the GUI. Being included in the default image means that anyone installing Raspbian from now on will automatically have BlueJ and Greenfoot installed. We’re really pleased, of course, to have our software so easily available. Thanks must go to the Raspberry Pi folks who supported this move, and to my colleague Fabio Hedayioglu, who worked on some optimisations to improve our performance on the Raspberry Pi. BlueJ in particular comes bundled with libraries for interfacing with hardware via the GPIO pins.

We also have a set of events coming up which packs all my travel for the second half of the year into three weeks. I will be attending the Blocks and Beyond workshop on 22 October in Atlanta, which leads into a trip to the JavaOne conference in San Francisco where we will be teaching Greenfoot at a kids event (open to the public) on 24 October and talking about Greenfoot 3 on 26 October in the main conference.

After that there is the CAS Scotland conference on 7 November in Dundee where I’ll be doing a session on Greenfoot 3, and we will also be at the WiPSCE conference in London on 9–11 November, presenting our paper “Frame-Based Editing: Easing the Transition from Blocks to Text-Based Programming” on the Tuesday — when there is a cheaper day rate available for teachers. If you’re a UK teacher, it’s well worth considering attending if you can get out of school that day.


Filed under Uncategorized

Improving Blocks with Keyboard Support

The Blocks and Beyond workshop will take place in Atlanta on October 22. It’s shaping up to be an interesting workshop, based around discussion of the submitted papers. Our own contribution is a position paper, entitled “Lack of Keyboard Support Cripples Block-Based Programming”. It’s a brief summary of the design philosophy which has informed our design of Greenfoot 3 (which we’re busy working on): the mouse-centric nature of existing block-based programming systems (Scratch, Snap!, etc) hampers their ability to scale up to larger programs, which in turn puts a ceiling on their use. Our gambit is that if we can remove this restriction, there might no longer be any reason to switch back to using text-based programming languages.

The Greenfoot 3 Editor — public preview available

The paper (accepted version, copyright IEEE) is freely available, and only 3 pages, so rather than reproduce the full argument here, I suggest you take a look at the paper — and if you want to discuss it, or see similar work, then why not attend Blocks and Beyond?


Filed under Uncategorized

Better Mathematics Through Types

I’m a huge fan of strong static types; it’s one of the reasons I like Haskell so much. Recently I’ve been toying with some F#, which has a typing feature I’m really enjoying: units of measure. I want to explain a few examples of where units of measure can prevent bugs. To do this, let’s start off with some simple F# code for a car moving around the screen. It has a direction and a speed, and you can steer and speed up/slow down using the keys. Here’s some basic code to do this:

type Key = Up | Down | Left | Right
let isKeyDown (key : Key) : bool = ...

type Position = {x : float32; y : float32}

let mutable pos = {x = 0.0f; y = 0.0f}
let mutable direction = 0.0f
let mutable curSpeed = 1.0f
let acc = 0.5f
let turnSpeed = 4.0f

let update (frameTime : float32) =
  if isKeyDown Up then
    curSpeed <- curSpeed + acc
  elif isKeyDown Down then
    curSpeed <- max 0.0f (curSpeed - acc)

  if isKeyDown Left then
    direction <- direction - turnSpeed
  elif isKeyDown Right then
    direction <- direction + turnSpeed

  pos <- {x = pos.x + cos(direction) * curSpeed * frameTime;
          y = pos.y + direction * curSpeed * frameTime}

This code looks fairly reasonable, and it compiles and runs. It’s also riddled with mistakes, which we can eliminate through the use of units of measure. To start with, we need to define a few units:

type [<Measure>] second
type [<Measure>] pixel
type [<Measure>] radian

Units of measure allow easy tagging of numeric types with units. The units are really just arbitrary names: the compiler doesn’t know what second means, it just knows that second is distinct from radian. I think most people use single letters for brevity, but I prefer full names for clarity. You can use them to type a variable like so:

let mutable direction : float32<radian> = 0.0f

But actually this is insufficient; the compiler will complain that you are assigning 0.0f (a literal with no units) to direction, which has units. Much easier is to annotate just the literal and let type inference do the rest:

let mutable direction = 0.0f<radian>

Typing our initial variables is pretty straightforward. Positions are in pixels, and we know from school that speed is distance over time, acceleration is distance over time squared:

type Position = { x : float32<pixel>; y : float32<pixel>}

let mutable pos = {x = 0.0f<pixel>; y = 0.0f<pixel>}
let mutable direction = 0.0f<radian>
let mutable curSpeed = 1.0f<pixel/second>
let acc = 0.5f<pixel/second^2>
let turnSpeed = 4.0f<radian/second>

This alone is enough to show up all the errors in our code. Let’s start with:

  if isKeyDown Up then
    curSpeed <- curSpeed + acc

The units of curSpeed are pixel/second, and we’re trying to add acc[eleration], which has units pixel/second^2. This is a subtle bug. To explain: as you may know, frames in a game or simulation don’t run at completely fixed rates in practice. They tend to vary between machines, because different machines can sustain different maximum frame rates, and they even vary on the same machine while the program is running, because other processes may interrupt to use the CPU. If I add acc to my speed every frame, then on a desktop doing roughly 60 frames per second (FPS) I’ll have a car that accelerates twice as fast as on a laptop doing 30 FPS, which would make for quite a different game. The units of measure system has pointed this bug out: we shouldn’t just add the acceleration, we need to multiply it by the frame duration:

  if isKeyDown Up then
    curSpeed <- curSpeed + acc * frameTime

The same type of bug occurs in a few other places, such as here:

  if isKeyDown Left then
    direction <- direction - turnSpeed

As above, turnSpeed is in radians per second, but direction is in radians, so we must again multiply by frameTime.
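For completeness, the corrected steering code would look something like this (assuming frameTime is now typed as float32<second>, so that radian/second multiplied by second gives radian):

  if isKeyDown Left then
    direction <- direction - turnSpeed * frameTime
  elif isKeyDown Right then
    direction <- direction + turnSpeed * frameTime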

Standard Functions

Here’s another bug, where I have missed out a call to the sin function:

  pos <- {x = pos.x + cos(direction) * curSpeed * frameTime;
          y = pos.y + direction * curSpeed * frameTime}

Previously, the compiler was happy because all the variables were plain float32. Now, I’m trying to add pos.y [pixel] to direction * curSpeed * frameTime [radian*pixel]. Looking at it, the problem is the missing sin call. So we can add that in:

  pos <- {x = pos.x + cos(direction) * curSpeed * frameTime;
          y = pos.y + sin(direction) * curSpeed * frameTime}

However, we still get an error on these lines, complaining that we’ve attempted to pass float32<radian> to a function which takes plain float32. Hmmm. The problem here is that the standard library functions don’t have any units specified on their parameters or return values, so you cannot pass a quantity with units to any of them. This is a bit of an ugly effect of units of measure arriving after the language’s initial release. One fix is that each time you call sin/cos/tan/atan2/sqrt/oh-my-god-it-goes-on-and-on, you cast away the units, like so:

  pos <- {x = pos.x + cos(float32 direction) * curSpeed * frameTime;
          y = pos.y + sin(float32 direction) * curSpeed * frameTime}

Eugh. Firstly: this is annoying to have to do every time, especially if you use a particular standard library function a lot. Secondly: it loses type safety. What if direction wasn’t in radians? That should be an error, but we are masking it by casting away the units. Much better is to add your own thin, typed wrappers around the functions:

let cosr (t : float32<radian>) : float32 = cos (float32 t)
let sinr (t : float32<radian>) : float32 = sin (float32 t)

(Is there already an F# library that does this for all the standard numeric functions? Seems like it would be a sensible thing to do.) Now we can fix our code in a type-safe manner:

  pos <- {x = pos.x + cosr(direction) * curSpeed * frameTime;
          y = pos.y + sinr(direction) * curSpeed * frameTime}

You may question why you can’t pass a float32 to a function expecting float32<radian>. My understanding is that the designers chose float32 to represent a unitless measure, which is not the same as unit-indifferent. Think atheist vs agnostic: float32 is not unit-agnostic, it’s unit-atheist. The output of my cosr function really is unitless, so it is specifically [unitless] float32, not [I was just too lazy to give a unit] float32. Even if you are using units of measure in your code, you will still have functions and variables that are unitless, and using units in their place is an error. For example, an acosr (i.e. inverse cosine) function would take a (unitless) float32 and give back a float32<radian>. It should, and would, be an error to pass in a float32<radian> as the parameter.
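A minimal sketch of such a wrapper, multiplying by a unit constant to attach the units to the result:

let acosr (x : float32) : float32<radian> = acos x * 1.0f<radian>

let angle = acosr 0.5f   // roughly 1.047, with type float32<radian>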

This also means units of measure aren’t subtypes: much of their advantage comes from the fact that you can’t substitute a plain float32 for a float32<whatever> (or vice versa) without an explicit cast. They’re more like an easy way to clone numeric types.

Units of Measure: Summary

I really like units of measure. I’ve used types for a similar purpose in Haskell in the past, but the way that units of measure effortlessly wrap the existing numerical types makes them very easy to use in F#. If you are doing anything with arithmetic, not just geometry, units of measure can spot a lot of bugs and provide a lot of automatic safety, because they track units through arithmetic expressions. They are not just about avoiding confusing feet with metres; it’s more about not accidentally adding an office space rent cost per day (pounds/metre^2/day) to a cost per day (pounds/day) without remembering to multiply the first one by the desired floor area. Think how much easier your secondary school maths/physics (and later calculus) would have been if you had had an automated system like this checking the units in all your equations.
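To make the rent example concrete, here is a small sketch; the unit names pound, metre and day are just illustrative:

type [<Measure>] pound
type [<Measure>] metre
type [<Measure>] day

let rentPerDay = 2.5f<pound / (metre^2 * day)>     // rent per square metre per day
let staffCostPerDay = 800.0f<pound / day>
let floorArea = 120.0f<metre^2>

// let wrong = rentPerDay + staffCostPerDay                    // compile error: the units don't match
let totalPerDay = rentPerDay * floorArea + staffCostPerDay     // float32<pound/day>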

One final note: this post is more computing than education, but you may be interested in this previous post (and its comments) on the role of types in programming education.


Filed under Uncategorized

Thoughts on F Sharp

F# is an attempt to bring a functional programming language (based on ML) to the .NET framework. It seems to have Microsoft blessing, with support out of the box in Visual Studio 2015 (now free for personal use). I thought I’d give some initial impressions of the language, mainly coming from a Haskell perspective. I am happy to be corrected, as I’m still quite new to the language.

I quite like F#, but the main impression I get is that whatever I want to do, there are two or more separate ways to do it, and no obvious way to choose between them. F# is, very clearly, the result of stitching together two programming systems: the Object-Oriented Programming [OOP] of .NET and the Functional Programming [FP] of the ML language. Much like Java post-generics (with its “int” and “Integer”) or Windows 8 with its tablets-meets-desktop interface, systems like this often have quite glaring joins. And especially if you come from neither a .NET nor an ML background, you are just left a bit bamboozled as to which you should be using. Where there is a reasonable set of rules to choose between int and Integer in Java, in F# it often just seems to come down to preference or convention. This multiplicity of approaches is especially evident in the type system.


Let’s start with lists. You can specify a list’s type in F# using the postfix syntax “int list” or the prefix syntax “List<int>”. Both mean the same thing and there’s no obvious reason to choose one or the other (this StackOverflow answer mentions a suggestion to use prefix for all except four types, but with no clear justification). Lists have some properties which can be accessed directly, like “myList.Head”. But there’s also a function you can call using “List.head myList” for the same purpose. The property is probably an OOP thing, the function is FP, but as a programmer who just wants to write some code, what’s the difference? The dividing line seems to vary depending on the origin of what you’re using: a C# class is going to have lots of member functions, an F# library will have lots of external functions, and your code will seemingly always end up with a mess of the two. When writing your own code, it’s often not clear which way you should lean, either.
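A small sketch of the kind of duplication I mean (both type annotations below name the same list type, and both expressions fetch the same head element):

let myList : int list = [1; 2; 3]       // postfix type syntax
let sameList : List<int> = [1; 2; 3]    // prefix type syntax, same type

let h1 = myList.Head         // property access, the OOP style
let h2 = List.head myList    // module function, the FP style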

The Class System

F# allows you to write classes which extend other .NET classes, which makes for a potentially great marriage of functional programming with existing [OOP] .NET libraries. In the classes you define in F#, there are multiple ways to define fields:

type MyClass2() =
  let readOnlyPrivateValue = 1
  let mutable readWritePrivateValue = 2
  [<DefaultValue>] val mutable private readWritePrivateValueNoInit : int
  [<DefaultValue>] val mutable readWritePublicValueNoInit : int
  member this.ReadOnlyPublicValue = 5
  member private this.ReadOnlyPrivateValue = 6
  member val ReadWritePublicValue = 7 with get, set
  member val private ReadWritePrivateValue = 8 with get, set

This StackOverflow answer suggests when to use each, but I was still left puzzled. I’m slowly getting the hang of it: let when you have an initial value, val when you don’t, mutable when you want to change it, but I’m not much wiser as to when to use member vs let/val — are they semantically equivalent if I’m not customising the get/set? For example, I found this code in another StackOverflow answer which does a “memberwise clone”. Will that clone my “let mutable” fields or just my “member” fields?

To define a member method you use a similar member syntax, but with brackets for any parameters. You also need an object reference, which you can see is also required for some field declarations but not others. It took me a while to understand the syntax:

type MyClass() =
  member MyClass.GetHundredA() = 100
  member x.GetHundredB() = 100
  member y.GetHundredC() = 100
  member whateveryoulikeiguess.GetHundredD() = 100

I thought the part before the dot was some reference to the class I was making the method for, hence I ended up using the class name (like the first member, above) because that made sense to me, and it compiled fine (not sure it should!). I saw people using “x” instead and thought it was a keyword. It turns out the first part, before the dot, is a name declaration for the “this” concept. So in the last method above, “whateveryoulikeiguess” defines a name to refer to what you would use “this” for in Java or C++. I’m not sure yet why they didn’t just swallow a keyword and always use “this”, and it still strikes me as a pretty weird syntax.

Data kinds

There are many ways to define a data structure with a couple of members. If you want an int and string value pair, you could use:

type aPair = int * string
type anADT = Combo of int * string
type aRecord = {theInt : int; theString : string}
type aClass(i : int, s : string) =
  member this.TheInt = i
  member this.TheString = s
type aStruct =
  val theInt : int
  val theString : string
  new (i, s) = { theInt = i; theString = s }   // explicit constructor to initialise the val fields

Quite the variety! Sometimes there’s a good reason to choose one or the other, but quite often you just stick your finger in the air and guess. (I tend towards the second one because it’s what I’d use in Haskell, but I suspect the third is the best choice.) Suddenly Java with its everything-is-a-class philosophy seems sensibly restrained.

Ignorance is Dangerous Bliss

F# has a feature shared with Haskell where it tries to warn you if you are discarding a return value. Let’s say you have some function to write a file, which takes a path, some content and returns a boolean for success:

// F#:
let writeFile (path : string) (content : string) : bool = ...
-- Haskell:
writeFile :: String -> String -> IO Bool

If you call this function without assigning the return value or otherwise using it, F# and Haskell will both give you a warning (I think Haskell may require a compiler flag to turn the warning on) that the return value is discarded:

// F#, gives warning:
writeFile "C:/log.txt" "Bad stuff happened" 
-- Haskell, gives warning:
writeFile "C:/log.txt" "Bad stuff happened" 

You can suppress this using the ignore/void functions respectively:

// F#, no warning:
ignore (writeFile "C:/log.txt" "Bad stuff happened")
-- Haskell, no warning:
void (writeFile "C:/log.txt" "Bad stuff happened")

But if you mess up and miss out a parameter, you get quite different results in each language:

// F#, missed parameter: compiles fine, no warning:
ignore (writeFile "C:/log.txt")
-- Haskell, missed parameter: compiler type error:
void (writeFile "C:/log.txt") 

I’ll avoid an in-depth discussion of type systems, but Haskell’s type system (in particular, monads) saves you from the mistake here, while F# happily ignores the return value, which in this case is a partially applied function that does nothing. Lesson: use ignore with great care!
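One partial safeguard might be to instantiate ignore’s type argument explicitly, so the compiler still checks what is being thrown away; a small sketch:

// Explicitly instantiate ignore at bool: only a bool result may now be discarded.
ignore<bool> (writeFile "C:/log.txt" "Bad stuff happened")
// ignore<bool> (writeFile "C:/log.txt")   // compile error: the argument is string -> bool, not bool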

Misc Notes

A few more brief notes:

  • I know this one comes from ML, but whoever came up with expressing a triple type as “int * string * float” needs to be put back in the box with all the other mathematicians. Haskell’s equivalent syntax “(Int, String, Float)” is much nicer and easier to remember.
  • I got caught out at one point by bracketing pattern matches. I had a type “type Position = Position of (int * int)”. I wanted to pull the values out using:

    let Position (x, y) = calculatePos()
    printfn "Position: %d, %d" x y
    // Error above: x and y not found

    I’m not sure what F# thinks that code is doing, but it doesn’t give a compile error on the let binding. Turns out you need more brackets to achieve what I was aiming for:

    let (Position (x, y)) = calculatePos()
    printfn "Position: %d, %d" x y
  • I remember, many years back, when I learnt Java after using C++, it was very refreshing to not worry about declaration locations. No header files, no needing to have forward declarations of classes before using them — you just put each type in its own file and off you go. Haskell cannot have cycles in its module dependencies; if module A imports B then B cannot import A, even indirectly — but within each module, declaration order doesn’t matter. Thus it felt like a bit of a backwards step in F# to start worrying about declaration order within a file; you can’t use any function or value before you have declared it in the file.
  • So far, I find writing F# unexpectedly similar to writing modern Java. A lot of the streams or lambda code I’d now write in Java 8 is very similar to what I find myself writing in F# with Seq and fun. If Java were to add algebraic data types (aka discriminated unions) with pattern matching, and syntactic sugar for currying, the cores of the two languages don’t seem like they would be that far apart. I guess both are part of the recent convergence/smashing together of OOP and FP, but coming at the problem from different sides.

The Good

I’ve mainly talked about the bad and the ugly instead of the good here, but I do actually like F#! There are several aspects that improve on Haskell. The convention of using the |> operator is good, meaning you write “source |> operation1 |> operation2 …”, which I find more readable than Haskell’s conventional “operation2 . operation1 $ source”. Similarly, I prefer <<, or more often >>, for function composition rather than “.”. Records are nice and easy to use; if you know Haskell, you’ll understand what a relief writing myRecord.myField is. Similarly, the ability to stick printf/trace statements anywhere in your code is very welcome.
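As a small illustration of why I find the pipeline style more readable, here is roughly the same computation in both styles (the Haskell version is shown as a comment for comparison):

let evenSquares = [1 .. 10] |> List.filter (fun x -> x % 2 = 0) |> List.map (fun x -> x * x)
// Roughly the Haskell equivalent, read right to left:
//   map (^2) . filter even $ [1 .. 10]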

I’d still take Haskell over F# as a functional language given a free choice (for its purity and type system), but as a way to bridge the gap between FP and OOP (with a recognised VM and thus lots of libraries behind it), F# is a good language. I’ve complained a lot about the joins in the FP/.NET aspects here, but with most of them I can see the language designers’ quandaries which led to the compromise. I suspect you can’t get much better than this when trying to bolt FP on to an OOP framework and still retain easy interoperability. I should look at Scala again at some point as a comparison, but at least in .NET land, F# seems like a good choice of language, if you can just find a set of conventions that keep your code nice and tidy when you so often have multiple approaches to choose from.


Filed under Uncategorized

Flash Boys

Flash Boys is a book by Michael Lewis about high frequency trading and exploitation of the [US] stock market. Lewis writes great books, and seems to have a knack for finding interesting people to write about in any given scenario. I’d recommend all the other books of his I’ve read, especially Moneyball and The Big Short. He usually writes about finance, but Flash Boys is about the intersection of finance and computing, which is why I’m highlighting it here. From my point of view, Flash Boys is about what happens when a human system becomes an algorithmic computer system.

Way back when, the stock market was a totally human process, with clients instructing stockbrokers to buy or sell, and deals carried out between people on the trading floor. Gradually, technology was used to support this process, until now the transactions are carried out purely electronically, with stockbroker machines passing orders to the stock exchange machine, which matches buyers with sellers to make transactions. This has various consequences, covered in Flash Boys, which I want to highlight here.

Too Fast For Humans

The market now moves so fast that there is no opportunity for microscopic human oversight. Machines are programmed to execute client orders, or to use an algorithm to try to make profit in the market, but with microseconds (or less) per trade, even the machine’s controllers are at the mercy of its actions. Knight Capital famously lost 440 million dollars in 45 minutes due to computer error, and that could be seen as a generously long time over which the error occurred. With little opportunity to correct matters after the fact (unless you have sufficient influence to get the stock exchange to rollback your mistake), it is increasingly important for your machines and algorithms to be correct.

Having said that, one interesting detail is that on a microscopic level, the market is actually very inactive. As I understand it, the machines don’t generally sit there trading continuously in stocks against each other. The market settles to an equilibrium, and it’s only when new information is received that it makes sense to trade. Thus the activity is still driven by human input: when a buyer wants to buy, probably because a person somewhere has issued an order, the trading activity kicks in before (see speed, below) and after the human-triggered order, then stops.

Speed Matters…

One of the themes in Flash Boys is how being the fastest earns you money. It is less about being fast to access a single stock exchange, and more about being fastest between two sources of information. If you know a large order for, say, Apple shares is incoming from a buyer, but you can get to the stock exchange before them, you can buy some Apple shares at the current price and then sell them on to the buyer for slightly more. Similarly, if you see the price for a stock increase in New York, and you can get that information to Chicago fast enough, you can again buy in Chicago, and thus buy shares which are almost immediately worth more than you paid for them. There are tales in Flash Boys of building the straightest fibre optic link possible (time is distance divided by speed, and with speed limited by the speed of light, less distance is the key) and of moving machines around server rooms to be closest to the exit. Lewis characterises these practices as a tax on the real buyers and sellers in the market: the high frequency traders who perform this racing between exchanges are making money at the direct expense of those slower than them in the market.

…But Doesn’t Have To

For many technological dodges, there are technological solutions. A group that Lewis focuses on tries to build an exchange immune to many of these practices by increasing the latency of access to it. By slowing everyone’s access down, they can eliminate the advantages gained by spotting price differences between that exchange and others.

Some elements of trading really are an information war. Certain regulations, or just certain programmed behaviours, mean that a small order for a share being placed in Chicago is likely to be a sign of a subsequent order arriving in New York (if you want to buy lots of one share, you may have to visit many exchanges to find a buyer). Deterministic behaviours offer opportunities for exploitation by other players in the market; but also offer guarantees of reproducible behaviour.

A Black Box

One running theme is how little many of the players in the stock market seemed to know about the implications of the electronic system underlying it. Some stockbrokers and investors were having their orders exploited by high frequency traders for months or years before they figured out what was going on. There was a complete lack of human oversight of how the system as a whole worked, partly due to regulations that prevented certain information being public, and partly because there was a lack of technical understanding of how the system worked. Lewis describes how the technical staff went from being subservient to the brokers, to being the ones highly sought after to provide a market advantage.

The Trial

This theme of technical ignorance permeates into the trial of one programmer, Sergey Aleynikov, accused of stealing program code. Whether or not you consider him guilty, it is troubling how little the investigator, prosecutor and expert witnesses seemed to know about relatively basic technical concepts. The investigator apparently only arrested the programmer on the suggestion/order of Goldman Sachs. The part where the investigator found it suspicious that the programmer had put the code in a subversion repository is a face palm moment. (If, like me, you have your technical hat on by default and don’t immediately see the problem, recall the dictionary definition of subversion to see the investigator’s view. Interestingly, even a search for define:subversion on google won’t offer the dictionary definition, and will only show you links to download Subversion.) The trial was still rumbling on in the past few months, with Aleynikov convicted a second time (having been convicted and acquitted once already), and then acquitted a second time.


Wikipedia points out that several people claim Lewis’s book gives an inaccurate and overly negative view of high frequency trading; I don’t know enough of the details to judge. But as a book looking at what happens when a human system becomes a machine system, I found it fascinating. A recommended read if you want to consider the effects of computerisation on society. It’s also interesting to wonder what would be different in the book if all the people featured had had some computing education; what would have turned out differently? The trial might have reached a different verdict, the people may have sussed the stock exchange problems a little earlier, and the stock trading rules may have been written slightly differently.


Filed under Uncategorized

Blocks and Beyond Workshop

The Blocks and Beyond Workshop will be held in Atlanta in October, co-located with the VL/HCC conference. The workshop aims to look at research surrounding blocks programming (Scratch, Snap, etc), its effectiveness, and future developments improving or building on block-based programming. The deadline, for 1-3 page position papers or <= 6 page short papers, is now two weeks away: Friday 24th July.

Here’s the overview of the workshop from the official site:

Blocks programming environments represent program syntax trees as compositions of visual blocks. This family of tools includes Scratch, Blockly, Code.org’s lessons, App Inventor, Snap!, Pencil Code, Alice/Looking Glass, AgentSheets/AgentCubes, etc. They have introduced programming and computational thinking to tens of millions, reaching people of all ages and backgrounds.

Despite their popularity, there has been remarkably little research on the usability, effectiveness, or generalizability of affordances from these environments. The goal of this workshop is to begin to distill testable hypotheses from the existing folk knowledge of blocks-based programming environments and identify research questions and partnerships that can legitimize, or discount, pieces of this knowledge. The workshop will bring together educators and researchers with experience in blocks languages, as well as members of the broader VL/HCC community who wish to examine this area more deeply. We seek participants with diverse expertise, including, but not limited to: design of programming environments, instruction with these environments, the learning sciences, data analytics, usability, and more.

I will be attending, and hope to see some interesting submissions and discussion around block-based programming and future derivatives. (I’m also on the program committee.) Submission details are on the site.


Filed under Uncategorized