Greenfoot and Blackbox at SIGCSE 2016

This year, the SIGCSE (Special Interest Group on Computer Science Education) conference is in Memphis, Tennessee, at the beginning of March. With the early registration deadline of February 2nd fast approaching, I thought it was worth quickly highlighting our events.

On Friday evening, we’ll be delivering a workshop on Greenfoot, using our new Stride language. Stride is a halfway house between blocks-based programming, as in Scratch, Snap! & co, and full text programming in languages like Java or Python. We officially released Stride in Greenfoot 3.0 towards the end of last year, so we’re looking forward to the opportunity to deliver workshops using it. The workshop is at 7pm on Friday night. The committee like to see sign-ups in advance, so if you are thinking of attending, it would help us if you signed up now rather than once you arrive.

Stride Editor

Also related to our work on Stride, I’ll be taking part in a panel discussion (proposal available here) on Friday morning at 10:45, entitled “Future Directions of Block-based Programming”. I’m very excited by the panel we’ve put together, where I’ll be joining: Jens Moenig, creator of Snap! and now working on GP, a new block-based language; David Anthony Bau, one of the leads of Pencil Code, an impressive blocks-based editor; and David Weintrop, who has done some excellent work looking at block-based programming (see 2015 section here). I’m looking forward to talking about the important details of designing and implementing block-based and block-like editors, and how the design affects the way they are used in teaching.

Our other event is not focused on Greenfoot, but instead on our Blackbox data collection project. We’re holding a BoF (Birds of a Feather) event on Thursday evening at 5:30pm where we hope to get together with other researchers interested in educational data collection and analysis to talk about the data set and data collection platform that we provide in BlueJ.

I’m also looking forward to attending and seeing all the other interesting sessions. Glancing at the program, the paper sessions on CS Ed Research and Program Design look interesting to me. If you have any other suggestions on what to attend, feel free to put them in the comments.

Filed under Uncategorized

Adventures in Graphing

One of my favourite parts of research, probably because I don’t get to do it often, is trying to come up with a graph or similar visualisation to display some results. You often have a lot of choice in how to visualise a given data set; the art is in finding the way that makes the interesting aspect of the results obvious to the reader. I thought it might be interesting to demonstrate this through an example. In one strand of our work, we look at the frequency of different programming mistakes in our Blackbox data set. We’ve recently added a location API for the data, so as a descriptive statistic I wanted to see if the frequency differed by geography. To that end, I produced a table of mistake (A–R) versus continent:

The data is already normalised by total number of mistakes for each continent, so each column will sum to 1. One option would be simply to use the table to portray the data. This is often the best option for single-column data, or for small tables (e.g. 4×4). But it’s too difficult to get any sense of the pattern from that table. So we need a graph of some sort. The particular challenge with this data is that although it’s only a two-dimensional table, both dimensions are categorical, with no obvious ordering available.
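The normalisation itself is just dividing each count by its column total. A quick sketch in Python (the counts here are made up for illustration, not the real Blackbox figures):

```python
# Made-up mistake counts per continent; NOT the real Blackbox data.
raw = {
    "Europe":        {"A": 120, "B": 45, "C": 30},
    "North America": {"A": 300, "B": 90, "C": 110},
}

# Divide each count by its continent's total so every column sums to 1.
normalised = {
    continent: {mistake: count / sum(counts.values())
                for mistake, count in counts.items()}
    for continent, counts in raw.items()
}
```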

Since each column sums to one, we could use a stacked bar chart:

I don’t like this option, though, for several reasons. Eighteen differently coloured segments in each bar is too visually noisy, and we still need to add labels. Stacked bar charts also hinder comparison: is there much of a difference in the proportion of C (the light blue segment, third from bottom) between areas? It’s too hard to tell when stacked.

If stacked doesn’t work, it’s tempting to try a clustered bar chart, with the bars for each area alongside each other instead. That looks like this:

This time, you can see the difference in the proportion of C between areas. But… it’s still too noisy. It’s too difficult to take in at a glance, and annoyingly fiddly to look at each cluster in turn and check the proportions. You actually get a reasonable sense of the differences between mistakes (A–R) rather than between areas. You could thus try clustering in the other dimension, giving six clusters of eighteen rather than eighteen clusters of six:

A first obvious problem is that the colours repeat. It sounds like a simple fix to just pick more colours, but picking eighteen distinct colours would be very difficult. And even then, comparing one set of eighteen bars to another to look for differences is just too fiddly.

So maybe we should forget the bars. An alternative is to use line graphs. Now, technically, you shouldn’t use line graphs for this data, because a line typically indicates continuous rather than categorical data. But I think this should be considered a strong guideline rather than an absolute rule; sometimes with difficult data, it’s about finding the most reasonable (or “least worst”) approach when nothing is ideal. So here’s a line graph with mistakes on the X axis and proportions on the Y:

Not really very clear; any differences at a given point are obscured by the lines rapidly merging again before and after. Also, I don’t like the mistakes on the X axis like this, as it does strongly suggest some sort of relation between adjacent items. Maybe it would help to make it radial:

Ahem. Let’s just forget I considered making it radial, alright? Looking back at the previous line graph, I said one issue was that the lines jump around too much, merging again after a difference. We could try to alleviate this by sorting the mistakes by overall frequency (note the changed X axis):

Once the labels get added, it’s probably the one I like most so far. You at least get a sense, for the most frequent mistakes, of some difference in the pattern between the continents (one line per continent).

At this point, we’ve exhausted the bar and line graph possibilities, with either mistake or continent on the X axis, a different bar/line for the other variable, and proportion always on the Y axis. A final possibility I’m going to consider is a scatter graph. In this case, we can use continent as the X axis, and use the mistake label as the point itself on the scatter graph, with the proportion again on the Y:

At first glance, it’s a bit of a mess. Several of the letters overlap. But as it happens, I’m less interested in these. Minor differences in the low frequency mistakes are probably not that interesting for this particular data. The difference between the most frequent mistakes is, I think, more clearly displayed in this graph than in any of the others.

There is one finishing touch to add. One issue with normalising the data by total frequency for that continent is that it obscures the fact that we have quite different amounts of data for each continent. Happily, this scatter graph gives us the opportunity to scale the size of the points to match how much data we have for that continent. (There are two ways to do this: scale the height of each character by amount of data, or the area of each character by amount of data. Having tried both, the latter seemed to be a better choice.) That leaves us with this:

Easy to see where most of our data comes from! I may yet add some colour, but really the final decision is: is this interesting enough to include in a paper, or should I just scrap my work on this? Such is research.
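The area-versus-height decision is worth spelling out. If you scale glyph height linearly with the amount of data, the perceived size (the area) grows with the square, wildly exaggerating the differences; scaling the side by the square root keeps area proportional to the data. A Python sketch of the arithmetic, with hypothetical data volumes:

```python
import math

# Hypothetical data volumes per continent; NOT the real Blackbox figures.
samples = {"big_continent": 400_000, "small_continent": 10_000}

BASE = 12.0  # glyph size for the largest group (arbitrary choice)
largest = max(samples.values())

def height_scaled(n):
    # Scale glyph height linearly with data: area grows with the square.
    return BASE * n / largest

def area_scaled(n):
    # Scale the side by sqrt(n) instead, so glyph *area* tracks the data.
    return BASE * math.sqrt(n / largest)

# With 40x the data, height scaling makes the glyph 1600x larger by area;
# sqrt scaling keeps the area ratio at 40.
height_area_ratio = (height_scaled(400_000) / height_scaled(10_000)) ** 2
sqrt_area_ratio = (area_scaled(400_000) / area_scaled(10_000)) ** 2
```

As it happens, matplotlib’s scatter `s` parameter is already specified in points squared (i.e. area), so passing a value proportional to the amount of data gives area scaling for free.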

Filed under Uncategorized

Frame-Based Editing: Easing the Transition from Blocks to Text-Based Programming

The Workshop in Primary and Secondary Computing Education (WiPSCE) 2015 was held this week at King’s College London. My colleagues Michael Kölling and Amjad Altadmri presented our paper, entitled “Frame-Based Editing: Easing the Transition from Blocks to Text-Based Programming”. The paper is now available freely online.

The gist of the paper is as follows. Blocks-based programming is usually used to teach programming to younger age groups, but text-based programming is used for older age groups. This creates a necessary transition period in between, wherein learners must move from blocks- to text-based programming, and teachers’ experience shows that this transition can be difficult. The paper first enumerates the issues involved in making this transition, and then discusses how frame-based editing (as now available in the latest Greenfoot release) divides these issues into two smaller sets, and thus may make a suitable stepping stone in the gap between blocks and text — see the diagram below, or the paper for a larger version and all the details. If you’re interested, take a read and/or try the software.

Filed under Stride

Greenfoot: Frames, Pi, and Events

Lots of positive Greenfoot developments recently, which seemed worth summarising. Firstly, we released preview2 of Greenfoot 3.0.0, our new version of Greenfoot which includes our new frame-based editor (a hybrid of block- and text-based programming). The download of 3.0.0-preview2 is now available. Our intention is that this is a feature-complete preview, and we’re now just fixing bugs and optimising ahead of the final release (likely to be November or so).

Our other big development is that BlueJ and Greenfoot are now included in the default image of the official Raspbian Linux distribution for Raspberry Pi. A few weeks ago we made it into the main repository, which means that on any Raspbian system you can run: sudo apt-get update && sudo apt-get install bluej greenfoot and have both tools available in the Programming menu in the GUI. But being included in the default image means that anyone installing Raspbian from now on will automatically have BlueJ and Greenfoot installed. We’re really pleased, of course, to have our software so easily available. Thanks must go to the Raspberry Pi folks who supported this move, and to my colleague Fabio Hedayioglu, who worked on some optimisations to improve our performance on the Raspberry Pi. BlueJ in particular comes bundled with libraries for interfacing with hardware via the GPIO pins.

We also have a set of events coming up which packs all my travel for the second half of the year into three weeks. I will be attending the Blocks and Beyond workshop on 22 October in Atlanta, which leads into a trip to the JavaOne conference in San Francisco where we will be teaching Greenfoot at a kids event (open to the public) on 24 October and talking about Greenfoot 3 on 26 October in the main conference.

After that there is the CAS Scotland conference on 7 November in Dundee where I’ll be doing a session on Greenfoot 3, and we will also be at the WiPSCE conference in London on 9–11 November, presenting our paper “Frame-Based Editing: Easing the Transition from Blocks to Text-Based Programming” on the Tuesday — when there is a cheaper day rate available for teachers. If you’re a UK teacher, it’s well worth considering attending if you can get out of school that day.

Filed under Uncategorized

Improving Blocks with Keyboard Support

The Blocks and Beyond workshop will take place in Atlanta on October 22. It’s shaping up to be an interesting workshop, based around discussion of the submitted papers. Our own contribution is a position paper, entitled “Lack of Keyboard Support Cripples Block-Based Programming“. It’s a brief summary of the design philosophy which has informed our design of Greenfoot 3 (which we’re busy working on): the mouse-centric nature of existing block-based programming systems (Scratch, Snap!, etc) hampers their ability to scale up to larger programs, which in turn puts a ceiling on their use. Our gambit is that if we can remove this restriction, there might no longer be any reason to switch back to using text-based programming languages.

The Greenfoot 3 Editor — public preview available

The paper (accepted version, copyright IEEE) is freely available, and only 3 pages, so rather than reproduce the full argument here, I suggest you take a look at the paper — and if you want to discuss it, or see similar work, then why not attend Blocks and Beyond?


Filed under Uncategorized

Better Mathematics Through Types

I’m a huge fan of strong static types; it’s one of the reasons I like Haskell so much. Recently I’ve been toying with some F#, which has a typing feature I’m really enjoying: units of measure. I want to explain a few examples of where units of measure can prevent bugs. To do this, let’s start off with some simple F# code for a car moving around the screen. It has a direction and a speed, and you can steer and speed up/slow down using the keys. Here’s some basic code to do this:

type Key = Up | Down | Left | Right
let isKeyDown (key : Key) : bool = ...

type Position = {x : float32; y : float32}

let mutable pos = {x = 0.0f; y = 0.0f}
let mutable direction = 0.0f
let mutable curSpeed = 1.0f
let acc = 0.5f
let turnSpeed = 4.0f

let update (frameTime : float32) =
  if isKeyDown Up then
    curSpeed <- curSpeed + acc
  elif isKeyDown Down then
    curSpeed <- max 0.0f (curSpeed - acc)

  if isKeyDown Left then
    direction <- direction - turnSpeed
  elif isKeyDown Right then
    direction <- direction + turnSpeed

  pos <- {x = pos.x + cos(direction) * curSpeed * frameTime;
          y = pos.y + direction * curSpeed * frameTime}

This code looks fairly reasonable, and it compiles and runs. It’s also riddled with mistakes, which we can eliminate through the use of units of measure. To start with, we need to define a few units:

type [<Measure>] second
type [<Measure>] pixel
type [<Measure>] radian

Units of measure allow easy tagging of numeric types with units. The units are really just arbitrary names. The compiler doesn’t know what second means; it just knows that second is distinct from radian. I think most people use single letters for brevity, but I like full names for clarity. You can use them to type a variable like so:

let mutable direction : float32<radian> = 0.0f

But actually this is insufficient; the compiler will complain that you are assigning 0.0f (a literal with no units) to direction, which has units. It is much easier to annotate only the literal and let type inference do the rest:

let mutable direction = 0.0f<radian>

Typing our initial variables is pretty straightforward. Positions are in pixels, and we know from school that speed is distance over time, acceleration is distance over time squared:

type Position = { x : float32<pixel>; y : float32<pixel>}

let mutable pos = {x = 0.0f<pixel>; y = 0.0f<pixel>}
let mutable direction = 0.0f<radian>
let mutable curSpeed = 1.0f<pixel/second>
let acc = 0.5f<pixel/second^2>
let turnSpeed = 4.0f<radian/second>

This alone is enough to show up all the errors in our code. Let’s start with:

  if isKeyDown Up then
    curSpeed <- curSpeed + acc

The unit of curSpeed is pixel/second, and we’re trying to add acc[eleration], which has units pixel/second^2. This is a subtle bug. To explain: as you may know, frames in a game or simulation don’t run at completely fixed frame rates in practice. They vary between machines, because different machines can maintain different maximum frame rates, and they even vary on the same machine while the program is running, because other processes may interrupt to use the CPU. If I add acc to my speed every frame, then on a desktop doing roughly 60 frames per second (FPS) I’ll have a car that accelerates twice as fast as on a laptop doing 30 FPS, which would make for quite a different game. The units of measure system has pointed this bug out. We shouldn’t just add the acceleration; we need to multiply the acceleration by the frame duration:

  if isKeyDown Up then
    curSpeed <- curSpeed + acc * frameTime
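
The underlying bug is language-independent. As a quick illustration outside F#, here is a small Python simulation of one second of acceleration at two frame rates:

```python
ACC = 0.5  # pixels/second^2, as in the F# example

def final_speed_buggy(fps):
    # Bug: adds the per-second acceleration once per *frame*.
    speed = 1.0
    for _ in range(fps):  # one second's worth of frames
        speed += ACC
    return speed

def final_speed_fixed(fps):
    # Fix: scale the acceleration by the frame duration.
    dt = 1.0 / fps
    speed = 1.0
    for _ in range(fps):
        speed += ACC * dt
    return speed
```

After one second, the buggy version has gained 30 px/s at 60 FPS but only 15 px/s at 30 FPS; the fixed version gains 0.5 px/s at either rate, which is exactly what the units demanded.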

The same type of bug occurs in a few other places, such as here:

  if isKeyDown Left then
    direction <- direction - turnSpeed

Similar to above, turnSpeed is in radians per second, but direction is in radians, so we must again multiply by frameTime.

Standard Functions

Here’s another bug, where I have missed out a call to the sin function:

  pos <- {x = pos.x + cos(direction) * curSpeed * frameTime;
          y = pos.y + direction * curSpeed * frameTime}

Previously, the compiler was happy because all the variables were plain float32. Now, I’m trying to add pos.y [pixel] to direction * curSpeed * frameTime [radian*pixel]. Looking at it, the problem is the missing sin call. So we can add that in:

  pos <- {x = pos.x + cos(direction) * curSpeed * frameTime;
          y = pos.y + sin(direction) * curSpeed * frameTime}

However, we still get an error on these lines, complaining that we’ve attempted to pass float32&lt;radian&gt; to a function which takes plain float32. Hmmm. The problem here is that the standard library functions don’t have any units specified on their parameters or return values, so you cannot pass a quantity with units to any standard library function. This is a bit of an ugly effect of units of measure arriving after the language’s initial release. One fix is that each time you call sin/cos/tan/atan2/sqrt/oh-my-god-it-goes-on-and-on, you have to cast away the units, like so:

  pos <- {x = pos.x + cos(float32 direction) * curSpeed * frameTime;
          y = pos.y + sin(float32 direction) * curSpeed * frameTime}

Eugh. Firstly: this is annoying to have to do every time, especially if you use a particular standard library function a lot. Secondly: this is losing type safety. What if direction wasn’t in radians — which should be an error — but we’re masking the error by casting away the units? Much better is to add your own typed thin wrappers around the functions:

let cosr (t : float32<radian>) : float32 = cos (float32 t)
let sinr (t : float32<radian>) : float32 = sin (float32 t)

(Is there already an F# library that does this for all the standard numeric functions? Seems like it would be a sensible thing to do.) Now we can fix our code in a type-safe manner:

  pos <- {x = pos.x + cosr(direction) * curSpeed * frameTime;
          y = pos.y + sinr(direction) * curSpeed * frameTime}

You may question why you can’t pass a float32 to a function expecting float32<radian>. My understanding is that the designers chose float32 to represent a unitless measure, which is not the same as unit-indifferent. Think atheist vs agnostic: float32 is not unit-agnostic in modern F#, it’s unit-atheist. The output of my cosr function really is unitless, so it is specifically [unitless] float32, not [I was just too lazy to give a unit] float32. Even if you are using units of measure in your code, you will still have functions and variables that are unitless, and using units in their place is an error. For example, an acosr (i.e. inverse cosine) function would take a (unitless) float32 and give back a float32<radian>. It should, and would, be an error to pass in a float32<radian> as the parameter.

This also means units of measure aren’t a subtype: much of their advantage comes from the fact that you can’t substitute a plain float32 and a float32<whatever> without using an explicit cast. They’re more like an easy way to clone numeric types.

Units of Measure: Summary

I really like units of measure. I’ve used types for a similar purpose in Haskell in the past, but the way that units of measure effortlessly wrap the existing numerical types makes them very easy to use in F#. If you are doing anything with arithmetic, not just geometry, units of measure can spot a lot of bugs and provide a lot of automatic safety. They really come into their own because they track units through arithmetic expressions. Units of measure are not just about avoiding confusing feet with metres; it’s more about not accidentally adding an office space rent cost per day (pounds/metre^2/day) to a cost per day (pounds/day) without remembering to multiply the first one by the desired floor area. Think how much easier your secondary school maths/physics (and later calculus) would have been if you had had an automated system like this checking the units in all your equations.
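To illustrate why tracking units through arithmetic catches exactly this kind of slip, here is a minimal toy sketch of the idea in Python (this is just the concept, not how F# implements it): a quantity type that carries unit exponents through multiplication and checks them on addition.

```python
class Q:
    """Toy unit-tracked quantity: a value plus a dict of unit exponents."""
    def __init__(self, value, units):
        self.value = value
        self.units = {u: e for u, e in units.items() if e != 0}

    def __add__(self, other):
        # Addition only makes sense between identical units.
        if self.units != other.units:
            raise TypeError(f"unit mismatch: {self.units} vs {other.units}")
        return Q(self.value + other.value, self.units)

    def __mul__(self, other):
        # Multiplication adds the unit exponents together.
        units = dict(self.units)
        for u, e in other.units.items():
            units[u] = units.get(u, 0) + e
        return Q(self.value * other.value, units)

# The office-rent example: pounds/metre^2/day times metre^2 is pounds/day.
rate = Q(30.0, {"pound": 1, "metre": -2, "day": -1})
area = Q(50.0, {"metre": 2})
per_day = Q(200.0, {"pound": 1, "day": -1})

total = rate * area + per_day   # fine: both sides are pounds/day
```

Trying rate + per_day directly raises a TypeError, which is the moral equivalent of the F# compile error, just deferred to run time.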

One final note: this post is more computing than education, but you may be interested in this previous post (and its comments) on the role of types in programming education.


Filed under Uncategorized

Thoughts on F Sharp

F# is an attempt to bring a functional programming language (based on ML) to the .NET framework. It seems to have Microsoft blessing, with support out of the box in Visual Studio 2015 (now free for personal use). I thought I’d give some initial impressions of the language, mainly coming from a Haskell perspective. I am happy to be corrected, as I’m still quite new to the language.

I quite like F#, but the main impression I get is that whatever I want to do, there are two or more separate ways to do it, and no obvious way to choose between them. F# is, very clearly, the result of stitching together two programming systems: the Object-Oriented Programming [OOP] of .NET and the Functional Programming [FP] of the ML language. Much like Java post-generics (with its “int” and “Integer”) or Windows 8 with its tablets-meets-desktop, systems like this often have quite glaring joins. And especially if you come from neither a .NET nor an ML background, you are just left a bit bamboozled as to which you should be using. Where there is a reasonable set of rules to choose between int and Integer in Java, in F# it often just seems to come down to preference or convention. This multiplicity of approaches is especially evident in the type system.


Let’s start with lists. You can specify a list’s type in F# using the postfix syntax “int list” or the prefix syntax “List<int>”. Both mean the same thing and there’s no obvious reason to choose one or the other (this StackOverflow answer mentions a suggestion to use prefix for all except four types, but with no clear justification). Lists have some properties which can be accessed directly, like “myList.Head”. But there’s also a function you can call using “List.head myList” for the same purpose. The property is probably an OOP thing, the function is FP, but as a programmer who just wants to write some code, what’s the difference? The dividing line seems to vary depending on the origin of what you’re using: a C# class is going to have lots of member functions, an F# library will have lots of external functions, and your code will seemingly always end up with a mess of the two. When writing your own code, it’s often not clear which way you should lean, either.

The Class System

F# allows you to write classes which override other .NET classes, which makes for a potentially great marriage of functional programming with existing [OOP] .NET libraries. In the classes you define in F#, there’s multiple ways to define fields:

type MyClass2() =
  let readOnlyPrivateValue = 1
  let mutable readWritePrivateValue = 2
  [<DefaultValue>] val mutable private readWritePrivateValueNoInit : int
  [<DefaultValue>] val mutable readWritePublicValueNoInit : int
  member this.ReadOnlyPublicValue = 5
  member private this.ReadOnlyPrivateValue = 6
  member val ReadWritePublicValue = 7 with get, set
  member val private ReadWritePrivateValue = 8 with get, set

This StackOverflow answer suggests when to use each, but I was still left puzzled. I’m slowly getting the hang of it: let when you have an initial value, val when you don’t, mutable when you want to change it, but I’m not much wiser as to when to use member vs let/val — are they semantically equivalent if I’m not customising the get/set? For example, I found this code in another StackOverflow answer which does a “memberwise clone”. Will that clone my “let mutable” fields or just my “member” fields?

To define a member method you use a similar member syntax, but with brackets for any parameters. You also need an object reference, which you can see is also required for some field declarations but not others. It took me a while to understand the syntax:

type MyClass() =
  member MyClass.GetHundredA() = 100
  member x.GetHundredB() = 100
  member y.GetHundredC() = 100
  member whateveryoulikeiguess.GetHundredD() = 100

I thought the part before the dot was some reference to the class I was making the method for, hence I ended up using the class name (like the first member, above) because that made sense to me, and it compiled fine (not sure it should!). I saw people using “x” instead and thought it was a keyword. It turns out the first part, before the dot, is a name declaration for the “this” concept. So in the last method above, “whateveryoulikeiguess” defines a name to refer to what you would use “this” for in Java or C++. I’m not sure yet why they didn’t just swallow a keyword and always use “this”, and it still strikes me as a pretty weird syntax.

Data kinds

There are many ways to define a data structure with a couple of members. If you want an int and string value pair, you could use:

type aPair = int * string
type anADT = Combo of int * string
type aRecord = {theInt : int; theString : string}
type aClass(i : int, s : string) =
  member this.TheInt = i
  member this.TheString = s
type aStruct =
  struct
    val theInt : int
    val theString : string
  end

Quite the variety! Sometimes there’s a good reason to choose one or the other, but quite often you just stick your finger in the air and guess. (I tend towards the second one because it’s what I’d use in Haskell, but I suspect the third is the best choice.) Suddenly Java with its everything-is-a-class philosophy seems sensibly restrained.

Ignorance is Dangerous Bliss

F# has a feature shared with Haskell where it tries to warn you if you are discarding a return value. Let’s say you have some function to write a file, which takes a path, some content and returns a boolean for success:

// F#:
let writeFile (path : string) (content : string) : bool = ...
-- Haskell:
writeFile :: String -> String -> IO Bool

If you call this function without assigning the return value or otherwise using it, F# and Haskell will both give you a warning (I think Haskell may require a compiler flag to turn the warning on) that the return value is discarded:

// F#, gives warning:
writeFile "C:/log.txt" "Bad stuff happened" 
-- Haskell, gives warning:
writeFile "C:/log.txt" "Bad stuff happened" 

You can suppress this using the ignore/void functions respectively:

// F#, no warning:
ignore (writeFile "C:/log.txt" "Bad stuff happened")
-- Haskell, no warning:
void (writeFile "C:/log.txt" "Bad stuff happened")

But if you mess up and miss out a parameter, you get quite different results in each language:

// F#, missed parameter: compiles fine, no warning:
ignore (writeFile "C:/log.txt")
-- Haskell, missed parameter: compiler type error:
void (writeFile "C:/log.txt") 

I’ll avoid an in-depth discussion of type systems here, but Haskell’s type system (in particular, monads) saves you from this mistake, while F# happily ignores the return value, which is a partially applied function that does nothing. Lesson: use ignore with great care!
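The pitfall is easy to reproduce in any language with curried functions, not just F#. A Python sketch (write_file here is hypothetical, mirroring the example above):

```python
# Hypothetical write_file, curried in the F# style: applying only the
# path returns another function and writes nothing.
def write_file(path):
    def with_content(content):
        print(f"writing {content!r} to {path}")
        return True
    return with_content

def ignore(_value):
    return None

# Forgot the second argument: nothing is written, no error is raised,
# and ignore() silently swallows the partially applied function.
ignore(write_file("C:/log.txt"))

# The intended call supplies both arguments:
ignore(write_file("C:/log.txt")("Bad stuff happened"))
```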

Misc Notes

A few more brief notes:

  • I know this one comes from ML, but whoever came up with expressing a triple type as “int * string * float” needs to be put back in the box with all the other mathematicians. Haskell’s equivalent syntax “(Int, String, Float)” is much nicer and easier to remember.
  • I got caught out at one point by bracketing pattern matches. I had a type “type Position = Position (int * int)”. I wanted to pull the values out using:

    let Position (x, y) = calculatePos()
    printfn "Position: %d, %d" x y
    // Error above: x and y not found

    I’m not sure what F# thinks that code is doing, but it doesn’t give a compile error on the let binding. Turns out you need more brackets to achieve what I was aiming for:

    let (Position (x, y)) = calculatePos()
    printfn "Position: %d, %d" x y
  • I remember, many years back, when I learnt Java after using C++, it was very refreshing to not worry about declaration locations. No header files, no needing to have forward declarations of classes before using them — you just put each type in its own file and off you go. Haskell cannot have cycles in its module dependencies; if module A imports B then B cannot import A, even indirectly — but within each module, declaration order doesn’t matter. Thus it felt like a bit of a backwards step in F# to start worrying about declaration order within a file; you can’t use any function or value before you have declared it in the file.
  • So far, I find writing F# unexpectedly similar to writing modern Java. A lot of the streams or lambda code I’d now write in Java 8 is very similar to what I find myself writing in F# with Seq and fun. If Java were to add algebraic data types (aka discriminated unions) with pattern matching, and syntactic sugar for currying, the cores of the two languages don’t seem like they would be that far apart. I guess both are part of the recent convergence/smashing together of OOP and FP, but coming at the problem from different sides.

The Good

I’ve mainly talked about the bad and the ugly instead of the good here, but I do actually like F#! There are several aspects that improve on Haskell. The convention to use the |> operator is good, meaning you write “source |> operation1 |> operation2 …” which I find more readable than Haskell’s conventional “operation2 . operation1 $ source”. Similarly, I prefer <<, or more often >>, as function composition rather than “.”. Records are nice and easy to use; if you know Haskell, you’ll understand what a relief writing myRecord.myField is. Similarly, the ability to stick printf/trace statements anywhere in your code is very welcome.

I’d still take Haskell over F# as a functional language given a free choice (for its purity and type system), but as a way to bridge the gap between FP and OOP (with a recognised VM and thus lots of libraries behind it), F# is a good language. I’ve complained a lot about the joins in the FP/.NET aspects here, but with most of them I can see the language designers’ quandaries which led to the compromise. I suspect you can’t get much better than this when trying to bolt FP on to an OOP framework and still retain easy interoperability. I should look at Scala again at some point as a comparison, but at least in .NET land, F# seems like a good choice of language, if you can just find a set of conventions that keep your code nice and tidy when you so often have multiple approaches to choose from.


Filed under Uncategorized