Inference Group


Chris Ball

From mid-2003 to mid-2005 I was a Research Associate in Cambridge University's Inference Group, working on the Dasher project. These days, I work at One Laptop per Child and have a more recent blog and webpage.



Games::Go::Image2SGF is a Perl module that takes a photograph depicting a position on a go board, and returns a machine-readable SGF description of the position.

But why?

Keeping a record of a Go game is difficult! Unlike a game of Western chess with a maximum of 32 pieces and 64 locations, a Go board has 361 intersections and can have pieces present on any of them; transcribing a position can take a long time. With this program, a photo from a mobile phone's camera is adequate to obtain a machine-readable representation of the position -- from there you can investigate variations, do immediate and accurate scoring, and share the game with friends without needing to send them a photo.

That's not all, though. Once we have a method of finding out where stones are, we also have a method of recording a game: by tracking changes in which stones are present, we can infer where moves are being played, in real time. This has very useful implications; games can be broadcast over the Internet automatically, for example, with no need for a dedicated transcriber.

Sample images

For an idea of how the program works, you might first like to see some annotated images showing an analysis of a position. The program uses reverse video to show which colour it thinks each stone on the board is, so a black dot means the stone underneath the dot has been determined to be white:


I gave a talk on the project -- slides and also a video (DivX, 22M or Quicktime, 37M) of the demo part of the talk are available.


Here's a sample run through the algorithm, using the photograph below:


Our method looks at colour in order to find out which state each grid intersection is in -- mostly black or white pixels mean that we've found a stone, while mostly board colour suggests an empty intersection. We first quantize the image: we take each pixel in the photograph, compare it to our expected colours for a white stone, a black stone and the go board, and replace it with whichever of those three colours it is nearest to in RGB-space. This leaves us with an image containing only three colours.
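The quantization step can be sketched like this (in Python rather than the module's Perl; the reference RGB values here are assumptions, not the module's actual defaults):

```python
# Map every pixel to the nearest of three reference colours
# (black stone, white stone, board) by squared distance in RGB-space.

BLACK = (30, 30, 30)      # assumed reference colours
WHITE = (230, 230, 230)
BOARD = (180, 140, 80)
PALETTE = (BLACK, WHITE, BOARD)

def nearest(pixel):
    """Return the palette colour closest to `pixel` in RGB-space."""
    return min(PALETTE,
               key=lambda c: sum((p - q) ** 2 for p, q in zip(pixel, c)))

def quantize(image):
    """`image` is a 2-D list of (r, g, b) tuples; returns the 3-colour image."""
    return [[nearest(px) for px in row] for row in image]
```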

Grid corners

As well as a photograph, the module takes the image coordinates of the four corners of the grid as input. Once we have these coordinates, we can infer a grid in two-point perspective, using Madeleine Price's description of how it's done. Now we know the centrepoint of each intersection on our 19x19 grid, shown with a red dot below.
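As a sketch of the geometry, here's a Python version using a generic square-to-quadrilateral projective map (Heckbert's closed form) rather than the two-point construction linked above; the result is the same family of grids:

```python
def grid_centres(corners, n=19):
    """Infer the n x n intersection centres from the image coordinates of
    the four grid corners, given in the order corresponding to grid points
    (0,0), (n-1,0), (n-1,n-1), (0,n-1).  A generic projective map, not
    necessarily the module's exact construction."""
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = corners
    sx, sy = x0 - x1 + x2 - x3, y0 - y1 + y2 - y3
    dx1, dy1 = x1 - x2, y1 - y2
    dx2, dy2 = x3 - x2, y3 - y2
    if sx == 0 and sy == 0:          # parallelogram: the map is affine
        g = h = 0.0
    else:
        den = dx1 * dy2 - dy1 * dx2
        g = (sx * dy2 - sy * dx2) / den
        h = (dx1 * sy - dy1 * sx) / den
    a, b, c = x1 - x0 + g * x1, x3 - x0 + h * x3, x0
    d, e, f = y1 - y0 + g * y1, y3 - y0 + h * y3, y0

    def warp(u, v):                  # u, v in [0, 1] across the grid
        w = g * u + h * v + 1
        return ((a * u + b * v + c) / w, (d * u + e * v + f) / w)

    return [[warp(col / (n - 1), row / (n - 1)) for col in range(n)]
            for row in range(n)]
```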


Next we visit each intersection in the grid; whichever of our three image colours is most prevalent there is a good indicator of that intersection's state. We draw a circle (of radius given by the sample_radius parameter) around the intersection, and keep a tally of pixel colour counts inside that circle. The colour with the highest count determines the state of the current intersection, and we record it and move on to the next one.
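A minimal Python sketch of this winner-takes-all sampling, assuming the quantized image is represented as a dict from (x, y) to one of 'black', 'white' or 'board':

```python
def classify(image, centre, sample_radius):
    """Tally quantized pixel colours inside a circle around `centre`
    and return the most common one ('black', 'white' or 'board')."""
    cx, cy = centre
    r = sample_radius
    counts = {}
    for y in range(int(cy - r), int(cy + r) + 1):
        for x in range(int(cx - r), int(cx + r) + 1):
            if (x - cx) ** 2 + (y - cy) ** 2 <= r * r:
                colour = image.get((x, y))
                if colour is not None:
                    counts[colour] = counts.get(colour, 0) + 1
    return max(counts, key=counts.get)   # winner takes all
```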

Creating the SGF

Now that we know the position, we can describe it using the Smart Game Format (SGF); the SGF for the image above can be seen here. An editor such as cgoban2 will allow you to view and modify the SGF, making it look like this:


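The SGF-writing step can be sketched in a few lines of Python; this emits a plausible minimal setup node using the AB/AW properties, not necessarily the module's exact output:

```python
def position_to_sgf(position):
    """`position` is a 19x19 row-major list of lists holding 'black',
    'white' or None per intersection.  Returns a one-node SGF game tree
    with the stones as setup properties."""
    coords = "abcdefghijklmnopqrs"   # SGF coordinates: column then row
    black, white = [], []
    for row, line in enumerate(position):
        for col, state in enumerate(line):
            if state == 'black':
                black.append(coords[col] + coords[row])
            elif state == 'white':
                white.append(coords[col] + coords[row])
    sgf = "(;GM[1]FF[4]SZ[19]"
    if black:
        sgf += "AB" + "".join("[%s]" % p for p in black)
    if white:
        sgf += "AW" + "".join("[%s]" % p for p in white)
    return sgf + ")"
```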
Since the system works robustly, the major improvements I'm interested in are in reducing the number of arguments that the user needs to supply to the program.
  • Grid-finding: It'd be great if we didn't need to get the user to supply the coordinates for the corners of the grid. Thoughts on grid-finding techniques would be welcome. (My current thoughts include: a convolution matrix for edge-detection, Hough transforms, colour histograms, a Fourier transform to find the periodicity of the grid lines.)

    Two reasons why this particular grid-finding isn't trivial are that the grid becomes covered with stones during the game -- so that by the time we reach an endgame, we begin to wish we were finding stones and then extrapolating to the position of intersections, rather than the inverse -- and also that the edge of the go board looks much like the edge of the grid on the board, in terms of edge-detection and colour.

  • Sampling: At the moment the radius to sample around each intersection is user-supplied (with a sensible default), but it should be trivial to determine the radius of each stone, since we know the area of each square on the grid and know that a stone can't be large enough to prevent stones from being placed on the intersections surrounding it.

    As for the sampling method itself, I've yet to find an image that the simple winner-takes-all approach to counting pixels fails on. One possible improvement to sampling, though, would be to "explain away" the image, setting a stone's colour back to the board colour once we've established which intersection it occupies. This would help in situations where a stone sits mostly on one intersection but partly on another: we could then discount the possibility of the same stone being on both intersections at once.
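The radius determination suggested under Sampling above amounts to scaling the grid spacing; a sketch (the 0.4 safety factor is an assumption, not the module's default):

```python
import math

def sample_radius(c1, c2):
    """Derive a sample radius from two adjacent intersection centres:
    a stone can be at most half the grid spacing across, so we sample
    a little inside that bound."""
    spacing = math.hypot(c2[0] - c1[0], c2[1] - c1[1])
    return 0.4 * spacing
```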

ICFP Programming Contest 2002

This is an outline of the bot that Leon Brocard and I came up with as our entry to the ICFP Programming Contest 2002.

So, how did we do it?

Well, poorly. We didn't start work until halfway through Saturday, almost a day after the competition started. We didn't get to work on it as much as we'd have liked on Monday, because we were both at work. And we actually went to sleep during the competition, which I'm told is incredibly poor form.


We coded Marvin in Perl, perhaps a controversial choice. We're both Perl coders (and members of Perl Mongers), so it was the obvious choice for us. It was also well suited to the networking and data munging. We did worry about speed, but it turned out not to be a problem at all: we didn't even need to optimise or cache our pathfinding.

Basic Client

We read up around the problem and built a client library which encapsulated all the client-server communication and returned changes back to us as data structures. We also built a Curses frontend so that we could check what was going on. Once we had the basic framework, we had the tools to build and test the rest.


Leon found a Java implementation of A* and got to work porting it to Perl, while I wrote up a class to read the map from the server and return a data structure from it - these ended up as Maze::AStar and Maze::Map respectively. In a very short amount of time (around two hours from our start time) we had a decent traversal algorithm going, and moved on to looking at packages.
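For a flavour of what Maze::AStar did, here's a self-contained A* grid search in Python -- a sketch in the same spirit, not a port of Leon's code:

```python
import heapq

def astar(grid, start, goal):
    """A* over a list-of-strings grid ('#' is a wall), 4-way movement,
    Manhattan-distance heuristic.  Returns the path as a list of
    (row, col) tuples, or None if the goal is unreachable."""
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_set = [(h(start), 0, start, [start])]   # (f, cost, pos, path)
    seen = set()
    while open_set:
        _, cost, pos, path = heapq.heappop(open_set)
        if pos == goal:
            return path
        if pos in seen:
            continue
        seen.add(pos)
        r, c = pos
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] != '#' and (nr, nc) not in seen):
                heapq.heappush(open_set,
                               (cost + 1 + h((nr, nc)), cost + 1,
                                (nr, nc), path + [(nr, nc)]))
    return None
```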


Here's an overview of how packages in the contest worked. You get told the location of 'home bases' on each map by the server. These may or may not contain package(s). On top of that, robots on the map may drop packages when they're pushed. In both cases, you know where the packages are, and when you're on top of them you know their weight, and the co-ordinates of their destination.

The problem of choosing which packages to pick up from a base is representative of two larger problems, which seem to be the real guts of the competition: the Travelling Salesman and Knapsack problems. Travelling Salesman is involved because the order in which you pick up packages affects the route you take, and whether it's an optimal one; how long you stay in the game depends on how well you use your fuel, and so there's a correlation with your score. As each package has a weight, and you have a maximum capacity, the Knapsack problem is involved in choosing which packages to take so as to maximise your use of that capacity. Both problems are NP-hard, and the differences between competition bots will come down to the heuristics used to simplify them.

As for Marvin, we went the very simple way. If a pile contains packages for multiple destinations, we limit ourselves to picking up packages going to the same place. In the best case, this means making a number of trips equal to the number of destinations in the pile, rather than to the number of packages.
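That destination-grouping heuristic can be sketched like so (Python; the (id, destination, weight) package representation and the "most popular destination" tie-break are assumptions):

```python
def pick_destination(pile):
    """From a pile of (id, destination, weight) packages, return the
    group of packages sharing the most common destination."""
    by_dest = {}
    for pkg in pile:
        by_dest.setdefault(pkg[1], []).append(pkg)
    return max(by_dest.values(), key=len)
```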

Deciding how to fill our capacity with packages bound for the same place was a much harder problem. We looked at using a Monte Carlo simulation before having an epiphany about what we really wanted from package selection. Here's what we decided:

Having a small number of 'heavy' packages that optimally fill your capacity is not as valuable as having a large number of less heavy packages.

The rationale is that you can get pushed by other robots during the game. When this happens, you drop one random package. We'd rather drop a light package than a heavy one, since the weight is proportional to points.
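In code, that preference amounts to a lightest-first greedy fill rather than a true knapsack solution (a Python sketch of the idea, not Marvin's literal Perl):

```python
def fill_capacity(packages, capacity):
    """Greedily take the lightest packages first, so we carry as many
    (cheap-to-drop) packages as possible.  Packages are (id, weight)
    pairs; returns the chosen subset in pickup order."""
    chosen, load = [], 0
    for pkg in sorted(packages, key=lambda p: p[1]):
        if load + pkg[1] <= capacity:
            chosen.append(pkg)
            load += pkg[1]
    return chosen
```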

We also pick up packages we run over while on a route somewhere if we have extra capacity.


Bidding was a late addition to our bot. Marvin bids "1" most of the time, and bids higher when dropping a package on its destination. If (and only if) there's another robot in the four squares surrounding us, we bid the value of the packages we're carrying. We also bid high if there is both water and a robot around us.
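Those rules amount to a small decision ladder. In this Python sketch only the structure comes from the description above; the HIGH_BID and DELIVERY_BID constants are assumptions:

```python
HIGH_BID = 100       # assumed value for "bid high"
DELIVERY_BID = 5     # assumed value for "bid higher when delivering"

def bid(package_values, dropping, robot_adjacent, water_adjacent):
    """`package_values` holds the point values of packages we carry."""
    if robot_adjacent and water_adjacent:
        return HIGH_BID                      # danger of a push into water
    if robot_adjacent:
        return max(1, sum(package_values))   # protect the cargo we hold
    if dropping:
        return DELIVERY_BID                  # don't get pushed off the spot
    return 1                                 # the usual minimal bid
```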

ASCII Art Movies

So you want to see how Marvin works but you can't be bothered to download the server and try it yourself? Well, through the wonders of Perl, the Imager module, netpbm, and gifsicle, we can present animated GIFs of Marvin running.

The first movie (11K, 290 frames) shows Marvin's A*-based pathfinding. Marvin starts at the bottom right and attempts to find a path to the "T". We put a 'v' for each location that the path finding algorithm has visited.

The second movie (86K, 296 frames) shows Marvin trundling around towards home bases, picking up packages, going to their destination, dropping them, and starting all over again. Note the score go up when we drop packages and the fact that Marvin can carry more than one package at a time.

In the final movie (168K, 222 frames) I've pitted Marvin against some other bots from the competition: entries by Bernd Paysan, Berkeley State Machines Incorporated and Simon Funk. Marvin is 'M'; other robots are 'R'. They have to pick packages up in the middle and deposit them at the bottom left and top right while avoiding the water '~'. Note that when a robot gets bumped it can drop a package, which another robot may pick up. Also note that some of the other robots seem to have less effective pathfinding, and that sometimes robots get stuck in a loop. One robot has overbid by the end, run out of fuel, and is out of the game. Cute, huh?


If we'd had more time or more people on our team, there are a couple more things we would have added: keeping track of dropped packages, keeping track of home bases with packages, more intelligent pathfinding (avoiding water edges and other robots), pushing robots into the water given the chance, minimax searching to guess what the robots around us will do, and more. Then again, the whole point of Marvin is to be simple, and not to waste fuel trying to be clever.

Why We Suck

I've been told I didn't make this clear enough. The reason our bot sucks is that we wimped out of the real challenge: working out particularly clever solutions to the two main problems of base selection and package packing. Another reason is that we ignore combat completely, even though there's plenty of room for strategy there -- not going near water when there's a danger of being pushed into it, for example, or just avoidance techniques in general.

The code

The code comes in two versions: the light version that was submitted, and our development version, which includes a UI. To use the development version, you'll need Perl's Storable module and a Curses wrapper installed. Be warned that the archives unpack into the current directory, since that was the judges' preferred layout. If that all sounds like too much work, I have a screenshot of an early version of Marvin running, and some of the code. In the shot, the 'v' characters represent A*'s search path, and the 'P' characters represent the critical path found.

I think that's about everything. Our bot represents somewhere under 20 combined man-hours, so it's well prepared to be beaten by just about everyone. It's been a fun experience all the same.


I migrated the chess information from my main page here:

  • 2003-12-15: I wrote a script for following a player's chess rating over time given a PGN database of games -- PGNGrapher. Here's some example output.
  • 2003-11-10: I played for Cambridge in the November 2003 Oxford-Cambridge Freshers' Varsity chess match; I won my game, but we lost the match by a point.
  • 2003-10-05: My two favourite chess short stories: The Last Round and Master Jacobson.
  • Chesswiki is hopefully going to be the Sensei's Library of chess.
  • I captained the Manchester Universities Chess Club through the 2002-2003 season; I still play chess regularly. The club had a great season, winning a county-wide cup and league promotion.
  • I'm working on a page for the 4.Qxd4 Sicilian, and why 4...e5 should bring a smile to the face of your white pieces. Update: I've written it, and it's in the June 2005 issue of Dragon. I've uploaded a copy of the article.


PGNGrapher is a simple tool to graph the chess rating of a single player over time, using the Date and WhiteElo/BlackElo tags of a PGN file. A common use is with game logs from a server such as FICS or ICC, since these logs include all the data needed to make a graph.

The Perl source code is available, along with some example output.
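The core of the extraction is small. Here's a Python sketch of the same idea (PGNGrapher itself is Perl, and its parsing surely differs):

```python
import re

def rating_series(pgn_text, player):
    """Extract sorted (date, elo) pairs for `player` from a PGN database,
    using only the Date, White/Black and WhiteElo/BlackElo tag pairs."""
    series = []
    for chunk in pgn_text.split("\n\n"):
        tags = dict(re.findall(r'\[(\w+) "([^"]*)"\]', chunk))
        for side in ("White", "Black"):
            if tags.get(side) == player and tags.get(side + "Elo"):
                series.append((tags["Date"], int(tags[side + "Elo"])))
    return sorted(series)
```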


Favourite Unicode Codepoints

  • U+2603 SNOWMAN (Tim Bagot) -- Image
  • U+20AB DONG SIGN (David Richerby) -- Name
  • U+0E5B THAI CHARACTER KHOMUT (Jon Chin) -- Satisfying swirliness
  • U+2620 SKULL AND CROSSBONES (David Malcolm) -- Avast! A fine code point it be!
  • U+3020 POSTAL MARK FACE (Nick Krempel) -- Bizarreness factor
  • U+203D INTERROBANG (Monroe Williams) -- Sounds like an attack in a bad kung-fu movie
  • U+2042 ASTERISM (Monroe Williams) -- Sounds like a debilitating condition

Site last modified Mon Jun 15 23:12:35 BST 2009