Urban Hafner

Ruby, Ruby on Rails, JavaScript programmer. Dabbling with Rust and agile methodologies on the side.

Iomrascálaí 0.3.0 Released

It’s been a while since I wrote about Iomrascálaí, the engine for the game of Go that I’m writing in Rust. I’ll try to post about it more often from now on, as I’ve finally found the motivation to work on it again.

So today I’d like to announce version 0.3.0! It’s been in the works since September and includes two big improvements:

  1. We’re now using the RAVE heuristic in selecting which tree leaf to investigate next.
  2. We use a set of 3x3 patterns to guide both the tree exploration and the move selection in the playouts.
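To make the first point a bit more concrete, here’s a sketch of one common way RAVE (AMAF) statistics get blended into a node’s value during tree selection. This is the standard textbook blending, not necessarily the exact formula Iomrascálaí uses, and the `equiv` “equivalence” constant is a tunable parameter:

```rust
// Hedged sketch: blend the regular Monte Carlo value of a node with its
// RAVE (all-moves-as-first) value. `equiv` controls how quickly we stop
// trusting the RAVE statistics as real visits accumulate.
fn rave_value(wins: f64, visits: f64, rave_wins: f64, rave_visits: f64, equiv: f64) -> f64 {
    let q = if visits > 0.0 { wins / visits } else { 0.0 };
    let q_rave = if rave_visits > 0.0 { rave_wins / rave_visits } else { 0.0 };
    // beta is close to 1 for rarely visited nodes (trust RAVE early) and
    // goes to 0 as real visit counts accumulate.
    let beta = (equiv / (3.0 * visits + equiv)).sqrt();
    (1.0 - beta) * q + beta * q_rave
}
```

During selection the engine then picks the child with the highest blended value (usually plus an exploration term).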

These two changes together led to a strength increase against GnuGo of ~20% on 9x9 and ~25% on 13x13! See the release notes and the change log for a detailed listing of what actually changed between 0.2.4 and 0.3.0.

The plan for 0.4

The main goal for 0.4 is to finally get close to equal strength with GnuGo on 19x19. A big task, but where’s the fun in picking easy ones? ;) To achieve this goal I’m planning to work on the following issues:

  1. Speed up the playouts! Just 100 playouts per second on 19x19 is really slow and it’s no wonder that the engine has no chance against GnuGo.
  2. Add criticality to the tree selection algorithm. Apparently both Pachi and CrazyStone have had success with adding it as an additional term to the formula.
  3. Tune the parameters using CLOP. I’ve moved the parameters into a config file so at least technically it’s now easy to run experiments and optimize the parameters.
  4. Continue searching when the results are unclear. Various engines have had success with searching longer than the allocated time when the best move isn’t clear (i.e. close to the second best move).
  5. Use larger patterns. Until now the engine only uses 3x3 patterns. It seems worthwhile to investigate if using larger patterns can help.
  6. Use a DCNN to guide the search. There’s a pre-trained neural network that’s in use by several engines to guide the search and it has improved the results significantly for them. It may be a good idea to investigate this, too.
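To make point 2 concrete, here’s a sketch of the criticality statistic as it’s usually described (the covariance between who owns a point at the end of a playout and who wins that playout). The struct and field names are mine for illustration, not Iomrascálaí’s:

```rust
// Hedged sketch of point criticality: counts are gathered over the playouts
// recorded for a node; a point is "critical" when owning it correlates
// strongly with winning the game.
struct PointStats {
    playouts: f64,        // total playouts recorded
    owner_is_winner: f64, // playouts where the point's final owner also won
    black_owns: f64,      // playouts where Black owns the point at the end
    black_wins: f64,      // playouts won by Black
}

impl PointStats {
    fn criticality(&self) -> f64 {
        let n = self.playouts;
        if n == 0.0 {
            return 0.0;
        }
        let v = self.owner_is_winner / n;
        let mu_owns = self.black_owns / n;
        let mu_wins = self.black_wins / n;
        // Covariance form: subtract what we'd expect if ownership and
        // winning were independent.
        v - (mu_owns * mu_wins + (1.0 - mu_owns) * (1.0 - mu_wins))
    }
}
```

The resulting value can then be added as an extra weighted term to the selection formula.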


However, there are two main challenges:

  1. Computing power! Running 500 benchmark games each on 9x9 and 13x13 already takes a few days, and adding 19x19 to the mix means changes will take even longer to benchmark.
  2. The libraries for running DCNN code efficiently (like Caffe or TensorFlow) have quite a lot of dependencies, and it’s not clear how easily they can be integrated with Rust. At the very least it will make compiling the bot more difficult for newcomers.

Like I said, quite a challenging plan! But I’m sure it will be a lot of fun. I’ll leave you with a link to a talk by Tobias Pfeiffer about computer Go.

Why Every Agile Team Should Include a Tester

A recent post on Jim Grey’s blog about his job hunt as a QA manager made me think about what my ideal test setup for an agile (SCRUM-like) team would be.

The thing is, having QA as a completely separate team that tests everything once the development team has “finished” the features and bug fixes for the next release is very much out of line with every agile methodology. Agile processes are (to me at least) about faster feedback and the ability to change direction quickly. Suppose you’re doing SCRUM with one-week sprints, you do a feature freeze every month (you know, management won’t let you release each week), and only then does testing of all the features and bug fixes start. That adds quite a lot of overhead: the QA pass takes a while because everything produced in a month needs to be tested; the developers have already moved on to new features and now have to switch back to fixing their old code (quite a mental overhead); and once everything has been tested, fixed, and tested again, it’s already 2-3 weeks later.

Using Launchd to Manage Long Running Processes on Mac OS X

I recently needed a long-running, user-defined process on my Mac. At first I thought about using Monit or Inspeqtor, but then Jérémy Lecour pointed out to me that I could just use the built-in launchd.

Launchd can automatically start processes on startup, monitor them, and restart them should they abort. Adding one yourself is rather easy: you create a file in ~/Library/LaunchAgents in the plist (XML property list) format. Here’s one of mine:
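The original listing didn’t survive here, so below is a hypothetical stand-in (the label com.example.mydaemon and the program path are made up): a minimal launchd agent that starts a program at login and restarts it if it dies.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Unique identifier for this agent (hypothetical example name) -->
    <key>Label</key>
    <string>com.example.mydaemon</string>
    <!-- The program to run, plus any arguments (hypothetical path) -->
    <key>ProgramArguments</key>
    <array>
        <string>/usr/local/bin/mydaemon</string>
    </array>
    <!-- Start the process when the agent is loaded (i.e. at login) -->
    <key>RunAtLoad</key>
    <true/>
    <!-- Restart the process if it exits -->
    <key>KeepAlive</key>
    <true/>
</dict>
</plist>
```

After saving the file you can activate it without logging out via `launchctl load ~/Library/LaunchAgents/com.example.mydaemon.plist`.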

Learning Rust: Tasks and Messages Part 2

The code examples of this blog post are available in the Git repository tasks-and-messages.

In part 1 of this series we started implementing our Pi calculation using the Monte Carlo method. We ended with code that works, but that still doesn’t return a value after exactly 10 seconds. In this part we’ll finish the implementation.

The problem with the previous implementation was that the worker() function had to wait for montecarlopi() to return before it could react to the message from main(). The solution should now be obvious: let’s put the montecarlopi() calculation in a separate task. Then worker() can listen to messages from both main() and montecarlopi() at the same time.
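In current Rust (which no longer has the “tasks” of that era; threads and std::sync::mpsc channels play that role now), the shape of the final solution might look roughly like this. The names mirror the post’s worker()/montecarlopi() split, and the xorshift RNG is a stand-in to keep the example dependency-free:

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Toy xorshift64 RNG so the example needs no external crates.
fn next(seed: &mut u64) -> f64 {
    *seed ^= *seed << 13;
    *seed ^= *seed >> 7;
    *seed ^= *seed << 17;
    (*seed % 1_000_000) as f64 / 1_000_000.0
}

// One chunk of the Monte Carlo estimate: how many of `samples` random
// points in the unit square fall inside the quarter circle.
fn pi_chunk(samples: u64, seed: &mut u64) -> u64 {
    let mut inside = 0;
    for _ in 0..samples {
        let (x, y) = (next(seed), next(seed));
        if x * x + y * y <= 1.0 {
            inside += 1;
        }
    }
    inside
}

fn main() {
    let (result_tx, result_rx) = mpsc::channel();
    let (stop_tx, stop_rx) = mpsc::channel::<()>();

    // The montecarlopi() equivalent: runs in its own thread and streams
    // partial counts, so nobody ever blocks on a long computation.
    thread::spawn(move || {
        let mut seed = 0x2545F4914F6CDD1Du64;
        loop {
            let inside = pi_chunk(10_000, &mut seed);
            if result_tx.send(inside).is_err() {
                break; // the worker is gone, stop calculating
            }
        }
    });

    // The worker() equivalent: aggregates partial results until main()
    // says stop, reacting to both channels.
    let worker = thread::spawn(move || {
        let (mut inside, mut total) = (0u64, 0u64);
        while stop_rx.try_recv().is_err() {
            if let Ok(n) = result_rx.recv_timeout(Duration::from_millis(10)) {
                inside += n;
                total += 10_000;
            }
        }
        4.0 * inside as f64 / total as f64
    });

    // Stand-in for the post's 10 second budget.
    thread::sleep(Duration::from_millis(200));
    stop_tx.send(()).unwrap();
    println!("pi is roughly {}", worker.join().unwrap());
}
```

One design note: std’s channels have no select, so the worker here polls the stop channel with try_recv and waits on results with a short recv_timeout; a crate like crossbeam-channel offers a real select! if you want to avoid polling.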

Learning Rust: Tasks and Messages Part 1

The code examples of this blog post are available in the Git repository tasks-and-messages.

In the previous Learning Rust blog post I promised to talk about runtime polymorphism next. Instead I’m starting what will probably become a multi-part series about concurrency. I’m doing this because I happen to need this stuff for Iomrascálaí, my main Rust project. Iomrascálaí is an AI for the game of Go. Go is a two-player game and, like Chess, it is played with a time limit during tournaments. So I need a way to tell the AI to search for the best move for the next N seconds and then return the result immediately.

Learning Rust: Compile Time Polymorphism

Coming from Ruby, I’m used to polymorphism being a big part of the language. After all, Ruby is a (mostly) object-oriented language. In a compiled language like Rust that emphasizes speed, runtime polymorphism isn’t as attractive because it slows down the code: there’s the overhead of selecting the right implementation of a method at runtime, and such calls can’t be inlined.

This is where compile time polymorphism comes in. Many times it is clear at compile time which concrete type we’re going to use in the program. We could write it down explicitly, but it is nicer (and more flexible) if the compiler can figure it out for us.
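A tiny sketch of what this looks like in Rust (generic names of my own choosing, not from the original post):

```rust
use std::ops::Add;

// Compile-time polymorphism via generics: the compiler monomorphizes
// `double`, generating a specialized (and inlinable) copy per concrete
// type, so there is no runtime dispatch cost.
fn double<T: Add<Output = T> + Copy>(x: T) -> T {
    x + x
}

fn main() {
    println!("{}", double(21));     // uses the i32 instantiation
    println!("{}", double(1.5f64)); // uses the f64 instantiation
}
```

Both calls look identical at the source level, but each compiles down to its own type-specific machine code.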

My Emacs Configuration

As I currently work on a distributed team and we’re trying to do more and more pair programming, I decided that it’s time to give Emacs a try again. Using tmate in combination with either Emacs or Vim seems to be the way to go, due to the lower latency compared to a proper screen-sharing solution.

Right now my Emacs configuration is rather basic, but I think it could be a good starting point for other people, too. This is why I made it available as a GitHub project.

If you have any problems with it, let me know. And, more importantly (at least for me personally), if you notice anything that I should do differently, I’d love to hear from you.

How to Test Rust on Travis CI

Working with Ruby on Rails in my projects, I’m used to running continuous integration on Travis CI. As this is free of charge for open source projects, I wanted to set it up for my Rust project Iomrascálaí, too.

At first I used the setup provided by Rust CI, but as the project page doesn’t seem to be working 100% anymore, and because the Debian package they provide of the Rust nightly snapshot strips the Rust version number for some reason, I decided to use the official nightly snapshots instead.
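For readers finding this later: Travis CI has since gained first-class Rust support, which makes the setup much simpler than what this post describes. Assuming that modern support (which wasn’t available when this was written), a minimal .travis.yml might look like this:

```yaml
# Minimal sketch using Travis CI's built-in Rust support (added after
# this post was written); `rust: nightly` pulls the official nightly.
language: rust
rust:
  - nightly
script:
  - cargo build --verbose
  - cargo test --verbose
```

The keys shown are standard Travis configuration; the nightly toolchain is requested because Iomrascálaí tracked nightly Rust at the time.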