Urban Hafner

Ruby, Ruby on Rails, JavaScript programmer. Dabbling with Rust and agile methodologies on the side.

Stainless 0.1.5 Released


Today version 0.1.5 of the stainless crate was released. It’s been in the making for a while now, mainly because the original author (Jonathan Reem) has been busy with other things. But he graciously allowed me to help out, as I like stainless a lot. So you can expect new releases more often from now on!

The highlights of this release are:

  1. The ability to disable tests by using ignore (see the sketch below).
  2. Working benchmarks! They’ve been broken for a while now.
  3. Matching on the error message for failing tests.
  4. Better documentation. The README was completely rewritten.
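
Roughly, disabling a test and writing a benchmark look something like this inside a describe! block. The exact keyword forms (especially for ignore) may differ slightly from what ships in 0.1.5, so check the README for the definitive syntax and full examples:

describe! examples {
    // Disable a test without deleting it.
    ignore "a test we want to skip for now" {
        assert_eq!(2, 1 + 1);
    }

    // Benchmarks work again in 0.1.5.
    bench "a trivial benchmark" (bencher) {
        bencher.iter(|| 2 * 2)
    }
}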

Feel free to open new issues on GitHub if you think there’s stuff missing from this crate.

Rust Testing With Stainless


A recent discussion in the issues of stainless prompted me to write a small blog post that explains the basics of testing in Rust using stainless. Note that stainless only works with the nightly Rust compiler as it requires compiler plugins.

First of all, let’s set up the project. We will build a library and write unit as well as integration tests for it (code by xetra11). Here’s the Cargo.toml file:

Cargo.toml
[package]
name = "renderay_rs"
version = "0.0.1"
authors = ["xetra11 <falke_88@hotmail.com>"]

[lib]
path = "src/renderay_core.rs"

[dependencies]
stainless = "*"

So now let’s look at the main entry point of the library. The code is just for illustration purposes and we don’t really care what it does. We do, however, care about the first three lines.

#![feature(plugin)] tells the Rust compiler to turn on support for compiler plugins. This is needed because stainless is a compiler plugin.

The line after that is a bit more complicated. It first checks whether we are currently compiling for testing (e.g. running cargo test). If that is the case, it adds the line #![plugin(stainless)], which enables stainless. If we’re not compiling for testing, it does nothing, i.e. we don’t enable stainless when compiling normally (e.g. when running cargo build). See this blog post for an in-depth explanation of cfg_attr.
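
If you haven’t seen cfg_attr before, here’s a tiny, unrelated example of the same mechanism (the Point struct is just made up for illustration): it derives Debug only when compiling tests, so the normal build doesn’t carry the attribute at all.

// Under `cargo test` this expands to #[derive(Debug)]; in a normal
// `cargo build` the attribute disappears entirely.
#[cfg_attr(test, derive(Debug))]
pub struct Point {
    pub x: i32,
    pub y: i32,
}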

And then we define a submodule called test. This is where we will write our unit tests.

src/renderay_core.rs
#![feature(plugin)]
#![cfg_attr(test, plugin(stainless))]

mod test;

pub struct Canvas {
    width: usize,
    height: usize,
    array: Vec<char>
}

impl Canvas {

    pub fn new(width: usize, height: usize, array: Vec<char>) -> Canvas {
        Canvas {
            width: width,
            height: height,
            array: array,
        }
    }

    pub fn array(&self) -> &Vec<char> {
        &self.array
    }

}

pub struct CanvasRenderer<'a> {
    canvas: &'a mut Canvas
}

impl<'a> CanvasRenderer<'a> {
    pub fn new(canvas: &'a mut Canvas) -> CanvasRenderer {
        CanvasRenderer {
            canvas: canvas
        }
    }

    pub fn render_point(&mut self, fill_symbol: char, pos_x: usize, pos_y: usize) {
        let canvas = &mut self.canvas;
        let mut array_to_fill = &mut canvas.array;
        let max_width: usize = canvas.width;
        let max_height: usize = canvas.height;
        let pos_to_draw = pos_x * pos_y;

        if pos_x > max_width || pos_y > max_height {
            panic!("Coordinates are out of array bounds")
        }

        array_to_fill[pos_to_draw] = fill_symbol;

    }
}

Alright, let’s have a look at the unit tests. First we mark the module as a test-only module (it doesn’t need to be compiled in normal builds). Then we add the use declarations for the things we want to use in our tests. Due to implementation details of stainless these need to be pub use, and they also need to be outside of the describe! blocks.

And then we come to the actual constructs added by stainless: describe!, before_each, and it. If you know RSpec, this will look very familiar. it defines an individual test, describe! groups tests, and before_each is executed before each test in a group.

If you look closely you’ll notice that, because the test module is a submodule of the code we’re testing, we have access to private functions and private struct fields.

src/test.rs
#![cfg(test)]

pub use super::CanvasRenderer;
pub use super::Canvas;

describe! canvas_renderer {

    before_each {
        let mut canvas = Canvas {
            width: 10,
            height: 10,
            array: vec!['x';10*10],
        };
    }

    it "should fill given char at given coords" {
        {
            let mut renderer: CanvasRenderer = CanvasRenderer::new(&mut canvas);
            renderer.render_point('x', 3,3);
        }
        assert_eq!('x', canvas.array[3*3]);
    }
}

Oh, and as we’re writing a library we should of course also write integration tests. These go into the tests/ folder of the project. They look similar to our unit tests, but a few things are different:

  1. We can just use #![plugin(stainless)] as we will never compile this code outside of our tests.
  2. We need to add the library we’re building as an external crate (through extern crate renderay_rs;) as this is a separate executable.
  3. We cannot use private functions or struct fields here. So we need to use Canvas::new and a getter for the array.

tests/render_point.rs
#![feature(plugin)]
#![plugin(stainless)]

extern crate renderay_rs;

pub use renderay_rs::CanvasRenderer;
pub use renderay_rs::Canvas;

describe! integration_test {

    before_each {
        let mut canvas = Canvas::new(10, 10, vec!['x';10*10]);
    }

    it "should fill given char at given coords" {
        {
            let mut renderer: CanvasRenderer = CanvasRenderer::new(&mut canvas);
            renderer.render_point('x', 3,3);
        }
        assert_eq!('x', canvas.array()[3*3]);
    }

}

And running the tests looks like this:

uh@macaron:~/renderay_rs$ cargo test
    Running target/debug/render_point-f60500163e82a187

running 1 test
test integration_test::should_fill_given_char_at_given_coords ... ok

test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured

    Running target/debug/renderay_rs-42155898cc4eb950

running 1 test
test test::canvas_renderer::should_fill_given_char_at_given_coords ... ok

test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured

    Doc-tests renderay_rs

running 0 tests

test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured

Iomrascálaí 0.3.0 Released


It’s been a while since I wrote about Iomrascálaí, the engine for the game of Go I’m writing in Rust. I will try to do so a bit more often from now on, as I’ve finally found the motivation to work on it again.

So today I’d like to announce version 0.3.0! It’s been in the works since September and includes two big improvements:

  1. We’re now using the RAVE heuristic when selecting which tree leaf to investigate next (see the sketch after this list).
  2. We use a set of 3x3 patterns to guide both the tree exploration and the move selection in the playouts.
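
If you haven’t come across RAVE before, here’s a minimal sketch of how a RAVE-weighted node value is typically computed in a Monte Carlo tree search engine. The struct, the field names, and the beta schedule below are illustrative assumptions, not Iomrascálaí’s actual code:

struct Node {
    wins: f64,        // wins recorded by regular playouts through this node
    plays: f64,       // number of regular playouts through this node
    rave_wins: f64,   // wins recorded by the all-moves-as-first (AMAF) statistics
    rave_plays: f64,  // number of AMAF samples
}

impl Node {
    // Blend the plain Monte Carlo win rate with the RAVE (AMAF) win rate.
    // Early on beta is close to 1, so the cheap but biased RAVE estimate
    // dominates; as real playouts accumulate, the weight shifts towards the
    // true win rate.
    fn rave_value(&self, k: f64) -> f64 {
        let beta = (k / (3.0 * self.plays + k)).sqrt();
        let mc = if self.plays > 0.0 { self.wins / self.plays } else { 0.0 };
        let rave = if self.rave_plays > 0.0 { self.rave_wins / self.rave_plays } else { 0.0 };
        (1.0 - beta) * mc + beta * rave
    }
}

fn main() {
    let node = Node { wins: 6.0, plays: 10.0, rave_wins: 40.0, rave_plays: 100.0 };
    println!("blended value: {:.3}", node.rave_value(1000.0));
}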

Together, these two changes led to a strength increase against GnuGo of roughly 20% on 9x9 and 25% on 13x13! See the release notes and the change log for detailed listings of what actually changed between 0.2.4 and 0.3.0.

The plan for 0.4

The main goal for 0.4 is to finally get close to equal strength with GnuGo on 19x19. A big task, but where’s the fun in picking easy tasks? ;) To achieve this goal I’m planning to work on the following issues:

  1. Speed up the playouts! Just 100 playouts per second on 19x19 is really slow and it’s no wonder that the engine has no chance against GnuGo.
  2. Add criticality to the tree selection algorithm. Apparently both Pachi and CrazyStone have had success with adding this as an additional term to the formula.
  3. Tune the parameters using CLOP. I’ve moved the parameters into a config file so at least technically it’s now easy to run experiments and optimize the parameters.
  4. Continue searching when the results are unclear. Various engines have had success with searching longer than the allocated time when the best move isn’t clear (i.e. close to the second best move).
  5. Use larger patterns. Until now the engine only uses 3x3 patterns. It seems worthwhile to investigate if using larger patterns can help.
  6. Use a DCNN to guide the search. There’s a pre-trained neural network that’s in use by several engines to guide the search and it has improved the results significantly for them. It may be a good idea to investigate this, too.

Challenges

  1. The main challenge is computation power! Running 500 games for 9x9 and 13x13 each already takes a few days. And adding 19x19 to the mix will mean that changes will take a long time to benchmark.
  2. The libraries to efficiently run the DCNN code (like Caffe or TensorFlow) have quite a lot of dependencies and it’s not clear how easily they can be integrated with Rust. It will at least make compiling the bot more difficult for newcomers.

Like I said, quite a challenging plan! But I’m sure it will be a lot of fun. I will leave you with a link to a talk by Tobias Pfeiffer about computer Go.

Why Every Agile Team Should Include a Tester


A recent post on Jim Grey’s blog about his job hunt as a QA manager made me think about what my ideal test setup for an agile (SCRUM-like) team would be.

The thing is, having QA as a completely separate team that tests everything only once the development team has “finished” the features and bug fixes for the next release is very much out of line with every agile methodology. Agile processes are (to me at least) about faster feedback and the ability to change direction quickly. Say you’re doing SCRUM with one-week sprints, do a feature freeze every month (you know, management won’t let you release each week), and only then start testing all the features and bug fixes. That adds quite a lot of overhead: the QA pass takes a while because a whole month’s worth of work needs to be tested, the developers have already moved on to new features and now have to switch back to fixing their old code (which is quite a mental overhead), and by the time everything has been tested, fixed, and tested again it’s already two to three weeks later.

Using Launchd to Manage Long Running Processes on Mac OS X


I recently needed a long-running, user-defined process on my Mac. At first I thought about using Monit or Inspeqtor, but then Jérémy Lecour pointed out to me that I could just use the built-in launchd.

Launchd can automatically start processes at startup, and it can monitor them and restart them should they abort. Adding one yourself is rather easy: you create a file in ~/Library/LaunchAgents in a certain format. Here’s one of mine:

Learning Rust: Tasks and Messages Part 2


The code examples of this blog post are available in the Git repository tasks-and-messages.

In part 1 of this series we started implementing our Pi calculation using the Monte Carlo method. We ended with code that works, but that still doesn’t return a value after exactly 10 seconds. In this part we’ll finish the implementation.

The problem with the previous implementation was that the worker() function had to wait for montecarlopi() to return before it could react to the message from main(). The solution should now be obvious: let’s put the montecarlopi() calculation in a separate task. Then worker() can listen to messages from both main() and montecarlopi() at the same time.
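
This isn’t the code from the post, but a rough sketch of the idea using today’s std::thread and mpsc channels (the original series used the task API of the time). The Msg enum and the placeholder estimate are invented for illustration:

use std::sync::mpsc;
use std::thread;
use std::time::Duration;

enum Msg {
    Stop,           // sent by the timer once the time limit expires
    Estimate(f64),  // intermediate result from the calculation thread
}

fn main() {
    let (tx, rx) = mpsc::channel();

    // The Monte Carlo calculation runs in its own thread and reports
    // intermediate estimates instead of blocking until it is done.
    let calc_tx = tx.clone();
    thread::spawn(move || loop {
        let estimate = 3.14159; // placeholder for one batch of samples
        if calc_tx.send(Msg::Estimate(estimate)).is_err() {
            break; // the receiver is gone, stop calculating
        }
        thread::sleep(Duration::from_millis(100));
    });

    // A second thread plays the role of main() asking for the result
    // after ten seconds.
    thread::spawn(move || {
        thread::sleep(Duration::from_secs(10));
        let _ = tx.send(Msg::Stop);
    });

    // The worker can now react to whichever message arrives first.
    let mut best = 0.0;
    for msg in rx {
        match msg {
            Msg::Estimate(pi) => best = pi,
            Msg::Stop => break,
        }
    }
    println!("pi is approximately {}", best);
}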

Learning Rust: Tasks and Messages Part 1


The code examples of this blog post are available in the Git repository tasks-and-messages.

In the previous Learning Rust blog post I promised to talk about runtime polymorphism next. Instead I’m starting what is probably going to become a multi-part series about concurrency. I’m doing this because I just happen to need this stuff for Iomrascálaí, my main Rust project. Iomrascálaí is an AI for the game of Go. Go is a two-player game and, like Chess, it is played with a time limit during tournaments. So I need a way to tell the AI to search for the best move for the next N seconds and then return the result immediately.

Learning Rust: Compile Time Polymorphism


Coming from Ruby, polymorphism is a big part of the language; after all, Ruby is a (mostly) object-oriented language. In a language like Rust, which is compiled and puts an emphasis on being fast, run-time polymorphism isn’t as attractive because it slows down the code: there is the overhead of selecting the right implementation of a method at run time, and there’s no way these calls can be inlined.

This is where compile time polymorphism comes in. Many times it is clear at compile time which concrete type we’re going to use in the program. We could write it down explicitly, but it is nicer (and more flexible) if the compiler can figure it out for us.
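
As a rough illustration (not code from the post, and written with today’s &dyn syntax), here’s a tiny sketch contrasting the two: the generic version is monomorphised at compile time, while the trait-object version goes through a vtable at run time.

trait Shape {
    fn area(&self) -> f64;
}

struct Circle {
    radius: f64,
}

impl Shape for Circle {
    fn area(&self) -> f64 {
        std::f64::consts::PI * self.radius * self.radius
    }
}

// Run-time polymorphism: the concrete type behind the reference is only
// known at run time, so `area` is dispatched through a vtable and the
// call can't be inlined.
fn print_area_dynamic(shape: &dyn Shape) {
    println!("{}", shape.area());
}

// Compile-time polymorphism: the compiler generates a specialised copy of
// this function for every concrete type it's used with, so the call can
// be resolved statically and inlined.
fn print_area_static<S: Shape>(shape: &S) {
    println!("{}", shape.area());
}

fn main() {
    let circle = Circle { radius: 1.0 };
    print_area_dynamic(&circle);
    print_area_static(&circle);
}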