Urban Hafner

Ruby, Ruby on Rails, JavaScript freelancer. Always looking for new projects.

Learning Rust: Tasks and Messages Part 2


The code examples of this blog post are available in the Git repository tasks-and-messages.

In part 1 of this series we started implementing our Pi calculation using the Monte Carlo method. We ended with code that works, but that still doesn’t return a value after exactly 10 seconds. In this part we’ll finish the implementation.

The problem with the previous implementation was that the worker() function had to wait for montecarlopi() to return before it could react to the message from main(). The solution should now be obvious: let’s put the montecarlopi() calculation in a separate task. Then worker() can listen to messages from both main() and montecarlopi() at the same time.

Here’s the code:

tasks-and-messages-3.rs
use std::io::Timer;
use std::rand::random;

fn montecarlopi(n: uint, sender: Sender<uint>) {
    println!("montecarlopi(): Starting calculation");
    let mut m = 0u;
    for _ in range(0u, n) {
        let x = random::<f32>();
        let y = random::<f32>();
        if (x*x + y*y) < 1.0 {
            m = m + 1;
        }
    }
    println!("montecarlopi(): Calculation done");
    sender.send_opt(m);
}

fn worker(receive_from_main: Receiver<uint>, send_to_main: Sender<f32>) {
    let mut m = 0u;
    let n = 10_000_000;
    let mut i = 0;
    let (sender, receive_from_montecarlo) = channel();
    let initial_sender = sender.clone();
    spawn(proc() {
        montecarlopi(n, initial_sender);
    });
    let mut timer = Timer::new().unwrap();
    loop {
        if receive_from_main.try_recv().is_ok() {
            println!("worker(): Aborting calculation due to signal from main");
            break;
        }
        let montecarlopi_result = receive_from_montecarlo.try_recv();
        if montecarlopi_result.is_ok() {
            m = m + montecarlopi_result.unwrap();
            i = i + 1;
            let sender_clone = sender.clone();
            spawn(proc() {
                montecarlopi(n, sender_clone);
            });
        }
        timer.sleep(50);
    }
    let val = 4.0 * m.to_f32().unwrap()/(n*i).to_f32().unwrap();
    send_to_main.send(val);
}

fn main() {
    let mut timer = Timer::new().unwrap();
    let (send_from_worker_to_main, receive_from_worker) = channel();
    let (send_from_main_to_worker, receive_from_main)   = channel();
    println!("main(): start calculation and wait 10s");
    spawn(proc() {
        worker(receive_from_main, send_from_worker_to_main);
    });
    timer.sleep(10_000);
    println!("main(): Sending abort to worker");
    send_from_main_to_worker.send(0);
    println!("main(): pi = {}", receive_from_worker.recv());
}

And here’s the output from running the program. As you can see from the last few lines, it’s now working as intended: first main() sends the signal, then worker() reacts immediately by sending the latest result to main(), and montecarlopi() is left to finish its calculation (but the result is discarded).

$ ./tasks-and-messages-3
main(): start calculation and wait 10s
montecarlopi(): Starting calculation
montecarlopi(): Calculation done
montecarlopi(): Starting calculation
montecarlopi(): Calculation done
montecarlopi(): Starting calculation
montecarlopi(): Calculation done
montecarlopi(): Starting calculation
montecarlopi(): Calculation done
montecarlopi(): Starting calculation
main(): Sending abort to worker
worker(): Aborting calculation due to signal from main
main(): pi = 3.141339
montecarlopi(): Calculation done

Now let’s go through the code and see what we had to change to make it work. First let’s look at montecarlopi():

fn montecarlopi(n: uint, sender: Sender<uint>) {
    println!("montecarlopi(): Starting calculation");
    let mut m = 0u;
    for _ in range(0u, n) {
        let x = random::<f32>();
        let y = random::<f32>();
        if (x*x + y*y) < 1.0 {
            m = m + 1;
        }
    }
    println!("montecarlopi(): Calculation done");
    sender.send_opt(m);
}

Now that it’s in its own task it has to communicate with the worker() function and send it the result of the calculation. This is as easy as passing in a Sender when calling it. The only interesting bit here is that we use send_opt() to send the result to the worker() instead of send(). This is because send() aborts the program when it can’t send the message (i.e. the receiver is gone). We need to handle this case as worker() may now return before montecarlopi() is done.
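As an aside for anyone reading this with a current Rust toolchain: send_opt() is long gone. In std::sync::mpsc, send() itself returns a Result, so the “receiver has hung up” case is handled by checking the return value. A small sketch (modern Rust, not the 2014 dialect used above):

```rust
use std::sync::mpsc::channel;

// Returns whether each send succeeded: first while the receiver is
// alive, second after it has been dropped.
fn send_results() -> (bool, bool) {
    let (sender, receiver) = channel();
    // While the receiver exists, send() succeeds (messages are buffered).
    let first = sender.send(1u32).is_ok();
    // Dropping the receiver simulates worker() having returned already.
    drop(receiver);
    // Now send() returns Err instead of aborting the whole program,
    // which is exactly the case send_opt() handled in old Rust.
    let second = sender.send(2u32).is_ok();
    (first, second)
}

fn main() {
    let (alive, gone) = send_results();
    println!("receiver alive: {}, receiver gone: {}", alive, gone);
}
```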

So far so good. Now we need to have a look at worker(). It needs to change so that it’s wired up correctly with the new montecarlopi().

let (sender, receive_from_montecarlo) = channel();
let initial_sender = sender.clone();
spawn(proc() {
    montecarlopi(n, initial_sender);
});
let mut timer = Timer::new().unwrap();
loop {
    if receive_from_main.try_recv().is_ok() {
        println!("worker(): Aborting calculation due to signal from main");
        break;
    }
    let montecarlopi_result = receive_from_montecarlo.try_recv();
    if montecarlopi_result.is_ok() {
        m = m + montecarlopi_result.unwrap();
        i = i + 1;
        let sender_clone = sender.clone();
        spawn(proc() {
            montecarlopi(n, sender_clone);
        });
    }
    timer.sleep(50);
}

First we need a new channel to communicate between worker() and montecarlopi(). Then we start the first calculation in a new task. And after that we enter the endless loop. In it we check for both signals from main() and results from montecarlopi(). If there’s a message from main() it means we’re done and we exit the loop. If there’s a message from montecarlopi() it means that the calculation is done. We then update our best guess of Pi and start another calculation.

The concept used here in worker() isn’t that complex. The most difficult part for me to get right was the setup of the channel. You can see that we need to pass a copy of the sender: not only does montecarlopi() take ownership of it, but so does proc(). This is by design, so that Rust can safely move the proc() and all the data associated with it to a different task. And of course the channel has to be defined outside of the loop, so that all spawned tasks send their results back to the same receiver.
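The same wiring translates almost one-to-one to modern Rust, where thread::spawn plus a move closure replaces spawn(proc() { ... }). A sketch, with a deterministic stand-in for the actual calculation (run_chunk and its body are my invention, purely for illustration):

```rust
use std::sync::mpsc::{channel, Sender};
use std::thread;

// Stand-in for montecarlopi(); deterministic so the sketch stays
// dependency-free. The name run_chunk is mine, not from the post.
fn run_chunk(n: u64, sender: Sender<u64>) {
    let hits = n / 2; // placeholder for the real Monte Carlo count
    // Ignore a failed send: the receiver may be gone, like send_opt().
    let _ = sender.send(hits);
}

// Spawns `workers` tasks that all report back over clones of one Sender.
fn collect_total(workers: usize, n: u64) -> u64 {
    let (sender, receiver) = channel();
    for _ in 0..workers {
        // Each move closure takes ownership of its own clone, just like
        // each proc() did in the original code.
        let sender_clone = sender.clone();
        thread::spawn(move || run_chunk(n, sender_clone));
    }
    // Drop the original Sender so the iterator below ends once every
    // clone has been dropped by its thread.
    drop(sender);
    receiver.iter().sum()
}

fn main() {
    println!("total hits: {}", collect_total(3, 1_000));
}
```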

And this is it for this post! In the next part we’ll have a look at how we can simplify this design. I don’t know about you, but it took me quite a while to get this design right. I can’t imagine using it like this in production code.

Learning Rust: Tasks and Messages Part 1


The code examples of this blog post are available in the Git repository tasks-and-messages.

In the previous Learning Rust blog post I promised to talk about runtime polymorphism next. Instead I’m starting what is probably going to become a multi-part series about concurrency. I’m doing this as I just happen to need this stuff for Iomrascálaí, my main Rust project. Iomrascálaí is an AI for the game of Go. Go is a two-player game and, like Chess, it is played with a time limit during tournaments. So I need a way to tell the AI to search for the best move for the next N seconds and then return the result immediately.

Explaining how the AI works is out of the scope of this blog post. The only thing you need to know here is that it is essentially an endless loop that does some computation, and the longer it can run, the better the result will be. Unfortunately each iteration of the loop is rather long, so we need to make sure we can return a result while we’re still computing that iteration. This is where concurrency comes in handy: what if we could run the iteration in a separate Rust task? Then we could just return the result of the previous iteration if needed.

But enough theory, let’s get going. As we can’t just implement a whole Go AI for this blog post we need to find a simpler problem that has the property that it returns a better value the longer it runs. The simplest I could think of is calculating the value of Pi using the Monte Carlo method. Here’s a simple implementation of it:

tasks-and-messages-1.rs
use std::rand::random;

fn montecarlopi(n: uint) -> f32 {
    let mut m = 0u;
    for _ in range(0u, n) {
        let x = random::<f32>();
        let y = random::<f32>();
        if (x*x + y*y) < 1.0 {
            m = m + 1;
        }
    }
    4.0 * m.to_f32().unwrap()/n.to_f32().unwrap()
}

fn main() {
    println!("For       1000 random drawings pi = {}", montecarlopi(1000));
    println!("For      10000 random drawings pi = {}", montecarlopi(10000));
    println!("For     100000 random drawings pi = {}", montecarlopi(100000));
    println!("For    1000000 random drawings pi = {}", montecarlopi(1000000));
    println!("For   10000000 random drawings pi = {}", montecarlopi(10000000));
}

If you run this you’ll see that the value of pi calculated by this function improves with the number of random drawings:

uh@croissant:~/Personal/rust$ ./tasks-and-messages-1
For       1000 random drawings pi = 3.132
For      10000 random drawings pi = 3.1428
For     100000 random drawings pi = 3.14416
For    1000000 random drawings pi = 3.141072
For   10000000 random drawings pi = 3.141082
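If you want to follow along on a current toolchain, std::rand no longer exists; random numbers now come from the external rand crate. To keep a sketch dependency-free, here is the same idea in modern Rust with a tiny hand-rolled linear congruential generator (the constants are Knuth’s, the structure is my choice, and the numbers it produces won’t match the output above):

```rust
// Minimal LCG so the sketch needs no external crate; the quality is
// modest but good enough to illustrate the Monte Carlo method.
struct Lcg(u64);

impl Lcg {
    fn next_f32(&mut self) -> f32 {
        self.0 = self
            .0
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        // Use the high 24 bits for a float in [0, 1).
        ((self.0 >> 40) as f32) / ((1u64 << 24) as f32)
    }
}

fn montecarlopi(n: u32, seed: u64) -> f32 {
    let mut rng = Lcg(seed);
    let mut m = 0u32;
    for _ in 0..n {
        let x = rng.next_f32();
        let y = rng.next_f32();
        // Count points that land inside the quarter circle.
        if x * x + y * y < 1.0 {
            m += 1;
        }
    }
    4.0 * (m as f32) / (n as f32)
}

fn main() {
    println!("pi ≈ {}", montecarlopi(1_000_000, 42));
}
```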

Next, let’s rewrite this program so that it runs for 10 seconds and prints out the value of pi. To do this we’ll run the simulation in chunks of 10 million drawings (around 2.2s on my machine) in a separate task and we’ll let the main task wait for ten seconds. Once the 10 seconds are over we’ll send a signal to the worker task and ask it to return a result.

This is of course a bit contrived, as we could just run the simulations synchronously and regularly check if 10 seconds have passed. But we’re trying to learn about tasks here, remember?

Creating a new task in Rust is as easy as calling spawn(proc() { ... }) with some code. This, however, only creates a new task; there’s no way yet to communicate with it. That’s where channels come in. A channel is a pair of objects: one end (the sender) can send data, and the other end (the receiver) can receive the data sent by the sender. Now let’s put it into action:
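(For readers on current Rust: spawn(proc() { ... }) became std::thread::spawn with a move closure, and channels live in std::sync::mpsc. A minimal modern sketch of the sender/receiver pair described above:)

```rust
use std::sync::mpsc::channel;
use std::thread;

// Sends a value from a spawned thread back to the caller, showing
// both halves of the channel pair.
fn roundtrip(value: u32) -> u32 {
    let (sender, receiver) = channel();
    // `move` hands ownership of `sender` to the new thread, playing
    // the role that proc() played in 2014-era Rust.
    thread::spawn(move || {
        sender.send(value).unwrap();
    });
    // recv() blocks until the value arrives (or the sender is dropped).
    receiver.recv().unwrap()
}

fn main() {
    println!("received {}", roundtrip(42));
}
```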

tasks-and-messages-2.rs
use std::io::Timer;
use std::rand::random;

fn montecarlopi(n: uint) -> uint {
    let mut m = 0u;
    for _ in range(0u, n) {
        let x = random::<f32>();
        let y = random::<f32>();
        if (x*x + y*y) < 1.0 {
            m = m + 1;
        }
    }
    m
}

fn worker(receiver: Receiver<uint>, sender: Sender<f32>) {
    let mut m = 0u;
    let n = 10_000_000;
    let mut i = 0;
    loop {
        if receiver.try_recv().is_ok() {
            println!("worker(): Aborting calculation due to signal from main");
            break;
        }
        println!("worker(): Starting calculation");
        m = m + montecarlopi(n);
        println!("worker(): Calculation done");
        i = i + 1;
    }
    let val = 4.0 * m.to_f32().unwrap()/(n*i).to_f32().unwrap();
    sender.send(val);
}

fn main() {
    let mut timer = Timer::new().unwrap();
    let (send_from_worker_to_main, receive_from_worker) = channel();
    let (send_from_main_to_worker, receive_from_main)   = channel();
    println!("main(): start calculation and wait 10s");
    spawn(proc() {
        worker(receive_from_main, send_from_worker_to_main);
    });
    timer.sleep(10_000);
    println!("main(): Sending abort to worker");
    send_from_main_to_worker.send(0);
    println!("main(): pi = {}", receive_from_worker.recv());
}

What we do is as follows: we open two channels. One channel is for worker() to send the value of pi to the main() function (send_from_worker_to_main and receive_from_worker). The other channel is for main() to send a signal to worker(), telling it to stop the calculation and return the result (send_from_main_to_worker and receive_from_main). To send something along a channel you call send(VALUE), and to receive something you call recv(). It is important to note that recv() is blocking: it waits for the next value to arrive. To either run a computation or abort, we need the non-blocking version, try_recv(), in worker(). try_recv() returns a Result which either wraps a real value (in which case is_ok() returns true) or an error (in which case is_ok() returns false).
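The blocking/non-blocking distinction survives unchanged in modern Rust’s std::sync::mpsc, just with Result-based error reporting. A quick sketch:

```rust
use std::sync::mpsc::channel;

// Shows both outcomes of try_recv(): an Err while the channel is
// empty, and Ok once a value is waiting.
fn try_recv_demo() -> (bool, u32) {
    let (sender, receiver) = channel();
    // Nothing has been sent yet, so try_recv() returns immediately
    // with an Err instead of blocking like recv() would.
    let empty_before = receiver.try_recv().is_err();
    sender.send(1u32).unwrap();
    // Now a value is waiting, so we get Ok(1).
    let value = receiver.try_recv().unwrap();
    (empty_before, value)
}

fn main() {
    let (was_empty, value) = try_recv_demo();
    println!("empty before send: {}, value after: {}", was_empty, value);
}
```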

Running this produces the following output:

uh@croissant:~/Personal/rust$ ./tasks-and-messages-2
main(): start calculation and wait 10s
worker(): Starting calculation
worker(): Calculation done
worker(): Starting calculation
worker(): Calculation done
worker(): Starting calculation
worker(): Calculation done
worker(): Starting calculation
worker(): Calculation done
worker(): Starting calculation
main(): Sending abort to worker
worker(): Calculation done
worker(): Aborting calculation due to signal from main
main(): pi = 3.141643

If you look closely at the result you will notice that we haven’t yet implemented everything as described. worker() only returns a result to main() once it has finished the current run of montecarlopi(). But what I originally described was that it should be possible to return a result while the computation is still running.

As this blog post has already gotten very long, we’ll end it here nevertheless. In the next installment we’ll finish implementing the program and maybe even start cleaning up the code.

Learning Rust: Compile Time Polymorphism


Coming from Ruby, polymorphism is a big part of the language; after all, Ruby is a (mostly) object oriented language. In a language like Rust, which is compiled and puts an emphasis on being fast, runtime polymorphism is less attractive because it slows down the code: there’s the overhead of selecting the right implementation of a method at runtime, and such calls can’t be inlined.

This is where compile time polymorphism comes in. Many times it is clear at compile time which concrete type we’re going to use in the program. We could write it down explicitly, but it is nicer (and more flexible) if the compiler can figure it out for us.

Below is a small example of how this works. Implementer1 and Implementer2 are two structs that both implement the trait TheTrait. The third struct, Container, should be set up in such a way that it can store any struct that implements TheTrait.

Setting this up correctly in Rust is a tiny bit complicated. First, you need to let Rust know that you want to use a type variable when defining Container. To do this you write Container<T> and then use T wherever you want to refer to this type in the struct definition. You will notice that this never mentions the trait TheTrait. The place where you actually restrict this variable to the trait is the concrete implementation of the Container struct. Note that the variable I’ve used in the definition of Container (called T) is different from the one I’ve used in the implementation (called X). Normally you wouldn’t do this, as it makes the code harder to understand, but I wanted to show that this is “just” a variable.

compile-time-polymorphic-structs.rs
#[deriving(Show)]
struct Implementer1;
#[deriving(Show)]
struct Implementer2;
#[deriving(Show)]
struct Container<T> { s: T }

trait TheTrait {}

impl TheTrait for Implementer1 {}
impl TheTrait for Implementer2 {}
impl<X: TheTrait> Container<X> {}

fn main() {
    let c1 = Container { s: Implementer1 };
    let c2 = Container { s: Implementer2 };
    println!("c1 = {}", c1);
    println!("c2 = {}", c2);
}

To prove that I haven’t told you any lies, let’s compile the program and run it. You’ll clearly see that c1 contains Implementer1 and c2 contains Implementer2.

$ rustc compile-time-polymorphic-structs.rs
$ ./compile-time-polymorphic-structs
c1 = Container { s: Implementer1 }
c2 = Container { s: Implementer2 }
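Pleasantly, this example needs only cosmetic changes to compile on modern Rust: #[deriving(Show)] became #[derive(Debug)], and such types are printed with {:?} instead of {}. A sketch:

```rust
#[derive(Debug)]
struct Implementer1;
#[derive(Debug)]
struct Implementer2;
#[derive(Debug)]
struct Container<T> { s: T }

trait TheTrait {}

impl TheTrait for Implementer1 {}
impl TheTrait for Implementer2 {}
// The trait bound still lives on the impl, exactly as in the post.
impl<X: TheTrait> Container<X> {}

fn main() {
    let c1 = Container { s: Implementer1 };
    let c2 = Container { s: Implementer2 };
    // Derived Debug output matches the original post's Show output.
    println!("c1 = {:?}", c1);
    println!("c2 = {:?}", c2);
}
```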

Next time we’ll talk about how to do actual runtime polymorphism in Rust. After all it’s not always possible to know the type at compile time!

My Emacs Configuration


As I currently work on a distributed team and we’re trying to do more and more pair programming I decided that it’s time to give Emacs a try again. Using tmate in combination with either Emacs or Vim seems to be the way to go due to the lower latency than a proper screen sharing solution.

Right now my Emacs configuration is rather basic, but I think it could be a good starting point for other people, too. This is why I made it available as a GitHub project.

If you have any problems with it, let me know. More importantly (at least for me personally), if you notice anything that I should do differently, I’d love to hear from you.

How to Test Rust on Travis CI


Working with Ruby on Rails in my projects I’m used to running continuous integration on Travis CI. As this is free of charge for open source projects, I wanted to set it up for my Rust project Iomrascálaí, too.

At first I used the setup provided by Rust CI. But as the project page doesn’t seem to be working 100% anymore, and because the Debian package they provide of the Rust nightly snapshot for some reason strips the Rust version number, I decided to use the official nightly snapshots instead.

It was actually quite easy to do. If you want to test your Rust project on Travis CI yourself, just drop the file below into your project folder and adjust the last line to run your tests!

.travis.yml
language: c
install:
  - curl -O http://static.rust-lang.org/dist/rust-nightly-x86_64-unknown-linux-gnu.tar.gz
  - tar xfz rust-nightly-x86_64-unknown-linux-gnu.tar.gz
  - (cd rust-nightly-x86_64-unknown-linux-gnu/ && sudo ./install.sh)
script:
  - rustc --version
  - make
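Travis CI later gained built-in Rust support, so these days (as far as I know; this wasn’t an option in 2014) the manual download can be replaced by simply declaring the language:

```yaml
language: rust
script:
  - rustc --version
  - make
```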

Introducing Iomrascálaí


or “Help me learn Rust by pairing with me”

After years working in interpreted languages (Ruby, JavaScript) I recently discovered Rust. As I sadly don’t have the opportunity to use Rust directly in a client project, I decided to come up with a toy project to help me learn the language.

As Rust is supposedly good for programs that actually need raw speed (unlike the web apps I generally write), I decided to come back to one of my old-time favourites: an artificial intelligence for the game of Go.

I had already tried writing a few of those in recent years (including helping Jason House with his bot written in D), but every time I had the problem that working alone wasn’t very motivating so I never managed to produce a working program.

To combat this I’m trying to pair program with people (i.e. you) to keep me going. So if you’re interested in either learning Rust with me or writing an AI for the game of Go, please get in touch!

Currently (as of April 2014) the project is still in its infancy (actually, no code has been written yet), but please check out the repository and Trello board.

If you’re new to Rust I suggest the 30-minute Introduction to Rust, The Rust Language Tutorial, and Rust by Example as a start. If you’re new to the game of Go … well there’s a whole Wiki about it! And for computer go related content the best place is the computer-go mailing list.

So, please get in touch so that we can get this started!

Pair Program With Me!


For most of my professional life as a programmer I’ve been either working alone as a freelancer or in small teams that didn’t practice pair programming. To improve my skills I want to start to pair program regularly with other people. If you’re interested please contact me or just schedule a session on my dedicated calendar.

I’m open to almost any topic and programming language but I know Ruby and JavaScript (both in a web development context) best so that would be good starting point for me.

As I live in Grenoble, France we’re probably going to pair program remotely. I don’t have much experience in doing that so please bear with me being a bit slow and having technical difficulties :)

My Reading List


I always found it interesting to see what other people in our field are reading. In that vein I thought I’d share my reading list, too.

At this point it’s very much a work in progress and I’ll fill in more details as time goes on. Feel free to suggest new titles in the comments of this blog post or on the reading list page itself.

Book Review: Land of Lisp by Conrad Barski


Land of Lisp by Conrad Barski is the second book in the Ruby Rogues book club. As I enjoyed Eloquent Ruby (which was the first book we read) very much I thought I’d give that one a try, too. And of course I hadn’t used Lisp for a long time so I thought it would be a good refresher.

The first thing you see when you pick up the book is the awesome cover. The somewhat poorly drawn Lisp Alien that Conrad Barski created some years ago as the mascot of Lisp sets the tone for the book: This isn’t just another dry textbook that explains everything that Lisp (Common Lisp in this case) does. It contains drawings and you learn not by writing yet another calculator but by writing games.

Of course he starts with the basic elements of Lisp, so the first games are quite simple. But as the book progresses (he even covers macros and lazy programming) they get more complex, and in the end we’re presented with a board game with a GUI, played in the browser against three computer players!

Ideally you should follow along by typing in the code and experimenting a bit to really understand all the concepts. But up to a certain point in the book it’s also OK to just read along. That’s what I did, because I read the book in the evening on the couch or in bed. Of course I didn’t understand everything with this approach, but it worked well enough for me, as most code examples are explained in detail.

All in all it was a fun read, and it was great to see how you program in a language where code and data are more or less equal. I’m not sure how much of that will translate into my Ruby or JavaScript programming, but what the heck, I enjoyed it!