Urban Hafner

Ruby, Ruby on Rails, JavaScript freelancer. Always looking for new projects.

Learning Rust: Tasks and Messages Part 1


The code examples for this blog post are available in the Git repository tasks-and-messages.

In the previous Learning Rust blog post I promised to talk about runtime polymorphism next. Instead I’m starting what will probably become a multi-part series about concurrency. I’m doing this because I happen to need it for Iomrascálaí, my main Rust project. Iomrascálaí is an AI for the game of Go. Go is a two-player game and, like Chess, it is played with a time limit during tournaments. So I need a way to tell the AI to search for the best move for the next N seconds and then return the result immediately.

Explaining how the AI works is beyond the scope of this blog post. The only thing you need to know here is that it is essentially an endless loop that does some computation, and the longer it can run, the better the result will be. Unfortunately each iteration of the loop is rather long, so we need to make sure we can return a result while we’re still in the middle of an iteration. This is where concurrency comes in handy. What if we ran the iteration in a separate Rust task? Then we could simply return the result of the previous iteration if needed.

But enough theory, let’s get going. As we can’t implement a whole Go AI for this blog post, we need a simpler problem with the same property: the longer it runs, the better the result. The simplest one I could think of is calculating the value of Pi using the Monte Carlo method: we draw random points in the unit square and count how many fall inside the quarter circle of radius 1; that fraction approaches pi/4 as the number of drawings grows. Here’s a simple implementation of it:

tasks-and-messages-1.rs
use std::rand::random;

fn montecarlopi(n: uint) -> f32 {
    // Draw n random points in the unit square and count how many of them
    // fall inside the quarter circle of radius 1.
    let mut m = 0u;
    for _ in range(0u, n) {
        let x = random::<f32>();
        let y = random::<f32>();
        if (x*x + y*y) < 1.0 {
            m = m + 1;
        }
    }
    // m/n approximates pi/4, so multiplying by 4 gives an estimate of pi.
    4.0 * m.to_f32().unwrap()/n.to_f32().unwrap()
}

fn main() {
    println!("For       1000 random drawings pi = {}", montecarlopi(1000));
    println!("For      10000 random drawings pi = {}", montecarlopi(10000));
    println!("For     100000 random drawings pi = {}", montecarlopi(100000));
    println!("For    1000000 random drawings pi = {}", montecarlopi(1000000));
    println!("For   10000000 random drawings pi = {}", montecarlopi(10000000));
}

If you run this you’ll see that the value of pi calculated by this function improves with the number of random drawings:

uh@croissant:~/Personal/rust$ ./tasks-and-messages-1
For       1000 random drawings pi = 3.132
For      10000 random drawings pi = 3.1428
For     100000 random drawings pi = 3.14416
For    1000000 random drawings pi = 3.141072
For   10000000 random drawings pi = 3.141082

Next, let’s rewrite this program so that it runs for 10 seconds and then prints out the value of pi. To do this we’ll run the simulation in chunks of 10 million drawings (around 2.2s on my machine) in a separate task, and we’ll let the main task wait for 10 seconds. Once the 10 seconds are over we’ll send a signal to the worker task and ask it to return a result.

This is of course a bit contrived, as we could just run the simulations synchronously and regularly check whether 10 seconds have passed. But we’re trying to learn about tasks here, remember?

Creating a new task in Rust is as easy as calling spawn(proc() { ... }) with some code. This only creates a new task, though; there’s no way to communicate with it yet. That’s where channels come in. A channel is a pair of objects: one end (the sender) can send data and the other end (the receiver) can receive the data sent by the sender. Now let’s put it into action:

tasks-and-messages-2.rs
use std::io::Timer;
use std::rand::random;

fn montecarlopi(n: uint) -> uint {
    let mut m = 0u;
    for _ in range(0u, n) {
        let x = random::<f32>();
        let y = random::<f32>();
        if (x*x + y*y) < 1.0 {
            m = m + 1;
        }
    }
    m
}

fn worker(receiver: Receiver<uint>, sender: Sender<f32>) {
    let mut m = 0u;
    let n = 10_000_000;
    let mut i = 0;
    loop {
        // try_recv() doesn't block; it only returns Ok(..) if main() has
        // already sent the abort signal.
        if receiver.try_recv().is_ok() {
            println!("worker(): Aborting calculation due to signal from main");
            break;
        }
        println!("worker(): Starting calculation");
        m = m + montecarlopi(n);
        println!("worker(): Calculation done");
        i = i + 1;
    }
    let val = 4.0 * m.to_f32().unwrap()/(n*i).to_f32().unwrap();
    sender.send(val);
}

fn main() {
    let mut timer = Timer::new().unwrap();
    let (send_from_worker_to_main, receive_from_worker) = channel();
    let (send_from_main_to_worker, receive_from_main)   = channel();
    println!("main(): start calculation and wait 10s");
    spawn(proc() {
        worker(receive_from_main, send_from_worker_to_main);
    });
    timer.sleep(10_000); // 10,000 milliseconds = 10 seconds
    println!("main(): Sending abort to worker");
    send_from_main_to_worker.send(0);
    println!("main(): pi = {}", receive_from_worker.recv());
}

What we do is as follows: We open two channels. One channel is for worker() to send the value of pi to the main() function (send_from_worker_to_main and receive_from_worker). The other channel is for main() to send a signal to worker() telling it to stop the calculation and return the result (send_from_main_to_worker and receive_from_main).

To send something along a channel you just call send(VALUE), and to receive something you call recv(). It is important to note that recv() is blocking: it waits for the next value to arrive. Because worker() needs to either keep computing or abort, it has to use the non-blocking version, try_recv(). try_recv() returns a Result which either wraps a real value (in which case is_ok() returns true) or an error (in which case is_ok() returns false).

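As an aside, instead of just checking is_ok() we could match on that Result explicitly. Here’s a minimal sketch of how the check at the top of worker()’s loop could be written that way, targeting the same pre-1.0 Rust as the rest of this post (the helper name check_for_abort is made up for this example and isn’t part of the program above):

fn check_for_abort(receiver: &Receiver<uint>) -> bool {
    // try_recv() never blocks: it returns Ok(value) if a message is already
    // waiting in the channel and Err(..) if there is nothing to receive yet.
    match receiver.try_recv() {
        Ok(_)  => true,  // main() has sent the abort signal
        Err(_) => false  // no signal yet, keep calculating
    }
}

worker() would then call check_for_abort(&receiver) at the top of its loop instead of receiver.try_recv().is_ok().
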
Running tasks-and-messages-2.rs produces the following output:

uh@croissant:~/Personal/rust$ ./tasks-and-messages-2
main(): start calculation and wait 10s
worker(): Starting calculation
worker(): Calculation done
worker(): Starting calculation
worker(): Calculation done
worker(): Starting calculation
worker(): Calculation done
worker(): Starting calculation
worker(): Calculation done
worker(): Starting calculation
main(): Sending abort to worker
worker(): Calculation done
worker(): Aborting calculation due to signal from main
main(): pi = 3.141643

If you look closely at the output you will notice that we haven’t yet implemented everything as described. worker() only returns a result to main() once it has finished the current run of montecarlopi(). But what I originally described was that it should be possible to return a result while the computation is still running.

This blog post has already gotten quite long, so we’ll end it here. In the next installment we’ll finish implementing the program and maybe even start cleaning up the code.

Learning Rust: Compile Time Polymorphism


Coming from Ruby, I’m used to polymorphism being a big part of the language. After all, Ruby is a (mostly) object oriented language. In a language like Rust, which is compiled and has an emphasis on being fast, runtime polymorphism isn’t as attractive because it slows down the code: there’s the overhead of selecting the right implementation of a method at runtime, and there’s no way these calls can be inlined.

This is where compile-time polymorphism comes in. Often it is clear at compile time which concrete type we’re going to use in the program. We could write it down explicitly, but it is nicer (and more flexible) if the compiler can figure it out for us.

Below is a small example of how this works. Implementer1 and Implementer2 are two structs that both implement the trait TheTrait. The third struct, Container, should be set up in such a way that it can store any struct that implements TheTrait.

Setting this up correctly in Rust is a tiny bit complicated. First, you need to let Rust know that you want to use a type variable when defining Container. To do this you write Container<T> and then use T wherever you want to refer to this type in the struct definition. You will notice that this never mentions the trait TheTrait. The place where you actually restrict this variable to the trait is in the concrete implementation of the Container struct. Note that the variable I’ve used in the definition of Container (called T) is different from the one I’ve used in the implementation (called X). Normally you wouldn’t do this because it makes the code harder to understand, but I wanted to show that this really is “just” a variable.

compile-time-polymorphic-structs.rs
#[deriving(Show)]
struct Implementer1;
#[deriving(Show)]
struct Implementer2;
#[deriving(Show)]
struct Container<T> { s: T }

trait TheTrait {}

impl TheTrait for Implementer1 {}
impl TheTrait for Implementer2 {}
// The trait bound is declared here, on the impl, not in the struct definition.
impl<X: TheTrait> Container<X> {}

fn main() {
    let c1 = Container { s: Implementer1 };
    let c2 = Container { s: Implementer2 };
    println!("c1 = {}", c1);
    println!("c2 = {}", c2);
}

To prove that I haven’t told you any lies, let’s compile the program and run it. You’ll clearly see that c1 contains Implementer1 and c2 contains Implementer2.

$ rustc compile-time-polymorphic-struct.rs
$ ./compile-time-polymorphic-struct
c1 = Container { s: Implementer1 }
c2 = Container { s: Implementer2 }

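To see what the trait bound on the impl actually buys us, here’s a slightly extended sketch. It is my own variation, not code from the repository: TheTrait gains a made-up describe() method, and Container can call it on whatever it stores precisely because of the X: TheTrait bound.

struct Implementer1;
struct Implementer2;
struct Container<T> { s: T }

trait TheTrait {
    fn describe(&self) -> &'static str;
}

impl TheTrait for Implementer1 {
    fn describe(&self) -> &'static str { "I am Implementer1" }
}

impl TheTrait for Implementer2 {
    fn describe(&self) -> &'static str { "I am Implementer2" }
}

// The bound on the impl is what allows us to call TheTrait's methods on s.
impl<X: TheTrait> Container<X> {
    fn describe_contents(&self) -> &'static str { self.s.describe() }
}

fn main() {
    let c1 = Container { s: Implementer1 };
    let c2 = Container { s: Implementer2 };
    println!("c1: {}", c1.describe_contents());
    println!("c2: {}", c2.describe_contents());
}

Without the bound the call to self.s.describe() wouldn’t compile, because the compiler couldn’t know that every possible X has such a method.
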
Next time we’ll talk about how to do actual runtime polymorphism in Rust. After all, it’s not always possible to know the type at compile time!

My Emacs Configuration


As I currently work on a distributed team and we’re trying to do more and more pair programming, I decided that it’s time to give Emacs a try again. Using tmate in combination with either Emacs or Vim seems to be the way to go, due to the lower latency compared to a proper screen-sharing solution.

Right now my Emacs configuration is rather basic, but I think it could be a good starting point for other people, too. This is why I made it available as a GitHub project.

If you have any problems with it let me know and more importantly (at least for me personally) if you notice anything that I should do differently I’d love to hear from you.

How to Test Rust on Travis CI


Working with Ruby on Rails in my projects, I’m used to running continuous integration on Travis CI. As this is free of charge for open source projects, I wanted to set it up for my Rust project Iomrascálaí, too.

At first I used the setup provided by Rust CI, but the project page doesn’t seem to be working 100% anymore, and the Debian package they provide of the Rust nightly snapshot strips the Rust version number for some reason, so I decided to use the official nightly snapshots instead.

It was actually quite easy to do. If you want to test your Rust project on Travis CI yourself, just drop the .travis.yml below into your project folder and adjust the last line to run your tests!

.travis.yml
language: c
install:
  - curl -O http://static.rust-lang.org/dist/rust-nightly-x86_64-unknown-linux-gnu.tar.gz
  - tar xfz rust-nightly-x86_64-unknown-linux-gnu.tar.gz
  - (cd rust-nightly-x86_64-unknown-linux-gnu/ && sudo ./install.sh)
script:
  - rustc --version
  - make

Introducing Iomrascálaí


or “Help me learn Rust by pairing with me”

After years working in interpreted languages (Ruby, JavaScript) I recently discovered Rust. As I sadly don’t have the opportunity to use Rust directly in a client project, I decided to come up with a toy project to help me learn the language.

As Rust is supposedly good for programs that actually need raw speed (unlike the web apps I generally write), I decided to come back to one of my old-time favourites: an artificial intelligence for the game of Go.

I had already tried writing a few of those in recent years (including helping Jason House with his bot written in D), but every time I had the problem that working alone wasn’t very motivating, so I never managed to produce a working program.

To combat this I’m trying to pair program with people (i.e. you) to keep me going. So if you’re interested in either learning Rust with me or writing an AI for the game of Go, please get in touch!

Currently (as of April 2014) the project is still in its infancy (actually, no code has been written yet), but please check out the repository and Trello board.

If you’re new to Rust I suggest the 30-minute Introduction to Rust, The Rust Language Tutorial, and Rust by Example as a start. If you’re new to the game of Go … well there’s a whole Wiki about it! And for computer go related content the best place is the computer-go mailing list.

So, please get in touch so that we can get this started!

Pair Program With Me!


For most of my professional life as a programmer I’ve been either working alone as a freelancer or in small teams that didn’t practice pair programming. To improve my skills I want to start to pair program regularly with other people. If you’re interested please contact me or just schedule a session on my dedicated calendar.

I’m open to almost any topic and programming language, but I know Ruby and JavaScript (both in a web development context) best, so that would be a good starting point for me.

As I live in Grenoble, France, we’re probably going to pair program remotely. I don’t have much experience in doing that, so please bear with me being a bit slow and having technical difficulties :)

My Reading List


I always found it interesting to see what other people in our field are reading. In that vein I thought I’d share my reading list, too.

At this point it’s very much a work in progress and I’ll fill in more details as time goes on. Feel free to suggest new titles in the comments of this blog post or on the reading list page itself.

Book Review: Land of Lisp by Conrad Barski


Land of Lisp by Conrad Barski is the second book in the Ruby Rogues book club. As I enjoyed Eloquent Ruby (which was the first book we read) very much I thought I’d give that one a try, too. And of course I hadn’t used Lisp for a long time so I thought it would be a good refresher.

The first thing you see when you pick up the book is the awesome cover. The somewhat poorly drawn Lisp Alien that Conrad Barski created some years ago as the mascot of Lisp sets the tone for the book: This isn’t just another dry textbook that explains everything that Lisp (Common Lisp in this case) does. It contains drawings and you learn not by writing yet another calculator but by writing games.

Of course he starts with the basic elements of Lisp, so the first games are quite simple. But as the book progresses (he even covers macros and lazy programming) they get more complex, and in the end we’re presented with a board game with a GUI that you play in a browser against three computer players!

Ideally you should follow along by typing in the code and experimenting a bit to really understand all the concepts. But up to a certain point in the book it’s also OK to just read along. That’s what I did, because I read the book in the evening on the couch or in bed. Of course I didn’t understand everything with this approach, but it worked well enough for me as most code examples are explained in detail.

All in all it was a fun read, and it was great to see how you program in a language where code and data are more or less equal. I’m not sure how much of that will translate into my Ruby or JavaScript programming, but what the heck, I enjoyed it!

Using Sass-rails 3.1.5 Without the Asset Pipeline on Rails 3.1.4


I’m currently upgrading one of the Ruby on Rails apps I work on from Rails 3.0 to Rails 3.1. As it happens we’re using ActiveAdmin, which requires sass-rails on Rails 3.1. At the time of writing the latest version of sass-rails is 3.1.5, and it requires the asset pipeline to be enabled. But I don’t want to migrate away from Jammit at this time, so I have to disable the asset pipeline. With the asset pipeline disabled, however, the app can’t start because of sass-rails. So here’s what I needed to do to make it work.

config/environment.rb

I had to change it so that it looks like the code snippet below. Basically this fakes the asset pipeline for the benefit of sass-rails.

# Load the rails application
require File.expand_path('../application', __FILE__)

Webanalyzer::Application.assets = Struct.new(:context_class) do
  def append_path(*args); end
end.new(Class.new)

# Initialize the rails application
Webanalyzer::Application.initialize!

config/application.rb

Disable compilation of the assets alongside disabling the asset pipeline as a whole.

config.assets.enabled = false
config.assets.compile = false

Done!

These two small changes fixed sass-rails without the asset pipeline for now. Hopefully pull request #84 will be merged into sass-rails soon and a new version will be released so that this hack won’t be necessary. Until then, this is the most basic fix I could come up with.

Eloquent Ruby – the Final Verdict


This will be my final post on Eloquent Ruby by Russ Olsen. All in all I really liked the book, and I think that if you’re serious about being a Ruby developer this book should be in your library.

It is a valuable book for several reasons. First of all, it is as close as humanly possible to a definitive style guide for programming in Ruby. This doesn’t just cover formatting your code and if or when to use camel case, but more importantly when and how to use certain parts of the language. Secondly, it contains a lot of best practices, especially on the use of modules, classes and meta-programming. And thirdly, it explains some of the more advanced parts of Ruby extremely well. For example, I never knew the difference between lambda and Proc.new, or what hooks Ruby provides to help you with meta-programming.

So please do yourself a favor and read the book. More than once!