The code examples of this blog post are available in the Git repository tasks-and-messages.
In the previous Learning Rust blog post I promised to talk about runtime polymorphism next. Instead, I’m starting what will probably become a multi-part series about concurrency. I’m doing this because I happen to need this stuff for Iomrascálaí, my main Rust project. Iomrascálaí is an AI for the game of Go. Go is a two-player game and, like Chess, it is played with a time limit during tournaments. So I need a way to tell the AI to search for the best move for the next N seconds and then return the result immediately.
Explaining how the AI works is out of the scope of this blog post. The only thing you need to know here is that it is essentially an endless loop that does some computation, and the longer it can run, the better the result will be. Unfortunately, each iteration of the loop is rather long, so we need to make sure we can return a result while we’re in the middle of an iteration. This is where concurrency comes in handy. What if we could run the iteration in a separate Rust task? Then we could simply return the result of the previous iteration if needed.
But enough theory, let’s get going. As we can’t just implement a whole Go AI for this blog post we need to find a simpler problem that has the property that it returns a better value the longer it runs. The simplest I could think of is calculating the value of Pi using the Monte Carlo method. Here’s a simple implementation of it:
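Roughly, such an implementation might look like the following sketch (written against current Rust; the tiny xorshift generator is a stand-in so the example needs no external random-number crate):

```rust
// montecarlopi estimates pi by drawing n random points in the unit
// square and counting how many fall inside the quarter circle of
// radius 1; that fraction approaches pi/4.
fn montecarlopi(n: u64) -> f64 {
    // Minimal xorshift64 generator; a real program would use a proper
    // random number library instead.
    let mut state: u64 = 0x9E3779B97F4A7C15;
    let mut rand01 = move || {
        state ^= state << 13;
        state ^= state >> 7;
        state ^= state << 17;
        // Use the top 53 bits to build a float in [0, 1).
        (state >> 11) as f64 / (1u64 << 53) as f64
    };
    let mut inside = 0u64;
    for _ in 0..n {
        let x = rand01();
        let y = rand01();
        if x * x + y * y <= 1.0 {
            inside += 1;
        }
    }
    // The hit ratio approximates pi/4, so scale it back up.
    4.0 * inside as f64 / n as f64
}

fn main() {
    // The estimate improves as the number of drawings grows.
    for exp in 1..8u32 {
        let n = 10u64.pow(exp);
        println!("{} drawings: pi is approximately {}", n, montecarlopi(n));
    }
}
```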
If you run this you’ll see that the value of pi calculated by this function improves with the number of random drawings.
Next, let’s rewrite the program so that it runs for 10 seconds and then prints the value of pi. To do this, we’ll run the simulation in chunks of 10 million drawings (around 2.2 s on my machine) in a separate task while the main task waits for ten seconds. Once the 10 seconds are over, we’ll send a signal to the worker task asking it to return a result.
This is of course a bit contrived, as we could just run the simulation synchronously and regularly check whether 10 seconds have passed. But we’re trying to learn about tasks here, remember?
Creating a new task in Rust is as easy as calling spawn(proc() { ... }) with some code. This only creates a new task, however; there is no way to communicate with it yet. That’s where channels come in. A channel is a pair of objects: one end (the sender) can send data, and the other end (the receiver) receives the data sent by the sender. Now let’s put it into action:
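A sketch of this step is below. The post predates Rust 1.0, whose native tasks and spawn(proc() { ... }) no longer exist, so this version uses today’s std::thread and std::sync::mpsc instead; the channel names match the ones discussed in the text, the run() helper and the xorshift generator are my own additions to keep the sketch self-contained and testable:

```rust
use std::sync::mpsc::channel;
use std::thread;
use std::time::Duration;

// Minimal xorshift64 generator so the sketch needs no external crates.
struct XorShift(u64);

impl XorShift {
    fn next01(&mut self) -> f64 {
        self.0 ^= self.0 << 13;
        self.0 ^= self.0 >> 7;
        self.0 ^= self.0 << 17;
        (self.0 >> 11) as f64 / (1u64 << 53) as f64
    }
}

// One chunk of the Monte Carlo simulation. The generator is passed in
// so that successive chunks draw different points.
fn montecarlopi(n: u64, rng: &mut XorShift) -> f64 {
    let mut inside = 0u64;
    for _ in 0..n {
        let (x, y) = (rng.next01(), rng.next01());
        if x * x + y * y <= 1.0 {
            inside += 1;
        }
    }
    4.0 * inside as f64 / n as f64
}

// Run chunks of the simulation in a worker task for roughly `seconds`
// seconds, then signal the worker and collect its latest estimate.
fn run(seconds: u64, chunk: u64) -> f64 {
    let (send_from_worker_to_main, receive_from_worker) = channel();
    let (send_from_main_to_worker, receive_from_main) = channel::<()>();
    thread::spawn(move || {
        let mut rng = XorShift(0x9E3779B97F4A7C15);
        loop {
            let pi = montecarlopi(chunk, &mut rng);
            // Non-blocking check: has main() asked us to stop yet?
            if receive_from_main.try_recv().is_ok() {
                send_from_worker_to_main.send(pi).unwrap();
                return;
            }
        }
    });
    thread::sleep(Duration::from_secs(seconds));
    send_from_main_to_worker.send(()).unwrap();
    // recv() blocks until the worker finishes its current chunk and
    // replies with its latest estimate.
    receive_from_worker.recv().unwrap()
}

fn main() {
    println!("pi is approximately {}", run(10, 10_000_000));
}
```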
What we do is as follows: we open two channels. One channel is for worker() to send the value of pi to the main() function (send_from_worker_to_main and receive_from_worker). The other channel is for main() to send a signal to worker() telling it to stop the calculation and return the result (send_from_main_to_worker and receive_from_main). To send something along a channel you call send(VALUE), and to receive something you call recv(). It is important to note that recv() is blocking: it waits for the next value to arrive. To either run a computation or abort, we need the non-blocking version, try_recv(), in worker(). try_recv() returns a Result, which either wraps a real value (in which case is_ok() returns true) or an error (in which case is_ok() returns false).
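As a quick aside, the difference between the blocking and non-blocking calls can be demonstrated in isolation (again using the current std::sync::mpsc API rather than the pre-1.0 one the post was written against):

```rust
use std::sync::mpsc::channel;

fn main() {
    let (sender, receiver) = channel();

    // Nothing has been sent yet, so try_recv() returns an error
    // immediately instead of blocking.
    assert!(receiver.try_recv().is_err());

    sender.send(42).unwrap();

    // Now a value is waiting: try_recv() returns a Result for which
    // is_ok() is true, wrapping the sent value.
    let result = receiver.try_recv();
    assert!(result.is_ok());
    assert_eq!(result.unwrap(), 42);

    // receiver.recv(), by contrast, would block here until the sender
    // sent another value.
}
```

In worker() we rely on exactly this behavior: an error from try_recv() simply means no stop signal has arrived yet, so the computation continues.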
Running the full program prints the value of pi after roughly ten seconds.
If you look closely at the result you will notice that we haven’t yet implemented everything as described. The worker() only returns a result to main() once it has finished the current run of montecarlopi(). But what I originally described was that it should be possible to return a result while the computation is still running.
As this blog post has already gotten very long, we’ll end it here nevertheless. In the next installment, we’ll finish implementing the program and maybe even start cleaning up the code.