This release brings a number of highlights. Feel free to open new issues on GitHub if you think there's stuff missing from this crate.
First of all, let's set up the project. We will build a library and write unit as well as integration tests for it (code by xetra11). Here's the Cargo.toml file:
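The Cargo.toml itself didn't survive in this copy of the post, but a minimal sketch of such a file looks roughly like this (the crate name is taken from the integration tests further down; the version number and the stainless source are assumptions on my part):

```toml
[package]

name = "renderay_rs"
version = "0.0.1"
authors = ["xetra11"]

[dev-dependencies.stainless]

git = "https://github.com/reem/stainless.git"
```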
So now let's look at the main entry point of the library. The code is just for illustration purposes and we don't really care what it does. We do however care about the first three lines. The first, #![feature(plugin)], tells the Rust compiler to turn on support for compiler plugins. As stainless is a compiler plugin, this is needed.
The line after that is a bit more complicated. It does the following: it first checks if we are currently compiling for testing (e.g. running cargo test). If that is the case, it adds the line #![plugin(stainless)], which enables stainless. If we don't compile for testing, it does nothing, i.e. we don't enable stainless when compiling normally (e.g. when running cargo build). See this blog post for an in-depth explanation of cfg_attr.
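Put together, the first three lines of the crate root look like this. Note that this is 2014-era, nightly-only syntax; compiler plugins were later removed from Rust, so this no longer compiles on current toolchains:

```rust
#![feature(plugin)]
#![cfg_attr(test, plugin(stainless))]

#[cfg(test)]
mod test {
    // unit tests go here
}
```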
And then we define a submodule called test. This is where we will write our unit tests.
Alright, so let's have a look at the unit tests. First we configure the module as a test module (it doesn't need to be compiled normally). Then we add our use declarations for the things we want to use in our tests. Due to implementation details of stainless these need to be pub use declarations, and they also need to be outside of the describe! blocks.
And then we come to the actual things added by stainless: describe!, before_each, and it. If you know rspec then this will look very familiar. it is used to define individual tests, describe! is used to group tests, and before_each is executed before each test in a group of tests.
If you look closely you will notice that due to the fact that the test module is a submodule of the code that we’re testing we have access to private functions and private struct fields.
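Since the original listing is missing here, the sketch below shows the shape such a test module takes: a hand-written equivalent of what a describe! block with a before_each and an it roughly expands to. The Counter struct and all names are made up for illustration; note how the test submodule can read the private value field.

```rust
pub struct Counter {
    value: u32, // private field, still visible to the child test module
}

impl Counter {
    pub fn new() -> Counter {
        Counter { value: 0 }
    }
}

// Roughly what
//     describe! counter_tests {
//         before_each { let counter = Counter::new(); }
//         it "starts at zero" { assert_eq!(counter.value, 0); }
//     }
// expands to:
#[cfg(test)]
mod test {
    pub use super::Counter;

    #[test]
    fn starts_at_zero() {
        let counter = Counter::new(); // the before_each body, inlined
        assert_eq!(counter.value, 0); // the it body
    }
}
```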
Oh, and as we're writing a library we of course should also write integration tests. These go into the tests/ folder of the project. The code looks similar to our unit tests, but a few things are different: we can use #![plugin(stainless)] directly, as we will never compile this code outside of our tests; we need to declare the library under test (extern crate renderay_rs;) as this is a separate executable; and we only have access to the public API, which is why the tests use Canvas::new and a getter for the array.
And then we run the whole suite, unit and integration tests alike, with cargo test.
So today I'd like to announce version 0.3.0! It's been in the works since September and includes two big improvements. These two changes together led to a strength increase when playing against GnuGo of ~20% on 9x9 and ~25% on 13x13! See the release notes and the change log for detailed listings of what actually changed between 0.2.4 and 0.3.0.
The main goal for 0.4 is to finally get close to equal strength with GnuGo on 19x19. A big task, but where's the fun in picking easy tasks? ;) To achieve this goal I'm planning to work on a number of open issues.
Like I said, quite a challenging plan! But I'm sure it will be a lot of fun. I will leave you with a link to a talk by Tobias Pfeiffer about computer Go.
The thing is, having QA as a completely separate team that tests everything once the development team has "finished" the features and bug fixes for the next release is very much out of line with every agile methodology. Agile processes are (to me at least) about faster feedback and the possibility to change direction quickly. For example, if you were doing Scrum with one-week sprints, did a feature freeze every month (you know, management won't let you release each week), and only then started testing all the features and bug fixes, there would be quite a lot of overhead. The QA process may take a while as everything produced in a month needs to be tested, the developers have already moved on to new features and now have to switch back to fixing their old code (which is quite a mental overhead), and once everything has been tested, fixed, and tested again it's already 2-3 weeks later.
A better approach I found is to have the testing done right after the feature or bug fix is finished. Assuming you have automated tests and an automated deployment process (I'm assuming that you're developing a web app) you can just have your continuous integration server run the tests and, once they pass, deploy the latest version of the code to your staging server and notify the tester. That way the tester can do the checking right away and send feedback within hours or even minutes. After such a short amount of time the developer in charge probably still knows enough about the code to quickly fix the issues the tester found.
Obviously a final round of QA before getting a release out the door is still necessary, but as it can be assumed that all features and bug fixes are correctly implemented, this round can now be much shorter and needs to be less thorough. That way the release can ship much faster, and maybe you can even ship more often than once a month.
Launchd can automatically start processes on startup, and it can monitor them and restart them should they abort. Adding one yourself is rather easy: you create a file in a certain format in ~/Library/LaunchAgents. Here's one of mine:
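The plist itself is missing from this copy; a sketch of what a file like gnugo13x13.plist plausibly contained, using the keys discussed below (the paths and GnuGo arguments are assumptions on my part):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>gnugo13x13</string>
  <key>ProgramArguments</key>
  <array>
    <string>/usr/local/bin/gnugo</string>
    <string>--mode</string>
    <string>gtp</string>
    <string>--boardsize</string>
    <string>13</string>
  </array>
  <key>KeepAlive</key>
  <true/>
  <key>RunAtLoad</key>
  <true/>
  <key>StandardOutPath</key>
  <string>/Users/me/log/gnugo13x13.log</string>
  <key>StandardErrorPath</key>
  <string>/Users/me/log/gnugo13x13.err.log</string>
</dict>
</plist>
```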
Then you notify launchd of your new file by running launchctl load ~/Library/LaunchAgents/gnugo13x13.plist and you should see a new line in your system.log (accessible through Console.app). If all goes well then that's all you will see there, but if starting the job didn't work you will see that mentioned in the system.log, too.
Now let's go through the interesting parts of that file. As you may have already guessed, we essentially set up key-value pairs here. An XML element key defines the key name and the next element defines the value.
Label is the name of your job. It needs to be unique and it is used in the system.log whenever there is something happening (stop, start, crash, …) with your job.
ProgramArguments is an array of strings that make up your system call. The first one is the path to the executable you want to run, and the others are command line arguments. If you don't have any command line arguments you can just use Program. So, I probably should have used Program in my example file, but that's the actual file from my system and it works, so why change it, right? ;)
KeepAlive is optional and means that launchd will restart your job should it terminate. RunAtLoad is necessary to automatically start your job when you turn on your computer.
The last two, StandardOutPath and StandardErrorPath, should be self-explanatory. They are paths to files that will be used to log the stdout and stderr of your job. There's just one thing you need to keep in mind: the folder where these files reside needs to exist before you start the job. If it doesn't, it will be created by launchd for you, but it will be owned by root, and therefore the job won't be able to write in there and will fail.
Detailed information on everything that you can do with launchd can be found at launchd.info.
Now that things have slightly settled down I'm ready to continue with this project. And this is why I'm writing this blog post: if there's anyone out there who either wants to learn Rust or learn about artificial intelligence, then you're welcome to help out with this project. I knew nothing about Rust when I started this project, but that didn't stop Thomas P from joining and essentially teaching me Rust. I'm very grateful and I'd like to pay it forward by doing the same. So just have a look at the Github issue tracker, and ask what to work on either in the chat or in the Google Group.
And if you're interested in artificial intelligence then this could be interesting for you, too. After all, the goal is to write a program that is good at playing this game!
This then led me to the Hexagonal Rails talk by Matt Wynne.
Now, I'm off to finally read Growing Object-Oriented Software Guided by Tests and give decoupling my logic from the Ruby on Rails guts a real try!
In part 1 of this series we started implementing our Pi calculation using the Monte Carlo method. We ended with code that works, but that still doesn't return a value after exactly 10 seconds. In this part we'll finish the implementation.
The problem with the previous implementation was that the worker() function had to wait for montecarlopi() to return before it could react to the message from main(). The solution to this should now be obvious: let's put the montecarlopi() calculation in a separate task. Then worker() can listen to messages from both main() and montecarlopi() at the same time.
Here’s the code:
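The original listing is not preserved here. Below is a sketch of the design in today's Rust (std::thread and std::sync::mpsc instead of the pre-1.0 spawn(proc() { ... }) and send_opt() the post describes, and a tiny xorshift generator instead of the rand crate so it is self-contained; the function names other than montecarlopi() and worker() are mine). The structure is the same: montecarlopi() runs in its own task and sends its result to worker(), which polls both that channel and the stop signal from main().

```rust
use std::sync::mpsc::{channel, Receiver, Sender, TryRecvError};
use std::thread;
use std::time::Duration;

// One chunk of the simulation; sends its estimate back to worker(). send()
// may fail if worker() has already returned, so the error is ignored (the
// role send_opt() played in the original).
fn montecarlopi(drawings: u64, tx: Sender<f64>) {
    let mut state: u64 = 0x2545F4914F6CDD1D; // fixed seed, deterministic
    let mut random = move || {
        state ^= state << 13;
        state ^= state >> 7;
        state ^= state << 17;
        (state % 100_000) as f64 / 100_000.0
    };
    let mut hits = 0u64;
    for _ in 0..drawings {
        let x = random();
        let y = random();
        if x * x + y * y <= 1.0 {
            hits += 1;
        }
    }
    let _ = tx.send(4.0 * hits as f64 / drawings as f64);
}

// Polls both the stop signal from main() and results from montecarlopi().
fn worker(tx: Sender<f64>, stop: Receiver<()>) {
    let (result_tx, result_rx) = channel();
    let mut pi = 0.0;
    let t = result_tx.clone();
    thread::spawn(move || montecarlopi(100_000, t));
    loop {
        // A message from main() means: report the latest estimate and quit.
        if stop.try_recv().is_ok() {
            let _ = tx.send(pi);
            return;
        }
        match result_rx.try_recv() {
            // A chunk finished: update the estimate, start the next chunk.
            Ok(estimate) => {
                pi = estimate;
                let t = result_tx.clone();
                thread::spawn(move || montecarlopi(100_000, t));
            }
            Err(TryRecvError::Empty) => thread::sleep(Duration::from_millis(1)),
            Err(TryRecvError::Disconnected) => return,
        }
    }
}

// Lets the worker run for the given time, then asks for the current estimate.
fn run_for(millis: u64) -> f64 {
    let (pi_tx, pi_rx) = channel();
    let (stop_tx, stop_rx) = channel();
    thread::spawn(move || worker(pi_tx, stop_rx));
    thread::sleep(Duration::from_millis(millis));
    stop_tx.send(()).unwrap();
    pi_rx.recv().unwrap()
}
```

Calling run_for(10_000) corresponds to the ten-second run of the post; the in-flight montecarlopi() chunk simply finishes in the background and its result is discarded.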
And the output from running the program shows that it's now working as intended. First main() sends the signal, then worker() reacts immediately by sending the latest result to main(), and montecarlopi() is left to finish its calculation (but the result is discarded).
Now let's go through the code and see what we had to change to make it work. First let's look at montecarlopi().
Now that it's in its own task, it has to communicate with the worker() function and send it the result of the calculation. This is as easy as passing in a Sender when calling it. The only interesting bit here is that we use send_opt() instead of send() to send the result to the worker(). This is because send() aborts the program when it can't send the message (i.e. when the receiver is gone). We need to handle this case as worker() may now return before montecarlopi() is done.
So far so good. Now we need to have a look at worker(). It needs to change so that it is wired up correctly with the new montecarlopi().
First we need a new channel to communicate between worker() and montecarlopi(). Then we start the first calculation in a new task. And after that we enter the endless loop. In it we check for both signals from main() and messages from montecarlopi(). If there's a message from main() it means we're done and we exit the loop. If there's a message from montecarlopi() it means that the calculation is done. We then update our best guess of Pi and start another calculation.
The concept used here in worker() isn't that complex. What was most difficult for me to get right was the setup of the channel. You can see that we need to pass a copy of the sender. This is due to the fact that not only does montecarlopi() take ownership of the sender, but so does the proc(). This is designed so that Rust can safely move the proc() and all the data associated with it to a different task. And we of course have to define the channel outside of the loop so that all tasks send their data back to the same task.
And this is it for this post! In the next part we’ll have a look at how we can simplify this design. I don’t know about you, but it took me quite a while to get this design right. I can’t imagine using it like this in production code.
In the previous learning rust blog post I promised to talk about runtime polymorphism next. Instead I'm starting what is probably going to become a multi part series about concurrency. I'm doing this as I just happen to need this stuff for Iomrascálaí, my main Rust project. Iomrascálaí is an AI for the game of Go. Go is a two player game, and like Chess, it is played with a time limit during tournaments. So I need a way to tell the AI to search for the best move for the next N seconds and then return the result immediately.
Explaining how the AI works is out of the scope of this blog post. The only thing you need to know here, is that it essentially is an endless loop that does some computation and the longer it can run, the better the result will be. Unfortunately each iteration of the loop is rather long, so we need to make sure we can return a result while we’re doing the computation of that iteration. This is where concurrency comes in handy. What if we could run the iteration in a separate Rust task? Then we could just return the result of the previous iteration if needed.
But enough theory, let’s get going. As we can’t just implement a whole Go AI for this blog post we need to find a simpler problem that has the property that it returns a better value the longer it runs. The simplest I could think of is calculating the value of Pi using the Monte Carlo method. Here’s a simple implementation of it:
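The listing is missing from this copy, so here's a sketch of such a function. The original used Rust's rand facilities; to keep this self-contained, a small xorshift generator stands in (an assumption on my part, not the post's actual code):

```rust
// Estimate pi: draw random points in the unit square and count how many
// fall inside the quarter circle of radius 1. That fraction approximates
// pi/4, so four times the fraction approximates pi.
fn montecarlopi(drawings: u64) -> f64 {
    let mut state: u64 = 0x2545F4914F6CDD1D;
    let mut random = move || {
        state ^= state << 13;
        state ^= state >> 7;
        state ^= state << 17;
        (state % 100_000) as f64 / 100_000.0
    };
    let mut hits = 0u64;
    for _ in 0..drawings {
        let x = random();
        let y = random();
        if x * x + y * y <= 1.0 {
            hits += 1;
        }
    }
    4.0 * hits as f64 / drawings as f64
}
```

A loop over 10, 100, …, 1,000,000 drawings then prints increasingly good estimates.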
If you run this you'll see that the value of pi calculated by this function improves with the number of random drawings.
Next, let’s rewrite this program so that it runs for 10 seconds and prints out the value of pi. To do this we’ll run the simulation in chunks of 10 million drawings (around 2.2s on my machine) in a separate task and we’ll let the main task wait for ten seconds. Once the 10 seconds are over we’ll send a signal to the worker task and ask it to return a result.
This is of course a bit contrived as we could just run the simulations in sync and regularly check if 10 seconds have passed. But we're trying to learn about tasks here, remember?
Creating a new task in Rust is as easy as calling spawn(proc() { ... }) with some code. This however only creates a new task; there's no way to communicate with it. That's where channels come in. A channel is a pair of objects: one end can send data (the sender) and the other end (the receiver) can receive the data sent by the sender. Now let's put it into action.
What we do is as follows: we open two channels. One channel is for the worker() to send the value of pi to the main() function (send_from_worker_to_main and receive_from_worker). Another channel is to send a signal from main() to worker() to tell it to stop the calculation and return the result (send_from_main_to_worker and receive_from_main). To send something along a channel you just call send(VALUE) and to receive something you call recv().
It is important to note that recv() is blocking and waits for the next value to arrive. To either run a computation or abort, we need to use the non-blocking version (try_recv()) in worker(). try_recv() returns a Result which can either wrap a real value (in which case is_ok() returns true) or an error (in which case is_ok() returns false).
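The behaviour of try_recv() is easy to see in isolation. This small sketch uses today's std::sync::mpsc API (the descendant of the channels used in the post), where the non-blocking receive returns a Result just as described:

```rust
use std::sync::mpsc::channel;

// Returns whether try_recv() succeeded before and after a value was sent.
fn try_recv_demo() -> (bool, bool) {
    let (sender, receiver) = channel();
    let before = receiver.try_recv().is_ok(); // nothing sent yet -> Err
    sender.send(42).unwrap();
    let after = receiver.try_recv().is_ok(); // a value is waiting -> Ok
    (before, after)
}
```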
If you run this and look closely at the result you will notice that we haven't yet implemented everything as described. The worker() only returns a result to main() once it has finished the current run of montecarlopi(). But what I originally described was that it should be possible to return a result while the computation is still running.
As this blog post has already gotten very long, we'll end it here. In the next installment we'll finish implementing the program and maybe even start cleaning up the code.
This is where compile time polymorphism comes in. Many times it is clear at compile time which concrete type we're going to use in the program. We could write it down explicitly, but it is nicer (and more flexible) if the compiler can figure it out for us.
Below is a small example of how this works. Implementer1 and Implementer2 are two structs that both implement the trait TheTrait. The third struct, Container, should be set up in such a way that it can store any struct that implements TheTrait.
Setting this up correctly in Rust is a tiny bit complicated. First, you need to let Rust know that you want to use a type variable when defining Container. To do this you write Container&lt;T&gt; and then use T wherever you want to refer to this type in the struct definition. You will notice that this never mentions the trait TheTrait. The place where you actually restrict this variable to the trait is in the concrete implementation of the Container struct. Note that the variable I've used in the definition of Container (called T) is different from the one I've used in the implementation (called X). Normally you wouldn't do this as it makes the code much harder to understand, but I wanted to show that this is "just" a variable.
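The listing is gone from this copy, so here is a reconstruction from the description above. The trait's method and the strings it returns are made up for illustration; what matters is that Container&lt;T&gt; itself never mentions TheTrait, while the impl (using the variable name X) does:

```rust
trait TheTrait {
    fn name(&self) -> &'static str;
}

struct Implementer1;
struct Implementer2;

impl TheTrait for Implementer1 {
    fn name(&self) -> &'static str {
        "Implementer1"
    }
}

impl TheTrait for Implementer2 {
    fn name(&self) -> &'static str {
        "Implementer2"
    }
}

// The type variable T is introduced without any mention of TheTrait ...
struct Container<T> {
    item: T,
}

// ... and only the impl restricts it to the trait; the variable is
// deliberately called X here to show that it is "just" a variable.
impl<X: TheTrait> Container<X> {
    fn whats_inside(&self) -> &'static str {
        self.item.name()
    }
}
```

With let c1 = Container { item: Implementer1 };, calling c1.whats_inside() yields "Implementer1", and likewise for Implementer2.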
To prove that I haven't told you any lies, let's compile the program and run it. You'll clearly see that c1 contains Implementer1 and c2 contains Implementer2.
Next time we’ll talk about how to do actual runtime polymorphism in Rust. After all it’s not always possible to know the type at compile time!
Right now my Emacs configuration is rather basic, but I think it could be a good starting point for other people, too. This is why I made it available as a Github project.
If you have any problems with it, let me know. And more importantly (at least for me personally): if you notice anything that I should do differently, I'd love to hear from you.
At first I used the setup provided by Rust CI, but as the project page doesn't seem to be working 100% anymore, and because the Debian package they provide of the Rust nightly snapshot for some reason strips the Rust version number, I decided to use the official nightly snapshots instead.
It was actually quite easy to do. If you want to test your Rust project on Travis CI yourself, just drop that file into your project folder and adjust the last line to run your tests!
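The .travis.yml is missing from this copy. From the description, it boiled down to fetching the official nightly snapshot and running the tests, roughly like the sketch below (the exact snapshot URL is an assumption; the last line is the one to adjust):

```yaml
install:
  - curl -O https://static.rust-lang.org/dist/rust-nightly-x86_64-unknown-linux-gnu.tar.gz
  - tar xzf rust-nightly-x86_64-unknown-linux-gnu.tar.gz
  - sudo ./rust-nightly-x86_64-unknown-linux-gnu/install.sh
script:
  - cargo test
```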
After years working in interpreted languages (Ruby, JavaScript) I recently discovered Rust. As I sadly don't have the opportunity to use Rust directly in a client project, I decided to come up with a toy project to help me learn the language.
As Rust is supposedly good for programs that actually need raw speed (unlike the web apps I generally write) I decided to come back to one of my old time favourites: an artificial intelligence for the game of Go.
I had already tried writing a few of those in recent years (including helping Jason House with his bot written in D), but every time I had the problem that working alone wasn’t very motivating so I never managed to produce a working program.
To combat this I’m trying to pair program with people (i.e. you) to keep me going. So if you’re interested in either learning Rust with me or writing an AI for the game of Go, please get in touch!
Currently (as of April 2014) the project is still in its infancy (actually, no code has been written yet), but please check out the repository and Trello board.
If you’re new to Rust I suggest the 30-minute Introduction to Rust, The Rust Language Tutorial, and Rust by Example as a start. If you’re new to the game of Go … well there’s a whole Wiki about it! And for computer go related content the best place is the computer-go mailing list.
So, please get in touch so that we can get this started!
I'm open to almost any topic and programming language but I know Ruby and JavaScript (both in a web development context) best so that would be a good starting point for me.
As I live in Grenoble, France, we're probably going to pair program remotely. I don't have much experience in doing that, so please bear with me being a bit slow and having technical difficulties :)
At this point it's very much a work in progress and I'll fill in more details as time goes on. Feel free to suggest new titles in the comments of this blog post or on the reading list page itself.
The first thing you see when you pick up the book is the awesome cover. The somewhat poorly drawn Lisp Alien that Conrad Barski created some years ago as the mascot of Lisp sets the tone for the book: This isn't just another dry textbook that explains everything that Lisp (Common Lisp in this case) does. It contains drawings and you learn not by writing yet another calculator but by writing games.
Of course he’s starting with the basic elements of Lisp so the first games are quite simple, but as the book progresses (he’s even covering macros and lazy programming) they get more complex and in the end we’re even presented with a board game with a GUI that you play in a browser against three computer players!
Ideally you should follow along by typing in the code and experimenting a bit to really understand all the concepts. But up to a certain point in the book it’s also OK to just read along. That’s what I did because I read the book in the evening on the couch or in the bed. Of course I didn’t understand everything with this approach, but it worked well enough for me as most code examples are explained in detail.
All in all it was a fun read and it was great to see how you program in a language where code and data are more or less equal. I'm not sure how much of that will translate into my Ruby or JavaScript programming, but what the heck, it was a fun read!
I had to change it; basically the change fakes the asset pipeline for the benefit of sass-rails.
Additionally, I had to disable compilation of the assets alongside disabling the asset pipeline as a whole.
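The two-line snippet isn't preserved here; judging from the description it plausibly amounted to the standard switches in config/application.rb (an assumption on my part):

```ruby
config.assets.enabled = false
config.assets.compile = false
```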
These two small changes fixed sass-rails without the asset pipeline for now. Hopefully pull request #84 will be merged into sass-rails soon and a new version will be released so that this hack won't be necessary. Until then, this is the most basic fix I could come up with.
It is a valuable book for several reasons. First of all it is as close as humanly possible to a definitive style guide for programming in Ruby. This doesn't just cover formatting your code and if or when to use camel case, but more importantly when and how to use certain parts of the language. Secondly, it contains a lot of best practices, especially on the use of modules, classes, and meta-programming. And thirdly, it explains some of the more advanced parts of Ruby extremely well. For example, I never knew the difference between lambda and Proc.new, or what hooks Ruby provides to help you with meta-programming.
So please do yourself a favor and read the book. More than once!
So we all know that we can define class methods like this:
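The listing is missing from this copy; a sketch of the familiar form, using the DennisMoore class the post refers to below (the method name and return value are made up):

```ruby
class DennisMoore
  # A class method, defined with the "def self." shorthand.
  def self.shout
    "Your lupins or your life!"
  end
end

DennisMoore.shout # => "Your lupins or your life!"
```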
And we also know that this is equivalent to the following:
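Again the listing is gone; the canonical equivalent opens the singleton class of the class object explicitly (a sketch with the same made-up method):

```ruby
class DennisMoore
  class << self
    # Defining a regular method inside the singleton class
    # is equivalent to "def self.shout".
    def shout
      "Your lupins or your life!"
    end
  end
end
```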
Now let’s compare this to defining a method on an object instead of a class (aka defining a singleton method):
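And a sketch of the singleton-method version on a plain object (names made up again):

```ruby
dennis = Object.new

# A singleton method: defined on this one object only.
def dennis.shout
  "Stand and deliver!"
end

dennis.shout # => "Stand and deliver!"
```

Other instances of Object are unaffected; only dennis responds to shout.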
Quite similar, that code, isn't it? This could just be a coincidence, but as Ruby is a nicely designed language, of course it isn't. So how does that work?
Quite easily actually. As you know, Ruby is an object oriented language down to its core. So it comes naturally that classes are just objects, too. So when you are defining class methods you are actually defining singleton methods on the instance of Class that defines your class (DennisMoore in this case), which is no different from defining singleton methods on "normal" objects!
Now that we know that we are just defining singleton methods, the only question remaining is where these methods are stored. Remember, when we are calling a method on an object, Ruby searches for the method in the class of the object, then the superclass, and so on until it finds it. But of course we can't put our singleton methods into the class as we only want these methods to be defined for that single object. So we could come up with some hack that stores the methods somehow in the instance. Or we could do the elegant thing and add a special class to each object that sits between it and the "real" class of that object. This way we can use the normal rules of inheritance for singleton methods. This is of course what Ruby does. It even calls it the singleton class.
So to summarize: Each Ruby object has a singleton class in the method lookup path between itself and its class. That’s where singleton methods are defined. And class methods are just a special case of this as classes themselves are just objects (and instances of class Class).
]]>Adhering to these rules gives you nicely structured methods that should be easily understandable and as a side benefit they are also easily testable as each method only does one small thing and therefore there’s no need for extensive mocking and setting up context.