Brian's Functional Brain
If you've always wanted to implement a purely functional, 2D-rendered cellular automaton which runs in parallel on all your cores, this post is for you! I'll walk you through rendering a 2D model of Brian's Brain in just 67 lines of Clojure.
My last post sparked overwhelming interest from the coding communities, so I figured it was only fair to show another side of what you can visualize using Clojure. Since the early 1950s, mathematicians have been going nuts over cellular automata, partly because they're systems that seem to take on a life of their own. Brian's Brain is a set of rules for a cellular automaton, applied to a 2D grid of cells. Iterating these rules causes new patterns to emerge, 'simulating life'. Here's an example of what it looks like:
Without motion it looks like Pac-Man during a segfault, but once we start iterating, new shapes and patterns form seemingly at random. The rules for Brian's Brain are as follows:
- Cells are either On, Off or Dying
- Dying cells die
- On cells start dying
- Off cells reignite if they have exactly 2 neighbors that are On
By neighbors I mean the 8 surrounding cells in a 3×3 grid.
Today's exercise is to implement this algorithm in an idiomatic, purely functional way. That won't give us blazing performance, but it will hopefully give us code which is easy to reason about, easy to parallelize and, of course, elegant.
Firstly we need to describe the world. If we were gunning for efficiency we would use a single flat vector, but since we want this first take to be readable, let's go with this:
```clojure
(def dim-board [90 90])

(def board
  (for [x (range (dim-board 0))]
    (for [y (range (dim-board 1))]
      [(if (> 50 (rand-int 100)) :on :off) x y])))
```
First we set the dimensions of the board to 90×90 cells. Then for every value of x I walk the values of y, giving me 90 columns at the top level, each column containing 90 rows.
The content of each cell is 3 things:
- State (:on, :off or :dying)
- X coordinate
- Y coordinate
The reason I chose to store the coordinates in the vector is to make it easier to render later on, but we'll get to that.
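To make the layout concrete, here's what a single cell looks like and how we can pull it apart again (a throwaway REPL sketch):

```clojure
;; a cell is a vector: state first, then its coordinates
(def cell [:on 4 7])

;; destructuring pulls the three parts back out
(let [[state x y] cell]
  (println state x y)) ;; prints: :on 4 7
```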
Next up, we need a way of examining our world. After an inspirational talk with cgrand, I decided to go with a windowed approach. Every time we apply the rules to a cell we need to consider 9 cells in total: the target and its 8 neighbors. To do that we split our data into torus windows – torus as in, you can't fall off the edge. Here's an example of what we want; I've drawn 2 separate windows, green and blue:
So in plain English, we could achieve this the following way: we get handed a collection, and onto this we concatenate the last item in front and the first item behind, so that
(1 2 3) → (3 1 2 3 1)
Then we partition this into chunks of 3, incrementing our offset by 1 on every step.
(3 1 2 3 1) → (3 1 2) (1 2 3) (2 3 1)
That seems like an insightful way of looking at our data, so now let’s code it:
```clojure
(defn torus-window [coll]
  (partition 3 1 (concat [(last coll)] coll [(first coll)])))
```
Note that the above works on a single row of cells, but also on the sequence of rows itself, giving us the windows in the picture above.
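A quick REPL check on a small sequence shows the wrap-around windows:

```clojure
(defn torus-window [coll]
  (partition 3 1 (concat [(last coll)] coll [(first coll)])))

(torus-window [1 2 3])
;; → ((3 1 2) (1 2 3) (2 3 1))
```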
The 3 simple rules you saw listed above need to be applied to each of these windows. Since all of our data now comes in chunks of 3, we will need the chunk above the one we're in, the chunk below, and our own chunk – that gives us the 9 relevant cells. So we imagine getting those passed to us and then apply the rules:
```clojure
(defn rules [above current below]
  (let [[self x y] (second current)]
    (cond
      (= :on self)    [:dying x y]
      (= :dying self) [:off x y]
      (= 2 (active-neighbors above current below)) [:on x y]
      :else           [:off x y])))
```
If you read my last post, you're comfortable with the destructuring done in the let statement: I'm simply pulling my own chunk apart and giving the pieces names. From here on out, it's a simple conditional asking "Am I on?" => Dying, "Am I dying?" => Off, and finally "Do I have exactly 2 neighbors that are On?". There's just one catch: how do we get the neighbors?
Well again, it always helps me to verbalize what I want before coding: I want to concatenate the 3 chunks I'm dealing with, keep only the cells that are On, and count the items of the resulting sequence. Let's code:
```clojure
(defn active-neighbors [above [left _ right] below]
  (count
   (filter #(= :on (% 0))
           (concat above [left right] below))))
```
So this again reads like plain English, with perhaps 2 exceptions. In the argument list you see another destructuring, taking a chunk and giving its parts names. The filter's predicate
```clojure
#(= :on (% 0)) ; expands to roughly (fn [x] (= :on (x 0)))
```
which is read: an anonymous function taking 1 argument, asking if the first item of that argument, (% 0), is equal to :on. If that looks weird to you, hold off on the syntactic sugar – to me it reads like plain English.
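To see the predicate in action, here's a throwaway check against a hand-built row of cells (vectors are functions of their indices, which is why (% 0) works):

```clojure
;; keep only the cells whose state (index 0) is :on
(filter #(= :on (% 0))
        [[:on 0 0] [:off 1 0] [:dying 2 0] [:on 3 0]])
;; → ([:on 0 0] [:on 3 0])
```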
Working the board
Ok, so we have now defined our world and developed tools to query it. The imperative approach would be to address each chunk, apply the rules and mutate the state of the cell. But we're doing this functionally, so we need to walk over the entire board, apply the rules and return an entirely new board. Our return value is a totally separate entity from the board we started walking.
So, in English: we want to turn our board into torus windows, giving us chunks of 3 columns each. We then apply 'torus-window' to all the elements of those chunks, giving us the 3×3 cells, and then apply the rules to each of those.
```clojure
(defn step [board]
  (map (fn [window]
         (apply #(apply map rules %&)
                (map torus-window window)))
       (torus-window board)))
```
The %& notation is nothing special. It works the same way [a b & c] would in an arglist, allowing an arbitrary number of arguments to be passed. The code above performs one iteration – 1 step on the board it's given as an argument. Similar to the Ikeda map from my last post, we could now do (iterate step board) to generate an infinite sequence of iterations, but this time we'll do it another way.
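If %& is new to you, here's a tiny REPL sketch showing it collect all the arguments into one sequence:

```clojure
;; %& binds the full argument list, like [& args] would
(#(apply + %&) 1 2 3)
;; → 6
```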
Ok, now there are 2 things we must not forget before leaving 'step' alone. Firstly, I promised you that this would be a parallelized implementation, heating up your duo/quad core machine! Secondly, map is lazy, and for a finite board which needs to be fully realized on every iteration, we might as well force that computation straight off using 'doall'. Example:
```clojure
;; Fully lazy
(map inc (range 10))
```
Or we can force evaluation
```clojure
;; Eager (use 'dorun' if you don't need the head)
(doall (map inc (range 10)))
```
Ok, so we wrap all our maps in 'doall' and we're set. But now for the more complicated matter of parallelizing our code. What we have now is a sequential traversal of our board running in a single thread, and somehow we need to distribute that over as many threads as make sense. So let's make a todo list:
- Distribute the labor across multiple threads
- Ensure all computations are performed with integrity
- Finish somewhat simultaneously
- Avoid bugs
Ok, so let’s code. Example:
```clojure
;; Sequential computation
(map #(* % %) (range 10000))
```
And now for the multicore version
```clojure
;; Parallelized computation
(pmap #(* % %) (range 10000))
```
Yay! I feel confident that I've not introduced any bugs :) I almost wish it were harder, but the truth is that in Clojure, pmap is a parallelized version of map, and it handles all the scheduling for me.
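One caveat worth knowing: pmap pays a coordination cost per element, so it only pays off when the work per element is substantial. A quick, unscientific way to convince yourself at the REPL (timings will vary on your machine):

```clojure
;; cheap per-element work: plain map is usually faster,
;; because pmap's coordination overhead dominates
(time (doall (map  #(* % %) (range 100000))))
(time (doall (pmap #(* % %) (range 100000))))

;; expensive per-element work: pmap can win by using all cores
(defn slow-square [n] (Thread/sleep 1) (* n n))
(time (doall (map  slow-square (range 100))))
(time (doall (pmap slow-square (range 100))))
```

In our case each element is a whole column of the board, so the per-element work is heavy enough for pmap to make sense.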
Now all we need to do is write a loop which will perform the iterations on our board. Since we're functional, we pass an entire board to this loop, and it will then run iteration after iteration:
```clojure
(defn activity-loop [surface stage]
  (while true
    (swap! stage step)
    (.repaint surface)))
```
I get an atom 'stage' which contains our current iteration, and swap! atomically updates 'stage' to the result of applying 'step' to it – the next iteration. I then ask the surface for a repaint, and the cycle continues.
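If atoms are unfamiliar, this is the whole idea in three lines:

```clojure
(def counter (atom 0))
(swap! counter inc) ;; atomically apply inc to the current value
@counter            ;; dereference → 1
```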
Lastly we need to render our simulation, and this turns out to be quite easy since we have all the data we need right on the board. I'll introduce a little helper, because we want to force all our maps again:
```clojure
(defn fmap [f coll] (doall (map f coll)))
```
Next up it’s just a matter of mapping some render-function on all our cells:
```clojure
(defn render [g img bg stage]
  (.setColor bg Color/BLACK)
  (.fillRect bg 0 0 (dim-screen 0) (dim-screen 1))
  (fmap (fn [col]
          (fmap #(when (not= :off (% 0))
                   (render-cell bg %))
                col))
        stage)
  (.drawImage g img 0 0 nil))
```
Firstly we set the brush color to black and erase everything on screen. If we didn't do this, we would have to explicitly paint all the Off cells, which wouldn't be any faster. Secondly, notice I'm not allocating anything for painting this time, unlike the Ikeda-map renderer. This is because I don't want to waste any horsepower, so I allocate my BufferedImage, Graphics context etc. once and for all when I start the program, and then pass these in as args. Stage represents the current iteration.
I pass stage to fmap, so fmap is now working on all the columns. I pass each column to another fmap, which then works on the cells and if the current cell isn’t :off it gets sent to our painter:
```clojure
(defn render-cell [#^Graphics g cell]
  (let [[state x y] cell
        x (inc (* x (dim-scale 0)))
        y (inc (* y (dim-scale 1)))]
    (doto g
      (.setColor (if (= state :dying) Color/GRAY Color/WHITE))
      (.fillRect x y (dec (dim-scale 0)) (dec (dim-scale 1))))))
```
The painter first calculates where on the screen we need to paint, which is just a matter of multiplying by a scaling factor. Secondly, we set the color of our brush to either GRAY or WHITE. Since there are only 2 possibilities, I only test if we're :dying; if not, I assume we're :on. Finally, I fill a little rectangle with that color.
So all we need to do now is construct a JFrame, override the paint method, allocate my graphics objects (img, bg), show the frame and…
```clojure
(let [stage (atom board)
      frame (JFrame.)
      img   (BufferedImage. (dim-screen 0) (dim-screen 1)
                            BufferedImage/TYPE_INT_ARGB)
      bg    (.getGraphics img)
      panel (proxy [JPanel] []
              (paint [g] (render g img bg @stage)))]
  (doto frame
    (.add panel)
    .pack
    (.setSize (dim-screen 0) (dim-screen 1))
    .show
    (.setDefaultCloseOperation JFrame/EXIT_ON_CLOSE))
  (future (activity-loop panel stage)))
```
The last line is interesting because it shows off a nice feature of Clojure: I want to take a call to my activity loop and have it run later on another thread – 'future' gives me that threading quite painlessly!
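A minimal sketch of future at the REPL:

```clojure
;; future runs its body on another thread immediately
(def f (future (Thread/sleep 100) :done))
;; dereferencing blocks until the result is ready
@f ;; → :done
```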
Now that was a walkthrough of 67 lines of Clojure. To get it running we call java -cp clojure.jar as usual, but I've dug up a few itsy bitsy tweaks to give us a few more % of performance, so execute like so:
$ java -Xmx512m -Xms512m -XX:MaxNewSize=24m -XX:NewSize=24m -XX:+UseParNewGC -XX:+CMSParallelRemarkEnabled -XX:+UseConcMarkSweepGC -cp ~/clojure.jar clojure.main ca.clj
Please don't ask me to explain all those options to you because… I won't 8) But it will give you something similar to this, though never quite the same:
For the beginner, thinking in sequence abstractions can be mind-bending – for me it's still a challenge. When you adopt this approach, you gain readability of code and thus understanding. You can implement algorithms that look true to form, and although the functional approach in some cases (this being one such) comes with a performance penalty, the benefits outweigh the drawbacks:
- Easy to reason about
- Incredibly concise code, fewer lines = fewer possibilities for bugs
- No state = Fewer bugs = Geared for concurrency.
Oh, and about concurrency, did you see the effect of pmap? Check it out:
Both CPUs are hammering away at ~85% activity. Compare that to an imperative language like Python, where one CPU would be doing all the work and the other sleeping.
Hope you enjoyed the read,
```clojure
(import '(javax.swing JFrame JPanel)
        '(java.awt Color Graphics)
        '(java.awt.image BufferedImage))

(def dim-board  [90 90])
(def dim-screen [600 600])
(def dim-scale  (vec (map / dim-screen dim-board)))

(defn fmap [f coll] (doall (map f coll)))

(defn render-cell [#^Graphics g cell]
  (let [[state x y] cell
        x (inc (* x (dim-scale 0)))
        y (inc (* y (dim-scale 1)))]
    (doto g
      (.setColor (if (= state :dying) Color/GRAY Color/WHITE))
      (.fillRect x y (dec (dim-scale 0)) (dec (dim-scale 1))))))

(defn render [g img bg stage]
  (.setColor bg Color/BLACK)
  (.fillRect bg 0 0 (dim-screen 0) (dim-screen 1))
  (fmap (fn [col]
          (fmap #(when (not= :off (% 0))
                   (render-cell bg %))
                col))
        stage)
  (.drawImage g img 0 0 nil))

(def board
  (for [x (range (dim-board 0))]
    (for [y (range (dim-board 1))]
      [(if (> 50 (rand-int 100)) :on :off) x y])))

(defn active-neighbors [above [left _ right] below]
  (count
   (filter #(= :on (% 0))
           (concat above [left right] below))))

(defn torus-window [coll]
  (partition 3 1 (concat [(last coll)] coll [(first coll)])))

(defn rules [above current below]
  (let [[self x y] (second current)]
    (cond
      (= :on self)    [:dying x y]
      (= :dying self) [:off x y]
      (= 2 (active-neighbors above current below)) [:on x y]
      :else           [:off x y])))

(defn step [board]
  (doall
   (pmap (fn [window]
           (apply #(doall (apply map rules %&))
                  (doall (map torus-window window))))
         (torus-window board))))

(defn activity-loop [surface stage]
  (while true
    (swap! stage step)
    (.repaint surface)))

(let [stage (atom board)
      frame (JFrame.)
      img   (BufferedImage. (dim-screen 0) (dim-screen 1)
                            BufferedImage/TYPE_INT_ARGB)
      bg    (.getGraphics img)
      panel (proxy [JPanel] []
              (paint [g] (render g img bg @stage)))]
  (doto frame
    (.add panel)
    .pack
    (.setSize (dim-screen 0) (dim-screen 1))
    .show
    (.setDefaultCloseOperation JFrame/EXIT_ON_CLOSE))
  (future (activity-loop panel stage)))
```