<p>There was an amazing atmosphere that evening. Many ad hoc discussions before, during and after the talk, and that is exactly the way I like it. That is why I prefer speaking at smaller events: speaker and audience seem to be more connected.</p>
<p><iframe src="https://www.slideshare.net/slideshow/embed_code/key/t5lYcqUKjGz8AQ" width="595" height="485" frameborder="0" marginwidth="0" marginheight="0" scrolling="no" style="border:1px solid #CCC; border-width:1px; margin-bottom:5px; max-width: 100%;" allowfullscreen> </iframe> <div style="margin-bottom:5px"> <strong> <a href="https://www.slideshare.net/pjvdsande/microservices-55573987" title="Microservices" target="_blank">Microservices</a> </strong> from <strong><a href="https://www.slideshare.net/pjvdsande" target="_blank">Pieter Joost van de Sande</a></strong> </div></p>
<h2>Move commits to a new feature branch</h2>
<p>Make sure you have checked out the branch that contains the commits you would like to move and execute the following:</p>
<ol> <li><code>git branch feature</code> will create a new branch called feature.</li> <li><code>git reset --hard origin/master</code> will reset the current local master branch to the same commit as the remote master branch.</li> <li><code>git checkout feature</code> will simply switch to the feature branch, which still contains the commits.</li> <li><code>git push origin feature</code> will push it to the remote repository.</li> </ol>
<h2>Here is what happened</h2>
<p>The following ASCII drawing represents the situation I was in when I discovered I had been working on master instead of a feature branch.</p> <div class="highlight"><pre><code class="text">                 master
                    ↓
commits A--B--C--D--E
        ↑
  origin/master
</code></pre></div> <p>Commit <code>A</code> is where <code>origin/master</code>, the remote master branch, points. Commits <code>B</code>, <code>C</code>, <code>D</code> and <code>E</code> are the commits that should be moved to a new feature branch.</p>
<p>I start by creating the new feature branch and call it <code>feature</code>. This sets the state of the <code>feature</code> branch to the same state as the branch currently checked out, in my case master.</p> <div class="highlight"><pre><code class="text">git branch feature </code></pre></div> <p>Now I have the following situation where <code>master</code> and <code>feature</code> point to the same commit <code>E</code>.</p> <div class="highlight"><pre><code class="text">                 feature
                 master
                    ↓
commits A--B--C--D--E
        ↑
  origin/master
</code></pre></div> <p>I do not want commits <code>B</code> to <code>E</code> on the <code>master</code> branch, so I reset to commit <code>A</code> with the <code>git reset</code> command. The easiest way is to reset to <code>origin/master</code>:</p> <div class="highlight"><pre><code class="text">git reset --hard origin/master </code></pre></div> <p>Alternatively I could reset it <em>n</em> positions back. I use that approach when it is just a single commit (<code>HEAD^</code>), or not more than a handful (<code>HEAD~5</code>).</p> <div class="highlight"><pre><code class="text">git reset --hard HEAD~4 </code></pre></div> <p>I rarely reset to a commit sha like the following. 
But if you know the sha of commit <code>A</code> you can reset straight to it.</p> <div class="highlight"><pre><code class="text">git reset --hard fd83c2 </code></pre></div> <p>The above resets the index and working directory of the local <code>master</code> branch to point to commit <code>A</code>.</p> <div class="highlight"><pre><code class="text">     master      feature
        ↓           ↓
commits A--B--C--D--E
        ↑
  origin/master
</code></pre></div> <p>Now I can checkout the <code>feature</code> branch to continue working on it.</p> <div class="highlight"><pre><code class="text">git checkout feature </code></pre></div> <p>Every commit we do now adds to the <code>feature</code> branch.</p> <div class="highlight"><pre><code class="text">echo "foobar" >> file.txt
git add file.txt
git commit -m 'Adds file.txt'
</code></pre></div> <p>And our git repository will look like the following.</p> <div class="highlight"><pre><code class="text">     master         feature
        ↓              ↓
commits A--B--C--D--E--F
        ↑
  origin/master
</code></pre></div> <p>The feature branch can be shared by pushing it to the remote.</p> <div class="highlight"><pre><code class="text">git push origin feature </code></pre></div> <p>This closes the circle and the repository looks like the following.</p> <div class="highlight"><pre><code class="text">     master         feature
        ↓              ↓
commits A--B--C--D--E--F
        ↑              ↑
  origin/master  origin/feature
</code></pre></div> <p>Happy git'ng!</p>
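<p>For completeness, the whole flow can be replayed in a throwaway repository, with a local bare repository standing in for the remote. The paths and commit messages below are just for illustration:</p>

```shell
#!/bin/bash
# Replay the branch-moving flow in a scratch repository.
set -e
dir=$(mktemp -d)
cd "$dir"

git init -q --bare remote.git              # a local stand-in for "origin"
git clone -q remote.git work 2>/dev/null
cd work
git symbolic-ref HEAD refs/heads/master    # pin the branch name to master
git config user.name example
git config user.email example@example.com

git commit -q --allow-empty -m 'A'         # commit A, shared with origin
git push -q origin master

for c in B C D E; do                       # the four accidental commits
  git commit -q --allow-empty -m "$c"
done

git branch feature                         # step 1: new branch pointing at E
git reset -q --hard origin/master          # step 2: master back to A
git checkout -q feature                    # step 3: switch to feature
git push -q origin feature                 # step 4: share it

git log --format=%s master                 # prints: A
git log --format=%s -1 feature             # prints: E
```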
<p>What if I could bring this technique to the development of my docker containers? I expect it will be at least an improvement over the ssh-into-a-container-and-start-trial-and-erroring-while-putting-the-successful-commands-in-a-dockerfile way I currently work.</p>
<h2>The test environment</h2>
<p>Docker doesn't come with a test environment, nor are there specific test tools for docker. But this doesn't mean we cannot test our containers. We just need a good test runner and something that can interact with the docker environment. I decided to use ruby with <a href="http://rspec.info">rspec</a> as a test runner and the <a href="https://github.com/swipely/docker-api/">docker-api</a> gem to interact with the docker environment.</p>
<p>Here is a list of docker client libraries for other platforms:</p>
<ul> <li>Erlang: <a href="https://github.com/proger/erldocker">erldocker</a></li> <li>Go: <a href="https://github.com/fsouza/go-dockerclient">go-dockerclient</a></li> <li>Java: <a href="https://github.com/kpelykh/docker-java">docker-java</a></li> <li>Nodejs: <a href="https://github.com/appersonlabs/docker.io">docker.io</a></li> <li>PHP: <a href="http://pear.alvine.io/">Alvine</a></li> <li>PHP: <a href="https://github.com/mikemilano/docker-php">docker-php</a></li> <li>Python: <a href="https://github.com/dotcloud/docker-py">docker-py</a></li> <li>Ruby: <a href="https://github.com/swipely/docker-api/">docker-api</a></li> </ul>
<h2>Gemfile</h2>
<p>To setup the environment I create a new folder and put the following Gemfile into it:</p> <div class="highlight"><pre><code class="ruby"><span class="n">source</span> <span class="s1">'https://rubygems.org'</span>
<span class="n">gem</span> <span class="s1">'rspec'</span>
<span class="n">gem</span> <span class="s1">'docker-api'</span> </code></pre></div> <p>A simple <code>bundle install</code> will retrieve all the dependencies.</p>
<h2>Test Driven Development</h2>
<p>Here is the process that I have in mind:</p>
<ol> <li>Start with a failing test</li> <li>Verify the test fails</li> <li>Implement the fix</li> <li>Run tests again to see verify it works and doesn't break anything else</li> <li>Repeat</li> </ol>
<h2>What I want to develop</h2>
<p>My goal is to develop a docker image that can run as the database service for my application. It needs to run postgres 9.3 and must have a user for my application in place, with an empty database present.</p>
<h2>Writing the first test</h2>
<p>It is time to write the first test. It should just guide me to the next step in my development process, and not any further. It also must have a single and clear goal. A good one to start with is to verify that there is an image present in the docker environment. I don't care about the details of the image yet, just that it has the correct name <code>pjvds/postgres</code>. So I start by creating a file called <code>specs.rb</code>, require docker and write down the first spec:</p> <div class="highlight"><pre><code class="ruby"><span class="nb">require</span> <span class="s1">'docker'</span>
<span class="n">describe</span> <span class="s2">"Postgres image"</span> <span class="k">do</span> <span class="n">before</span><span class="p">(</span><span class="ss">:all</span><span class="p">)</span> <span class="p">{</span> <span class="vi">@image</span> <span class="o">=</span> <span class="ss">Docker</span><span class="p">:</span><span class="ss">:Image</span><span class="o">.</span><span class="n">all</span><span class="p">()</span><span class="o">.</span><span class="n">detect</span><span class="p">{</span><span class="o">|</span><span class="n">i</span><span class="o">|</span> <span class="n">i</span><span class="o">.</span><span class="n">info</span><span class="o">[</span><span class="s1">'Repository'</span><span class="o">]</span> <span class="o">==</span> <span class="s1">'pjvds/postgres'</span><span class="p">}</span> <span class="p">}</span>
<span class="n">it</span> <span class="s2">&quot;should be available&quot;</span> <span class="k">do</span>
<span class="n">expect</span><span class="p">(</span><span class="vi">@image</span><span class="p">)</span><span class="o">.</span><span class="n">to_not</span> <span class="n">be_nil</span>
<span class="k">end</span>
<span class="k">end</span> </code></pre></div> <p>Running this spec will fail as expected:</p> <div class="highlight"><pre><code class="bash">$ rspec specs.rb
F

Failures:

  1) Postgres image should be available
     Failure/Error: expect(@image).to_not be_nil
       expected: not nil
            got: nil
     # ./specs.rb:9:in `block (2 levels) in &lt;top (required)&gt;'

Finished in 0.00282 seconds
1 example, 1 failure

Failed examples:

rspec ./specs.rb:8 # Postgres image should be available
</code></pre></div> <h2>Implementing the first test</h2>
<p>To satisfy the test I create a docker image with the name <code>pjvds/postgres</code>. So I create a very simple Dockerfile that just inherits from ubuntu.</p> <div class="highlight"><pre><code class="text">FROM ubuntu
MAINTAINER Pieter Joost van de Sande &lt;pj@wercker.com&gt;
</code></pre></div> <p>I use the <code>docker build</code> command to build an image based on the Dockerfile and give it the repository name that corresponds with the test:</p> <div class="highlight"><pre><code class="console">$ docker build -t=pjvds/postgres .
Uploading context 61.44 kB
Step 1 : FROM ubuntu
 ---&gt; 8dbd9e392a96
Successfully built 8dbd9e392a96
</code></pre></div> <h2>Green</h2>
<p>When I now run the specs again, it succeeds:</p> <div class="highlight"><pre><code class="console">$ rspec specs.rb
.
Finished in 0.00278 seconds
1 example, 0 failures
</code></pre></div> <h2>Automate</h2>
<p>But I don't want to type in the commands each time I want to build and run the tests. I create a file called <code>build</code> with execution permissions and add the steps I just took:</p> <div class="highlight"><pre><code class="bash">#!/bin/bash
echo "Building docker image:"
docker build -t=pjvds/postgres .

echo
echo "Executing tests:"
rspec specs.rb
</code></pre></div> <h2>Driving the next step</h2>
<p>I must write another failing test to drive the next step in my development process. Since I want to run postgres, the image should expose the postgres default tcp port 5432. This is docker's way to make a port inside a container available to the outside. This information is stored in the image container configuration and can easily be accessed with the docker-api gem. So, I write the following test:</p> <div class="highlight"><pre><code class="ruby"><span class="n">it</span> <span class="s2">"should expose the default tcp port"</span> <span class="k">do</span>
  <span class="n">expect</span><span class="p">(</span><span class="vi">@image</span><span class="o">.</span><span class="n">json</span><span class="o">[</span><span class="s2">"container_config"</span><span class="o">][</span><span class="s2">"ExposedPorts"</span><span class="o">]</span><span class="p">)</span><span class="o">.</span><span class="n">to</span> <span class="kp">include</span><span class="p">(</span><span class="s2">"5432/tcp"</span><span class="p">)</span>
<span class="k">end</span> </code></pre></div> <h2>See it fail again</h2>
<p>I run the tests again to see one example fail:</p> <div class="highlight"><pre><code class="console">$ ./build
Finished in 0.007 seconds
2 examples, 1 failure
<span class="go">Failed examples:</span>
<span class="go">rspec ./specs.rb:12 # Postgres image should expose the default tcp port</span> </code></pre></div> <h2>Implementing the test</h2>
<p>A small addition to the Dockerfile should satisfy the failing test:</p> <div class="highlight"><pre><code class="text">FROM ubuntu
MAINTAINER Pieter Joost van de Sande &lt;pj@wercker.com&gt;
EXPOSE 5432
</code></pre></div> <h2>Green</h2>
<p>If I now run the specs again, it succeeds:</p> <div class="highlight"><pre><code class="console">$ ./build
.
Finished in 0.00278 seconds
2 examples, 0 failures
</code></pre></div> <h2>Starting the container</h2>
<p>In the previous tests I asserted the docker environment and the image configuration. For the next test I want to check whether the container accepts postgres connections, and I need a running container instance for that. In short I want to do the following:</p>
<ol> <li>Start a container</li> <li>Execute tests</li> <li>Stop it</li> </ol>
<p>I introduce a new describe level where I start the container based on the image:</p>
<div class="highlight"><pre><code class="ruby"><span class="n">describe</span> <span class="s2">"running it as a container"</span> <span class="k">do</span>
<span class="n">before</span><span class="p">(</span><span class="ss">:all</span><span class="p">)</span> <span class="k">do</span>
<span class="nb">id</span> <span class="o">=</span> <span class="sb">`docker run -d -p 5432:5432 </span><span class="si">#{</span><span class="vi">@image</span><span class="o">.</span><span class="n">id</span><span class="si">}</span><span class="sb">`</span><span class="o">.</span><span class="n">chomp</span>
<span class="vi">@container</span> <span class="o">=</span> <span class="ss">Docker</span><span class="p">:</span><span class="ss">:Container</span><span class="o">.</span><span class="n">get</span><span class="p">(</span><span class="nb">id</span><span class="p">)</span>
<span class="k">end</span>
<span class="n">after</span><span class="p">(</span><span class="ss">:all</span><span class="p">)</span> <span class="k">do</span>
<span class="vi">@container</span><span class="o">.</span><span class="n">kill</span>
<span class="k">end</span>
<span class="k">end</span> </code></pre></div> <h2>Test postgres accepts connections</h2>
<p>Now that I have a context where the container is running, I write a small test to make sure it does not refuse connections to postgres. (The <code>PG</code> constant comes from the <code>pg</code> gem, so that needs to be added to the Gemfile and required in the specs as well.)</p> <div class="highlight"><pre><code class="ruby"><span class="n">it</span> <span class="s2">"should accept connection to the default port"</span> <span class="k">do</span>
  <span class="n">expect</span><span class="p">{</span> <span class="no">PG</span><span class="o">.</span><span class="n">connect</span><span class="p">(</span><span class="s1">'host=127.0.0.1'</span><span class="p">)</span> <span class="p">}</span><span class="o">.</span><span class="n">to_not</span> <span class="n">raise_error</span><span class="p">(</span><span class="no">PG</span><span class="o">::</span><span class="no">ConnectionBad</span><span class="p">,</span> <span class="sr">/Connection refused/</span><span class="p">)</span>
<span class="k">end</span> </code></pre></div> <p>I run the build script again to see it fail.</p>
<h2>Adding Postgres</h2>
<p>Now it is time to add postgres to the container. Installing it is easy with the postgresql apt repository. Here is the updated <code>Dockerfile</code>:</p> <div class="highlight"><pre><code class="text">FROM ubuntu
MAINTAINER Pieter Joost van de Sande &lt;pj@born2code.net&gt;

EXPOSE 5432

ENV DATADIR /var/lib/postgresql/9.3/main
ENV BINDIR /usr/lib/postgresql/9.3/bin
ENV CONFDIR /etc/postgresql/9.3/main

RUN apt-get update
RUN apt-get install wget -y

RUN echo "deb http://apt.postgresql.org/pub/repos/apt/ precise-pgdg main" | tee -a /etc/apt/sources.list.d/postgresql.list
RUN wget --quiet --no-check-certificate -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add -
RUN apt-get update

RUN apt-get install postgresql-9.3 postgresql-contrib-9.3 -y

RUN echo "host all all 0.0.0.0/0 md5" | tee -a $CONFDIR/pg_hba.conf
RUN echo "listen_addresses='*'" | tee -a $CONFDIR/postgresql.conf

CMD ["/bin/su", "postgres", "-c", "$BINDIR/postgres -D $DATADIR -c config_file=$CONFDIR/postgresql.conf"]
</code></pre></div> <h2>Green!</h2>
<p>If we now run the build again, we see it succeed:</p> <div class="highlight"><pre><code class="console">$ ./build
.
Finished in 0.04833 seconds
3 examples, 0 failures
</code></pre></div> <h2>Test for the super user</h2>
<p>It doesn't make sense to have a database running without the power to make changes to it. Let's write a spec to drive the development of adding a super user to the postgres environment.</p> <div class="highlight"><pre><code class="ruby"><span class="n">it</span> <span class="s2">"should accept connections from superuser"</span> <span class="k">do</span>
  <span class="n">expect</span><span class="p">{</span> <span class="no">PG</span><span class="o">.</span><span class="n">connect</span><span class="p">(</span><span class="ss">:host</span> <span class="o">=></span> <span class="s1">'127.0.0.1'</span><span class="p">,</span> <span class="ss">:user</span> <span class="o">=></span> <span class="s1">'root'</span><span class="p">,</span> <span class="ss">:password</span> <span class="o">=></span> <span class="s1">'h^oAYozk&rC&'</span><span class="p">,</span> <span class="ss">:dbname</span> <span class="o">=></span> <span class="s1">'postgres'</span><span class="p">)}</span><span class="o">.</span><span class="n">to_not</span> <span class="n">raise_error</span><span class="p">()</span>
<span class="k">end</span> </code></pre></div> <p>Running the tests will now have one failure as expected:</p> <div class="highlight"><pre><code class="console">$ ./build
.
Finished in 0.05978 seconds
4 examples, 1 failure
</code></pre></div> <h2>Adding a super user</h2>
<p>Here is the tricky part: I can't add a user with the <code>createuser</code> tool that ships with postgres. It requires postgres to be running, and since we are running in an environment that doesn't have upstart available, postgres isn't running after the installation. I could spend a lot of time getting it up and running in the background, or I could start it in the foreground and pipe a command to it via stdin. I opt for the latter and create a small script that does exactly that:</p> <div class="highlight"><pre><code class="bash">#!/bin/bash
if [[ ! $BINDIR ]]; then echo "BINDIR not set, exiting"; exit -1; fi
if [[ ! $DATADIR ]]; then echo "DATADIR not set, exiting"; exit -1; fi
if [[ ! $CONFDIR ]]; then echo "CONFDIR not set, exiting"; exit -1; fi
if [[ ! $1 ]]; then echo "Missing query parameter, exiting"; exit -2; fi

su postgres sh -c "$BINDIR/postgres --single -D $DATADIR -c config_file=$CONFDIR/postgresql.conf" &lt;&lt;&lt; "$1"
</code></pre></div> <p>I save this file as <code>psql</code> and add the following lines to the Dockerfile to add a super user to the database:</p> <div class="highlight"><pre><code class="text"># Bootstrap postgres user and db
ADD psql /
RUN chmod +x /psql
RUN /psql "CREATE USER root WITH SUPERUSER PASSWORD 'h^oAYozk&amp;rC&amp;';"
</code></pre></div> <p>The <code>ADD</code> command copies the psql file into the container. The next instructions give the file execution permissions and use it to create a superuser named root.</p>
<h2>Green!</h2>
<p>If we now run the spec again, we see it succeed:</p> <div class="highlight"><pre><code class="console">$ ./build
.
<span class="go">Finished in 0.05831 seconds</span> <span class="go">4 examples, 0 failures</span> </code></pre></div> <h2>What is next</h2>
<p>I can repeat this loop until I've added all the required features. Some tests that could follow:</p>
<ul> <li>does it have a user for our application?</li> <li>does this user also have a database?</li> <li>is this database empty?</li> <li>is the postgis extension available?</li> </ul>
<p>Taking this to the next level, I could add this to a CI service like <a href="http://wercker.com">wercker</a> and execute the tests on every push. This also makes it possible to do automated deployments to a docker index. But that's a scenario to cover in another post.</p>
<h2>Conclusion</h2>
<p>Using tests to drive the development of a docker container is pretty easy. There are plenty of client APIs that enable almost any major programming environment to become a docker test environment. The biggest difference I see compared with testing software applications is that the docker tests come in the form of integration tests. That could become a problem if they start taking too long, but my current container tests execute very fast. Rebuilding the container is also quite fast because of the way Docker's layer caching works. You can leverage this even further by creating a base image for the stable prerequisites.</p>
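<p>That last point deserves a sketch: the slow, rarely-changing installation steps can live in a base image that is built once, so the test-driven rebuilds only replay the cheap layers on top of it. A rough illustration; the <code>pjvds/postgres-base</code> name is hypothetical:</p>

```text
# Dockerfile.base -- built once, holds the stable prerequisites
FROM ubuntu
RUN apt-get update
RUN apt-get install postgresql-9.3 postgresql-contrib-9.3 -y

# Dockerfile -- rebuilt on every test run, starts from the cached base
FROM pjvds/postgres-base
EXPOSE 5432
ADD psql /
RUN chmod +x /psql
```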
<p>In short, it's a great improvement over the ssh-into-a-container-and-start-trial-and-erroring-while-putting-the-successful-commands-in-a-dockerfile way of working and we will definitely explore this route even further.</p>
<h2>Prerequisites</h2>
<p>First of all, to use dokku with wercker (as described here) you need:</p>
<ul> <li>a <a href="http://github.com">github</a> or <a href="http://bitbucket.org">bitbucket</a> account,</li> <li>a <a href="https://app.wercker.com/sessions/new">wercker account</a>,</li> <li>a <a href="https://www.digitalocean.com/login">digital ocean account</a>,</li> <li>the <a href="http://devcenter.wercker.com/articles/gettingstarted/cli.html">wercker cli installed</a>.</li> </ul>
<h2>Add app to wercker</h2>
<p><a href="https://github.com/pjvds/getting-started-nodejs">Fork</a> the <a href="https://github.com/wercker/getting-started-nodejs">getting-started-nodejs sample application</a> and clone it on a local machine.</p>
<div class="highlight"><pre><code class="bash">$ git clone git@github.com:pjvds/getting-started-nodejs.git
Cloning into 'getting-started-nodejs'...
remote: Counting objects: 24, done.
remote: Compressing objects: 100% (19/19), done.
remote: Total 24 (delta 5), reused 17 (delta 1)
Receiving objects: 100% (24/24), done.
Resolving deltas: 100% (5/5), done.
Checking connectivity... done
</code></pre></div>
<p>With the <a href="http://devcenter.wercker.com/articles/gettingstarted/cli.html">wercker cli installed</a> add the project to wercker using the <code>wercker create</code> command (you can use the default options with any questions it will ask you).</p>
<div class="highlight"><pre><code class="bash">$ cd getting-started-nodejs
$ wercker create
</code></pre></div>
<p>The wercker command should finish with something that looks like:</p>
<div class="highlight"><pre><code class="bash">Triggering build
A new build has been created

You are all set up for using wercker. You can trigger new builds by committing and pushing your latest changes.
Happy coding! </code></pre></div>
<h2>Generate an SSH key</h2>
<p>Run <code>wercker open</code> to open the newly added project on wercker. You should see a successful build that was triggered during the project creation via the wercker cli. Go to the settings tab and scroll down to 'Key management'. Click the <strong>generate new key pair</strong> button and enter a meaningful name; I named it "DOKKU".</p>
<p><img src="http://born2code.net/assets/posts/deploy-to-dokku/add-key.png" alt="add ssh key"></p>
<h2>Create a Dokku Droplet</h2>
<p>Now that we have an application in place and have generated an SSH key to be used in the deployment pipeline, it is time to get a dokku environment. Although you can run dokku virtually anywhere Linux runs, we'll use Digital Ocean to get the environment up and running within a minute.</p>
<p>After logging in to Digital Ocean, create a new droplet. Enter the details to your liking. The important part is to pick <strong>Dokku on Ubuntu 13.04</strong> in the applications tab.</p>
<p><img src="http://born2code.net/assets/posts/deploy-to-dokku/dokku_image.png" alt="dokku droplet"></p>
<h2>Get the ip</h2>
<p>After the droplet is created, you'll see a small dashboard with the details of that droplet. Next, <strong>replace</strong> the public <em>SSH key</em> in the dokku setup with the one from wercker: you can find it in the settings tab of your project, in the key management section. Then copy the ip address from the dokku setup (shown in the top left corner); we'll use it later. You can now click 'Finish setup'.</p>
<p><img src="http://born2code.net/assets/posts/deploy-to-dokku/config-dokku.png" alt="configure dokku"></p>
<h2>Create a deploy target</h2>
<p>Go to the settings tab of the project on wercker, click on <strong>add deploy target</strong> and choose <strong>custom deploy target</strong>. Let's name it production and add two environment variables by clicking the <strong>add new variable</strong> button. The first one is the server host name: name it SERVER_HOSTNAME and set the value to the ip address of your newly created digital ocean droplet. Add another with the name DOKKU, choose SSH Key pair as the type, select the previously created ssh key from the dropdown and hit <strong>ok</strong>.</p>
<p>Don't forget to save the deploy target by clicking the <strong>save</strong> button!</p>
<h2>Add the wercker.yml</h2>
<p>We're ready for the last step which is setting up our deployment pipeline using the <a href="http://devcenter.wercker.com/articles/werckeryml/">wercker.yml file</a>. All we need to do now is tell wercker which steps to perform during a deploy. Create a file called <code>wercker.yml</code> in the root of your repository with the following content:</p>
<div class="highlight"><pre><code class="yaml">box: wercker/nodejs
build:
  steps:
    - npm-install
    - npm-test
deploy:
  steps:
    - add-to-knownhosts:
        hostname: $SERVER_HOSTNAME
    - add-ssh-key:
        keyname: DOKKU
    - script:
        name: Initialize new repository
        code: |
          rm -rf .git
          git init
          git config --global user.name "wercker"
          git config --global user.email "pleasemailus@wercker.com"
          git remote add dokku dokku@$SERVER_HOSTNAME:getting-started-nodejs
    - script:
        name: Add everything to the repository
        code: |
          git add .
          git commit -m "Result of deploy $WERCKER_GIT_COMMIT"
    - script:
        name: Push to dokku
        code: |
          git push dokku master
</code></pre></div>
<p>Add the file to the git repository and push it.</p>
<div class="highlight"><pre><code class="bash">$ git add wercker.yml
$ git commit -m 'wercker.yml added'
$ git push origin master
</code></pre></div>
<h2>Deploy</h2>
<p>Go to your project on wercker and open the latest build, wait until it is finished (and green). You can now click the <strong>Deploy to</strong> button and select the deploy target we created earlier. A new deploy will be queued and you'll be redirected to it. Wait until the deploy is finished and enjoy your first successful deploy to a Digital Ocean droplet running dokku!</p>
<div class="highlight"><pre><code class="go">data := []int{1, 2, 3}
data = append(data, 4)
<span class="nx">fmt</span><span class="p">.</span><span class="nx">Println</span><span class="p">(</span><span class="nx">data</span><span class="p">)</span> </code></pre></div>
<p>This will print the following:</p>
<div class="highlight"><pre><code class="bash"><span class="o">[</span>1 2 3 4<span class="o">]</span> </code></pre></div>
<p>While browsing through some Go code, I found something like this:</p>
<div class="highlight"><pre><code class="go"><span class="nx">data</span> <span class="o">:=</span> <span class="p">[]</span><span class="kt">int</span><span class="p">{</span> <span class="mi">1</span><span class="p">,</span><span class="mi">2</span><span class="p">,</span><span class="mi">3</span> <span class="p">}</span>
<span class="nx">next</span> <span class="o">:=</span> <span class="p">[]</span><span class="kt">int</span><span class="p">{</span> <span class="mi">4</span><span class="p">,</span><span class="mi">5</span><span class="p">,</span><span class="mi">6</span> <span class="p">}</span>
<span class="k">for</span> <span class="nx">_</span> <span class="p">,</span> <span class="nx">v</span> <span class="o">:=</span> <span class="k">range</span> <span class="nx">next</span> <span class="p">{</span>
    <span class="nx">data</span> <span class="p">=</span> <span class="nb">append</span><span class="p">(</span><span class="nx">data</span><span class="p">,</span> <span class="nx">v</span><span class="p">)</span>
<span class="p">}</span></code></pre></div>
<p>This is not a good way to append one slice to another, and here is why. The <code>append</code> function appends elements to the end of a slice. If the slice has sufficient capacity, the destination is resliced to accommodate the new elements. If it does not, a new underlying array is allocated and the existing elements are copied over.</p>
<p>The important part here is that a new underlying array is allocated every time there is insufficient capacity. Because the code above appends a single value at a time, the runtime never knows more than that it has to make room for one extra value. The whole idea behind slices is to prevent unnecessary allocations.</p>
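<p>We can actually watch these reallocations happen by checking <code>cap</code> after every single-value append. The sketch below is illustrative only: how much the runtime over-allocates on each growth is an implementation detail and may differ between Go versions.</p>
<div class="highlight"><pre><code class="go">package main

import "fmt"

func main() {
	var data []int
	prevCap := cap(data)

	// Append one value at a time and report every time the
	// backing array had to be reallocated (capacity changed).
	for i := 0; i < 1000; i++ {
		data = append(data, i)
		if c := cap(data); c != prevCap {
			fmt.Printf("reallocated: len=%d cap=%d\n", len(data), c)
			prevCap = c
		}
	}
	fmt.Println("final length:", len(data))
}
</code></pre></div>
<p>Every printed line is a fresh allocation plus a copy of all existing elements; appending element by element forces the runtime to go through this cycle repeatedly.</p>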
<p>An easy way to fix this is to pass all the data that needs to be appended at once. This way the runtime has all the information needed to grow only once, if needed.</p>
<div class="highlight"><pre><code class="go"><span class="nx">data</span> <span class="o">:=</span> <span class="p">[]</span><span class="kt">int</span><span class="p">{</span> <span class="mi">1</span><span class="p">,</span><span class="mi">2</span><span class="p">,</span><span class="mi">3</span> <span class="p">}</span>
<span class="nx">next</span> <span class="o">:=</span> <span class="p">[]</span><span class="kt">int</span><span class="p">{</span> <span class="mi">4</span><span class="p">,</span><span class="mi">5</span><span class="p">,</span><span class="mi">6</span> <span class="p">}</span>
<span class="nx">data</span> <span class="p">=</span> <span class="nb">append</span><span class="p">(</span><span class="nx">data</span><span class="p">,</span> <span class="nx">next</span><span class="o">...</span><span class="p">)</span></code></pre></div>
<p>Notice the <code>...</code> suffix added to <code>next</code>. This is Go's way of passing the elements of a slice as individual arguments to a variadic function. If we remove the <code>...</code> we get a compile error saying that <code>next</code> is an invalid type to append to <code>data</code>. This makes sense because we can't append a <code>[]int</code> value to a <code>[]int</code>; only <code>int</code> values are allowed.</p>
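<p>Here is a small self-contained program showing both variants; the single-argument form is left as a comment because it does not compile.</p>
<div class="highlight"><pre><code class="go">package main

import "fmt"

func main() {
	data := []int{1, 2, 3}
	next := []int{4, 5, 6}

	// data = append(data, next)
	// compile error: cannot use next (type []int) as type int in append

	// The ... spread passes the elements of next as individual int arguments,
	// so the runtime can grow data once, for all three values.
	data = append(data, next...)

	fmt.Println(data) // prints: [1 2 3 4 5 6]
}
</code></pre></div>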
<p>I guess the programmer used a <code>for</code> loop to work around this compile error instead of passing multiple arguments at once.</p>
]]><p>Redshift adjusts the color temperature of your screen based on the position of the sun. When there is no sun, because it is too early or too late, it tries to match the color temperature of the lamps in the room.</p>
<p>The thing that surprised me is that it not only alleviates the strain on my eyes, but that the slow transition of color temperature makes me more aware of the time of day. When my screen is fairly bright - normal color temperature - I know it is time for lunch. When everything starts to look warm, I know it is almost time to get ready for my trip home. And the deep red tone during the evening makes me aware that I am hacking at night, rather than just hacking. It is hard to explain, but when I talked to one of my colleagues today, that was his experience as well.</p>
<p>It makes you more aware of time, without the explicit numbers a clock offers. And a clock jumps from time to time: we have all been there, where you look at the clock and the next time you look a few hours have passed. I guess Redshift is a sort of clock that you can't miss. Maybe that is the reason it <a href="http://edition.cnn.com/2010/TECH/05/13/sleep.gadgets.ipad/index.html">can also help you sleep better</a>.</p>
<p>A pleasant surprise from a tool that I thought would <strong>just</strong> save my eyes.</p>
<p>Although <a href="http://jonls.dk/redshift/">Redshift</a> works on Linux and Windows, I think people who want to try this on Windows or Mac OS are better off with <a href="http://justgetflux.com/">f.lux</a>.</p>
]]><h2>The problem</h2>
<p>Here is a screenshot of a setup that I use pretty frequently when I am watching a video.</p>
<p><img src="http://born2code.net/assets/posts/awesomewm-and-full-screen-video/awesome-video-setup.png" alt="awesome video setup"></p>
<p>A video on the right, a Twitter client - especially handy when watching a live stream - and vim for taking notes.</p>
<p>But this setup breaks in awesomewm as soon as I switch the video to fullscreen.</p>
<p><img src="http://born2code.net/assets/posts/awesomewm-and-full-screen-video/awesome-broken-fullscreen.png" alt="awesome broken fullscreen"></p>
<h2>The solution</h2>
<p>The fix is pretty easy. We need to tell awesomewm to handle <code>plugin-container</code> instances differently: it must not try to arrange them like other windows (the whole idea behind a tiling window manager), but just let them float on top of everything at the size they want (fullscreen). To do so, add the following rule to your <code>rc.lua</code>.</p>
<div class="highlight"><pre><code class="lua"><span class="p">{</span> <span class="n">rule</span> <span class="o">=</span> <span class="p">{</span> <span class="n">instance</span> <span class="o">=</span> <span class="s2">"</span><span class="s">plugin-container"</span> <span class="p">},</span> <span class="n">properties</span> <span class="o">=</span> <span class="p">{</span> <span class="n">floating</span> <span class="o">=</span> <span class="kc">true</span><span class="p">,</span> <span class="n">focus</span> <span class="o">=</span> <span class="kc">true</span> <span class="p">}</span> <span class="p">},</span> </code></pre></div>
<p>After the change you can restart awesomewm by pressing <code>modkey+control+r</code>; if that doesn't work, just log out and log in again.</p>
<p>From now on, fullscreen video will work just like you would expect it to: fullscreen.</p>
<p><img src="http://born2code.net/assets/posts/awesomewm-and-full-screen-video/awesome-fullscreen.png" alt="awesome fullscreen video"></p>
<p>New instances will be floating fullscreen and get focus as they spawn. Just press <code>ESC</code> to exit them.</p>
<p><em>ps: The video is: <a href="http://vimeo.com/63690418">Everything I Know About Fast Databases I Learned at the Dog Track</a>.</em></p>
]]><div class="highlight"><pre><code class="bash">docker rm <span class="k">$(</span>docker ps -a -q<span class="k">)</span> </code></pre></div>
]]><p>Since building is very fast with Go, the easiest thing to do is to remove the compiled packages.</p> <div class="highlight"><pre><code class="text">rm -rf $GOPATH/{bin,pkg} </code></pre></div> <p>Happy coding!</p>
]]>