Move commits from master to feature branch

I am a big fan of the feature branching model. Working in an isolated branch created especially for the feature you are working on has its advantages. But there is one thing I keep forgetting: creating the actual feature branch. This means I'm committing directly to the master branch. Most of the time I notice this just before pushing. When this is the case, I quickly create a new feature branch and move my commits to it. In this post I'd like to share how I do this, how I move my commits from master to a new feature branch.

Move commits to a new feature branch

Make sure you have checked out the branch that contains the commits you'd like to move and execute the following:

  1. git branch feature will create a new branch called feature.
  2. git reset --hard origin/master will reset the current local master branch to the same commit as the remote master branch.
  3. git checkout feature will simply switch to the feature branch which still contains the 4 commits.
  4. git push origin feature will push it to the remote repository.
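
Or, as one copy-paste sequence (assuming, as in my situation, that the commits to move exist only locally and have not been pushed yet):

git branch feature
git reset --hard origin/master
git checkout feature
git push origin feature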

Here is what happened

The following ASCII drawing represents the situation I'm in when I discover I have been working on master instead of a feature branch.

                    master
                      ↓
commits   A--B--C--D--E
          ↑
    origin/master

Commit A is where origin/master, the remote master branch, points to. Commits B, C, D and E are the commits that should be moved to a new feature branch.
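
A quick way to double-check which commits would move is to list everything that is on the local master but not yet on origin/master:

git log --oneline origin/master..master

In this example that prints commits E, D, C and B.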

I start by creating the new feature branch and call it feature. This should set the state of the feature branch to the same state as the one currently checked out, in my case master.

git branch feature

Now I have the following situation where master and feature point to the same commit E.

                     feature
                      master
                        ↓
commits     A--B--C--D--E
            ↑
      origin/master

I do not want commits B to E to be on the master branch, so I reset to commit A with the git reset command. The easiest way is to reset to origin/master:

git reset --hard origin/master

Alternatively I could reset it n positions back. I use that approach when it is just a single commit (HEAD^), or not more than a handful (HEAD~5).

git reset --hard HEAD~4

I rarely reset to a commit sha like the following, but if you know the sha of commit A you can use it to reset directly to that point.

git reset --hard fd83c2

The above resets the index and working directory and makes the local master branch point to commit A.

          master     feature
            ↓           ↓
commits     A--B--C--D--E
            ↑
      origin/master

Now I can checkout the feature branch to continue working in it.

git checkout feature

Every commit we make now is added to the feature branch.

echo "foobar" >> file.txt
git add file.txt
git commit -m 'Adds file.txt'

And our git repository will look like the following.

          master        feature
            ↓              ↓
commits     A--B--C--D--E--F
            ↑
      origin/master

The feature branch can be shared by pushing it to the remote.

git push origin feature

This closes the circle and the repository looks like the following.

          master        feature
            ↓              ↓
commits     A--B--C--D--E--F
            ↑              ↑
      origin/master  origin/feature

Happy git'ng!

Experimenting with Test Driven Development for Docker

One of the programming practices that had the biggest impact on the way I write code is Test Driven Development. It significantly shortens the development feedback loop and helps to break development down into small steps, each with a clear goal. The test suite acts as a safety net that enables me to refactor with confidence. It is also a fun way to document a project in an executable form.

What if I could bring this technique to the development of my docker containers? I expect it will be at least an improvement over the ssh-into-a-container-and-start-trial-and-erroring-while-putting-the-successful-commands-in-a-dockerfile way I currently work.

The test environment

Docker doesn't come with a test environment, nor are there specific test tools for docker. But this doesn't mean we cannot test our containers. We just need a good test runner and something that can interact with the docker environment. I decided to use ruby with rspec as a test runner and the docker-api gem to interact with the docker environment.

There are also docker client libraries for other platforms, so this approach is not limited to ruby.

Gemfile

To set up the environment I create a new folder and put the following Gemfile into it:

source 'https://rubygems.org'

gem 'rspec'
gem 'docker-api'

A simple bundle install will retrieve all the dependencies:
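
$ bundle install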

Test Driven Development

Here is the process that I have in mind:

  1. Start with a failing test
  2. Verify the test fails
  3. Implement the fix
  4. Run the tests again to verify the fix works and doesn't break anything else
  5. Repeat

What I want to develop

My goal is to develop a docker image that can run as the database service for my application. It needs to run postgres 9.3 and must have a user in place for my application, with an empty database ready for it.

Writing the first test

It is time to write the first test. It should just guide me to the next step in my development process, and not any further. It also must have a single and clear goal. A good one to start with is to verify that there is an image present in the docker environment. I don't care about the details of the image yet, just that it has the correct name pjvds/postgres. So I start by creating a file called specs.rb, require docker and write down the first spec:

require 'docker'

describe "Postgres image" do
    before(:all) {
        @image = Docker::Image.all().detect{|i| i.info['Repository'] == 'pjvds/postgres'}
    }

    it "should be availble" do
        expect(@image).to_not be_nil
    end
end

Running this spec will fail as expected:

$ rspec specs.rb
F

Failures:

  1) Postgres image should be available
     Failure/Error: expect(@image).to_not be_nil
       expected: not nil
            got: nil
     # ./specs.rb:9:in `block (2 levels) in <top (required)>'

Finished in 0.00282 seconds
1 example, 1 failure

Failed examples:

rspec ./specs.rb:8 # Postgres image should be available

Implementing the first test

To satisfy the test I create a docker image with the name pjvds/postgres. So I create a very simple Dockerfile that just inherits from ubuntu.

FROM ubuntu
MAINTAINER Pieter Joost van de Sande <pj@wercker.com>

I use the docker build command to build an image based on the Dockerfile and give it the repository name that corresponds with the test:

$ docker build -t=pjvds/postgres .
Uploading context 61.44 kB
Step 1 : FROM ubuntu
 ---> 8dbd9e392a96
Successfully built 8dbd9e392a96

Green

When I now run the specs again, it succeeds:

$ rspec specs.rb
.

Finished in 0.00278 seconds
1 example, 0 failures

Automate

But I don't want to type in these commands every time I want to build and run the tests, so I create a file called build with execute permissions and add the steps I just took:

#!/bin/bash
echo "Building docker image:"
docker build -t=pjvds/postgres .
echo
echo "Executing tests:"
rspec specs.rb
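
Giving the file execute permissions and running it then looks like this:

$ chmod +x build
$ ./build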

Driving the next step

I must write another failing test to drive the next step in my development process. Since I want to run postgres, the image should expose the postgres default tcp port 5432. This is docker's way to make a port inside a container available to the outside. This information is stored in the image container configuration and can easily be accessed with the docker-api gem. So, I write the following test:

it "should expose the default tcp port" do
    expect(@image.json["container_config"]["ExposedPorts"]).to include("5432/tcp")
end
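
If you want to eyeball the same data outside of the tests, docker inspect prints the image configuration that the gem is reading here:

$ docker inspect pjvds/postgres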

See it fail again

I run the tests again to see one example fail:

$ ./build
Finished in 0.007 seconds
2 examples, 1 failure

Failed examples:

rspec ./specs.rb:12 # Postgres image should expose the default tcp port

Implementing the test

A small addition to the Dockerfile should satisfy the failing test:

FROM ubuntu
MAINTAINER Pieter Joost van de Sande <pj@wercker.com>
EXPOSE 5432

Green

If I now run the specs again, it succeeds:

$ ./build
..

Finished in 0.00278 seconds
2 examples, 0 failures

Starting the container

In the previous tests I asserted the docker environment and the image configuration. For the next test I want to check whether the container accepts postgres connections, and I need a running container instance for that. In short I want to do the following:

  1. Start a container
  2. Execute tests
  3. Stop it

I introduce a new describe level where I start the container based on the image:

describe "running it as a container" do
    before(:all) do
        id = `docker run -d -p 5432:5432 #{@image.id}`.chomp
        @container = Docker::Container.get(id)
    end

    after(:all) do
        @container.kill
    end
end

Test postgres accepts connections

Now that I have a context where the container is running, I write a small test to make sure it does not refuse connections to postgres. The test uses the pg gem, so I also add gem 'pg' to the Gemfile, run bundle install again and require 'pg' at the top of specs.rb.

it "should accept connection to the default port" do
    expect{ PG.connect('host=127.0.0.1') }.to_not raise_error(PG::ConnectionBad, /Connection refused/)
end

I run the build script again to see it fail.

Adding Postgres

Now it is time to add postgres to the container. Installing it is easy with the postgresql apt repository. Here is the updated Dockerfile with detailed comments:

FROM ubuntu
MAINTAINER Pieter Joost van de Sande <pj@born2code.net>

# Allow incoming connections on the default postgres port
EXPOSE 5432

# Store postgres directories as environment variables
ENV DATA_DIR /var/lib/postgresql/9.3/main
ENV BIN_DIR /usr/lib/postgresql/9.3/bin
ENV CONF_DIR /etc/postgresql/9.3/main

# Install required packages for setup
RUN apt-get update
RUN apt-get install wget -y

# Adds postgresql apt repository.
RUN echo "deb http://apt.postgresql.org/pub/repos/apt/ precise-pgdg main" | tee -a /etc/apt/sources.list.d/postgresql.list
RUN wget --quiet --no-check-certificate -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add -
RUN apt-get update

# Install the postgresql packages
RUN apt-get install postgresql-9.3 postgresql-contrib-9.3 -y

# Configure postgres to accept connections from everywhere
# and let it listen to all addresses of the container.
RUN echo "host all all 0.0.0.0/0 md5" | tee -a $CONF_DIR/pg_hba.conf
RUN echo "listen_addresses='*'" | tee -a $CONF_DIR/postgresql.conf

# Set defaults for container execution. This will run the postgres process with
# the postgres user account and specify the data directory and configuration file location.
CMD ["/bin/su", "postgres", "-c", "$BIN_DIR/postgres -D $DATA_DIR -c config_file=$CONF_DIR/postgresql.conf"]

Green!

If we now run the build again, we see it succeed:

$ ./build
...

Finished in 0.04833 seconds
3 examples, 0 failures

Test for the super user

It doesn't make sense to have a database running without the power to make changes to it. Let's write a spec to drive the development of adding a super user to the postgres environment.

it "should accept connections from superuser" do
    expect{ PG.connect(:host => '127.0.0.1', :user => 'root', :password => 'h^oAYozk&rC&', :dbname => 'postgres')}.to_not raise_error()
end

Running the tests will now have one failure as expected:

$ ./build
...F

Finished in 0.05978 seconds
4 examples, 1 failure

Adding a super user

Here is a tricky part: I can't add a user with the createuser tool that ships with postgres. It requires postgres to be running, and since we are running postgres in an environment that doesn't have upstart available, it isn't running after the installation. I could spend a lot of time getting it up and running in the background, or I could start it in the foreground and pipe a command to it via stdin. I opt for the latter and create a small script that does exactly that:

#!/bin/bash
if [[ ! $BIN_DIR ]]; then echo "BIN_DIR not set, exiting"; exit -1; fi
if [[ ! $DATA_DIR ]]; then echo "DATA_DIR not set, exiting"; exit -1; fi
if [[ ! $CONF_DIR ]]; then echo "CONF_DIR not set, exiting"; exit -1; fi
if [[ ! $1 ]]; then echo "Missing query parameter, exiting"; exit -2; fi

su postgres sh -c "$BIN_DIR/postgres --single -D $DATA_DIR -c config_file=$CONF_DIR/postgresql.conf" <<< "$1"

I save this file as psql and add the following lines to the Dockerfile to create a super user in the database:

# Bootstrap postgres user and db
ADD psql /
RUN chmod +x /psql
RUN /psql "CREATE USER root WITH SUPERUSER PASSWORD 'h^oAYozk&rC&';"

The ADD command copies the psql file into the container. The next two lines give the file execute permissions and use it to create a superuser named root.

Green!

If we now run the spec again, we see it succeed:

$ ./build
....

Finished in 0.05831 seconds
4 examples, 0 failures

What is next

I can repeat this loop until I've added all the required features. Some tests that would follow could be (a sketch of the first one is shown after the list):

  • does it have a user for our application?
  • does this user also have a database?
  • is this database empty?
  • is the postgis extension available?
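
That first spec could look something like the one below; the myapp user name, password and database name are just placeholders for whatever your application uses:

it "should accept connections from the application user to its database" do
    expect{ PG.connect(:host => '127.0.0.1', :user => 'myapp', :password => 'secret', :dbname => 'myapp') }.to_not raise_error()
end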

Taking this to the next level, I could add this to a CI service like wercker and execute the tests on every push. This also makes it possible to do automated deployments to a docker index. But that's a scenario to cover in another post.

Conclusion

Using tests to drive the development of a docker container is pretty easy. There are a lot of client APIs that enable almost any major programming environment to become a docker test environment. The biggest difference that I see compared with testing software applications is that the docker tests come in the form of integration tests. This could become a problem once some aspects take more time, but my current container tests execute very fast. Rebuilding the container is also quite fast because of the way docker's caching works. You can even leverage this further by creating a base image for the stable prerequisites.
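
For example, the slow and stable installation steps could move into a base image so that only the cheap, frequently changing steps are rebuilt on every test run. The name pjvds/postgres-base is just a hypothetical example here:

# Dockerfile of the image under test; the apt repository setup and
# package installation steps from above now live in the base image.
FROM pjvds/postgres-base
EXPOSE 5432
ADD psql /
RUN chmod +x /psql
RUN /psql "CREATE USER root WITH SUPERUSER PASSWORD 'h^oAYozk&rC&';"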

In short, it's a great addition to the ssh-into-a-container-and-start-trial-and-erroring-while-putting-the-successful-commands-in-a-dockerfile way of working and we will definitely explore this route even further.

Deploying to Dokku

Jeff Lindsay created Dokku, the smallest PaaS implementation you've ever seen. It is powered by Docker and written in less than 100 lines of Bash code. I have wanted to play with it ever since it was released. This weekend I finally did and successfully deployed my application to Dokku running on a Digital Ocean droplet. In this post I share how you can do this as well. Of course I used wercker to automate everything.

Prerequisites

First of all, to use dokku with wercker (as described here) you need a wercker account with the wercker cli installed, a GitHub account to fork the sample application, and a Digital Ocean account to create the dokku droplet.

Add app to wercker

Fork the getting-started-nodejs sample application and clone it to your local machine.

$ git clone git@github.com:pjvds/getting-started-nodejs.git
Cloning into 'getting-started-nodejs'...
remote: Counting objects: 24, done.
remote: Compressing objects: 100% (19/19), done.
remote: Total 24 (delta 5), reused 17 (delta 1)
Receiving objects: 100% (24/24), done.
Resolving deltas: 100% (5/5), done.
Checking connectivity... done

With the wercker cli installed, add the project to wercker using the wercker create command (you can use the default options for any questions it asks).

$ cd getting-started-nodejs
$ wercker create

The wercker command should finish with something that looks like:

Triggering build
A new build has been created

Done.
-------------

You are all set up to for using wercker. You can trigger new builds by
committing and pushing your latest changes.

Happy coding!

Generate an SSH key

Run wercker open to open the newly added project on wercker. You should see a successful build that was triggered during the project creation via the wercker cli. Go to the settings tab and scroll down to 'Key management'. Click the generate new key pair button and enter a meaningful name; I named it "DOKKU".

add ssh key

Create a Dokku Droplet

Now that we have an application in place and have generated an SSH key that will be used in the deployment pipeline, it is time to get a dokku environment. Although you can run dokku virtually anywhere that runs Linux, we'll use Digital Ocean to get the environment up and running within a minute.

After logging in to Digital Ocean, create a new droplet. Enter the details to your liking. The important part is to pick Dokku on Ubuntu 13.04 in the applications tab.

dokku droplet

Get the ip

After the droplet is created, you'll see a small dashboard with the details of that droplet. Next, replace the public SSH key in the dokku setup with the one from wercker. You can find it in the settings tab of your project. Copy the public key from the key management section and replace the existing key. Next, copy the ip address from the dokku setup (you can find it in the top left corner); we'll use it later. You can now click 'Finish setup'.

configure dokku

Create a deploy target

Go to the settings tab of the project on wercker, click on add deploy target and choose custom deploy target. Let's name it production and add two environment variables by clicking the add new variable button. The first one is the server host name: name it SERVER_HOSTNAME and set the value to the ip address of your newly created Digital Ocean droplet. Add another with the name DOKKU and choose SSH Key pair as the type. Now select the previously created ssh key from the dropdown and hit OK.

Don't forget to save the deploy target by clicking the save button!

Add the wercker.yml

We're ready for the last step, which is setting up our deployment pipeline using the wercker.yml file. All we need to do now is tell wercker which steps to perform during a deploy. Create a file called wercker.yml in the root of your repository with the following content:

box: wercker/nodejs
build:
  steps:
    - npm-install
    - npm-test
deploy:
  steps:
    - add-to-known_hosts:
        hostname: $SERVER_HOSTNAME
    - add-ssh-key:
        keyname: DOKKU
    - script:
        name: Initialize new repository
        code: |
          rm -rf .git
          git init
          git config --global user.name "wercker"
          git config --global user.email "pleasemailus@wercker.com"
          git remote add dokku dokku@$SERVER_HOSTNAME:getting-started-nodejs
    - script:
        name: Add everything to the repository
        code: |
          git add .
          git commit -m "Result of deploy $WERCKER_GIT_COMMIT"
    - script:
        name: Push to dokku
        code: |
          git push dokku master

Add the file to the git repository and push it.

$ git add wercker.yml
$ git commit -m 'wercker.yml added'
$ git push origin master

Deploy

Go to your project on wercker and open the latest build, wait until it is finished (and green). You can now click the Deploy to button and select the deploy target we created earlier. A new deploy will be queued and you'll be redirected to it. Wait until the deploy is finished and enjoy your first successful deploy to a Digital Ocean droplet running dokku!
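
The log of the Push to dokku step should end with the URL dokku assigned to the application. As a quick smoke test you can request it from your own terminal; the address below is just a placeholder for whatever that output shows:

$ curl http://<url-from-the-dokku-push-output>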