Docker for your dev environment

21. February 2014. Tagged work, development, virtualization, docker.

As I recently got my new work machine (a MacBook Air :)) I decided it was about time for a fresh start. So instead of just restoring my new MacBook from my Time Machine backup, I installed everything from scratch. I greatly reduced the amount of software (lots of stuff I don’t usually need) and then got to the development setup.

I remembered what a mess I had with all the development tools: different php, ruby, node and python versions, the mysql setup and the constant reinstalls, the confusion when pow blocked my port 80, and what not.

As I recently attended FOSDEM and heard an interesting talk about docker, I decided to try it out. I also decided that I didn’t want to run docker natively, but instead installed vagrant so that I could quickly spin up new machines or start fresh if necessary.

So I read up on the topic and at first had a lot of trouble understanding what was going on, but two days later everything is shaping up really nicely.

I started off by installing vagrant and writing a slightly customized Vagrantfile that basically looks like this:

  VAGRANTFILE_API_VERSION = "2"

  Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|

    # use the default ubuntu 64bit image
    config.vm.box = "precise64"

    # get it from vagrant if it does not exist
    config.vm.box_url = "http://files.vagrantup.com/precise64.box"

    # enable private access to the machine via a static ip address
    config.vm.network "private_network", ip: "10.168.0.2"

    # disable the default shared folder
    config.vm.synced_folder ".", "/vagrant", :disabled => true
    # set up as many shared folders as necessary
    # in this case for baumgartner fenster.
    # you might want to set www-data as owner and group, to make them
    # easily readable by the nginx/php in docker. :)

    # install docker
    config.vm.provision "docker",
      images: ["phusion/baseimage"]

    # update package list, install some packages & build custom docker image
    config.vm.provision :shell, :path => "vagrant-bootstrap.sh"

  end

All it does is get the default ubuntu 64bit image (just because it has good provisioning support), configure a static private ip address for easier access, disable the default shared vagrant folder, install docker, pull the phusion/baseimage docker base image and launch a bootstrap script that installs some packages on the VM (e.g. git).
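
The vagrant-bootstrap.sh referenced in the last line is not shown here; a minimal sketch of what it roughly does (the exact contents are an assumption) would be:

#!/bin/bash
# vagrant-bootstrap.sh -- rough sketch, the real script may differ
apt-get update
# install some tools on the VM itself, e.g. git
apt-get install -y git
# you could also build the custom docker image right away (see below)
# docker build -rm -t alexanderjulo/lamp git://github.com/alexanderjulo/docker-lamp.git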

That is basically everything your VM needs to be ready. After that I set the machine up with vagrant up and ssh into it using vagrant ssh.
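
In commands, that is simply:

vagrant up
vagrant ssh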

Now you have a virtual machine, but I don’t even want to dirty this one with a lot of dependencies and stuff. Instead I use docker and containers, because that lets me keep revisions of my environments. So the first thing I do is build my LAMP image (because many of my customer projects need this typical setup):

docker build -rm -t alexanderjulo/lamp git://github.com/alexanderjulo/docker-lamp.git

This will build an image with the name alexanderjulo/lamp (all images have to follow the <owner>/<name> scheme, so you would have to replace alexanderjulo with a name of your choosing) and will remove the containers created by the intermediate build steps (-rm). It will get the instructions on how to build the image from my github repository.
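
Once the build is done you can verify that the image exists with docker images, which lists all images on the VM:

docker images
# alexanderjulo/lamp (or whatever name you chose) should now show up next to phusion/baseimage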

If you don’t need any customization you can just use the newly built image and run it using your app:

docker run -t -d -v /vagrant/<app>:/srv/http -name <app> -p 80 alexanderjulo/lamp

This will provide the container with a pseudo tty (-t), detach it into the background (-d), mount the directory /vagrant/<app>, in which your app should be, as /srv/http in the container (-v /vagrant/<app>:/srv/http), name the container <app> (-name <app>), publish the container’s port 80 on a random host port (-p 80) and use our new image (alexanderjulo/lamp).

By running docker ps | grep <app> you will get information on the container, especially at what port your app is publicly available. Combine this information with the private IP of your VM (10.168.0.2, if you didn’t modify the Vagrantfile), go to your browser, type http://10.168.0.2:<port>/ and you will see your web app. That’s it!
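
If you prefer not to grep the docker ps output, docker also has a dedicated command for looking up a single port mapping, using the container name from above:

docker port <app> 80
# prints the host port that the container's port 80 was published on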


Customizing

Well, docker would not be docker if you could not easily customize this stuff. The phusion image provides /sbin/my_init to make sure that the LAMP services are started. If you want to work in a shell you can run /sbin/my_init -- bash instead; this way you can easily configure your mysql or whatever is necessary.

It works very similarly to running the actual app:

docker run -t -i -v /vagrant/<app>:/srv/http alexanderjulo/lamp /sbin/my_init -- bash

This will run an interactive session (-i) and not just start /sbin/my_init, but instead run /sbin/my_init -- bash, which starts your services in the background and then gives you a shell. All other parameters are explained above. We do not publish the port and do not give this container a name, as we do not intend to keep it.

Now you can make all the modifications necessary. One of my customers has a Yii webapp that requires LDAP and has some migrations and other stuff, so I set the machine up for that. In the shell that the docker run command opened for me I do the following:

# install the php-ldap package
apt-get update
apt-get install -y php-net-ldap

# configure mysql
mysql -e "CREATE DATABASE app;"

# run the Yii migrations
cd /srv/http/protected
./yiic migrate up --connectionID=dbMY

# clean up 
apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

You might want to add your ssh key to the container (the image automatically runs an sshd in the background). That way you can easily peek into it while it is running to check log files or whatever.

Just store your pubkey in the shared volume and add it to the authorized_keys file:

cat /srv/http/pubkey > ~/.ssh/authorized_keys
# and make sure the permissions are alright.
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys

After that, exit the container by typing exit. This will automatically shut down your services.

Now I have a new docker container with the changes, but to start new containers with my changes again and again I need a new image. Which is where one of docker’s magic features comes in: commit. First you will have to find your new container; you can use docker ps -a. Your container should be based on the image you used and will probably be the most recent one (first in the list). You either take its name or ID and do the following:
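
If you want to script this later, docker can also give you just the latest created container directly:

# -l limits the list to the most recently created container, -q prints only its ID
docker ps -l -q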

docker commit --run='{"Cmd": ["/sbin/my_init"]}' <id/name> <yourname>/<app> 

This will save the changes you just made in the container as an image under <yourname>/<app>. It will also set the default command back to /sbin/my_init, as we overrode it with /sbin/my_init -- bash when we modified the container.

Afterwards you can delete the container using:

docker rm <id/name>

Now you can just spawn containers of your app using:

docker run -t -d -v /vagrant/<app>:/srv/http -name <app> -p 80 <yourname>/<app>

You can stop them using docker stop <app> and then start them again with docker start <app>. In case you mess something up in your container, you can just stop it, remove it using docker rm <app> and then run a new one as described above.
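
So day to day the lifecycle boils down to a handful of commands:

docker stop <app>
docker start <app>
# if you messed something up: throw the container away and run a fresh one as above
docker rm <app>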

As long as you just start & stop the container, your changes will persist. At the moment there is no feature to get into a container once it has been started detached with my_init and have a look at it. You could just start it with the shell instead and detach/attach if you wanted though.
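
A sketch of what that could look like (just one way to do it, not necessarily what I run): start the container with the shell variant, detach with Ctrl-p Ctrl-q and reattach later with docker attach:

docker run -t -i -v /vagrant/<app>:/srv/http -name <app> -p 80 <yourname>/<app> /sbin/my_init -- bash
# detach without stopping the container by pressing Ctrl-p followed by Ctrl-q
docker attach <app>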

Automation

You could obviously automate this. I tried it and it works pretty well. You will need an outer bootstrap script that does the run and commit for you, and an inner script that modifies your container from the inside.
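
As a rough sketch (the file names and details are made up for illustration, not my exact scripts), the outer script could look something like this:

#!/bin/bash
# outer-bootstrap.sh -- hypothetical outer script: run the inner script, then commit the result
APP=$1
# run the inner script inside a throwaway container, with the app mounted as a volume
docker run -v /vagrant/$APP:/srv/http alexanderjulo/lamp /sbin/my_init -- bash /srv/http/inner-bootstrap.sh
# find the container we just created and commit it as a new image
CONTAINER=$(docker ps -l -q)
docker commit --run='{"Cmd": ["/sbin/my_init"]}' $CONTAINER <yourname>/$APP
# clean up the throwaway container
docker rm $CONTAINER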

One might ask now: why not use an automated Dockerfile for that? That’s easy. As long as your changes do not depend on your app, you absolutely can. But as soon as you want to run your app’s migrations or anything alike, you need the data volume. And as volumes are not portable, you cannot attach them in a Dockerfile, which in turn means you cannot do those steps in a Dockerfile.

If you are interested in more details regarding automation you can always ask me here or on twitter or anywhere else.