Lovely deployment with git, GitHub and Gunicorn

17. September 2012. Tagged git, github, python, work.

Well, we all know the problem. Deployment. It is just such a pain. SUCH a pain!

I think over time I have arrived at quite a nice setup that works pretty well. I have not yet solved every problem I ran across, but a lot of things are working nicely so far.



When I am working, I prefer to work with git. And I try to always have three general branches: master, testing and development. There might be more feature-specific branches and other stuff, but these three should always exist on any project that is a little bigger.

Every feature or bug fix starts either in a feature-specific branch, if it is something bigger that might break the application, or in development. As soon as I am happy with it, it moves to testing, where bug fixes might still be applied, but no new features are added any more. When the feature has been tested and everybody is happy, it is moved to the master branch.
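The promotion flow above boils down to a couple of git merges. A minimal sketch in Python (the branch names match mine; the `promote` helper is purely illustrative):

```python
import subprocess

def promote(source, target, dry_run=False):
    """Merge `source` into `target` and push, e.g. development -> testing."""
    commands = [
        ["git", "checkout", target],
        ["git", "merge", "--no-ff", source],
        ["git", "push", "origin", target],
    ]
    if not dry_run:
        for cmd in commands:
            subprocess.check_call(cmd)
    return commands

# promote("development", "testing")  # feature is ready for testing
# promote("testing", "master")       # everybody is happy -> go live
```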


I usually push all branches to the server for multiple reasons. The first one is safety: the more copies of the code, the better. If my apartment burned down tonight, all my code would still be on GitHub. I do have some private repositories for work purposes.

Well, so far this is the easy part that probably everyone knows about.


I run a subdomain just for a GitHub auto-deploy service. I add a webhook to every project I am working on and then add all instances of that project on the server to the auto-deploy config. This way they all get updated every time I push to GitHub.
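The auto-deploy service itself can be tiny. A sketch of the idea, assuming GitHub POSTs a JSON payload to this handler on every push (the repository name and paths are made up, and the real webhook payload contains far more fields than I parse here):

```python
import json
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

# made-up config: repository name -> instances of it on this server
DEPLOY_CONFIG = {
    "myproject": ["/srv/www/myproject", "/srv/testing/myproject"],
}

def instances_for(payload, config=DEPLOY_CONFIG):
    """Return the checkout paths that should pull for this push event."""
    repo = payload.get("repository", {}).get("name")
    return config.get(repo, [])

class DeployHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        for path in instances_for(payload):
            subprocess.call(["git", "pull"], cwd=path)  # update that instance
        self.send_response(200)
        self.end_headers()

# to run it: HTTPServer(("", 8000), DeployHandler).serve_forever()
```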


Multiple Subdomains

This approach enables me to do something very nice: I can run multiple subdomains. For example, I could run www.<domain>.com, testing.<domain>.com and development.<domain>.com. Every subdomain runs from a git clone in which the matching branch is checked out. So when I push to the development branch, GitHub notifies the auto-deploy service, all instances pull the latest code, and it is live on the development subdomain a few seconds later.
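The mapping behind this is simply branch to instance. Something like the following (the paths and domain are made up; GitHub reports the pushed branch as a ref like `refs/heads/development`):

```python
# each instance is a clone with one branch permanently checked out
INSTANCES = {
    "master":      "/srv/www.example.com",
    "testing":     "/srv/testing.example.com",
    "development": "/srv/development.example.com",
}

def instance_for_ref(ref, instances=INSTANCES):
    """Map a pushed ref like 'refs/heads/development' to its instance path."""
    branch = ref.rsplit("/", 1)[-1]
    return instances.get(branch)
```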


Gunicorn has a nice feature: a very specific reaction to the SIGHUP signal. When you send SIGHUP to the Gunicorn master process, it reloads the app from source, then starts new workers and kills old ones, one by one, until all old workers have been replaced by new ones.
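Triggering that reload from Python is straightforward, assuming Gunicorn was started with a pidfile (the pidfile path below is made up; on the shell this is just `kill -HUP $(cat pidfile)`):

```python
import os
import signal

def read_pid(pidfile_text):
    """Parse the contents of a gunicorn pidfile into a process id."""
    return int(pidfile_text.strip())

def reload_gunicorn(pidfile="/var/run/gunicorn/myproject.pid"):
    """Graceful reload: new workers come up, old ones are phased out."""
    with open(pidfile) as f:
        pid = read_pid(f.read())
    os.kill(pid, signal.SIGHUP)
```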

This way you can introduce features at any time without downtime. The auto-deploy service I run triggers this reload on every pull that introduces changes, so all the features I push to a branch go live immediately. And if you work in a team, every member can push changes that go live on any branch. If you want that!


If you are a good developer, you will write tests for your applications. They might be annoying and time-consuming, but they can be so helpful. I basically have only one test, and this test does nothing. But every time it is run, it ensures that the syntax is intact and won't crash any of my Gunicorn instances. I added it to my local git repository as a pre-commit hook: every time I try to commit something, the test is run to ensure the syntax is intact. If it is not, the hook refuses the commit.
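The "test" really only has to compile the code. A sketch of that check (`syntax_ok` is my own helper name, not part of any library):

```python
def syntax_ok(source, filename="<commit-check>"):
    """Return True if the given Python source at least parses."""
    try:
        compile(source, filename, "exec")
        return True
    except SyntaxError:
        return False
```

In a hook you would run this over every staged .py file and exit with a non-zero status on failure, which makes git abort the commit.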

Problems to be handled


Yes, SQL and version control do not like each other at all. What I dream of is applying update scripts automatically: if you change the database tables (I do not even write them in SQL, as I use SQLAlchemy), you just include a script to upgrade your tables, and it is applied automatically. I have no idea yet how to achieve that, but maybe I will find a way.
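What I have in mind is roughly a stored schema version plus an ordered list of upgrade steps. This is just a sketch of the bookkeeping, with made-up example scripts, not a real migration tool:

```python
# upgrade scripts keyed by the schema version they produce (made-up examples)
MIGRATIONS = {
    1: "ALTER TABLE users ADD COLUMN email TEXT",
    2: "CREATE TABLE tags (id INTEGER, name TEXT)",
}

def pending(current_version, migrations=MIGRATIONS):
    """Return the (version, script) pairs still to be applied, in order."""
    return [(v, s) for v, s in sorted(migrations.items()) if v > current_version]

# on deploy: for version, script in pending(stored_version):
#     run the script, then bump the stored version to `version`
```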


This is kind of a Python-specific problem, but it might worry a lot of people. Basically you would just have to write a post-receive script that installs your requirements.txt again. But as I do not upgrade my virtualenv that often, I just do it manually.
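Such a post-receive step would be little more than a wrapper around pip. A sketch, with a made-up virtualenv path:

```python
import subprocess

def pip_command(venv="/srv/www/myproject/env", requirements="requirements.txt"):
    """Build the pip invocation for the instance's virtualenv."""
    return [venv + "/bin/pip", "install", "-r", requirements]

def reinstall_requirements(venv="/srv/www/myproject/env",
                           requirements="requirements.txt"):
    """Re-run pip against requirements.txt inside the virtualenv."""
    subprocess.check_call(pip_command(venv, requirements))
```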

There are probably a lot more problems that need to be taken care of, but I either did not encounter them or simply forgot about them. If you have ideas on how to easily take care of my problems or improve my solutions, I would love to hear about them!