I spent last Friday looking at setting up an OpenID provider for the Baserock project. This is kind of a prerequisite to having our own Storyboard instance, which we’d quite like to use for issue and task tracking.
I decided to start by using some existing tools that have nothing to do with Baserock. Later we can move the infrastructure over to using Baserock, to see what it adds to the process. We have spent quite a lot of time “eating our own dogfood” and while it’s a good idea in general, a balanced diet contains more than dogfood.
The Baserock project has an OpenStack tenancy at DataCentred which can host our public infrastructure. The goal is to deploy my OpenID provider system there. However, for prototyping it’s much easier to use a container on my laptop, because then I don’t need to be working across an internet connection.
The `morph deploy` command in Baserock allows deploying systems in multiple ways, and I think it’s pretty cool. Another tool which seems to do this is Packer.
Differences between `morph deploy` and Packer
Firstly, I notice Packer takes JSON as input, where Morph uses YAML. So with Packer I have to get the commas and brackets right and can’t use comments. That’s a bit mean!
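To illustrate (these snippets are my own, and the YAML is just the JSON restated, not real Morph syntax): the same trivial builder stanza in both formats, where only the YAML version can carry a comment:

```json
{
  "builders": [
    { "type": "docker", "image": "fedora:20", "export_path": "image.tar" }
  ]
}
```

```yaml
builders:
- type: docker
  image: fedora:20        # comments like this are impossible in JSON
  export_path: image.tar
```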
A bigger difference is that Morph treats ‘building and configuring’ the image separately from ‘writing the image somewhere.’ By contrast, Packer ties the ‘builder’ and the ‘type of hosting infrastructure’ together. In my case I need to use the Docker builder for my prototype and the OpenStack builder for the production system. There can be asymmetry here: in Docker my Fedora 20 base comes from the Docker registry, but for OpenStack I created my own image from the Fedora 20 cloud image. I could have created my own Docker image too if I didn’t trust the semi-official Fedora image, though.
The downside to the `morph deploy` approach is that it can be less efficient. To deploy to OpenStack, Morph first unpacks a tarball locally, then runs various ‘configuration’ extensions on it, then converts it to a disk image and uploads it to OpenStack. Packer starts by creating the OpenStack instance and then configures it ‘live’ via SSH. This means it doesn’t need to unpack and repack the system image, which can be slow if the system being deployed is large.
Packer has separate types of ‘provisioner’ for different configuration management frameworks. This is handy, but the basic ‘shell’ and ‘file’ provisioners actually give you all you need to implement the others. The `morph deploy` command doesn’t have any special helpers for different configuration management tools, but that doesn’t prevent one from using them.
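As a sketch of what I mean (the provisioner types are Packer’s own, but the playbook name and commands are made up for illustration), an Ansible run can be emulated with just ‘file’ and ‘shell’:

```json
{
  "provisioners": [
    { "type": "file", "source": "playbook.yml", "destination": "/tmp/playbook.yml" },
    { "type": "shell", "inline": [
      "yum install -y ansible",
      "ansible-playbook -i localhost, -c local /tmp/playbook.yml"
    ]}
  ]
}
```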
Packer doesn’t seem to be packaged for Fedora 20, which I use as my desktop system, so I had a go at building it from source:
```
sudo yum install golang
mkdir gopath && cd gopath
go get -u github.com/mitchellh/gox
git clone https://github.com/mitchellh/packer \
```
There’s no `make install`, but you can run the tool straight out of the build directory. Note that there’s also a program called `packer` provided by the CrackLib package in /usr/sbin on Fedora 20, which can cause a bit of confusion!
Prototyping with Docker
I’m keen on using Docker for prototyping and for managing containers in general. I’m not really sold on the idea of the Dockerfile, as it seems like a less portable incarnation of shell scripting. I do think it’s cool that Docker takes a snapshot after each line of the file is executed, but I’ve no idea if this is useful in practice. I’d much prefer to use a configuration management system like Ansible. And rather than installing packages every time I deploy a system, it’d be nice to just build the right one to begin with, like I can do with Morph in Baserock.
Using Packer’s Docker builder doesn’t require me to write a Dockerfile, and as the Packer documentation points out, all of the configuration that can be described in a Dockerfile can also be specified as arguments to the `docker run` command.
As a Fedora desktop user it makes sense to use Fedora for now as my Docker base image. So I started with this `template.json` file:
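Something along these lines (the export path is illustrative; the Docker builder needs somewhere to write its tarball):

```json
{
  "builders": [
    {
      "type": "docker",
      "image": "fedora:20",
      "export_path": "fedora-20.tar"
    }
  ]
}
```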
I ran `packer build template.json` and (after some confusion with /usr/sbin/packer) waited for a build. Creating my container took less than a minute, including downloading the Fedora base image from the Docker hub. Nice!
I could then enter my image with `docker run -i -t` and check out my new generic Fedora 20 system.
Initially I thought I’d use Vagrant, which is a companion tool to Packer, to get a development environment set up. That would have required me to use VirtualBox rather than Docker for my development deployment, though, which would be much slower and more memory-hungry than a container. I realised that all I really wanted was the ability to share the Git repo I was developing things in between my desktop and my test deployments, which could be achieved with a Docker volume just as easily.
Edit: I have since found out that there are about four different ways to use Vagrant with Docker, but I’m going to stick with my single `docker run` command for now.
So I ended up with the following command line to enter my development environment:
```
# The volume path and image name here are illustrative
docker run -i -t --rm \
    -v /home/sam/test-baserock-infrastructure:/src \
    <image ID> /bin/bash
```
The only issue is that, because I’m running as ‘root’ inside the container, files from the development Git repo that I edit inside the container become owned by root in my home directory. It’s no problem to always edit and run Git from my desktop, though (and since the container system lacks both vim and git, it’s easy to remember!).
Running a web service
I knew of two OpenID providers I wanted to try out. Since Baserock is mostly a Python shop I thought the first one to try should be the Python-based Django OpenID Provider (the alternative being Masq, which is for Ruby on Rails).
Fedora 20 doesn’t ship with Django, so the first step was to install it in my system. The easiest (though far from the best) way is using the Packer ‘shell’ provisioner and running the following:
```
# -y avoids the interactive prompt during provisioning
yum install -y python-pip
pip install django
```
The next step was to follow the Django tutorial and get a demo webserver running. Port forwarding is nobody’s friend: it took me a bit of time to be able to talk to the webserver (which was in a container) from my desktop. The Django tutorial advises running the server on 127.0.0.1:80, but this doesn’t make so much sense in a container. Instead, I ran the server with the following:
```
python ./manage.py runserver 0.0.0.0:80
```
And I ran the container as follows:
```
# The volume path and image name here are illustrative
docker run -i -t --rm \
    -v /home/sam/test-baserock-infrastructure:/src \
    -p 127.0.0.1:80:80 \
    <image ID> /bin/bash
```
So inside the container the web server listens on all interfaces, but it’s forwarded only to localhost on my desktop, so that other computers can’t connect to my rather insecure test webserver.
I then spent a while learning Django and setting up Django OpenID Provider in my Django project. It was actually quite easy and fun! Eventually I got to the point where I wanted to put my demo OpenID server on the internet, so I could test it against a real OpenID consumer.
Deploying to OpenStack
Deploying to OpenStack with Packer proved a bit more tricky than deploying to Docker. It turns out that fields like ‘source_image’, ‘flavor’ and ‘networks’ take IDs rather than names, which is a pain (although I understand the reason). I had to give my instance a floating IP, too, as Packer needs to contact it via SSH and I’m deploying from my laptop, which isn’t on the cloud’s internal network. It took a while to get a successful deployment, but we got there.
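For the record, the builder stanza ended up shaped roughly like this; every ID, pool name and username below is a placeholder rather than a real value:

```json
{
  "builders": [
    {
      "type": "openstack",
      "source_image": "11111111-2222-3333-4444-555555555555",
      "flavor": "2",
      "networks": ["aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"],
      "floating_ip_pool": "ext-net",
      "ssh_username": "fedora",
      "image_name": "baserock-openid-provider"
    }
  ]
}
```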
I found that using the ‘file’ provisioner to copy in the code of the Django application didn’t work: the files were there in the resulting system, but corrupted. It may be better to try to deploy this from Git anyway, but I’m a little confused about what went wrong there.
Edit: I found that there was quite a bit of corruption in the files that were added during provisioning, and while I didn’t figure out the cause, I did find that adding a call to ‘sleep 10’ as the first step in provisioning made the issue go away. Messy.
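Concretely, the workaround is just an extra ‘shell’ step ahead of the file upload (the source and destination paths here are illustrative):

```json
{
  "provisioners": [
    { "type": "shell", "inline": ["sleep 10"] },
    { "type": "file", "source": "django-site", "destination": "/srv/django-site" }
  ]
}
```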
I’m fairly happy with what I managed to get done in a single day: we now have a way of developing and deploying infrastructure which requires minimal fluff in the Git repository and a pretty quick turnaround time. As for the Django OpenID provider, I’ve not yet managed to get it to serve me an OpenID — I guess next Friday I shall start debugging it.
The code is temporarily available at http://github.com/ssssam/test-baserock-infrastructure. If we make use of this in Baserock it will no doubt move to git.baserock.org.