Dockerized Ghost

Or why devops went container insane

The devops team, well, that would be me really, recently decided it was time to do something about our Ghost installation. Ghost is the blog software displaying the very post you are reading right now. We had been running a simple source install, and the upgrade process for this kind of install involved carefully copying in and replacing old directories and files with new ones, lest you overwrite some content you meant to keep!

So the time was ripe for a full makeover of the Ghost installation. Docker containers are all the rage today, and I've been using them with much success for some projects at work. Docker is essentially a management utility on top of Linux containers. According to Wikipedia:

LXC (Linux Containers) is an operating-system-level virtualization environment for running multiple isolated Linux systems (containers) on a single Linux control host.

A Docker container essentially allows you to run a customised Linux OS and your application in isolation from the host Linux system. It does not, however, require full hardware virtualisation the way technologies such as VirtualBox do. Thus you only need to run one Linux kernel for many containers and can avoid the overhead of simulating both the hardware and the kernel.

Containers afford some nice features which make them an excellent choice in many cases. Docker containers are self-contained: all dependencies are bundled in the container image, thus preventing dependency hell. You no longer have to worry about one application requiring Node.js version 0.12.X and another requiring 0.10.X. Docker containers use the AUFS layered file-system, where the files packaged into the container image are read-only. This prevents application files from being edited at run time. Furthermore, you can link a volume into the container so the application inside the container can write data to a specific folder on the host Linux file-system or in another "data only" container. All of these properties make it really easy to move Docker containers around, upgrade them without losing data, and run them on different types of host systems. A Docker container does not care if you run Debian, Gentoo, Arch, or any other distro.

Running Ghost in Docker

Install Docker and docker-compose

This depends on your OS of choice. Docker provides a good overview of install options. You will want to finish the Docker tutorial in order to learn the key concepts and commands used with Docker. The docker-compose program is recommended as it will allow you to define your containers in a structured file (YAML).
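As a rough sketch, something like the following gets both tools onto most Linux systems (the convenience script and the pip route are my assumptions here; follow the official instructions for your particular OS):

# Install the Docker engine via the official convenience script
curl -fsSL https://get.docker.com/ | sh

# docker-compose is distributed as a Python package, so pip can install it
sudo pip install docker-compose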

Install the Ghost Docker container

There are Docker images for most open-source projects these days, and you will find them on Docker Hub. A quick search reveals the official Ghost Docker image. The instructions show you how to run Ghost with the docker run shell command. This approach might work well enough, but you should consider storing the command in a shell script, as you should use the same options each time you remove and start the container anew. Running docker run --name some-ghost -v /path/to/ghost/blog:/var/lib/ghost ghost will make Docker pull the ghost image from Docker Hub, create a container from that image, and mount /path/to/ghost/blog on your host system into /var/lib/ghost in the Ghost container's file system.
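Such a script could be as simple as this sketch (the container name and host path are the placeholders from the command above):

#!/bin/sh
# Recreate the Ghost container with the same options every time.
# -d detaches so the container keeps running in the background.
docker rm -f some-ghost 2>/dev/null
docker run -d --name some-ghost \
    -v /path/to/ghost/blog:/var/lib/ghost \
    ghost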

I like to run my Docker application services with docker-compose to make configuration more transparent. The following docker-compose.yml file shows you how you can set up Ghost:
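ghost:
  image: ghost:latest
  ports:
    - "2368"
  environment:
    # yourhostname.example is a placeholder for your blog's domain
    - VIRTUAL_HOST=yourhostname.example
  volumes:
    - /srv/ghost:/var/lib/ghost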

The file is quite simple. The outermost ghost: key tells the docker-compose command to create a service called ghost using the properties listed within. The image: ghost:latest line tells Docker to use the official ghost image from Docker Hub tagged with the latest tag. ports: - "2368" lets Docker know you want to map that container port onto your host. This is useful, at least when debugging the connection to the blog. Please note that the port mapped to on the host is chosen randomly and is not stable between runs. You can extend the option to "8888:2368" if you want Docker to map the 2368 container port to port 8888 on the host. This is okay for servers where you only have a few containers and can remember which ports have already been mapped. environment: passes environment variables to the Docker container. The "VIRTUAL_HOST=yourhostname.example" variable will be explained in more detail later, but it is used by a different container to automatically generate proxy settings for nginx. Lastly we have volumes: - /srv/ghost:/var/lib/ghost, which tells Docker to mount the /srv/ghost folder into /var/lib/ghost in the container. This makes it possible to store the SQLite database files as well as template files and configuration files on the host. The /srv/ghost folder will never be deleted by Docker as it is bound specifically to a host OS folder.

Running sudo docker-compose up -d should get Ghost up and running. You can see the status of the container by running docker-compose ps. The command will list useful information such as the container ID, the uptime, the ports being listened on, etc.

Reverse-proxying Docker containers

So how do we proxy incoming requests on the server to the Ghost Docker container? Like most reasonable people I usually go for nginx when looking at HTTP servers that support reverse proxying. It is fast, simple to configure, and popular enough to afford easy answers to problems via Stack Overflow and Google.

At this point you might wonder if the way to go is an nginx install on the host OS and a sites configuration file for each Docker container you want to proxy to. Well, not quite. While it is certainly possible to proxy to our Ghost container by manually creating a sites configuration file with proxy settings, it is not advisable in the long run. First of all, unless you mapped the Ghost Docker container port to a specific host port, your nginx proxy config might stop working the next time you restart your Ghost Docker service. Another issue is the fact that you have to create, edit, and delete these nginx server configs as you add, change, and delete Docker containers. I believe this might prove cumbersome over time. So let's find a better way!

Some people resort to dedicated service discovery frameworks such as Consul to set up self-configuring proxies and the like. Such frameworks are way overkill for what we are trying to achieve here though (even for someone like myself who likes overengineering things for the sake of practice). It turns out there is a very nice Docker image called nginx-proxy which packages nginx and docker-gen into an easy-to-use container. The way it works is that you mount your host OS's docker.sock file into the container, which can then use this socket to listen for Docker engine events (such as containers being started or stopped). The bundled docker-gen application then uses an nginx template file to create the relevant reverse proxy settings. It uses the earlier mentioned "VIRTUAL_HOST=yourhostname.example" environment variable to generate the nginx proxy configuration. Yes, it is that easy!
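If you are comfortable with the all-in-one approach, the docker-compose service is tiny. This sketch follows the nginx-proxy README (the service name is my own choice):

nginxproxy:
  image: jwilder/nginx-proxy
  ports:
    - "80:80"
  volumes:
    # nginx-proxy expects the Docker socket at /tmp/docker.sock
    - /var/run/docker.sock:/tmp/docker.sock:ro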

Hold on a minute, isn't mounting your docker.sock file into a front-facing proxy server something of a security risk? Well, it might be, so let us run them separately. You can run the official nginx container image that nginx-proxy builds upon, and then use a separate dockergen container to handle updating the nginx configuration. The following docker-compose.yml file shows how this can look:
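nginx:
  image: nginx:latest
  # pinning the container name makes it easy for docker-gen to signal nginx
  # (an assumption on my part; adjust to your own naming)
  container_name: nginx
  ports:
    - "80:80"
  volumes:
    # anonymous volume holding the generated nginx config
    - /etc/nginx/conf.d

dockergen:
  image: jwilder/docker-gen
  # share nginx's volumes so the generated config lands where nginx reads it
  volumes_from:
    - nginx
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock:ro
    - /srv/nginx-proxy:/etc/docker-gen/templates
  # regenerate the config from the template on container events and send
  # nginx a SIGHUP; the flags follow the nginx-proxy/docker-gen documentation
  command: >
    -notify-sighup nginx -watch -only-exposed
    /etc/docker-gen/templates/nginx.tmpl
    /etc/nginx/conf.d/default.conf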

As you can see, the Docker engine's socket file is mounted into the dockergen container: /var/run/docker.sock:/tmp/docker.sock:ro. The :ro tells Docker to mount the file in read-only mode in the container. This prevents dockergen, or any other application in that container, from mucking with your Docker process. /srv/nginx-proxy:/etc/docker-gen/templates is important, as you need to provide dockergen with a template to generate the nginx configuration files from. This one can be downloaded separately. Read more about that on the nginx-proxy Github page.
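Fetching the template can be a one-liner. The URL below points at the nginx.tmpl file in the nginx-proxy GitHub repository (check the repository if the path has moved):

curl -o /srv/nginx-proxy/nginx.tmpl \
    https://raw.githubusercontent.com/jwilder/nginx-proxy/master/nginx.tmpl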

When you have edited the docker-compose.yml file to your liking you should be able to start proxying your Docker containers with a simple command: sudo docker-compose up -d. If everything went well, dockergen should now pick up every container with a VIRTUAL_HOST environment variable and create an nginx proxy entry for each of them.

You might think that the whole dockergen and nginx container setup seems a bit much if you are only hosting Ghost, and you would be right in thinking so. But if you are running things in Docker, it is probably because you are running a bunch of different services in different containers and want an easy way to manage them all. In this use case, which I hope will be the case for Kompiler, nginx and dockergen are very nice tools!

Devops/Research and Development signing out.