As described in an earlier blog post, “Bootstrapping the Project”, comSysto’s performance and continuous delivery guild members are currently evaluating several aspects of distributed architectures in their spare time alongside customer project work. In the initial lab, a few colleagues of mine built the starting point for a project called Hash-Collision:
They focused on the basic application structure and architecture to get a showcase up and running as quickly as possible, and handed over a set of tools for running it.
In search of a more sophisticated runtime environment we went down the most obvious path and chose to get our hands on the hype technology of late 2014: Docker. I assume that most people have a basic understanding of Docker and what it is, so I will not spend too much time on its motivation here. Basically, it is a tool inspired by the idea of ‘write once, run anywhere’, but on a higher level of abstraction than that famous programming language. Docker not only makes an application portable; it lets you ship all dependencies such as web servers, databases and even operating systems as one or more well-defined images, and use the very same configuration from development all the way to production. Even though we did not yet have any production or pre-production environments, we wanted to give it a try. Being totally enthusiastic about containers, we chose the most container-like place we could find and locked ourselves in there for two days.
One of the nice things about Docker is that it encourages developers to re-use existing infrastructure components by design. Images are defined incrementally by selecting a base image, and building additional functionality on top of it. For instance, the natural way to create a Tomcat image would be to choose a base image that already brings a JDK and install Tomcat on top of it. Or even simpler, choose an existing Tomcat image from the Docker Hub. As our services are already built as fat JARs with embedded web servers, things were almost trivial.
Each service should run in a standardized container with the executed JAR file being the sole difference. Therefore, we chose to use only one service image and inject the correct JAR using Docker volumes for development. On top of that, we needed additional standard containers for nginx (dockerfile/nginx) and RabbitMQ (dockerfile/rabbitmq). Each service container has a dependency on RabbitMQ to enable communication, and the nginx container needs to know where the Routing service resides to fulfill its role as a reverse proxy. All other dependencies can be resolved at runtime via any service discovery mechanism.
As a first concrete example, this is the Dockerfile for our service image. Based on Oracle’s JDK 8, there is not much left to do except for running the JAR and passing in a few program arguments:
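The original file is not reproduced here, so the following is only a sketch of what such a Dockerfile could look like. The base image `dockerfile/java:oracle-java8`, the `/app` mount point, the JAR name, and the RabbitMQ host argument are all assumptions chosen for illustration:

```dockerfile
# Sketch only — base image, paths, and arguments are assumptions.
# Start from an Oracle JDK 8 base image.
FROM dockerfile/java:oracle-java8

# The fat JAR is injected into the container as a volume at runtime.
VOLUME /app

# Run the embedded-web-server JAR and pass program arguments,
# e.g. the hostname of the RabbitMQ container.
ENTRYPOINT ["java", "-jar", "/app/service.jar"]
CMD ["--rabbitmq.host=amq"]
```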
After building this image, it is ready for usage in the local Docker repository and can be used like this to run a container:
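A command along these lines would start a service container from that image; the container name, link alias, ports, host path, and image tag below are illustrative assumptions, not the exact command we used:

```shell
# Illustrative only — names, ports, and paths are assumptions.
docker run -d \
  --name user-service \
  --link amq:amq \
  -p 8080:8080 \
  -v "$(pwd)/user-service/target:/app" \
  hash-collision/service
```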
Very soon we ended up with a handful of such bash commands that we pasted into our shells over and over again. Obviously we were not exactly happy with that approach, so we started to look for more powerful tools in the Docker ecosystem and stumbled upon fig (which had not yet been deprecated in favor of docker-compose at that time).
Docker-compose is a tool that simplifies the orchestration of Docker containers that all run on the same host, based on a single Docker installation. Any `docker run` command can be described in a structured `docker-compose.yml` file, and a simple `docker-compose up` / `docker-compose kill` is enough to start and stop the entire distributed application. Furthermore, commands such as `docker-compose logs` make it easy to aggregate information for all running containers.
Here is an excerpt from our `docker-compose.yml` that illustrates how self-explanatory those files can be:
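The excerpt itself is not shown here, so the following is a hedged reconstruction in the fig-era docker-compose format: a `user` service built from a local Dockerfile directory, and an `amq` service pulled from the Docker Hub. Service names, paths, and ports are assumptions:

```yaml
# Sketch only — service names, paths, and ports are assumptions.
user:
  build: ./service-image        # directory containing the service Dockerfile
  volumes:
    - ./user-service/target:/app  # inject the fat JAR for development
  links:
    - amq                       # every service depends on RabbitMQ
  ports:
    - "8081:8080"
amq:
  image: dockerfile/rabbitmq    # public image from the Docker Hub
  ports:
    - "5672:5672"
```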
Semantically, the definition of the user service is equivalent to the last sample command given above except for the handling of the underlying image. The value given for the `build` key is the path to a directory that contains a `Dockerfile` which describes the image to be used. The AMQ service, on the other hand, uses a public image from the Docker Hub and hence uses the key `image`. In both cases, docker-compose will automatically make sure the required image is ready to use in the local repository before starting the container. A single `docker-compose.yml` file consisting of one such entry for each service is now sufficient for starting up the entire distributed application.
To be able to debug an application running in a Docker container from the IDE, we need to take advantage of remote debugging, just as for any physical remote server. To do that, we defined a second service debug image with the following `Dockerfile`:
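Again, the original file is not reproduced here; a minimal sketch could enable the standard JDWP agent on port 10000. The base image and paths are the same illustrative assumptions as before:

```dockerfile
# Sketch only — base image and paths are assumptions.
FROM dockerfile/java:oracle-java8

VOLUME /app

# The JDWP agent makes the JVM listen for a remote debugger on port 10000
# without suspending startup.
ENTRYPOINT ["java", \
  "-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=10000", \
  "-jar", "/app/service.jar"]
```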
This will make the JVM listen for a remote debugger on port 10000 which can be mapped to any desired host port as shown above.
With a local installation of Docker (on a Mac using boot2docker http://boot2docker.io/) and docker-compose, starting up the whole application after checking out the sources and building all JARs is now as easy as:
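The exact commands are not shown in this version of the post; under the setup described above, the sequence could look roughly like this (the Maven build step and compose flags are assumptions):

```shell
# Assumed workflow — build tool and flags are illustrative.
boot2docker up               # on a Mac: start the Docker VM
$(boot2docker shellinit)     # export DOCKER_HOST for the local shell
mvn clean package            # build all fat JARs
docker-compose up -d         # build images and start all containers
docker-compose logs          # aggregate logs of all running containers
```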
Note that several problems originate from boot2docker on Mac. For example, containers cannot be accessed on `localhost`; they must instead be reached via the IP of the VM, since boot2docker runs Docker inside a VirtualBox image.
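The VM's IP can be looked up with boot2docker itself; the address shown below is only the common default, not a guaranteed value:

```shell
# Print the IP of the boot2docker VM; use it in place of localhost.
boot2docker ip
# e.g. 192.168.59.103, so a service mapped to port 8080 would be
# reachable at http://192.168.59.103:8080/
```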
In an upcoming blog post, I will outline one approach to migrate this setup to a more realistic environment with multiple hosts using Vagrant and Ansible on top. Until then, do not forget to check if Axel Fontaine’s Continuous Delivery Training at comSysto is just what you need to push your knowledge about modern deployment infrastructure to the next level.