
Continuous Delivery Patterns: Building your application inside a Docker container


Let me be clear: this post is not about building a Docker container for your application. It is about building your application inside a container designed for exactly that purpose – building your application – and nothing else. This approach helped us a lot in dealing with different environments, technologies, versions, and so on, without polluting our continuous delivery infrastructure.

Where we came from

Our continuous delivery pipeline was perfect – everything was automated! We generated our jobs with the Jenkins Job DSL Plugin upon creation of a new repository, so there was nothing to do manually inside Jenkins. All jobs looked the same and were 100% reproducible within seconds. What more do you want?

What bothered us

In the beginning we had one type of job – Maven 3 with JDK 7 – and every project was built that way. But soon things started to fragment: there was that JavaScript web project that needed to be built with NPM, there was that legacy project built with Ant that needed to be integrated into the pipeline as well, and soon projects started to upgrade to Java 8.

What we did to solve it

We started out by tagging repositories with build types and maintaining a separate Job DSL definition for each build type. It was okay. Still, everything was maintained by the continuous delivery team, and we wanted to do more DevOps – more power to the people! Technology choices – and therefore build tool choices – became more decentralized. Letting every team build up its own continuous delivery platform was an option, but we didn't like it: it may not be as hard as it used to be, but it still takes knowledge and time.
So – one platform to rule them all. But how?
We decided that every project defines for itself how exactly it is built, by providing a Dockerfile for that purpose. The file is named Dockerfile-build to distinguish it from the Dockerfile meant for running the application. Now we have the same job definition for everybody – again. Triggered by a commit, it goes like this:

  1. Clone the repository
  2. Take the Dockerfile-build and build a Docker image from it
  3. Take the image and run it, mounting the Jenkins workspace into the container
  4. Take the Dockerfile (meant for running the application) and build an image from it, copying the artifacts produced in step 3 into the image
  5. Push the image to a Docker registry
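The steps above can be sketched as a plain shell script. The repository URL, image names, registry address, and the mount path `/workspace` are all placeholders chosen for illustration, not names from our actual setup:

```shell
#!/bin/sh
set -e

# 1. Clone the repository (in a real job, Jenkins usually does this for us)
git clone git@example.com:myteam/myapp.git workspace
cd workspace

# 2. Build the builder image from Dockerfile-build
docker build -f Dockerfile-build -t myapp-build .

# 3. Run the builder, mounting the workspace so the artifacts land in it
docker run --rm -v "$PWD":/workspace -w /workspace myapp-build

# 4. Build the runtime image; its Dockerfile COPYs the artifacts from step 3
docker build -t registry.example.com/myapp:latest .

# 5. Push the runtime image to the registry
docker push registry.example.com/myapp:latest
```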

Here is an example for a Dockerfile-build for an Angular app:

FROM node:6.8.0

RUN npm install -g angular-cli

CMD npm install && ng build
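Assuming the builder image is tagged `myapp-build` (an illustrative name), it could be invoked like this, with the workspace mounted as the working directory so that `ng build` writes its `dist` output back into the workspace:

```shell
# build the builder image from Dockerfile-build
docker build -f Dockerfile-build -t myapp-build .

# run the build inside the container; results appear in ./dist on the host
docker run --rm -v "$PWD":/workspace -w /workspace myapp-build
```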

And the corresponding Dockerfile for running the application:

FROM nginx

COPY dist /usr/share/nginx/html

Why we like it

Since all the build-specific technologies are hidden inside the Docker images, we have very low maintenance costs for Jenkins itself. No installing and upgrading of Maven, NPM, Ant, Java, and so on. No different Job DSL definitions for different build types. You can check out any old version and still build it, even if it uses long-outdated build tools. And there are pre-built Docker images for practically every build tool you need.

Caveats

Nearly all build tool Docker images run as root by default. To avoid file permission problems, we run Jenkins in a Docker container as well, as the root user, and let the Jenkins container and the build containers share a volume. Since both run as root, there is no conflict.
This does not work if Jenkins runs directly on the host. One option in that case is to make the build images run under the Jenkins user by changing file ownership in the image.
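A sketch of that option, reusing the Angular build image from above and assuming the host Jenkins user has UID/GID 1000 (adjust to your installation; note that the official node images may already define a user with UID 1000, in which case the USER instruction alone is enough):

```dockerfile
FROM node:6.8.0

RUN npm install -g angular-cli

# Create a user matching the host Jenkins UID/GID so that files written
# into the mounted workspace are owned by Jenkins, not by root.
# The "|| true" tolerates the UID/GID already existing in the base image.
RUN groupadd --gid 1000 jenkins 2>/dev/null || true \
 && useradd --uid 1000 --gid 1000 --create-home jenkins 2>/dev/null || true

USER 1000

CMD npm install && ng build
```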
