Deploying Pull Requests with Docker


The Git repositories in my current project are hosted on Bitbucket Cloud. Any code change has to go through a pull request. Jenkins builds the pull requests and gives its approval if the build is green. Additionally, at least one team member carries out a code review. Although a code review does not generally require the reviewer to build and test the app locally, one sometimes does want to see the result of the code changes, especially when, e.g., CSS or HTML is affected. In such cases, a visual test helps to understand and verify the changes.

In order to make things as easy as possible for the reviewer, we wanted to find a way to make a pull request testable with just the click of a button. We’ve just recently migrated our application to Spring Boot. Currently, we are still building a war file that is deployed on Tomcat. However, the executable war file can be run directly with the embedded Tomcat and, thus, can quite easily be packaged into a Docker image.
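The article does not show the Dockerfile for the application, but a minimal sketch could look like the following. The base image, the war file location, and the port are assumptions, not taken from the project:

```dockerfile
# Minimal sketch; base image, paths, and port are assumptions
FROM openjdk:8-jre
COPY target/myproject.war /opt/myproject.war
EXPOSE 8080
# The executable war starts the embedded Tomcat directly
ENTRYPOINT ["java", "-jar", "/opt/myproject.war"]
```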

We use MongoDB as our database, and for local development we dockerize it. What's special is that the Docker image already contains data: the nature of our application requires that the database always hold current data. For that purpose, a nightly job generates test data for the integration instance of our MongoDB. Jenkins in turn builds a Docker image for the MongoDB every night: the job creates a dump of the integration instance, imports it into the Docker image, and pushes the image to the Docker registry. Every morning, the developers can then pull a fresh image.
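A sketch of such a nightly image build might look like this. Everything here is an assumption (base image, paths, dbpath); the Jenkins job would first produce the `dump/` directory via `mongodump --host <integration-host> --out dump/`. Note that the data has to be restored to a dbpath outside the base image's declared volume, otherwise it would not be persisted in the image layer:

```dockerfile
# Sketch only; base image, paths, and dbpath are assumptions
FROM mongo:3.4
COPY dump/ /tmp/dump/
# Start a throwaway mongod, restore the dump into a dbpath that is
# not a declared VOLUME, then shut the daemon down again so the data
# is baked into the image layer
RUN mkdir -p /data/mongodb \
    && mongod --fork --dbpath /data/mongodb --logpath /var/log/mongod.log \
    && mongorestore /tmp/dump \
    && mongod --dbpath /data/mongodb --shutdown \
    && rm -rf /tmp/dump
CMD ["mongod", "--dbpath", "/data/mongodb"]
```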

Our Jenkins also runs on Docker, with one container for the master and another one for the build slave. The host's Docker socket is mounted into the slave container. Now, when Jenkins builds a pull request, it also builds a Docker image thereof, which is tagged with the pull request number. Via the Build Pipeline Plugin it is now possible to deploy a pull request together with a fresh MongoDB instance on the Jenkins host.

PR Pipeline

The job for building the pull request has a manual downstream job (part of the Build Pipeline Plugin) which deploys the app and the database.


A Python script creates a Docker network and starts the containers.

#!/usr/bin/env python
# -*- coding: utf-8 -*-
from argparse import ArgumentParser
from subprocess import Popen, STDOUT, PIPE


def docker(*arguments):
    # Run a docker command, echoing its output line by line
    cmdline = ['docker'] + list(arguments)
    print(' '.join(cmdline))
    proc = Popen(cmdline, stdout=PIPE, stderr=STDOUT)
    output = []
    while True:
        line = proc.stdout.readline()
        if not line:
            break
        line = line.rstrip()
        output.append(line)
        print('docker> %s' % line)
    return output


def start_myproject():
    print('Starting MyProject: %s' % version)
    docker('network', 'create', '-d', 'bridge', network_name)
    docker('run', '-d', '--name', mongo, '--network', network_name, '--network-alias', mongo, '-P', 'myrepo/mongo')
    docker('run', '-d', '--name', myproject, '--network', network_name, '-P', 'myrepo/myproject:%s' % version,
           '--database.hosts=%s' % mongo)
    docker('port', mongo)
    docker('port', myproject)


def destroy_myproject():
    print('Destroying MyProject: %s' % version)
    docker('stop', myproject)
    docker('rm', myproject)
    docker('stop', mongo)
    docker('rm', mongo)
    docker('network', 'rm', network_name)


args_parser = ArgumentParser(description='Manages MyProject/Docker dev deployments')
sub_parsers = args_parser.add_subparsers()
start_parser = sub_parsers.add_parser('start', help='Start MyProject and Mongo')
start_parser.add_argument('--version', '-v', required=True, help='The MyProject version to start')
start_parser.set_defaults(func=start_myproject)
destroy_parser = sub_parsers.add_parser('destroy', help='Destroy MyProject and Mongo')
destroy_parser.add_argument('--version', '-v', required=True, help='The MyProject version to destroy')
destroy_parser.set_defaults(func=destroy_myproject)
args = args_parser.parse_args()
version = args.version
network_name = version
mongo = 'mongo_%s' % version
myproject = 'myproject_%s' % version
args.func()


The port for the application is automatically assigned by Docker. The URL is printed to the log of the job and can be quickly accessed from the build pipeline view. The job gets the pull request number as a parameter and thus knows which image to start.
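The URL can be derived from the output of `docker port`, which the script above already prints. A small illustrative helper (not part of the original script; the host name is an assumption) might look like this:

```python
# Illustrative helper: derive the app URL from `docker port` output
# such as "8080/tcp -> 0.0.0.0:32768". Host name is an assumption.
def app_url(port_output, host='jenkins-host'):
    for line in port_output.splitlines():
        container_port, _, host_addr = line.partition(' -> ')
        if container_port.startswith('8080/'):
            # Take the dynamically assigned host port after the last colon
            host_port = host_addr.rsplit(':', 1)[1]
            return 'http://%s:%s/' % (host, host_port)
    return None
```

For example, `app_url('8080/tcp -> 0.0.0.0:32768')` yields `http://jenkins-host:32768/`.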

The deployment job has yet another manual downstream job for destroying the deployment.



In order to get quick access to application logs or to open a shell on a container without having to ssh into the Jenkins host, we installed Shipyard. It gives a good overview of the running containers and lets you destroy obsolete, forgotten ones.

Why no Pipeline Job?

Unfortunately, the Bitbucket Pull Request Builder Plugin is not compatible with the new Jenkins pipelines. There is a JIRA ticket for this which refers to the Bitbucket Branch Source Plugin. Unfortunately, that's out of the question for us because it uses webhooks. For the master build, however, we do already use a pipeline job.


Reinhard is a Senior IT Consultant at codecentric's Munich office. He has more than 20 years of Java development experience and also works with Go, Python, and Kotlin. In his projects, he is a strong proponent of automation. In recent years, he has gained substantial knowledge of infrastructure topics around Maven, Git, Jenkins, and Docker, and has also shared his knowledge in trainings. Reinhard enjoys contributing to open-source projects. He is involved in the community around Kubernetes and serves as a Helm charts and org maintainer.

