This article describes how to configure a continuous integration server and build a continuous delivery pipeline.
- Part 1: Continuous Delivery Overview
- Part 2: Getting started with Amazon EC2
- Part 3: Configure your Continuous Integration Server
- Part 4: Provisioning your Test, Staging and Production environments
- Part 5: Configure your Continuous Delivery Pipeline
- Part 6: Create a Dashboard of your Systems
The most significant role in a continuous integration pipeline belongs to the continuous integration server. It executes and monitors repeated build, test and deployment jobs.
The basic CI workflow checks the version control system for new commits at regular intervals. If the repository has changed, the server checks out the latest source code version and runs the specified build scripts to compile the software. After a successful compilation, a suite of tests is executed and a notification of the results is sent. Optionally, the application can then be deployed into the production environment.
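The poll-build-notify cycle described above can be sketched as a small shell script. This is a minimal sketch against a throwaway local repository; the repository, the build command and the notification step are illustrative placeholders, and a real CI server would run this check continuously:

```shell
#!/bin/sh
# Sketch of one iteration of the CI poll/build cycle against a throwaway local repository.
set -e
WORK=$(mktemp -d)
git init -q "$WORK/repo"
cd "$WORK/repo"
git config user.email "ci@example.com" && git config user.name "CI Demo"
git commit -q --allow-empty -m "initial commit"

LAST_BUILT=""                      # revision of the last successful build
HEAD_NOW=$(git rev-parse HEAD)     # latest revision in the repository
if [ "$HEAD_NOW" != "$LAST_BUILT" ]; then
    echo "change detected: $HEAD_NOW"
    # here the CI server would check out, compile, test and notify, e.g.:
    # mvn clean verify && mail -s "build OK" team@example.com
    LAST_BUILT=$HEAD_NOW
fi
```

In a real setup the loop state (the last built revision) lives in the CI server, not in a shell variable; the sketch only illustrates the trigger logic.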
The demo environment is created using Amazon Web Services as described in Continuous Delivery in the Cloud – Part 1: Overview. It consists of six AWS instances:
- the CI Server Master and four test/production environments running 64-bit CentOS, a common server operating system
- the CI Server Slave running 32-bit Ubuntu
The separation of the CI server into two subcomponents is motivated by the requirement to run performance tests. It also lets us run acceptance tests in different browsers, such as Internet Explorer, Chrome or Firefox, since additional slaves (e.g. an MS Windows client) can be set up. Another advantage is the possibility of executing tests and gathering test reports in parallel on different CI Server Slave instances, which shortens the run time of the CI pipeline.
As the central component of the CI pipeline, the CI Server Master must be available under a static IP address. Therefore its configuration uses an Elastic IP, as mentioned in Part 2 of this CI series.
This project uses the open-source, Java-based CI server Jenkins. There are two ways to install the Jenkins server:
1. Using a package manager, such as yum for CentOS, or apt-get for Ubuntu.
2. By deploying the web application jenkins.war into a servlet container that supports the Servlet 2.4 specification or later, such as Tomcat 6, GlassFish or JBoss.
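On the CentOS master, option 1 looks roughly like the following. The URLs point at the official Jenkins package repository; the exact repository and key locations may differ for your distribution and Jenkins version, so treat this as a hedged sketch rather than a definitive recipe:

```shell
# Add the official Jenkins repository and install the package on CentOS
sudo wget -O /etc/yum.repos.d/jenkins.repo https://pkg.jenkins.io/redhat-stable/jenkins.repo
sudo rpm --import https://pkg.jenkins.io/redhat-stable/jenkins.io.key
sudo yum install -y jenkins
sudo service jenkins start   # the web UI then listens on port 8080 by default
```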
After Jenkins has been installed and set up, the master–slave relationship mentioned above has to be established. In the Jenkins management console I registered a new slave node and configured it accordingly.
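One common way to connect such a slave is via JNLP: the master serves an agent jar that the slave downloads and runs. The host name and node name below are placeholders for this setup, not values from the demo environment:

```shell
# On the Ubuntu slave: fetch the agent jar from the master and connect via JNLP
wget http://ci-master.example.com:8080/jnlpJars/slave.jar
java -jar slave.jar \
  -jnlpUrl http://ci-master.example.com:8080/computer/ubuntu-slave/slave-agent.jnlp
```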
From this point the master starts distributing load to the slave. The exact delegation behavior depends on the configuration of the slave. In this project, jobs labeled “jbehave continuousdelivery” are executed on the CI Server Slave instance. The job runs JBehave performance tests and collects useful performance reports. Thanks to the seamless integration, you can still see all test reports, browse Javadoc and download build results without noticing the master–slave split.
The version control system Git is integrated into the Jenkins server using the following plugins:
After the Jenkins server is restarted with these plugins, the GitHub job is configured, which creates hyperlinks between Jenkins projects and the GitHub project:
Another advantage of the GitHub plugin is that a build job can be triggered every time a code change is pushed into the Git repository:
The development team receives immediate feedback if source compilation or test execution fails during the build.
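Such push triggers are typically wired up with a GitHub webhook pointing at the endpoint the GitHub plugin exposes on the Jenkins server. A hedged sketch using the GitHub API, where the user, token, owner, repository and Jenkins host are all placeholders:

```shell
# Register a webhook that notifies Jenkins about every push (placeholders throughout)
curl -u myuser:mytoken -X POST https://api.github.com/repos/myorg/myrepo/hooks \
  -d '{"name": "web", "active": true, "events": ["push"],
       "config": {"url": "http://ci-master.example.com:8080/github-webhook/",
                  "content_type": "json"}}'
```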
We manage software binaries (files created by the build process, any libraries and static files the application requires) through JFrog Artifactory.
Manually distributing files across many file systems is inefficient and error-prone when it comes to updates, rollbacks and uninstalls. Artifactory, in contrast, offers many advantages:
- provides fully-reproducible builds
- provides full visibility of deployed artifacts and used dependencies
- provides information about build environment and build status for full traceability
- provides a bidirectional connection between build and artifact information inside the CI server and Artifactory
Artifactory is available in three editions: open source, Artifactory Pro and Artifactory Cloud (SaaS). I set up the open-source Artifactory version on the CI Server Master instance. The Jenkins Artifactory Plugin is installed to integrate it with the CI server.
The implemented demo workflow runs through the following steps to build a releasable software version. First, Jenkins resolves dependencies from Artifactory during the build process; in this case Artifactory serves as a library cache. Afterwards the compiled binaries are versioned and deployed into a repository inside Artifactory. From this point on you can obtain any checked-in software version from Artifactory.
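Assuming a Maven build, the resolve-and-deploy step can be sketched from the command line. The host name is a placeholder, and the port and repository name are Artifactory's installation defaults, which may differ in your setup:

```shell
# Resolve dependencies through Artifactory and deploy the versioned binaries back into it
mvn clean deploy \
  -DaltDeploymentRepository=artifactory::default::http://ci-master.example.com:8081/artifactory/libs-release-local
```

In the demo itself this wiring is done by the Jenkins Artifactory Plugin rather than by hand, which is what links build information and artifacts bidirectionally.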
Furthermore, you can take advantage of the Artifactory Pro edition and add YUM support on top of the open-source installation. This allows you to use Artifactory as a YUM repository, bundle custom installation packages and provision RPMs directly from Artifactory to YUM clients. This option is especially important for automatic environment provisioning (described in Part 4 of this CI series).
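On a CentOS client, provisioning RPMs from such a repository only requires a repo definition along these lines. The host, the repository name `yum-local` and the package name are hypothetical values for this sketch:

```shell
# Point yum at the Artifactory-hosted RPM repository, then install a bundled package
cat > /etc/yum.repos.d/artifactory.repo <<'EOF'
[artifactory]
name=Artifactory YUM repository
baseurl=http://ci-master.example.com:8081/artifactory/yum-local
enabled=1
gpgcheck=0
EOF
yum install -y my-application   # hypothetical package name
```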
To check the quality of the built software, the open-source analysis tool Sonar is used. It runs various rule checkers such as PMD, Checkstyle and FindBugs, and gives feedback on characteristics of the code base such as test coverage, maintainability, the amount of duplicated code, cyclomatic complexity, etc.
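For a Maven project, the analysis can be triggered after compilation with the standard Sonar goal. The host URL assumes Sonar's default port on the master instance and is an assumption of this sketch:

```shell
# Compile, run the tests and push the analysis results to the Sonar server
mvn clean install sonar:sonar -Dsonar.host.url=http://ci-master.example.com:9000
```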
After installing Sonar on the CI Server Master instance, I added the following Sonar plugins to enhance the configuration:
- PDF Export: generates a PDF version of the Sonar report with the relevant information
- Motion Chart: displays a motion chart of a project’s metric evolution over time
- Timeline: visualizes a project’s metric history
- Artifact Size: measures and records the size of the project artifacts
- Sonargraph: checks and measures the overall coupling and the level of cyclic dependencies
- Taglist: handles the Checkstyle ToDoComment rule and the Squid NoSonar rule and generates a report
In this demo, Sonar analyses are executed automatically after every source code check-in and compilation, so the development team immediately receives feedback on code quality.
If any of the defined rules are violated, the build fails and the last committer has to resolve the issue.
Build Pipeline Plugin
To visualize the deployment pipeline in Jenkins we installed the Build Pipeline Plugin.
It enables you to configure a chain of Jenkins jobs that are executed in a defined order. You can even run dependency trees, with the option of automatically starting a successor job after its predecessor has finished.
The build pipeline view in Jenkins demonstrates the current execution status and possible errors in the pipeline.
In this article I have described a possible setup of a CI server, covering the most essential components of a continuous delivery pipeline. In the following parts we will introduce environment provisioning with Puppet (Part 4) and build a sample delivery pipeline (Part 5).