
Continuous Delivery in the Cloud – Part 4: Provisioning your Test, Staging and Production environments

20.5.2012 | 8 minutes of reading time

This article will give you an insight into how provisioning of different environments can be done automatically with smart configuration management in combination with Amazon EC2. I will introduce the open source tools we are using and talk about best practices we found useful. There are several ways to achieve the same goal, and this is just one approach to configuration management.

Introduction

Manually keeping track of every single server configuration is hard. Systems evolve over time, and keeping test, staging and production environments in sync is not an easy task, especially when you are in development mode and keep changing not just your source code but also your system configuration. You might have firewall changes, database changes, new software patches, bugfixes, etc. You want to test all changes in all environments to make sure that releases to production happen smoothly and do not break the existing system. These are just a couple of reasons why you should treat your infrastructure the same way you treat your source code.

System Requirements

The following table shows our system requirements. We decided to use the Oracle/Sun JDK since it is more widely used than OpenJDK. For the operating system we chose CentOS 64bit to build on top of an Enterprise Linux distribution. Any other Linux that AWS offers would have been fine as well.

Component            Description
OS                   CentOS 64bit
Java                 Oracle JDK Version 7
Application Server   Apache Tomcat 6
Database             MySQL 5

First Step: Build a base image

Since I will be running several EC2 instances, the first step is to create a base image from a pre-configured instance. This base image will contain the basic features that we need from any instance. This includes login via ssh keys, a Puppet installation and a Java installation. Afterwards I will create all other instances from this Amazon Machine Image (AMI). From that point on, I will use Puppet to provision the system.

By default Amazon EC2 provides a list of system images you can choose from when creating a new instance. You can even create and share your own images or buy images from other vendors on the AWS Marketplace. I picked a CentOS 64bit Amazon Machine Image (AMI) since our recently bought in-house hardware is certified for CentOS 64bit. You can go with any flavor of OS you wish.

After creating a clean EC2 instance from a CentOS 64bit AMI following the steps outlined in these two blog posts (Getting started with Amazon EC2 and Screencast: Create your first EC2 instance), there are a couple of things I need to install for my base image.

1. SSH Keys

In order for multiple administrators or users to log in to the EC2 instances via ssh, I suggest you add their public SSH keys to the authorized_keys file in the .ssh folder. This keeps user management quite simple.
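A minimal sketch of what that looks like on the instance (the key and the comment are placeholders for a real public key):

$ echo "ssh-rsa AAAAB3Nza... admin@example.com" >> ~/.ssh/authorized_keys
$ chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys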

2. Install Oracle JDK Version 7

Download the latest version of the Oracle JDK as an rpm package. Unfortunately you cannot use the yum package manager, since Oracle requires you to accept their license agreement before downloading the JDK. After downloading the rpm package, use the following command to install it [ORA]. For production use you would only install the Java Runtime Environment (JRE) and not the full JDK. For running our demo, installing the JDK is fine.

$ rpm -ivh jdk-7u3-linux-x64.rpm
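To check that the installation went through, you can query the package database; the exact path to the java binary may differ depending on the JDK version, so treat the second line as an assumption:

$ rpm -q jdk
$ /usr/java/default/bin/java -version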

3. Install Puppet 2.7

There is a long list of open source configuration management tools out there, e.g. Chef, Puppet and CFEngine [PUP]. We have found the Puppet community and tool support quite helpful and mature, so we chose Puppet for this demo. It is mostly a matter of taste which tool you select to provision your environments [PRO].

The Puppet rpm packages are not part of the standard CentOS software repository. To install version 2.7 you need to add the PuppetLabs repository to the yum repositories. First, you need to create the following file:

$ sudo touch /etc/yum.repos.d/puppetlabs.repo

Then add the Puppet repository information in the file:

[puppetlabs]
name=Puppet Labs Packages
baseurl=http://yum.puppetlabs.com/el/6/products/$basearch/
gpgcheck=0
enabled=1

In the next step, you can install Puppet via the yum package manager:

$ yum -y install puppet

There are different options for keeping your Puppet configuration in sync. Either you use the puppet-server to store all scripts and a cron job to poll for configuration changes, or you keep all files in your version control system and push them from your CI server to the different environments. I chose the second option to avoid having to deal with yet another system. All configuration files are kept in a git repository. In Jenkins I configured provisioning jobs that take the latest Puppet scripts, copy them to the different environments, and execute Puppet each time a system is provisioned. To allow Puppet script execution from a remote server via ssh, you need to add the following line to the /etc/sudoers file:

Defaults:ec2-user !requiretty
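With ssh keys, the JDK and Puppet in place, the instance can be turned into the base AMI. A minimal sketch using the EC2 API tools (instance id and image name are placeholders):

$ ec2-create-image i-12345678 -n "centos-base-jdk7-puppet"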

Now we can create the Puppet scripts.

Second Step: Create Puppet Scripts

To provision the Tomcat server and the MySQL database I am using the following Puppet scripts. You can find them in this GitHub repository [GIT]. Puppet will install a MySQL database, configure a couple of users necessary for the web application, create a new database schema and make sure the database is running. It will also install a Tomcat server and configure a data source using Puppet template files. Here is an overview of the Puppet files.

.
├── manifests
│   └── site.pp
└── modules
    ├── mysql
    │   ├── files
    │   │   └── mysql-connector-java-5.1.15.jar
    │   ├── manifests
    │   │   └── init.pp
    │   └── templates
    │       └── my.cnf.erb
    └── tomcat
        ├── files
        │   └── mysql-connector-java-5.1.15.jar
        ├── manifests
        │   └── init.pp
        └── templates
            ├── context.xml.erb
            ├── server.xml.erb
            └── tomcat-users.xml.erb
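Before pushing changed scripts to an environment, it can be useful to check them locally. A quick sketch, assuming Puppet 2.7 is installed on your workstation and you run it from the repository root:

$ puppet parser validate manifests/site.pp modules/mysql/manifests/init.pp modules/tomcat/manifests/init.pp
$ puppet apply --noop --modulepath=modules manifests/site.pp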

You can create Puppet scripts with an editor of your choice or use the Geppetto IDE. Geppetto can be installed as an Eclipse Plugin [GEP] or can be run standalone. It provides instant feedback if there are syntax errors and supports auto-completion in Puppet scripts.

Another nice feature is the integration with existing forge modules [FOR]. As a developer you can easily search for existing modules (“tomcat”, “mysql”, “mongo”, …) and have a look at how other developers automate their systems using Puppet.

Third Step: Configure Jenkins Job

Provisioning of all environments is done with Jenkins jobs. I will take the UAT environment as an example. Before test users can do manual testing, the environment needs to be up and running and provisioned with the latest software.

Using the EC2 API tools I first check whether the UAT environment is up and running. It is important to remember that a restart of any environment results in a new IP address. In order to distinguish the environments, I tag each instance with a unique name. The following shell script shows how to automatically get the login URL of the UAT server by searching for the instance with the correct tag.

TAG="UAT-ENVIRONMENT"

echo "find instance with tag $TAG"
instance=`ec2-describe-instances | grep $TAG | awk '{print $3}'`

echo "get server for instance $instance"
server=`ec2-describe-instances $instance | grep INSTANCE | awk '{print $4}'`
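The tag itself only has to be set once per environment. A minimal sketch with the EC2 API tools (the instance id is a placeholder; the tag value is what the grep above searches for):

$ ec2-create-tags i-12345678 --tag Name=UAT-ENVIRONMENT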

In the next step I check whether that specific EC2 instance is up and running. Since most manual testing is done between 8am and 6pm, there is no need to have the test environments up and running all night. To save instance time and money, I configured a Jenkins job that turns off all instances in the evening that are not needed. Once a tester or a source code change triggers the provisioning of the environment, I check if the instance is up and running. If that is not the case, I start the instance using the EC2 API tools.

state=`ec2-describe-instances $instance | grep INSTANCE | awk '{print $4}'`
if [ "$state" = "stopped" ]
then
  echo "$instance is stopped. trying to start ...";
  ec2-start-instances $instance
fi

server=`ec2-describe-instances $instance | grep INSTANCE | awk '{print $4}'`
while [ "$server" = "pending" ]; do
  echo " -> Waiting for instance to startup completely"
  server=`ec2-describe-instances $instance | grep INSTANCE | awk '{print $4}'`
  sleep 5
done
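The evening shutdown job mentioned above essentially just calls the stop command for each tagged instance; a minimal sketch (stopping an already stopped instance does no harm):

echo "stopping $instance for the night"
ec2-stop-instances $instance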

Once the instance is started I call Puppet to provision the environment. First the Jenkins job copies all Puppet files to the instance and in a second step executes Puppet as the root user.

echo "Copy Puppet files"
scp -r puppet/ ec2-user@$instance:
ssh ec2-user@$instance "sudo su --session-command='cp -r /home/ec2-user/puppet/* /etc/puppet/' root"

echo "Execute Puppet"
ssh ec2-user@$instance "sudo su --session-command='puppet apply /etc/puppet/manifests/site.pp' root"

After that, the Tomcat server and the MySQL database have the latest configuration. In the next step I execute Liquibase to update the database. Liquibase is a database change management tool that keeps all changes to your database schema as changesets in XML files. Liquibase takes care of upgrading your database schema when it recognizes that the database is not up to date [LIQ].
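For illustration, a Liquibase update run from the command line could look roughly like this; changelog file, JDBC URL and credentials are placeholders for whatever your project uses:

$ liquibase --driver=com.mysql.jdbc.Driver \
            --classpath=mysql-connector-java-5.1.15.jar \
            --changeLogFile=db.changelog.xml \
            --url="jdbc:mysql://localhost:3306/demo" \
            --username=demo --password=secret \
            update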

Summary

Right now complete provisioning of an environment takes about 3 minutes.

So far we have covered the basic tooling that is necessary for building a continuous delivery pipeline. In the next article we will have a closer look at the full continuous delivery pipeline. We will put all the pieces together that we have described so far to create a fully automated delivery pipeline. I hope we have made you curious to come back next week 🙂

Best practices

  • #1: Keep everything under version control
  • #2: Treat infrastructure as code
  • #3: Fully automate every build step
  • #4: Use tags to distinguish EC2 instances
  • #5: Use public ssh keys for user management
  • #6: Use a change management tool for your database schema
  • #7: Provision your environments with a configuration management software

References

[ORA] Oracle Download Website http://www.oracle.com/technetwork/java/javase/downloads/index.html
[PUP] PuppetLabs http://puppetlabs.com/
[FOR] Puppet Forge Modules http://forge.puppetlabs.com/
[GEP] Geppetto Eclipse Update Site http://download.cloudsmith.com/geppetto/updates
[LIQ] Liquibase Database Change Management http://www.liquibase.org/
[PRO] List of Open Source Provisioning Software http://en.wikipedia.org
[GIT] Github Puppet Source Code http://bit.ly/LayOda
