How are Docker containers being used today in software delivery pipelines, and what lessons have been learned about using containers to deliver software in production?
Last Wednesday I participated in an online panel on the subject of Docker and Containers in Your CD Pipeline, as part of Continuous Discussions (#c9d9), a series of community panels about Agile, Continuous Delivery and DevOps. Watch a recording of the panel:
Continuous Discussions is a community initiative by Electric Cloud, which powers Continuous Delivery at businesses like SpaceX, Cisco, GE and E*TRADE by automating their build, test and deployment processes.
Below are a few insights from my contribution to the panel:
What is so exciting about containers?
“I’m excited about Docker because it lets me consolidate a lot of different technologies. Whether I’m doing Tomcat or NGINX deployments, the only thing I have to do is create an image, push it to a repository, and then I can pull it onto any machine and execute it. I think that’s really important to product people, because they can deliver products faster without extra effort when things change.”
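As an illustration of that workflow, a minimal image for an NGINX deployment might look like the sketch below; the base-image tag, content paths and repository name are placeholders, not anything from the panel:

```dockerfile
# Sketch of a minimal image for an NGINX deployment.
# Base-image tag and file paths are illustrative placeholders.
FROM nginx:1.25-alpine

# Bake the static content and server configuration into the image,
# so the artifact is self-contained and runs identically anywhere.
COPY ./site/ /usr/share/nginx/html/
COPY ./nginx.conf /etc/nginx/conf.d/default.conf

EXPOSE 80
```

Once built and pushed (e.g. `docker build -t myrepo/site:1.0 .` followed by `docker push myrepo/site:1.0`), any machine with Docker can `docker pull` and `docker run` the exact same artifact.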
What are some of the challenges that this technology brings?
“We currently have the most problems with the ops side, because they don’t accept containers so readily. From the development side, we produce an artifact as a Docker image, push it to a repository and then execute it on the client. But ops don’t trust the Docker daemon: the processes run as root, and they point out there’s an HTTP server exposed that can be DDoS-attacked. Yet they don’t run load tests, which is what most people consider critical. What they don’t see is how easy it becomes to set up new machines and run my applications: with cluster management tools, it’s really easy to build a large microservice infrastructure without even knowing where my servers are running. These tools are responsible for running the whole system – they do service discovery, deployment and process supervision. When a process crashes, it is restarted on a new machine; when a machine crashes, the load is shared across the remaining machines. These are the benefits that these people don’t appreciate.”
How do you see it used today?
“I see this mostly at eCommerce companies who want zero downtime in their deployments. The possibility of canary releases is important to them. And they want an auto-scaling mechanism for higher load – say during a launch, when people go to the website and browse the catalogs, the workload has to be spread across multiple nodes. These companies are mostly interested in Docker because it’s really easy to build systems that can handle that.
“When you have multiple test systems, you can use Docker clusters really easily to spin up new test environments and present your new feature to product management, and they can say ‘yes, merge it into the master branch, it should be in the new release.’”
What have you learned?
“Containers make it really easy to move applications around. From the application side, adopting containers is easy. Our interface for this is environment variables, so all I had to do was evaluate those environment variables in my application. The application then works in a container just as it did before in a non-container environment. This makes it really easy to build pipelines where I deliver software to the Docker daemon, it can execute multiple branches, and product management stays in control of the flow – they can see the features really quickly, say ‘yes, this should go into the upcoming release,’ and then you can easily integrate it into the new release.
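As a minimal sketch of that environment-variable interface inside an application (the variable names and defaults here are made up for illustration), the app reads its settings from the environment with fallbacks, so the same code runs identically inside and outside a container:

```python
import os

def load_config(environ=os.environ):
    """Read application settings from environment variables.

    The variable names and defaults are illustrative; the point is that
    a container runtime (e.g. `docker run -e DB_HOST=...`) and a plain
    developer shell both speak this same interface.
    """
    return {
        "db_host": environ.get("DB_HOST", "localhost"),
        "db_port": int(environ.get("DB_PORT", "5432")),
        "feature_branch": environ.get("FEATURE_BRANCH", "master"),
    }

# Simulate the variables a container runtime would inject
config = load_config({"DB_HOST": "db.internal", "DB_PORT": "6432"})
```

Because the configuration arrives the same way in every environment, no code changes are needed to move the application between a laptop, a test cluster and production.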
“You can easily do canary releases when you want to check whether a feature you have developed is really successful with customers. It’s really easy to build systems where you route 5% of the traffic to the new version of the software, and then see what happens, with real customer feedback. Normally, companies hire students to do the UI tests, but when you do this with real customers, you see the unexpected behaviors you would otherwise miss.”
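One common way to implement such a 5% split is at the load balancer rather than in the application; for example, with NGINX’s `split_clients` directive. The sketch below assumes the stable and canary containers are reachable at placeholder addresses:

```nginx
# Sketch of a 5%/95% canary split in NGINX (http context).
# Upstream addresses are placeholders for the stable and canary containers.
upstream stable { server 10.0.0.10:8080; }
upstream canary { server 10.0.0.20:8080; }

# Hash the client address so each client consistently sees one version
split_clients "${remote_addr}" $app_upstream {
    5%   canary;
    *    stable;
}

server {
    listen 80;
    location / {
        proxy_pass http://$app_upstream;
    }
}
```

Hashing on the client address keeps each customer pinned to one version, so the 5% cohort gives clean feedback on the canary before the rollout is widened.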