A Complete Setup of GitLab CI & Docker Using Vagrant & Ansible: HTTPS/Let’s Encrypt, Container Registry, Runners


Tired of Jenkins? Always keeping an eye on all those new kids on the block with their super cool and simple Continuous Integration pipeline files? Here's a guide on how to fire up a fully functional GitLab Continuous Integration/Delivery pipeline with Let's Encrypt, a Docker Container Registry and Runners in no time.

The problem with Jenkins

There are many reasons to stick with Jenkins. It's a mature Continuous Integration server and it has a big market share. Everybody uses Jenkins. So why should you bother with something different? Well, I'm quite a Jenkins fanboy. As a consultant I used it in many projects and it always felt like a good choice.

Always? Well, just until the concept of Pipeline as Code arose and the Jenkins Pipeline Plugin was proposed as the answer to that concept in Jenkins 2.x. Together with a smart colleague I set up a new Jenkins server and we started to rewrite all our existing Jenkins jobs in the Jenkins Pipeline way… And it wasn't easy! We felt that this approach was missing many things we'd already solved before and now needed to reimplement in our Jenkinsfiles, which took much more time than we had planned. At that time we had a really good standing in the project and the customer was on our side. We somehow managed to put everything together – but it didn't feel finished. And it was way too verbose! I don't really know how we convinced our customer not to just scream at us about that decision (I think it was all the other architectural decisions that were pretty good 🙂 ).

At the same time I was heavily using open source projects and also started to contribute to some, including building my first own projects on GitHub. The “standard way” to do Continuous Integration there is to use TravisCI. And all you have to do to configure your pipeline is to create a simple file called .travis.yml (see an example here). Comparing these files to the big pipelines of our customer projects is of course inappropriate. But the thought that everything should work in a much easier way remained.

Thinking about all of this, I went to the codecentric coffee kitchen. Well, maybe you already know what happened next. 🙂 Many colleagues there said things like:

“Hey Jonas, you Jenkins fanboy. Check out all those cool new CI servers like Concourse, Circle CI or even GitLab CI! We don't know why you're still messing around with Jenkins…”

With a fresh coffee in my hand, I opened Google and found what my stomach was telling me all the time: “Jenkins 2.0 tries to address this by promoting a Pipeline plugin (plus another plugin to visualize it), but it kind of misses the point.”

That also reminded me of other pain points. Ever tried to keep all those Jenkins plugins updated? Why the heck do I need all those plugins anyway?! And why is Jenkins so hard to set up in a fully automated way that my colleague Reinhard needed to give deep-dive talks about it (I really recommend them!)?!

Now I was ready to switch my CI fanboy server! And as there are many good rumors about GitLab CI, I wanted to give it a try. And that should be no problem, right? It's just one of those new and easy-to-set-up tools!

A real-life GitLab CI setup

Installing and configuring GitLab CI isn't always as easy as one could think at first. Yeah I know, there are those tutorials that present you a docker-compose up and you're already 80 % there. But in the end you'll see that you just achieved maybe 10 %. 🙂 Why is that? Well, if we want to set up a modern CI pipeline, we for sure want to use Docker somewhere. It simplifies the effort to test, build and run our applications and also prevents us from getting into trouble with unmet build requirements on our CI server itself: everything needed is already there inside the matching Docker images – no matter what kind of software you're building or what programming language you're using! The GitLab CI docs also propose this strategy:

One of the new trends in Continuous Integration/Deployment is to:

1. Create an application image
2. Run tests against the created image
3. Push the image to a remote registry
4. Deploy to a server from the pushed image
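Translated into a pipeline file, these four steps could look roughly like the following hypothetical .gitlab-ci.yml sketch. The predefined variables CI_REGISTRY, CI_REGISTRY_IMAGE, CI_COMMIT_SHA and CI_JOB_TOKEN are provided by GitLab CI; run-tests.sh and deploy.sh are made-up project scripts:

```yaml
stages:
  - build
  - test
  - push
  - deploy

build-image:
  stage: build
  script:
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA .

test-image:
  stage: test
  script:
    - docker run --rm $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA ./run-tests.sh

push-image:
  stage: push
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA

deploy:
  stage: deploy
  script:
    - ./deploy.sh $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
```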

This means we need a working Docker installation on our pipeline server as a prerequisite for the GitLab configuration. And as this post will show, there are more prerequisites. So it turns out to be a good idea to leave the simple path of docker-compose up and to shift to a much more comprehensible setup here. This also has another advantage: every step described could be used inside your company's infrastructure and on your own servers! It's also a good idea to strive for a fully automated setup of our CI pipeline – having all the steps available in automatically executable code, checked in to version control.

To achieve a fully comprehensible setup, we use some Infrastructure-as-Code tools. The Ansible playbooks contain every step necessary to provision a GitLab server – and they double as great documentation of what's needed to set everything up from the ground up, even if you don't want to use Ansible! With the help of Vagrant we'll define our infrastructure inside a Vagrantfile, so we can easily fire up a server locally that is based on a certain OS. And switching to your company's GitLab server is extremely easy: just edit the Ansible inventory file and add [yourcompany-gitlab-server] including its IP.
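Such an inventory entry could look like this (group name from the text, IP made up for illustration):

```ini
[yourcompany-gitlab-server]
10.0.0.42
```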

Prerequisites

For the sake of comprehensibility, every Ansible playbook and Vagrantfile used in this post is available inside the example project on GitHub. To run this post's setup, you need a running installation of Ansible and Vagrant together with a virtualization provider like VirtualBox. On a Mac, this is just a few Homebrew commands away:

brew install ansible
brew cask install virtualbox
brew cask install vagrant

To really achieve a comprehensible setup, we also need the vagrant-dns plugin (we'll talk about that in a second). Just install it with:

vagrant plugin install vagrant-dns

Now we're ready to get our hands dirty and clone github.com/jonashackt/gitlab-ci-stack. Be sure to add your own domain name to the Vagrantfile. As I own the domain jonashackt.io and later want GitLab to be available at gitlab.jonashackt.io, I added the following:

    config.vm.hostname = "jonashackt"
    config.dns.tld = "io"

After a vagrant dns --install, we're ready to fire up our server! Just go right into the gitlab-ci-stack directory and fire up the Vagrant Box with the common vagrant up:

vagrant up

Depending on your internet connection, this can take some time – especially when the command is executed for the first time. As soon as our Vagrant Box is running, we have everything in place to run our Ansible playbooks against it. Let's do a connection check first:

ansible gitlab-ci-stack -i hostsfile -m ping
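If everything is wired up, Ansible's ping module answers with a pong – the output should look roughly like this (the host alias depends on your inventory file):

```
gitlab-ci-stack | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
```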

If this returns a SUCCESS, we can move on to really execute our Ansible playbooks.

One command to install & configure full GitLab CI

There are basically two options to install GitLab: the Omnibus way, and from source. We're using Omnibus here because it makes life much easier.

Everything you need to install a fully functional GitLab instance is done by the playbook prepare-gitlab.yml. Before we execute it, we need to check two things. First make sure the domain name your GitLab instance should answer on is provided inside prepare-gitlab.yml. In my case this is gitlab.jonashackt.io:

  vars:
    gitlab_domain: "gitlab.jonashackt.io"

The second part depends on your setup. If you use this setup together with the provided Vagrant Box, you'll need API access to your DNS provider. This is because our Vagrant Box isn't accessible from the Let's Encrypt servers directly (we'll also talk about the “why” in a second, I promise). For now just provide providername, providerusername and providertoken for your DNS provider's API as extra-vars. In some cases you also need to add your current IP (check a site like whatsmyip.org) to the DNS provider's IP whitelist. Now we're ready to execute our playbook:

ansible-playbook -i hostsfile prepare-gitlab.yml --extra-vars "providername=yourProviderNameHere providerusername=yourUserNameHere providertoken=yourProviderTokenHere"

If you don't use the Vagrant Box of our current setup and your server is publicly accessible, you can safely ignore these extra-vars – GitLab will handle everything for you. Just execute:

ansible-playbook -i hostsfile prepare-gitlab.yml

Ansible will now install and configure a fully functional GitLab CI for you. If you don't want to know anything else, that's perfectly fine! Just wait for the playbook to complete, open up your browser and enter your domain name. The result should look something like this:

complete let's encrypt gitlab

But feel free to read on if you want to know about the hows and whys 🙂

Five steps from zero to GitLab CI platform

As already mentioned, the Ansible playbooks provide us with perfect (and up-to-date) documentation on how to install everything. So let's have a look at the GitLab installation process. The main playbook prepare-gitlab.yml is structured into five tasks:

- hosts: all
  become: true
 
  vars:
    gitlab_domain: "gitlab.jonashackt.io"
    gitlab_url: "https://{{ gitlab_domain }}"
    gitlab_registry_url: "{{ gitlab_url }}:4567"
 
  tasks:
 
  - name: 1. Prepare Docker on Linux node
    include_tasks: prepare-docker-ubuntu.yml
    tags: install_docker
 
  - name: 2. Prepare Let´s Encrypt certificates for GitLab if we setup an internal server like Vagrant (you have to provide providername, providerusername & providertoken as extra-vars!)
    include_tasks: letsencrypt.yml
    when: providername is defined
    tags: letsencrypt
 
  - name: 3. Install GitLab on Linux node
    include_tasks: install-gitlab.yml
    tags: install_gitlab
 
  - name: 4. Configure GitLab Container Registry
    include_tasks: configure-gitlab-registry.yml
    tags: configure_registry
 
  - name: 5. Install & Register GitLab Runner for Docker
    include_tasks: gitlab-runner.yml
    tags: gitlab_runner

We need to (1.) install Docker on our machine and (2.) fetch proper Let's Encrypt certificates for our not publicly accessible Vagrant Box. Then everything needed for the (3.) GitLab Omnibus installation is done in the next task, followed by a playbook on how to (4.) configure the GitLab Container Registry. The fifth playbook then finally (5.) registers our GitLab Runners, which will be able to interact with the server's Docker engine.

The full setup will look like this in the end:

Full blog posts GitLab setup

logo sources: GitLab icon, Ubuntu logo, Let´s Encrypt icon, Vagrant logo, VirtualBox logo, Ansible logo, Docker logo

Install & configure Docker

The first included task list, prepare-docker-ubuntu.yml, simply walks through the standard guide on how to install Docker on Ubuntu. If you use a different distro, just change the modules etc. to match your Linux version.

There's really nothing special here – except the way we install Docker Compose. The approach proposed in the docs uses, rather unappealingly, a hard-coded version number inside the needed curl command. Therefore the docs have to add the following hint:

Use the latest Compose release number in the download command.

But there's a much nicer way! The Python package manager pip always provides us with the current Docker Compose package. So all we have to do is the following:

  - name: Install pip
    apt:
      name: python3-pip
      state: latest
 
  - name: Install Docker Compose
    pip:
      name: docker-compose

Now we don't need to mess with maintaining the Docker Compose version number and can use the smooth upgrade process of a package manager.
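If you want the playbook to pull the newest Docker Compose on every run, the Ansible pip module supports that directly – a small variation on the task above:

```yaml
  - name: Install or upgrade Docker Compose to the latest version
    pip:
      name: docker-compose
      state: latest
```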

Don't go without HTTPS and a domain!

As mentioned before, we want to achieve a real-life GitLab CI setup here. What we therefore don't want is to access GitLab via a URL like http://localhost:30080, which would be the standard way with Vagrant port forwarding and without HTTPS in place. A central point about using GitLab CI with Docker – including the GitLab Container Registry and the Docker Runners – is to have a valid domain name and properly configured HTTPS. Trust me, you don't want to start without that! There will be so many error messages waiting for you. From a simple failing Git push like:

$ git push
fatal: unable to access 'https://gitlab.jonashackt.io/root/yourRepoNameHere/': SSL certificate problem: self signed certificate

to errors while trying to register GitLab Runners:

ERROR: Registering runner... failed
runner=gyy8axxP status=couldn't execute POST against https://gitlab.jonashackt.io/api/v4/runners: Post https://gitlab.jonashackt.io/api/v4/runners: x509: certificate signed by unknown authority
PANIC: Failed to register this runner. Perhaps you are having network problems

up to problems while trying to push into the GitLab Container Registry:

Error response from daemon: Get https://gitlab.jonashackt.io:4567/v2/: x509: certificate signed by unknown authority
ERROR: Job failed: exit status 1

I think there are many more stumbling blocks on the way to a properly configured GitLab CI platform. To avoid most of them, let's configure proper HTTPS!

Using domain names for Vagrant Boxes

Let's start the journey by configuring a domain name for our Vagrant Box. After that step, we should be able to access our Box with an address like http://gitlab.jonashackt.io. Luckily this is easily achievable with the help of the vagrant-dns plugin. Remember, I promised to tell you why you'd already installed the plugin?! There we go 🙂

We already configured config.vm.hostname = "jonashackt" and config.dns.tld = "io" inside our Vagrantfile. Now we're able to register the top-level domain io with the resolver on our host machine with the help of the vagrant-dns plugin. Just execute the following:

vagrant dns --install

To check if everything went right and our top-level domain will be resolvable, we use our host's appropriate tooling. On a Mac this is scutil --dns. Using this, we can see if the resolver is part of our DNS configuration (there are more resolvers configured, so you may need to scroll down):

...
 
resolver #10
  domain   : io
  nameserver[0] : 127.0.0.1
  port     : 5300
  flags    : Request A records, Request AAAA records
  reach    : 0x00030002 (Reachable,Local Address,Directly Reachable Address)
 
...

This looks pretty good! If you already fired up the Vagrant Box, you should vagrant halt it first. After the next startup of our Vagrant Box with the usual vagrant up, we can try to reach our Box using the configured domain. Again, on a Mac we can use:

dscacheutil -q host -a name gitlab.jonashackt.io

If we configured everything correctly, this should result in something like the following (containing the private IP 172.16.2.15 we configured inside the Vagrantfile):

$:gitlab-ci-stack jonashecht$ dscacheutil -q host -a name gitlab.jonashackt.io
  name: gitlab.jonashackt.io
  ip_address: 172.16.2.15

The last step is to get our nice domain name gitlab.jonashackt.io available not only on our host machine, but also inside our Vagrant Box. Sadly the great vagrant-dns plugin doesn't support propagating the host's DNS resolver into the Vagrant Boxes themselves.

But luckily we chose VirtualBox as the virtualization provider for Vagrant, which supports propagating the host's DNS resolver to the guest machines 🙂 All we have to do is to use the host's resolver as a DNS proxy in NAT mode, as suggested in this serverfault answer:

# Forward DNS resolver from host (vagrant dns) to box
virtualbox.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]

After restarting our Vagrant Box with this configuration in place, our domain name gitlab.jonashackt.io should also be resolvable inside our Ubuntu guest machine.
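In context, the relevant part of the Vagrantfile looks something like this sketch. Hostname, TLD, private IP and the customize line are the values used in this post; the box name is an assumption (any Ubuntu box should work):

```ruby
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/bionic64"    # assumption: pick your preferred Ubuntu box
  config.vm.hostname = "jonashackt"
  config.dns.tld = "io"
  config.vm.network "private_network", ip: "172.16.2.15"

  config.vm.provider "virtualbox" do |virtualbox|
    # Forward DNS resolver from host (vagrant dns) to box
    virtualbox.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
  end
end
```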

HTTPS & Let's Encrypt for GitLab on publicly accessible servers

If you don't want to use this post's setup with Vagrant, but have a publicly accessible server ready and a public DNS provider configured to resolve to this server, you don't need to do much about HTTPS in GitLab:

From 10.7 we will automatically use Let’s Encrypt certificates if the external_url specifies https, the certificate files are absent, and the embedded nginx will be used to terminate ssl connections.

In this case the whole topic of HTTPS with Let's Encrypt is handled by the GitLab Omnibus installation for you, and this post's Ansible scripts just build on top of that – just be sure to have your domain name configured in the main playbook prepare-gitlab.yml. We don't have to worry about the process of obtaining Let's Encrypt certificates and configuring them for GitLab. Everything is just done for you by Omnibus.

HTTPS & Let's Encrypt for GitLab on non-publicly accessible servers

In most other scenarios the whole configuration process of GitLab CI will be much harder! If your GitLab host is not externally accessible by the Let's Encrypt servers, you'll need an alternative to the fully automated Omnibus Let's Encrypt process. And this is true for our local setup with Vagrant as well as for GitLab servers that should only be accessible to internal development teams.

In both cases the Let's Encrypt servers won't be able to validate whether the given domain name resolves to the same host from which the certification process was initiated. After all, it's just a non-public DNS configuration and the server isn't visible to Let's Encrypt. If you try to use the automated Omnibus process here, the GitLab installation wouldn't really fail. But you'd be stuck with self-signed certificates, which introduce many of the problems and errors already mentioned before. And to make matters worse, your browser (and your colleagues' browsers) will complain in that well-known nasty way.

Because of this it would be really nice to use Let's Encrypt all the same. Although Let's Encrypt was designed to be used with publicly accessible websites, there are ways to create these certificates for non-public servers as well. All you need is to own a regularly registered domain. That may sound like a big issue, but it really isn't! If you don't care about the actual top-level domain, the cheapest start would be something like yourDomainName.xyz or yourDomainName.online. Both are available starting from $1/year. Just be sure to pick one from this provider list.

You'll need API access! Besides your regularly registered domain, you'll need API access to your DNS provider. This isn't always included in the standard price of your domain. Be sure to check the prerequisites for API access at your respective provider.

Owning a domain and having API access to the DNS provider, we have everything in place to fetch proper Let's Encrypt certificates for our Vagrant Box (or private server). There are many discussions and blog posts about this topic, but by far the most elegant way to get the Let's Encrypt certificates without having to spin up another (publicly accessible) server is to use dehydrated together with lexicon and Let's Encrypt's dns-01 challenge. This great answer on security.stackexchange.com nails it:

Since this challenge works by provisioning DNS TXT records, you don’t ever need to point an A record at a public IP address. So your intranet does not need to be reachable from the Internet, but your domain name does need to exist in the public DNS under your control.
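You can actually watch this mechanism at work: during a dns-01 challenge, a TXT record appears under the _acme-challenge subdomain, which you can query with standard DNS tooling (my domain as the example; the record only exists while a challenge is running):

```shell
dig TXT _acme-challenge.gitlab.jonashackt.io +short
```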

Using dehydrated and lexicon together with Let's Encrypt's dns-01 challenge

Great work has been done by the dehydrated team to create an easier-to-use Let's Encrypt client than the official certbot. And the same is true for the lexicon team, because they standardize the way DNS records of multiple DNS providers are manipulated via their APIs. Thanks to the great post by Jason Kulatunga, who is the maintainer of lexicon, crafting an Ansible playbook to automatically use dehydrated and lexicon together with Let's Encrypt's dns-01 challenge is really straightforward! So let's have a look at the example project's playbook obtain-letsencrypt-certs-dehydrated-lexicon.yml:

  - name: Update apt
    apt:
      update_cache: yes
 
  - name: Install openssl, curl, sed, grep, mktemp, git
    apt:
      name:
        - openssl
        - curl
        - sed
        - grep
        - mktemp
        - git
      state: latest
 
  # install this neat tool https://github.com/lukas2511/dehydrated
  - name: Install dehydrated
    git:
      repo: 'https://github.com/lukas2511/dehydrated.git'
      dest: /srv/dehydrated
 
  - name: Make dehydrated executable
    file:
      path: /srv/dehydrated/dehydrated
      mode: "+x"
 
  - name: Specify our internal domain
    shell: "echo '{{ gitlab_domain }}' > /srv/dehydrated/domains.txt"
 
  - name: Install build-essential, python-dev, libffi-dev, python3-pip
    apt:
      name:
        - build-essential
        - python-dev
        - libffi-dev
        - libssl-dev
        - python3-pip
      state: latest
 
  - name: Install requests[security]
    pip:
      name: "requests[security]"
 
  # install this neat tool https://github.com/AnalogJ/lexicon
  - name: Install dns-lexicon with correct provider (dns-lexicon[providernamehere])
    pip:
      name: "dns-lexicon[{{providername|lower}}]"

As we don't use a publicly accessible server, we need to use dns-01 challenges instead of Let's Encrypt's “standard” http-01. Therefore, dehydrated needs a hook file to work with dns-01. lexicon provides such a file for us, dehydrated.default.sh, and we simply fetch it inside our playbook:

  - name: Configure lexicon with Dehydrated hook for dns-01 challenge
    get_url:
      url: https://raw.githubusercontent.com/AnalogJ/lexicon/master/examples/dehydrated.default.sh
      dest: /srv/dehydrated/dehydrated.default.sh
      mode: "+x"

At this point we need some private information about your DNS provider – because remember, the whole process can only work if you have access to a real domain. In order to grant lexicon access to your DNS provider's API, we set some environment variables and execute dehydrated afterwards. As you may notice, lexicon's environment variables are dynamic, based on the provider's name – which is kind of tricky to configure:

  - name: Generate Certificates
    shell: "/srv/dehydrated/dehydrated --cron --hook /srv/dehydrated/dehydrated.default.sh --challenge dns-01 --accept-terms"
    environment:
      - PROVIDER: "{{providername|lower}}"
      - "{'LEXICON_{{providername|upper}}_USERNAME':'{{providerusername}}'}"
      - "{'LEXICON_{{providername|upper}}_TOKEN':'{{providertoken}}'}"
    ignore_errors: true
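To see what this boils down to, here is a plain shell sketch of how those dynamic variable names expand. The provider name "Cloudflare" is just a hypothetical example value for providername:

```shell
# Hypothetical provider name passed via --extra-vars
providername="Cloudflare"

# lexicon expects LEXICON_<PROVIDER>_USERNAME / LEXICON_<PROVIDER>_TOKEN
username_var="LEXICON_$(echo "$providername" | tr '[:lower:]' '[:upper:]')_USERNAME"
token_var="LEXICON_$(echo "$providername" | tr '[:lower:]' '[:upper:]')_TOKEN"

echo "$username_var"   # LEXICON_CLOUDFLARE_USERNAME
echo "$token_var"      # LEXICON_CLOUDFLARE_TOKEN
```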

You may need to whitelist the IP you're approaching the DNS provider's API from. You can use a tool like whatsmyip.org to get that IP. Add it to your DNS provider's API access IP whitelist before you run the playbook.

All environment variable values depend on the --extra-vars, which are passed as providername, providerusername and providertoken:

ansible-playbook -i hostsfile prepare-gitlab.yml --extra-vars "providername=yourProviderNameHere providerusername=yourUserNameHere providertoken=yourProviderTokenHere"

Configure the certificates in GitLab

Please don't get confused by this part of the docs. That's only needed if you want to install a custom certificate authority, not for properly created Let's Encrypt certificates, since the Let's Encrypt authority is already trusted.

According to the docs there are two ways to configure HTTPS in GitLab: the automatic Let's Encrypt way, which we sadly can't use in our scenario as our Vagrant Box isn't publicly accessible – and the manual HTTPS configuration, the one we need to choose here, because we acquired the Let's Encrypt certificates ourselves.

Therefore we set the external_url via the environment variable EXTERNAL_URL: "{{gitlab_url}}" during the GitLab Omnibus installation process so that it contains an https URL. In my case, this is https://gitlab.jonashackt.io. The GitLab Omnibus installation will then look for certificates placed in /etc/gitlab/ssl/ and named gitlab.jonashackt.io.key & gitlab.jonashackt.io.crt. Note that both file names must be derived from your domain name.
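The naming convention can be sketched in plain shell (my domain as the example):

```shell
gitlab_domain="gitlab.jonashackt.io"

# Omnibus expects certificate and key named after the external domain
crt_file="/etc/gitlab/ssl/${gitlab_domain}.crt"
key_file="/etc/gitlab/ssl/${gitlab_domain}.key"

echo "$crt_file"   # /etc/gitlab/ssl/gitlab.jonashackt.io.crt
echo "$key_file"   # /etc/gitlab/ssl/gitlab.jonashackt.io.key
```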

The playbook letsencrypt.yml takes care of this and just copies the generated certificates with the correct names to the correct location. And as this step is done right before the actual GitLab installation, we also need to create the directory /etc/gitlab/ssl/ first:

  - name: Create GitLab cert import folder /etc/gitlab/ssl for later GitLab installation usage
    file:
      path: /etc/gitlab/ssl
      state: directory
 
  - name: Copy certificate files to GitLab cert import folder /etc/gitlab/ssl
    copy:
      src: "{{ item.src }}"
      dest: "{{ item.dest }}"
      remote_src: yes
    with_items:
      - src: "/srv/dehydrated/certs/{{ gitlab_domain }}/fullchain.pem"
        dest: "/etc/gitlab/ssl/{{ gitlab_domain }}.crt"
 
      - src: "/srv/dehydrated/certs/{{ gitlab_domain }}/privkey.pem"
        dest: "/etc/gitlab/ssl/{{ gitlab_domain }}.key"

Note that we're copying fullchain.pem instead of cert.pem! This is essential to prevent the errors described above, like x509: certificate signed by unknown authority or ERROR: Registering runner... failed, later on. Thanks to this great comment I understood that a green lock in the address bar of Chrome or Firefox doesn't mean that Docker or Ubuntu know about Let's Encrypt's CA at all levels.
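If you want to double-check what your server actually serves, openssl can print the whole presented certificate chain. This needs the running GitLab instance, so treat it as a sketch (my domain as the example):

```shell
openssl s_client -connect gitlab.jonashackt.io:443 -showcerts < /dev/null
```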

If you ran the example project's Ansible playbooks, you can now use your GitLab CI instance without cryptic error messages caused by self-signed certificates:

green security bar in Chrome with the help of proper HTTPS

Install GitLab itself

Now we've reached the point we wanted to get to in the first place: installing GitLab itself! The playbook install-gitlab.yml walks through the standard GitLab installation guide for Ubuntu – just in a fully automated way:

  - name: Update apt and autoremove
    apt:
      update_cache: yes
      cache_valid_time: 3600
      autoremove: yes
 
  - name: Install curl, openssh-server, ca-certificates & postfix
    apt:
      name:
        - curl
        - openssh-server
        - ca-certificates
        - postfix
      state: latest
 
  - name: Add the GitLab package repository
    shell: "curl https://packages.gitlab.com/install/repositories/gitlab/gitlab-ce/script.deb.sh | sudo bash"
 
  - name: Update apt and autoremove
    apt:
      update_cache: yes
 
  - name: Install GitLab with Omnibus-Installer
    apt:
      name: gitlab-ce
      state: latest
    environment:
      EXTERNAL_URL: "{{gitlab_url}}"
    ignore_errors: true
    register: gitlab_install_result
 
  - name: Gitlab Omnibus is based on Chef and will give many insights what it does in the background
    debug:
      msg:
       - "The installation process said the following: "
       - "{{gitlab_install_result.stdout_lines}}"
 
  - name: Wait for GitLab to start up
    wait_for:
      port: 443
      delay: 10
      sleep: 5
 
  - name: Let´s check if Gitlab is up and running
    uri:
      url: "{{gitlab_url}}"

This is one of the simplest playbooks in this setup. After installing the required packages, the GitLab package repository is added and the GitLab Omnibus installation is started. The key point here is the environment variable EXTERNAL_URL, which is set to "{{gitlab_url}}". The variable itself is configured inside the main playbook prepare-gitlab.yml. After the GitLab installation, we wait for port 443 to become available and then check if GitLab answers on the configured URL.

GitLab Container Registry

Remember the introductory phrases? We wanted to set up a modern CI pipeline making heavy use of Docker and its advantages. For this purpose the GitLab Container Registry comes just in time. With that tool we're not only able to configure a Docker registry for every GitLab project – we can also leverage the power of GitLab's user authentication system for the Docker registry. And last but not least, we get a nice tab inside the GitLab UI where we can scroll through all the Docker images that reside in the project's corresponding registry:

gitlab container registry overview

The docs on how to configure the GitLab Container Registry domain tell us that we could either use a completely separate domain for our registry, or just use the same domain as the main GitLab instance. Our Ansible playbook configure-gitlab-registry.yml demonstrates the second way:

  - name: Activate Container Registry in /etc/gitlab/gitlab.rb
    lineinfile:
      path: /etc/gitlab/gitlab.rb
      line: " registry_external_url '{{ gitlab_registry_url }}'"
 
  - name: Reconfigure Gitlab to activate Container Registry
    shell: "gitlab-ctl reconfigure"
    register: reconfigure_result
 
  - name: Let´s see what Omnibus/Chef does
    debug:
      msg:
       - "The reconfiguration process gave the following: "
       - "{{reconfigure_result.stdout_lines}}"

The playbook inserts the needed registry_external_url configuration into the file /etc/gitlab/gitlab.rb. With my domain, this is https://gitlab.jonashackt.io:4567 – where the port should be something other than 5000, according to the docs.
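With the registry answering on that port, the standard Docker workflow applies. This needs the live server and valid GitLab credentials, so it's just a sketch (the group/project path is hypothetical):

```shell
docker login gitlab.jonashackt.io:4567
docker build -t gitlab.jonashackt.io:4567/root/yourRepoNameHere .
docker push gitlab.jonashackt.io:4567/root/yourRepoNameHere
```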

As I already mentioned in the paragraph Configure the certificates in GitLab, it is essential to use /srv/dehydrated/certs/{{ gitlab_domain }}/fullchain.pem inside our GitLab certificate configuration. By doing so, we prevent errors while using the GitLab Container Registry. And these errors are sneaky: they won't show up until you try to actually use the Container Registry inside a GitLab CI pipeline:

Error response from daemon: Get https://gitlab.jonashackt.io:5000/v2/: x509: certificate signed by unknown authority
ERROR: Job failed: exit status 1

As our certificates are named after the correct domain name, the GitLab Container Registry also uses these certificates (including the fullchain.pem). The last step inside configure-gitlab-registry.yml shows us the output of the GitLab Omnibus reconfiguration, which is executed with the command gitlab-ctl reconfigure (you may need to scroll a bit to see it 🙂 ):

    ...
 
    - create new file /var/opt/gitlab/nginx/conf/gitlab-registry.conf
    - update content in file /var/opt/gitlab/nginx/conf/gitlab-registry.conf from none to 38ba8d
    --- /var/opt/gitlab/nginx/conf/gitlab-registry.conf	2018-05-23 07:06:18.857687999 +0000
    +++ /var/opt/gitlab/nginx/conf/.chef-gitlab-registry20180523-13668-614sno.conf	2018-05-23 07:06:18.857687999 +0000
    @@ -1 +1,59 @@
    +# This file is managed by gitlab-ctl. Manual changes will be
    +# erased! To change the contents below, edit /etc/gitlab/gitlab.rb
    +# and run `sudo gitlab-ctl reconfigure`.
    +
    +## Lines starting with two hashes (##) are comments with information.
    +## Lines starting with one hash (#) are configuration parameters that can be uncommented.
    +##
    +###################################
    +##         configuration         ##
    +###################################
    +
    +
    +server {
    +  listen *:4567 ssl;
    +  server_name  gitlab.jonashackt.io;
    +  server_tokens off; ## Don't show the nginx version number, a security best practice
    +
    +  client_max_body_size 0;
    +  chunked_transfer_encoding on;
    +
    +  ## Strong SSL Security
    +  ## https://raymii.org/s/tutorials/Strong_SSL_Security_On_nginx.html & https://cipherli.st/
    +  ssl on;
    +  ssl_certificate /etc/gitlab/ssl/gitlab.jonashackt.io.crt;
    +  ssl_certificate_key /etc/gitlab/ssl/gitlab.jonashackt.io.key;
 
    ...

Here we see that GitLab Omnibus configured its internal Nginx with a new endpoint on port 4567 for our Container Registry, and that our acquired Let's Encrypt certificates are used. Of course you can configure this port inside the main playbook prepare-gitlab.yml.

Install GitLab Runners

Now we´ve already reached the 5th step of our main playbook: installing and registering the GitLab Runners that will access the Docker Engine inside our GitLab CI pipeline. GitLab Runners are needed to actually execute the steps of a GitLab CI pipeline later on. These steps are called Jobs in GitLab.

The process can be split into two parts: first we need to install the OS service gitlab-runner. Our playbook gitlab-runner.yml uses the official docs on how to do that on Linux as a blueprint:

  - name: Add the GitLab Runner package repository
    shell: "curl -L https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.deb.sh | sudo bash"
 
  - name: Install GitLab Runner package
    apt:
      name: gitlab-runner
      state: latest

Nothing special here. The second part of the process is a bit more tricky: to register a GitLab Runner, we need to automatically obtain the current registration token from our GitLab instance. This token changes every time we start up GitLab together with our Vagrant Box or server. And as we don´t want our otherwise fully automated GitLab installation to stop here, we need to fetch this token every time we want to register a new GitLab Runner.

Sadly there´s no way to use the great GitLab REST API for that purpose right now. This leaves us with the only option we have: diving directly into GitLab´s database:

  - name: Extract Runner Registration Token directly from GitLab DB
    become: true
    become_user: gitlab-psql
    vars:
        ansible_ssh_pipelining: true
        query: "SELECT runners_registration_token FROM application_settings ORDER BY id DESC LIMIT 1"
        psql_exec: "/opt/gitlab/embedded/bin/psql"
        gitlab_db_name: "gitlabhq_production"
    shell: '{{ psql_exec }} -h /var/opt/gitlab/postgresql/ -d {{ gitlab_db_name }} -t -A -c "{{ query }}"'
    register: gitlab_runner_registration_token_result
 
  - name: Extracting the Token from the Gitlab SQL query response
    set_fact:
      gitlab_runner_registration_token: "{{gitlab_runner_registration_token_result.stdout}}"
 
  - name: And the Token is...
    debug:
      msg: "{{gitlab_runner_registration_token}}"

In order to use Docker, we need to choose one of the Executors that GitLab Runners implement to serve different scenarios. We´ll keep it simple here and use the shell Executor:

Shell is the simplest executor to configure. All required dependencies for your builds need to be installed manually on the machine on which the Runner is installed.

And as we already decided to use and install Docker (in a fully automated way), that´s all we need right now. No manual interaction needed. 🙂 Once you´ve gained more experience with GitLab CI, you can switch to another Executor for your GitLab Runners. I´d be keen to hear about your experiences with different Executors in the comments!
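For reference, the registration step in the next section will end up writing a runner configuration roughly like the following to /etc/gitlab-runner/config.toml (a sketch: the token shown here is a placeholder, as GitLab issues one per runner during registration):

```toml
# /etc/gitlab-runner/config.toml (sketch): what a registered shell runner
# roughly looks like on disk; values are assumptions matching this article's setup
concurrent = 5

[[runners]]
  name = "shell-runner-1"
  url = "https://gitlab.jonashackt.io"
  token = "..."          # issued per runner by `gitlab-runner register`
  executor = "shell"
```

The `concurrent` setting caps how many Jobs the runner host executes in parallel, which matters once we register multiple runners below.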

Register GitLab Runners

Now we´re ready to register our GitLab Runners. And as our Ansible playbook should be idempotent, so that it can be executed once or many times without changing the result, we first need to unregister any previously registered Runners. This naturally has no effect on the first playbook run:

  - name: Unregister all previously used GitLab Runners
    shell: "sudo gitlab-runner unregister --all-runners"
 
  - name: Add gitlab-runner user to docker group
    shell: "sudo usermod -aG docker gitlab-runner"
 
  - name: Register Gitlab-Runners using shell executor
    shell: "gitlab-runner register --non-interactive --url '{{gitlab_url}}' --registration-token '{{gitlab_runner_registration_token}}' --description '{{ item.name }}' --executor shell"
    with_items:
      - { name: shell-runner-1 }
      - { name: shell-runner-2 }
      - { name: shell-runner-3 }
      - { name: shell-runner-4 }
      - { name: shell-runner-5 }
 
  - name: Retrieve all registered Gitlab Runners
    shell: "gitlab-runner list"
    register: runner_result
 
  - name: Show all registered Gitlab Runners
    debug:
      msg:
       - "{{runner_result.stderr_lines}}"

As you can see, we´re using the command gitlab-runner register in its non-interactive mode, so that the registration process can run without user interaction inside our playbook. The with_items loop determines how many GitLab Runners we register here. To achieve a setup where GitLab CI Jobs can run in parallel, we´re registering a list of five GitLab Runners.

I have to mention it again: we need to use the /srv/dehydrated/certs/{{ gitlab_domain }}/fullchain.pem inside our GitLab certificate configuration (see the paragraph Configure the certificates in GitLab) in order to be able to register our GitLab Runners properly. Otherwise errors like the following will occur:

ERROR: Registering runner... failed
runner=gyy8axxP status=couldn't execute POST against https://gitlab.jonashackt.io/api/v4/runners: Post https://gitlab.jonashackt.io/api/v4/runners: x509: certificate signed by unknown authority
PANIC: Failed to register this runner. Perhaps you are having network problems

And don´t try to work around these errors with the --tls-ca-file option. That would only fix the issue for the moment: as soon as you try to use the GitLab Container Registry inside GitLab CI, you´ll run into problems again.
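Roughly speaking, the certificate configuration boils down to an Ansible task like the following (a sketch with assumed task and variable names, not the literal playbook; the fullchain path is the one mentioned above):

```yaml
# Sketch: make GitLab serve the full certificate chain, not just the leaf cert.
# Task and variable names are assumptions; see the real playbook for details.
- name: Use the Let's Encrypt fullchain.pem as GitLab's certificate
  copy:
    src: "/srv/dehydrated/certs/{{ gitlab_domain }}/fullchain.pem"
    dest: "/etc/gitlab/ssl/{{ gitlab_domain }}.crt"
    remote_src: true
```

Serving only the leaf certificate is exactly what makes the runner´s Go TLS client fail with `x509: certificate signed by unknown authority`, since it cannot build a trust path without the intermediate certificate.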

Running an example GitLab CI pipeline

That´s all! If you´ve already executed the main playbook, your GitLab instance should be waiting for you. If not, no problem: just fire up Ansible now and grab yourself a coffee:

ansible-playbook -i hostsfile prepare-gitlab.yml --extra-vars "providername=yourProviderNameHere providerusername=yourUserNameHere providertoken=yourProviderTokenHere"

Your GitLab instance will be waiting for you to define a new root password:

Https Let's Encrypt

In order to run an example GitLab CI pipeline, we need to import another example project on GitHub containing a GitLab CI pipeline definition file called .gitlab-ci.yml and an application to build. The example project is an extremely simple Spring Boot Microservice using the Java build tool Maven.

To import the project into our new GitLab instance, first set a new password for the root user and log in with those credentials. Then head over to Create a project and click on Import Project / Repo by URL:

import a new project into GitLab

Now paste the example project’s Git URL https://github.com/jonashackt/restexamples.git into the Git repository URL field, change the Visibility Level to Internal and hit Create Project.

After the import you can head over to the project and its CI / CD / Pipelines section and fire up the pipeline manually. No worries: we only have to do this manually this time, since we didn´t push anything new to the project. Every following push will automatically trigger your GitLab CI pipeline!

The pipeline should be already running right now:

First pipeline run

The example project has a prepared .gitlab-ci.yml ready for us, which reflects the 4 steps of the new trend in Continuous Integration/Deployment that the GitLab docs propose:

# One of the new trends in Continuous Integration/Deployment is to:
#
# 1. Create an application image
# 2. Run tests against the created image
# 3. Push image to a remote registry
# 4. Deploy to a server from the pushed image
 
stages:
  - build
  - test
  - push
  - deploy
 
# see usage of Namespaces at https://docs.gitlab.com/ee/user/group/#namespaces
variables:
  REGISTRY_GROUP_PROJECT: $CI_REGISTRY/root/restexamples
 
# see how to login at https://docs.gitlab.com/ee/ci/docker/using_docker_build.html#using-the-gitlab-container-registry
before_script:
  - docker login -u $CI_REGISTRY_USER -p $CI_JOB_TOKEN $CI_REGISTRY
 
build-image:
  stage: build
  script:
    - docker build . --tag $REGISTRY_GROUP_PROJECT/restexamples:latest
 
test-image:
  stage: test
  script:
    - echo Insert fancy API test here!
 
push-image:
  stage: push
  script:
    - docker push $REGISTRY_GROUP_PROJECT/restexamples:latest
 
deploy-2-dev:
  stage: deploy
  script:
    - echo You should use Ansible here!
  environment:
    name: dev
    url: https://dev.jonashackt.io

In GitLab, every stage defines a building block inside the CI pipeline. You can have multiple Jobs inside each stage. We don´t use multiple Jobs per stage in this simple example. But if you do, you´ll also see the advantage of having multiple registered GitLab Runners, because the Jobs inside a given stage can then run in parallel.
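As a sketch, splitting the test stage into two hypothetical Jobs would look like this; with enough registered Runners (we registered five above), GitLab executes them in parallel:

```yaml
# Sketch: two hypothetical jobs in the same stage. Job names and scripts
# are illustrative only, not part of the example project.
unit-tests:
  stage: test
  script:
    - echo Run your unit tests here!

api-tests:
  stage: test
  script:
    - echo Insert fancy API test here!
```

Both Jobs belong to `stage: test`, so GitLab schedules them at the same time and only moves on to the push stage once both have succeeded.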

Caution: Mind the namespaces when working with GitLab Container Registry!!!

As you may have noticed, using the GitLab Container Registry has one hidden obstacle: you have to use the correct namespace to push into the GitLab Container Registry! I can only advise the GitLab team to make this hint as prominent in their docs as possible – this just drove me nuts! It´s not enough to use the GitLab Registry URL itself to push into it. You must also include a user or group name plus the project name, in the following order:

gitlab.jonashackt.io:4567/UserOrGroupName/ProjectName

As you can see, I´m heavily using GitLab CI predefined variables alongside self-defined ones here. This will make your life easier and help other people read your pipeline definitions!
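Putting host, namespace and project together, composing a valid image reference can be sketched like this (host and project names match the article´s example; adjust them to your own setup):

```shell
# Sketch: compose a valid image reference for the GitLab Container Registry.
# The namespace (user or group) and project name are mandatory parts of it.
REGISTRY="gitlab.jonashackt.io:4567"
NAMESPACE="root"          # the user or group that owns the project
PROJECT="restexamples"

IMAGE="${REGISTRY}/${NAMESPACE}/${PROJECT}/${PROJECT}:latest"
echo "${IMAGE}"
```

This is exactly what the pipeline above builds from `$CI_REGISTRY` and the self-defined `REGISTRY_GROUP_PROJECT` variable.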

Another cool GitLab CI feature is Environments. Although it´s just another view onto your pipelines, it´s really handy, as you can easily see which deployments went to which infrastructure stage. All you have to do is use the environment keyword inside your .gitlab-ci.yml files. The environment will then automatically pop up under the CI/CD / Environments tab:

GitLab Environments overview

GitLab CI is really great

As an old Jenkins fanboy I have to admit it: GitLab CI is a really cool tool! After this whole journey I wouldn´t say everything is totally easy to install and configure in the first place. But once you get past all the small stumbling blocks, many of which naturally only show up in private server environments, I strongly recommend giving it a try.

With GitLab CI you will be able to use the super neat YAML-style pipeline definition files you are used to inside your own projects, and behind big corporate firewalls as well. And what´s really cool: you don´t need to mess around with a huge bunch of plugins! And you don´t need to integrate your central Git server with the CI server using all those half-baked web hooks and plugins. No, they are simply already integrated. Generally I really like the idea of using the best tool for the respective scenario. But GitLab CI makes it really hard not to love this fully integrated Continuous Integration platform!

Jonas Hecht

Trying to bridge the gap between software architecture and hands-on coding, Jonas works at codecentric. He has deep knowledge of all kinds of enterprise software development, paired with a passion for new technology. Connecting systems via integration frameworks, Jonas learned to tackle more than just technical challenges.
