
Traefik – The modern reverse proxy

6.9.2017 | 8 minutes of reading time

Imagine you have a set of microservices or applications that you want to publish to the world. There are several alternatives out there that you can choose from. Most reverse proxies were created before container technology was around, so you have to jump through some hoops to get going. As a widely used example, we will take a peek at how to configure nginx and at some of its downsides. To get our hands dirty, we will then walk through the modern, dynamic Traefik reverse proxy in more detail and use it to deploy some services.

Best in class before Docker: Nginx

There are quite a number of container deployments out there that use nginx as a front end. The configuration is easy to read and write, and the C-style syntax gives you a cozy feeling. A simple config to deploy a service might look like this:

worker_processes 4;

events { worker_connections 1024; }

http {

    upstream upstream-servers {
        least_conn;
        server container1:80 weight=10 max_fails=3 fail_timeout=30s;
        server container2:80 weight=10 max_fails=3 fail_timeout=30s;
        server container3:80 weight=10 max_fails=3 fail_timeout=30s;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://upstream-servers;
            proxy_http_version 1.1;
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
        }
    }
}

With this config we created a simple HTTP reverse proxy on port 80. Nginx answers requests by forwarding them to the upstream servers; the container with the least number of active connections is chosen from the pool. Nice configuration. Plain and easy to read.

One thing you will encounter when you deploy nginx in a mutable container environment is that you can’t replace containers without at least reloading the nginx configuration, even though the container’s DNS name stays the same. Nginx resolves the container’s IP address once and caches it, so you have to take care of refreshing it yourself. You can work around this problem in several ways, but it remains something you have to handle.
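
One common workaround, sketched here rather than taken from the config above, is to make nginx re-resolve the name at request time by combining a resolver with a variable in proxy_pass (the resolver address assumes nginx itself runs in a container and can use Docker's embedded DNS):

resolver 127.0.0.11 valid=10s;   # Docker's embedded DNS server; adjust for your environment

server {
    listen 80;
    location / {
        set $upstream http://container1:80;   # using a variable forces resolution at request time
        proxy_pass $upstream;
    }
}

The trade-off is that this bypasses the upstream block and its load-balancing settings shown above.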

Don’t get me wrong: I don’t want to pick on nginx. I like it and used it a lot before I switched to Traefik as my go-to solution.

Traefik

There’s a more modern reverse proxy around that is able to handle dynamic container environments: Traefik. It is a small application written in Go, tailored to these new challenges. You can use it as a front end in a variety of environments. The simpler ones are Docker and Docker Swarm, the more complex ones are Apache Mesos or Kubernetes. It can even read metadata from directory services like etcd or Consul.

Back to the application we want to deploy. Let’s imagine we have a set of services that are described in a Docker Compose file. We can wire up our services and deploy them. Here is a simple configuration:
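
A minimal sketch of such a compose file, assuming we simply publish the container on a port of its own for now (the port mapping is an assumption, just so the container can be reached directly; the Traefik-specific pieces follow in the next step):

version: '3.1'

services:
  whoami:
    image: emilevauge/whoami
    ports:
      - "8000:80"   # assumed mapping so the container can be reached directly in a browser
    restart:
      always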

This configuration starts the whoami test image, which lets us see our requests in a kind of echo chamber. You can start it with docker-compose up and call it with your browser. But how can we deploy it on a specific virtual host? Let’s extend the configuration a bit by adding Docker labels.

version: '3.1'

services:
  whoami:
    image: emilevauge/whoami
    networks:
      - web
    labels:
      - "traefik.backend=whoami"
      - "traefik.frontend.rule=Host:whoami.server.test"
    restart:
      always

networks:
  web:
    external:
      name: traefik_webgateway

This is the configuration needed for Traefik to deploy our service at http://whoami.server.test. Pretty straightforward.

If you followed the example closely, you might ask where Traefik itself is involved, and you are right. Next we need to spin up Traefik itself:

version: '3.1'

services:
  proxy:
    image: traefik:1.4
    command: --web --docker --docker.domain=server.test --logLevel=INFO
    networks:
      - webgateway
    ports:
      - "80:80"
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /dev/null:/traefik.toml
    restart:
      always

networks:
  webgateway:
    driver: bridge

The config above takes care of starting Traefik and connects it to the hosting Docker daemon to retrieve the needed metadata. Traefik scans for Docker containers that are marked with labels and publishes the services accordingly. The connection is kept open, so changes are reflected without any delay.

Both services are wired together using a Docker network called webgateway that is prefixed with the project name. If no project name is specified, it is inferred from the name of the directory where the compose file is located.

You can find more configuration options on the documentation site: https://docs.traefik.io/toml/#docker-backend. This takes you straight to the Docker backend configuration. Below that there is a list of Docker labels that can be used to further configure how services are published. So far we have only used traefik.backend and traefik.frontend.rule in the sample above, but there are more.
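
For instance, a couple of labels that are often added on top of the two we used (a sketch; the exact label set depends on your Traefik version, so check the documentation linked above):

labels:
  - "traefik.backend=whoami"
  - "traefik.frontend.rule=Host:whoami.server.test"
  - "traefik.port=80"        # the container port Traefik should forward to
  - "traefik.enable=true"    # useful together with --docker.exposedbydefault=false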

Once Traefik and the service are up, you can connect to the service and test the deployment. Make sure to start Traefik first in this example, because its config provides the network that the services connect to.
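
In terms of commands this could look like the following (the file names and the explicit project name are assumptions; the project name just has to produce the traefik_ prefix of the external network):

# start the proxy first so that the traefik_webgateway network exists
docker-compose -p traefik -f traefik.yml up -d

# then start the whoami service, which joins that external network
docker-compose -f whoami.yml up -d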

Have a look at the built-in dashboard by pointing your browser to: http://localhost:8080/

Here you can see which services are deployed and how they are configured. You can see that our service is available as whoami.server.test. But since we don’t have a DNS record pointing from this name to localhost, we need to set the Host header manually and call localhost.

curl -H Host:whoami.server.test http://localhost -v

Once you have entered the command, you can see the request that is being sent to Traefik and the response that will be returned. The result will look similar to:

curl -H Host:whoami.server.test http://localhost
Hostname: 2f3de5835785
IP: 127.0.0.1
IP: 172.21.0.3
GET / HTTP/1.1
Host: whoami.server.test
User-Agent: Mozilla/5.0 (Windows NT 6.1; rv:45.0) Gecko/20100101 Firefox/45.0
Accept: */*
Accept-Encoding: gzip
Referer:
X-Forwarded-For: 172.21.0.1
X-Forwarded-Host: whoami.server.test
X-Forwarded-Proto: http
X-Forwarded-Server: b2554c36ab87

DNS

To fully enjoy the power of Traefik, you need to take care of the DNS records of your deployments. But this is a one-time setup, so don’t worry about it too much. One deployment we run at a customer uses two DNS records per host: an A record pointing to the IP address of the host, and a wildcard CNAME record that catches all virtual hosts below it and points to the A record. This is a simple way to locate our services.

Just a little example to show you the setup:

server.test    A      172.16.2.10
*.server.test  CNAME  server.test

This allows us to reach our server as server.test as well as whatever.server.test or youwant.server.test. When you access the site with your favorite tool, be it a REST client, httpie, or a browser, the client sends the target host in an HTTP header called Host. Traefik uses this header to find the right container to forward the request to.
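
For purely local experiments without real DNS, an alternative to setting the Host header by hand is a hosts-file entry (note that /etc/hosts does not support wildcards, so each virtual host needs its own line):

# /etc/hosts
127.0.0.1   whoami.server.test

After that, http://whoami.server.test in your browser ends up at the local Traefik instance.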

One advantage of using virtual hosts is that you don’t need to take extra care of redirects in your web application. We ran into problems when deploying applications under relative paths because nobody had considered that option. Once you have your environment set up, you can deploy the services as you like.

Scaling out

Deploying a single container is easy, and we achieved it without any great effort. But what about scaling out? Which part of the config do I have to change? The simple answer is: you don’t have to change your config at all. Just spin up more containers and that’s it. In our simple example:

docker-compose scale whoami=4

This command spins up four containers that are added to the load balancer instantly. You can verify it by hitting the endpoint repeatedly and watching the container name change, or by having a look at the dashboard on port 8080.
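
A quick way to watch the load balancing in action is a small shell loop around the curl call from above:

# hit the service a few times and watch the reported hostname change
for i in $(seq 1 8); do
  curl -s -H Host:whoami.server.test http://localhost | grep Hostname
done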

Sticky Sessions

Ideally, I would strive for a stateless application backend that can be scaled independently and without any restrictions regarding the service endpoint.

Sometimes you don’t have that freedom, because you are running a session-based application that doesn’t distribute its sessions across a cluster of servers. Or, thinking more of REST services, you make heavy use of resource caching and want your clients pinned to a specific container. No matter why you want your clients to be pinned to a specific Docker container, there is an easy configuration switch to handle it:

traefik.backend.loadbalancer.sticky=true

Traefik will then check for a session cookie that carries a specific backend as its value. If the cookie is present and that backend is up, the request is forwarded there. If the backend is down, another one is chosen.
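
Applied to the compose file from above, this is just one more entry in the labels section of the service:

labels:
  - "traefik.backend=whoami"
  - "traefik.frontend.rule=Host:whoami.server.test"
  - "traefik.backend.loadbalancer.sticky=true"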

If you want to use this feature, be sure to run the upcoming 1.4 version or my patched Docker image (marcopaga/traefik:1.3.5.2). Before these versions, the cookie path wasn’t specified, so clients tended to drop the cookie once in a while depending on the request path.

Closing

I hope it has become clear how simple life can be when it comes to deploying your services and web applications with Traefik. If you want to play around with Traefik, have a look at my sample project on GitHub. This Vagrant multi-machine setup uses Ansible to create a local playground for you.
