Almost two years have passed since Alexander Melnyk's last blog post on this topic, and in the meantime a lot has happened around "API management with Kong". So it is time to update the content of Alexander's post and take a closer look at API management with Kong. This post marks the beginning of a series of posts that will follow soon. The focus of the series lies on the open-source product, i.e. the API gateway Kong.
Building the infrastructure
In order not to bore you with content from the release notes, we will start directly with technical features. The basis for the following considerations is an infrastructure defined in a docker-compose file, which can be found in the GitHub repo of the article. In addition, you need an API; the Python framework FastAPI is used for this purpose. It provides an API with three endpoints (/ (root), /api/service1 and /api/service2) and runs in a container built from a Dockerfile. The API is also part of the repository.
from fastapi import FastAPI

app = FastAPI()

@app.get("/")
def read_root():
    return {"Hello": "World"}

@app.get("/api/service1")
def read_service1():
    return {"status_code": 200, "message": "service1 is called"}

@app.get("/api/service2")
def read_service2():
    return {"status_code": 200, "message": "service2 is called"}
FROM tiangolo/uvicorn-gunicorn-fastapi:python3.7
COPY ./app /app
version: '3'
networks:
  kong-blogposts:
    driver: bridge
services:
  api-service:
    build: ./api-service
    networks:
      - kong-blogposts
    expose:
      - 80
    ports:
      - "80:80"
  kong-database:
    container_name: kong-database
    image: postgres:9.6
    restart: always
    networks:
      - kong-blogposts
    environment:
      - POSTGRES_USER=kong
      - POSTGRES_DB=kong
    healthcheck:
      test: ["CMD", "pg_isready", "-U", "kong"]
      interval: 10s
      timeout: 5s
      retries: 5
  kong-migration:
    image: kong
    depends_on:
      - "kong-database"
    container_name: kong-migration
    networks:
      - kong-blogposts
    restart: on-failure
    environment:
      - KONG_DATABASE=postgres
      - KONG_PG_HOST=kong-database
      - KONG_PG_DATABASE=kong
    command: kong migrations bootstrap
  kong:
    container_name: kong
    image: kong:latest
    depends_on:
      - "kong-migration"
      - "kong-database"
    restart: always
    networks:
      - kong-blogposts
    environment:
      - KONG_DATABASE=postgres
      - KONG_PG_HOST=kong-database
      - KONG_PG_DATABASE=kong
      - KONG_PROXY_LISTEN=0.0.0.0:8000
      - KONG_ADMIN_LISTEN=0.0.0.0:8001
    ports:
      - 8000:8000
      - 8001:8001
      - 8443:8443
      - 8444:8444
    healthcheck:
      test: ["CMD-SHELL", "curl -I -s -L http://localhost:8000 || exit 1"]
      interval: 5s
      retries: 10
The docker-compose definition creates the api-service plus the three Kong-related services kong, kong-database and kong-migration. We will use PostgreSQL as the database, or datastore, component. The kong service, acting as the API gateway, exposes four ports for two endpoints: the consumer (proxy) endpoint and the admin endpoint, over HTTP and HTTPS respectively.
To get the kong service running, the kong-migration service performs the initial creation of the Kong objects in the kong-database; unfortunately, this database setup is not handled by the kong service itself. The services are started with docker-compose up. With the command docker-compose ps we now get an overview of the running services.

First you need to check whether Kong is available. To do this, query the status of the API gateway with a GET request to the admin API: the call http localhost:8001/status should return the status code 200. For this purpose, I personally use the tool httpie.
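If you prefer scripting over httpie, the same check can be expressed in a few lines of Python. This is a minimal sketch assuming the requests library is installed; it is not part of the article's repository.

import requests

# Sketch: uses the requests library (pip install requests), not part of the repo
# Query the Kong Admin API status endpoint exposed on port 8001
resp = requests.get("http://localhost:8001/status")
print(resp.status_code)  # should be 200 if Kong is up
print(resp.json())       # statistics reported by Kong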
About services, routes, consumers, and plugins
You can see that access to the Kong admin API works, but no APIs are configured yet. Now an API is added, which consists of a service and a route. This also reflects a change in the Admin API: starting with version 0.13, routes and services were introduced for a cleaner separation and the ability to apply plugins to specific endpoints. To create a service, send a POST request to the gateway (localhost:8001/services).
http POST localhost:8001/services/ name=service1 url=http://host.docker.internal/api/service1
For this short introductory example, the other possible parameters are not considered. Since Kong runs inside Docker, it is important to use host.docker.internal as the upstream host; otherwise there will be problems calling the API via the API gateway.
After the service has been created, a route must also be generated.
http POST localhost:8001/services/service1/routes paths:='["/service1"]' name=service1_route methods:='["GET"]'
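Both Admin API calls can also be scripted. The following is a minimal sketch using the Python requests library (an assumption, not part of the article's setup), sending the same parameters as the httpie commands above.

import requests

# Sketch: uses the requests library, not part of the repo
ADMIN_URL = "http://localhost:8001"  # Kong Admin API

# Create the service pointing at the FastAPI upstream
requests.post(
    f"{ADMIN_URL}/services/",
    data={"name": "service1", "url": "http://host.docker.internal/api/service1"},
)

# Attach a route with a path and an allowed method to the service
requests.post(
    f"{ADMIN_URL}/services/service1/routes",
    json={"name": "service1_route", "paths": ["/service1"], "methods": ["GET"]},
)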
The service can be called via http localhost:8000/service1 and returns the following result.
HTTP/1.1 200 OK
Connection: keep-alive
Content-Length: 50
Content-Type: application/json
Via: kong/1.3.0
X-Kong-Proxy-Latency: 7
X-Kong-Upstream-Latency: 5
server: uvicorn
{
"message": "service1 is called",
"status_code": 200
}
Typically, you want to protect your API from unauthorized access or allow access only for dedicated users. In Kong, plugins, which are executed during a request, are used for this. To represent technical users, Kong offers another entity: the consumer.
http post localhost:8001/consumers username=api-user
HTTP/1.1 201 Created
Access-Control-Allow-Origin: *
Connection: keep-alive
Content-Length: 120
Content-Type: application/json; charset=utf-8
Date: Sun, 01 Sep 2019 10:02:54 GMT
Server: kong/1.3.0
{
"created_at": 1567332174,
"custom_id": null,
"id": "a37333ea-c346-488b-a1f0-1a0b078ea152",
"tags": null,
"username": "api-user"
}
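Scripted with requests (again an assumption, not part of the original setup), the consumer creation looks like this; the returned id can be stored if you want to reference the consumer by id instead of by username later.

import requests

# Sketch: uses the requests library, not part of the repo
# Register the technical user (consumer) via the Admin API
resp = requests.post("http://localhost:8001/consumers", data={"username": "api-user"})
consumer_id = resp.json()["id"]   # Kong-generated UUID, usable instead of the username
print(resp.status_code, consumer_id)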
HTTP/1.1 201 Created
Access-Control-Allow-Origin: *
Connection: keep-alive
Content-Length: 120
Content-Type: application/json; charset=utf-8
Date: Sun, 01 Sep 2019 10:02:54 GMT
Server: kong/1.3.0 {
"created_at": 1567332174,
"custom_id": null,
"id": "a37333ea-c346-488b-a1f0-1a0b078ea152",
"tags": null,
"username": "api-user"
}
You add the Key Authentication plugin to the service and equip the consumer with a key, thus ensuring that only this consumer, with its specific key, can access the API.
http post localhost:8001/services/service1/plugins name=key-auth
HTTP/1.1 201 Created
Access-Control-Allow-Origin: *
Connection: keep-alive
Content-Length: 380
Content-Type: application/json; charset=utf-8
Server: kong/1.3.0
{
"config": {
"anonymous": null,
"hide_credentials": false,
"key_in_body": false,
"key_names": [
"apikey"
],
"run_on_preflight": true
},
"consumer": null,
"created_at": 1567332763,
"enabled": true,
"id": "a3b0ea80-98ba-43ed-a3dd-9fb5e1b0bbad",
"name": "key-auth",
"protocols": [
"grpc",
"grpcs",
"http",
"https"
],
"route": null,
"run_on": "first",
"service": {
"id": "3d0d837d-8d42-4764-9111-16f195baf762"
},
"tags": null
}
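As a sketch with requests (the same assumption as before), enabling the plugin for exactly this service could look like the following; the default key name apikey is the header we will send later.

import requests

# Sketch: uses the requests library, not part of the repo
# Enable key-auth for service1 only; other services remain unaffected
resp = requests.post(
    "http://localhost:8001/services/service1/plugins",
    data={"name": "key-auth"},
)
print(resp.status_code)                    # 201 on success
print(resp.json()["config"]["key_names"])  # ['apikey'] by default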
The next step is to check whether the plugin is set up.
http localhost:8000/service1
HTTP/1.1 401 Unauthorized
Connection: keep-alive
Content-Length: 41
Content-Type: application/json; charset=utf-8
Server: kong/1.3.0
WWW-Authenticate: Key realm="kong"
{
"message": "No API key found in request"
}
Now we have to create a key for the consumer api-user. If no key is specified, one will be generated automatically.
http post localhost:8001/consumers/api-user/key-auth key=secret_key
HTTP/1.1 201 Created
Access-Control-Allow-Origin: *
Connection: keep-alive
Content-Length: 145
Content-Type: application/json; charset=utf-8
Server: kong/1.3.0
{
"consumer": {
"id": "a37333ea-c346-488b-a1f0-1a0b078ea152"
},
"created_at": 1567333309,
"id": "e65bcc7a-9478-40cc-be6c-0df43bad5b03",
"key": "secret_key"
}
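The equivalent call with requests, again as a sketch under the same assumptions:

import requests

# Sketch: uses the requests library, not part of the repo
# Create a key-auth credential for the consumer; omit "key" to let Kong generate one
resp = requests.post(
    "http://localhost:8001/consumers/api-user/key-auth",
    data={"key": "secret_key"},
)
print(resp.status_code)    # 201 on success
print(resp.json()["key"])  # the value to send in the apikey header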
The API is now called with the API key and returns the following result.
http localhost:8000/service1 apikey:secret_key
HTTP/1.1 200 OK
Connection: keep-alive
Content-Length: 50
Content-Type: application/json
Via: kong/1.3.0
X-Kong-Proxy-Latency: 8
X-Kong-Upstream-Latency: 35
date: Sun, 01 Sep 2019 10:27:25 GMT
server: uvicorn
{
"message": "service1 is called",
"status_code": 200
}
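To verify both behaviours in one go, a small sketch with requests (an assumption, as before) can call the proxied endpoint with and without the key:

import requests

# Sketch: uses the requests library, not part of the repo
PROXY_URL = "http://localhost:8000/service1"

# Without a key the key-auth plugin rejects the request
assert requests.get(PROXY_URL).status_code == 401

# With the key of api-user the request is proxied to the FastAPI upstream
resp = requests.get(PROXY_URL, headers={"apikey": "secret_key"})
assert resp.status_code == 200
print(resp.json())  # {"status_code": 200, "message": "service1 is called"}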
All the steps shown can now be applied to the second service (service2). Once this is done, both services are protected by Kong.
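For convenience, the steps for service2 can be bundled into one small script. This is a sketch with the Python requests library (not part of the repository), mirroring the httpie calls used for service1.

import requests

# Sketch: uses the requests library, not part of the repo
ADMIN_URL = "http://localhost:8001"

# Service, route, and key-auth plugin for the second endpoint
requests.post(f"{ADMIN_URL}/services/", data={
    "name": "service2",
    "url": "http://host.docker.internal/api/service2",
})
requests.post(f"{ADMIN_URL}/services/service2/routes", json={
    "name": "service2_route", "paths": ["/service2"], "methods": ["GET"],
})
requests.post(f"{ADMIN_URL}/services/service2/plugins", data={"name": "key-auth"})

# The key created for api-user is valid for every key-auth protected service
resp = requests.get("http://localhost:8000/service2", headers={"apikey": "secret_key"})
print(resp.status_code, resp.json())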
Outlook: Kong Enterprise
We have come to the end of the first part of the Kong series. I hope this first post has given you some insight into how the Kong Admin API works and how it has changed over time. In the upcoming parts I will not only discuss Kong itself, but also Kong Enterprise, the Service Control Platform. So stay tuned!