
Fargate with EFS and Aurora Serverless using AWS CDK

25.3.2021 | 9 minutes of reading time

In this blog post, I will demonstrate how you can deploy an application to AWS Fargate using the AWS CDK. WordPress serves as the example application and brings some requirements of its own, so an Elastic File System and an Aurora Serverless database will be deployed as well.

As an example, think of a website for an event that opens registration at a specific time, or of thousands of people trying to sign up for a very limited course. The registration window may only be open for ten minutes twice a year because the course is instantly fully booked, and everyone keeps hitting the reload button. One approach to handle such situations is AWS Fargate, a service that provides computing resources without requiring you to choose instance types or any other capacity unit in advance: Amazon provisions resources as the workload needs them. This is very useful if the usage pattern of the deployed application is extremely volatile and you don’t want to spend time figuring out the right amount of computing resources to handle the load. The same may apply to a database that has to scale with your workload.

Overview

To get an idea of the moving parts we need to create, the image below shows the AWS services involved; more details will follow.

Bootstrap

Before we start, you have to create the project files. For your convenience, the AWS CDK provides a command to generate some boilerplate code. Install the CDK and create an empty directory. After that, initialize your project.

npm install -g aws-cdk
mkdir project && cd project
cdk init --language=python --generate-only

Docker build and push

First, we build the Fargate task that runs in the private subnet. The task serving the WordPress site needs an image from which the container can be created. Prebuilt images are available on the official Docker Hub. Such an image will be used to spin up a container in an ECS cluster.

You will notice that the default images are configured with a maximum HTTP POST size of 2 MB, which is not enough if you want to upload larger plugins or media files. To change the parameters affecting that limit, you have to create or modify some files inside the image. With local Docker or with Kubernetes, you would simply mount a file or a config into the container. Fargate does not offer such a mechanism. It would be quite nice to mount a file from S3; Amazon is currently working on it, but the feature is not finished yet. Another option would be to place a file on an EFS volume, but the AWS CDK lacks an upload feature as known from S3. A different approach is to build your own images, based on the official ones, and upload them to ECR for later use. To build an image with the AWS CDK, you need the context path and a Dockerfile:

import os

from aws_cdk import aws_ecr_assets as ecr_assets

# docker context directory
docker_context_path = os.path.join(os.path.dirname(__file__), "..", "..", "src")

# upload images to ecr
nginx_image = ecr_assets.DockerImageAsset(
    self, "Nginx",
    directory=docker_context_path,
    file="Docker.nginx",
)

wordpress_image = ecr_assets.DockerImageAsset(
    self, "Php",
    directory=docker_context_path,
    file="Docker.wordpress",
)

That’s it! The AWS CDK detects whether a build is needed and will upload the resulting image to ECR. To configure PHP to accept uploads larger than the 2 MB limit, create a file containing

file_uploads = On
memory_limit = 64M
upload_max_filesize = 64M
post_max_size = 64M
max_execution_time = 600

and copy that file into the PHP configuration directory in your Dockerfile (for the official PHP images this is usually /usr/local/etc/php/conf.d/).

As you can see in the CDK code above, there are two Docker images. I prefer to have a single process per container, as this is a cleaner approach. The downside is greater complexity when it comes to communication between the processes.

To allow Nginx to forward requests to the PHP process, it needs a network address and port. To allow local testing, you can configure the endpoint at build time:

ARG upstream_wordpress_php=localhost

COPY ./nginx/conf.d/http.conf /etc/nginx/conf.d/default.conf

RUN echo -e "upstream wordpress_php {\n  server $upstream_wordpress_php:9000;\n}\n" \
    > /etc/nginx/conf.d/upstream.conf

This allows the use of a local docker-compose.yml to spin up the WordPress site on your machine. In Fargate, all containers of a task are reachable through the loopback device, because Amazon uses a network mode called awsvpc. Now that we have an image, we are able to define our Fargate task. The image object can be passed directly to the task definition; the AWS CDK takes care of everything needed to use the correct image.
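If you also want to control that build argument from the CDK side, the DockerImageAsset construct shown earlier accepts build arguments. A minimal sketch, with the value set to the loopback default that works in Fargate:

nginx_image = ecr_assets.DockerImageAsset(
    self, "Nginx",
    directory=docker_context_path,
    file="Docker.nginx",
    # in Fargate, the containers of one task share the loopback interface,
    # so the PHP-FPM upstream is reachable via localhost
    build_args={"upstream_wordpress_php": "localhost"},
)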

Fargate task definition

event_task = ecs.FargateTaskDefinition(self, "WordpressTask")

app_container = event_task.add_container(
    "Php",
    image=ecs.ContainerImage.from_docker_image_asset(wordpress_image)
)
nginx_container = event_task.add_container(
    "Nginx",
    image=ecs.ContainerImage.from_docker_image_asset(nginx_image)
)

Note: The strings used to name the constructs are arbitrary labels. In AWS, a common pattern is to name resources in PascalCase.

Now you can create an ECS cluster and place your task definition on it. At this point, it is very important to pin the Fargate platform version to 1.4, because the people at AWS have decided that the “latest” platform version in fact means the “penultimate” version, and EFS support requires platform version 1.4.0 or later.

cluster = ecs.Cluster(
    self, 'ComputeResourceProvider',
    vpc=properties.vpc
)

wordpress_service = ecs.FargateService(
    self, "InternalService",
    task_definition=event_task,
    platform_version=ecs.FargatePlatformVersion.VERSION1_4,
    cluster=cluster,
)

Now you would almost be able to start a task, but there is one thing left to do: we have to specify a port mapping for each container, even though we may not need it. This can be done as follows.

nginx_container.add_port_mappings(
    ecs.PortMapping(container_port=80)
)
app_container.add_port_mappings(
    ecs.PortMapping(container_port=9000)
)

Database

You can now start a task, but some parts are still missing: we need persistence for files and for structured data (SQL). As stated before, an Aurora Serverless database provides the SQL service for our task. The definition is straightforward:

database = rds.ServerlessCluster(
    self, "WordpressServerless",
    engine=rds.DatabaseClusterEngine.AURORA_MYSQL,
    default_database_name="WordpressDatabase",
    vpc=properties.vpc,
    scaling=rds.ServerlessScalingOptions(
        # a duration of 0 disables automatic pausing of the cluster
        auto_pause=core.Duration.seconds(0)
    ),
    deletion_protection=False,
    backup_retention=core.Duration.days(7),
    removal_policy=core.RemovalPolicy.DESTROY,
)

A secret and a database are created automatically. To pass the values to the WordPress container, you can reference them directly. The AWS CDK creates the appropriate roles and permissions on the task so that it is allowed to retrieve the secret values.

app_container = event_task.add_container(
    "Php",
    environment={
        'WORDPRESS_DB_HOST': database.cluster_endpoint.hostname,
        'WORDPRESS_TABLE_PREFIX': 'wp_'
    },
    secrets={
        'WORDPRESS_DB_USER':
            ecs.Secret.from_secrets_manager(database.secret, field="username"),
        'WORDPRESS_DB_PASSWORD':
            ecs.Secret.from_secrets_manager(database.secret, field="password"),
        'WORDPRESS_DB_NAME':
            ecs.Secret.from_secrets_manager(database.secret, field="dbname"),
    },
    image=ecs.ContainerImage.from_docker_image_asset(wordpress_image)
)

Elastic File System

WordPress is an application written in PHP and extensible via plugins. All that functionality relies on a writable filesystem where the PHP user can place its files, so everything can be done through the web interface. Therefore you need a shared filesystem that can be mounted into the container running the PHP process. Since last year, attaching an EFS to Fargate task definitions has been generally available. More details about persistent storage in Amazon ECS are available in the AWS documentation. Create your EFS storage and define a volume:

file_system = efs.FileSystem(
    self, "WebRoot",
    vpc=properties.vpc,
    performance_mode=efs.PerformanceMode.GENERAL_PURPOSE,
    throughput_mode=efs.ThroughputMode.BURSTING,
)
wordpress_volume = ecs.Volume(
    name="WebRoot",
    efs_volume_configuration=ecs.EfsVolumeConfiguration(
        file_system_id=file_system.file_system_id
    )
)

That volume can then be used to extend your task definition:

event_task = ecs.FargateTaskDefinition(
    self, "WordpressTask",
    volumes=[wordpress_volume]
)

Now this volume is available for container mounts in your task definition. WordPress needs write access to the volume, while read permissions are sufficient for Nginx.

nginx_container_volume_mount_point = ecs.MountPoint(
    read_only=True,
    container_path="/var/www/html",
    source_volume=wordpress_volume.name
)
nginx_container.add_mount_points(nginx_container_volume_mount_point)
container_volume_mount_point = ecs.MountPoint(
    read_only=False,
    container_path="/var/www/html",
    source_volume=wordpress_volume.name
)
app_container.add_mount_points(container_volume_mount_point)

Networking

The final part of the deployment is about network connectivity. You may have noticed the VPC parameter before. We can create this VPC or use a preexisting one. In my example repository, all network-related resources are separated into their own stack so that they can be reused by several other stacks; you can have a look at the file. Two different subnet types are created, and the endpoints of a load balancer are placed in the public subnets. The result looks like the following picture.

The VPC and the load balancer are then passed to the WordPress stack as property objects. You could also create a parent stack around the constructs, but I think you get the point. Note that there is an upper limit on the number of stacks per account; if you have too many, you can use nested stacks.
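For reference, here is a minimal sketch of what such a network stack could look like, assuming a VPC with public and private subnets and an internet-facing application load balancer; the actual file in the example repository may differ in detail.

from aws_cdk import core
from aws_cdk import aws_ec2 as ec2
from aws_cdk import aws_elasticloadbalancingv2 as elbv2


class NetworkStack(core.Stack):

    def __init__(self, scope: core.Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # one public and one private subnet per availability zone
        self.vpc = ec2.Vpc(
            self, "Vpc",
            max_azs=2,
            subnet_configuration=[
                ec2.SubnetConfiguration(
                    name="Public", subnet_type=ec2.SubnetType.PUBLIC, cidr_mask=24),
                ec2.SubnetConfiguration(
                    name="Private", subnet_type=ec2.SubnetType.PRIVATE, cidr_mask=24),
            ],
        )

        # internet-facing application load balancer in the public subnets
        self.load_balancer = elbv2.ApplicationLoadBalancer(
            self, "LoadBalancer",
            vpc=self.vpc,
            internet_facing=True,
        )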

To allow the Fargate tasks to connect to the Elastic File System and the Aurora Serverless database, add them to the appropriate security groups:

database.connections.allow_default_port_from(wordpress_service)
file_system.connections.allow_default_port_from(wordpress_service)

In this blog post, an application load balancer is used to provide an endpoint for outside users to connect to. The Fargate tasks are automatically added to the targets of this load balancer as they are created. You simply have to define a listener and register the WordPress service with it. You can also apply rules to the listener that affect request routing, but only one site is delivered at the moment. If the WordPress site is not initialized yet, it redirects requests to an installation page, so the load balancer health checks have to accept that redirect as successful. To achieve this, the HTTP code 302 can be added to the list of acceptable return codes.

http_listener = properties.load_balancer.add_listener(
    "HttpListener",
    port=80,
)

http_listener.add_targets(
    "HttpServiceTarget",
    protocol=elbv2.ApplicationProtocol.HTTP,
    targets=[wordpress_service],
    health_check=elbv2.HealthCheck(healthy_http_codes="200,301,302")
)

Deploy

When you are done defining your stacks, all resources can be deployed using the deploy command. This takes a few minutes and outputs information about the created resources. You can read about the whole process here. In short, the CDK command synthesizes the code into cloud assemblies, which contain the images and CloudFormation templates. After creation, the artifacts are uploaded to AWS.

cdk deploy --require-approval never --all

I have stripped out the progress bar and other elements of less relevance. You can see the ExternalDNSName at which the deployed WordPress instance is reachable. In production scenarios, you should use HTTPS certificates and a suitable domain name for your users. Both can be done using the AWS CDK.
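A sketch of how this could look, assuming you own a Route 53 hosted zone for example.com and want to serve the site as event.example.com (the domain names and construct IDs are purely illustrative):

from aws_cdk import aws_certificatemanager as acm
from aws_cdk import aws_elasticloadbalancingv2 as elbv2
from aws_cdk import aws_route53 as route53
from aws_cdk import aws_route53_targets as targets

# look up the existing hosted zone of the domain you own
hosted_zone = route53.HostedZone.from_lookup(
    self, "Zone", domain_name="example.com"
)

# certificate for the site, validated via DNS records in that zone
certificate = acm.Certificate(
    self, "SiteCertificate",
    domain_name="event.example.com",
    validation=acm.CertificateValidation.from_dns(hosted_zone),
)

# HTTPS listener in front of the WordPress service
https_listener = properties.load_balancer.add_listener(
    "HttpsListener",
    port=443,
    certificates=[elbv2.ListenerCertificate.from_certificate_manager(certificate)],
)
https_listener.add_targets(
    "HttpsServiceTarget",
    protocol=elbv2.ApplicationProtocol.HTTP,
    targets=[wordpress_service],
    health_check=elbv2.HealthCheck(healthy_http_codes="200,301,302"),
)

# friendly DNS name pointing to the load balancer
route53.ARecord(
    self, "SiteAlias",
    zone=hosted_zone,
    record_name="event",
    target=route53.RecordTarget.from_alias(
        targets.LoadBalancerTarget(properties.load_balancer)
    ),
)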

Tests

Since you define your infrastructure as code, this code can be tested and validated. You can write simple unit tests using your favorite test suite and scan the generated AWS CloudFormation templates, checking them against what you expect. As I am using Python, testing frameworks such as pytest are readily available.

To get a reusable piece of code that synthesizes your template, add a fixture:

@pytest.fixture()
def template():
    app = core.App()
    NetworkStack(app, "NetworkStackTest")
    return json.dumps(app.synth().get_stack("NetworkStackTest").template)

Later, this fixture can be used to get a template and check its content using asserts. As an example, you can check that a VPC is defined:

def test_vpc_created(template):
    assert("AWS::EC2::VPC" in template)
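In the same way you can assert that other expected resources end up in the synthesized template, for example the load balancer that, as described above, lives in the network stack:

def test_load_balancer_created(template):
    assert("AWS::ElasticLoadBalancingV2::LoadBalancer" in template)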

Conclusion

The AWS CDK is a very convenient tool for deploying resources to AWS. It uses many native AWS mechanisms, so its constructs integrate easily into existing environments. With a few lines of source code you can define a lot of functionality, because you don’t have to define every required resource yourself. The complete code for this blog post is available in the example GitHub repository. Give it a try!
