Assembling a cloud hosted application – Part 1: Cast a glance at the cloud


Moving your application to the AWS Cloud is a challenge. I will show you how to assemble a cloud hosted application with AWS Cloud Services.

Setting the Scene

“We have to wait for the provisioning of the database”, “We cannot use the latest version of that technology because we run on premise”, “The deployment to production takes about one day” – I guess that most of you have felt slowed down by sentences like these at least once in your professional career. Speed is essential in the realm of digitization of business processes. One way to address factors that slow down IT projects is to leverage the advantages of managed cloud services like the ones provided by Azure, Google Cloud or Amazon Web Services (AWS).

Managed services can help you speed up by a variety of means:

  • Managed updates keep your technologies up-to-date without the need to apply updates manually
  • Release cycles can be shortened by automation of build, deployment and testing of your application as well as the automation of your infrastructure itself
  • Availability of technologies that might not be provided by your corporate IT units
  • Lower risk by reducing the number of human errors

Taking a first glance at the AWS console, one might easily feel overwhelmed. So, you might ask yourself how you can migrate your application from on-premises to the cloud, or even start with a green field approach. Don’t worry, this series of articles is here to help you get started. Following along the steps and code samples, you will set up your full stack web application and build a continuous delivery process in AWS. We will work with a Spring Boot application and a React Single-Page Application (SPA). Thereby, we will align with the AWS reference architecture for hosting highly available and scalable web applications.

Components That We Want to Move to the Cloud – Collecting the Pieces of the Puzzle

In the first step, we take a look at the two software artifacts which we are going to move to the cloud. To keep the focus on the aspect of moving software to AWS, we will work with pretty basic pieces of software. We will deploy a Spring Boot application that answers requests with random Star Wars characters. This Spring Boot app will act as the backend of our application. Furthermore, we will use a React-based Single-Page Application which acts as the frontend and renders a picture and a button. A click on the button requests a random Star Wars character and renders its name. The Spring Boot app uses Gradle as build tool, while the frontend uses Yarn for build and dependency management.

The Tooling

To work with the code samples in this article, you need Git, a JDK (the backend builds with Gradle), Yarn for the frontend, and the Terraform CLI – plus an AWS account for the deployment sections.

The sample code is contained in the project-cloud GitHub repository. At the end of each part of this blog series, I will create a Git tag for the current revision of the source code. So, you can jump from tag to tag and see how the sample code evolves. The tag for the revision of this post is named part-01.

Let’s Get Our Backend up and Running

Our goal is to create a runnable JAR which we can deploy in AWS. To run it, we will use a managed runtime service like Elastic Container Service (ECS) or Elastic Beanstalk (EB). For the sake of simplicity, we will go for EB. It lets us deploy a runnable JAR file without any specific configuration or infrastructure setup. As we build with Gradle, we will make use of the Spring Boot Gradle plugin, which packages our application and its dependencies into an “über-jar”. That JAR file is easily deployable on AWS’ Elastic Beanstalk. Our build.gradle file looks like this:
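The original build file is in the repository; the sketch below reconstructs its likely shape from the description above. The plugin versions, group name and Java version are assumptions – check backend/build.gradle for the actual file.

```groovy
// build.gradle – minimal sketch; versions and group are assumptions
plugins {
    id 'org.springframework.boot' version '2.3.4.RELEASE'
    id 'io.spring.dependency-management' version '1.0.10.RELEASE'
    id 'java'
}

group = 'de.codecentric'        // assumed group, see the repository
version = '0.0.1-SNAPSHOT'
sourceCompatibility = '11'

repositories {
    mavenCentral()
}

dependencies {
    implementation 'org.springframework.boot:spring-boot-starter-web'
    implementation 'org.springframework.boot:spring-boot-starter-actuator'
    testImplementation 'org.springframework.boot:spring-boot-starter-test'
}
```

The Spring Boot plugin provides the `bootJar` task, which produces the deployable über-jar during a regular `build`.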

You can find the sources of the backend in the sample code GitHub repository. The backend includes two controllers – the GreetingController and the StarwarsCharacterController. The GreetingController is mapped to the path `/greeting` of the application and returns a simple greeting for GET requests. The greeting is rendered when accessing the frontend through the root path of the web app. Furthermore, the StarwarsCharacterController returns a random Star Wars character payload for requests to the path `/starwars-character`. We use that controller to simulate a simple user action that sends a request to the backend.

In order to test the backend locally, you can check out the project-cloud GitHub repository. From within the backend/ folder you can run a Gradle build and start the backend from the command line. You can also import the project into your IDE and use the IDE support to start the application.

Check out the project
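A sketch of the checkout – the repository owner in the URL is a placeholder, replace it with the actual location of the project-cloud repository:

```sh
# Clone the repository and switch to the revision of this post
git clone https://github.com/<github-user>/project-cloud.git
cd project-cloud
git checkout part-01
```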

Build the project
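The build and startup could look like this, assuming the Gradle wrapper is checked into the repository:

```sh
cd backend
./gradlew build     # builds and packages the runnable über-jar into build/libs/
./gradlew bootRun   # starts the Spring Boot backend locally
```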

Check the health of the backend
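Assuming the backend includes Spring Boot Actuator (see the references at the end) and runs on the default port 8080, the health check could look like this:

```sh
curl http://localhost:8080/actuator/health
# a healthy backend answers with something like {"status":"UP"}
```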

Get a random Star Wars character
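The character endpoint can be queried the same way (again assuming the default port):

```sh
curl http://localhost:8080/starwars-character
```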

Put a Fancy Frontend on Top

As a next step we would like to set up a React Single-Page Application. It handles requests to our backend and wraps them with a nice user interface. As a first step we will be running the frontend locally. However, during the course of this article we will be deploying the Single-Page Application to Amazon S3 and attach it to a content delivery network (CDN), which is called CloudFront in AWS.

The React frontend is set up with the learning environment Create React App. You can try it yourself by running `npx create-react-app project-cloud`. npx is a package runner tool which comes with npm and helps to speed up the setup of a simple React app. I won’t go into detail about the JavaScript development ecosystem; I will only cover details that are relevant for this article.

The React Single-Page Application renders the greeting and the random Star Wars character, which are both returned by the Spring Boot application’s controllers. To do so, it renders a simple App component which sends HTTP GET requests to the backend. On clicking the Get Character button, another request is triggered which queries a random Star Wars character from the backend. In order to test the React app locally, you only have to install the dependencies and start the development server.
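With Yarn, those two steps could look like this, run from the frontend/ folder:

```sh
cd frontend
yarn install   # install the dependencies
yarn start     # start the Create React App development server
```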

The production build can be started with `yarn build`. All static files can be found in build/. In the following parts of this series, we will set up a build environment in AWS which runs several AWS CodeBuild projects. The CodeBuild projects will execute the build commands to build the frontend and deploy it to an S3 bucket. The source code of the React app can be found in the frontend/ folder of the project-cloud GitHub repository.

Move the Pieces Into the Cloud

Time to celebrate – we now have our application up and running locally. The next – and most important – thing we want to do is to host it on AWS. As I already mentioned above, we are going to deploy the Spring Boot backend to Amazon’s Elastic Beanstalk service. The frontend will be hosted in an S3 bucket with static website hosting enabled. Static website hosting is AWS’ option to turn cloud storage into publicly accessible web storage.

The first thing we need is an AWS account. For the following paragraphs, I will assume that you have an AWS account for testing purposes. If not, it takes only a couple of minutes and a credit card to create one. No worries – you will not end up poor. The infrastructure that we will build costs around 30 to 50 US$ per month.

If you’re not familiar with AWS, I suggest browsing the AWS console and having a look at the services. We are mainly going to use Elastic Beanstalk, S3, CloudFront and IAM. Within the context of this article, however, we will use the AWS console as little as possible. Our goal is to script and automate the creation and configuration of our infrastructure. This concept, called Infrastructure as Code (IaC), delivers the advantages that I listed in the beginning:

  • Shortened release cycles through automation
  • Free up people’s time by removing manual work (they have more time to create business and customer value)
  • Mitigated security risks, as the declarative approach decreases the number of human errors

Clean and Tidy Terraform Scripts

As a tool for creating our infrastructure as code, we choose Terraform, which is developed by HashiCorp. Terraform (in contrast to AWS’ proprietary CloudFormation) is vendor-agnostic: it can also be used to work with Google Cloud or Azure. Furthermore, I prefer the readability of the HashiCorp Configuration Language (HCL) over the CloudFormation YAML format. We will use the Terraform CLI to create our infrastructure. In order to configure Terraform appropriately, I want to mention three important aspects: a) credential management, b) the Terraform remote state and c) state locking. We will be looking at those in the next paragraphs.

The Terraform scripts will be structured in a way that allows us to create modules. So we are going to create modules for our frontend, our backend and for roles and permissions. In the following articles of this series, we will introduce further modules. The modularization allows us to re-use modules for the creation of several environments like development, staging or production. Furthermore, we will use a separate module to prepare an AWS account (see An Own State for Terraform below). One AWS account could host one or more environments for our cloud application. We will apply the following structure:

  • infrastructure / account: The module, which prepares an AWS account
  • infrastructure / environments: Contains one module for each environment
  • infrastructure / modules: Contains reusable modules which are referenced by each environment
    • modules / backend: The runtime and deployment resources of the Spring Boot backend
    • modules / frontend: The hosting and CDN resources of the Single-Page Application
    • modules / roles-and-permissions: Identity and Access Management (IAM) configuration

You can find the Terraform module hierarchy depicted in the diagram below. Dashed lines represent folder hierarchies and solid lines show Terraform module hierarchies.

Terraform Module Hierarchy

An Own State for Terraform

a) For AWS authentication, we use an AWS access key and a secret key. As we are using Git, we don’t want to check credentials into source control. That’s why we will be using the AWS credentials file, which Terraform will look for in ~/.aws/credentials. You could also use your default AWS profile. If you do so, you don’t have to export the `AWS_PROFILE` environment variable as done below.
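As a sketch – the profile name project-cloud is an assumption; it has to match a named profile in your ~/.aws/credentials file:

```sh
# Tell Terraform (and the AWS SDK) which named credentials profile to use;
# the profile name "project-cloud" is an assumption
export AWS_PROFILE=project-cloud
```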

b) Terraform keeps track of the infrastructure state. To do so, it creates state files (*.tfstate). Since we work in a (possibly distributed) team, we don’t want the Terraform state to be kept locally. Therefore, we use the Terraform remote state.

c) S3 as a Terraform backend also supports locking. We will make use of this feature in order to prevent concurrent executions of Terraform. Otherwise, concurrency could corrupt the state. In addition, we will rely on S3 bucket versioning, which allows us to roll back if the state gets corrupted. The snippet shows the remote backend configuration with references to the S3 bucket and the DynamoDB table. The DynamoDB table stores locks while a Terraform execution runs.
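A sketch of such a backend configuration – the bucket, key, region and table names are assumptions, the actual values are in the repository:

```hcl
# Remote state in S3 with DynamoDB state locking; all names are assumptions
terraform {
  backend "s3" {
    bucket         = "project-cloud-terraform-state"
    key            = "project-cloud/terraform.tfstate"
    region         = "eu-central-1"
    dynamodb_table = "project-cloud-terraform-locks"
    encrypt        = true
  }
}
```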

Before we can start to create the resources that host our application, we have to create the S3 bucket and the DynamoDB table. Those two resources are required to manage the Terraform state. To do so, a Terraform module is provided in infrastructure/account/. The module itself works with local state and is supposed to be used only once for the initial account setup. The team member who administers the AWS account would usually apply it.

Host the Backend on Elastic Beanstalk

First, we will create the Elastic Beanstalk environment. It will host the Spring Boot backend of our Project Cloud application. To do so, we change to the folder infrastructure/application/, which contains the files main.tf, remote.tf and variables.tf. We run a Terraform initialization and apply the infrastructure to our AWS account. The main.tf file creates the resources, while remote.tf queries existing resources from AWS. variables.tf provides variables referenced within the other files. The snippet below shows the resources that belong to Elastic Beanstalk. The detailed configurations are left out intentionally; they can be found in the source code in infrastructure/application/main.tf.
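A sketch of those resources – the names, solution stack string and instance type are assumptions, see infrastructure/application/main.tf for the actual configuration:

```hcl
# Elastic Beanstalk application and environment for the Spring Boot backend;
# names, solution stack and instance type are assumptions
resource "aws_elastic_beanstalk_application" "backend" {
  name = "project-cloud-backend"
}

resource "aws_elastic_beanstalk_environment" "backend" {
  name                = "project-cloud-backend-dev"
  application         = aws_elastic_beanstalk_application.backend.name
  solution_stack_name = "64bit Amazon Linux 2 v3.1.1 running Corretto 11"

  setting {
    namespace = "aws:autoscaling:launchconfiguration"
    name      = "InstanceType"
    value     = "t3.micro"
  }
  # ... further settings omitted intentionally
}
```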

Please also note that we are working with the AWS default Virtual Private Cloud (VPC) here. From a security perspective, this is not recommended: the network ACLs (firewalls) of the default subnets in the VPC may not be sufficiently configured for your purpose. However, this article is not supposed to go into detail with regards to securing your AWS infrastructure. So, for the sake of simplicity, I will use the default VPC to deploy the Elastic Beanstalk environment.

Get the Frontend Closer to Your Users

The frontend hosting infrastructure consists of two main parts – the S3 bucket with static website hosting enabled and the CloudFront CDN. The S3 bucket is private, which means that its resources are not accessible from the public internet. The index document will point to index.html, which renders the App component of the React SPA. In order to make the S3 bucket’s content publicly accessible, we will create a CloudFront distribution and give it access to the frontend S3 bucket. As a result, the static resources will be publicly accessible through the CDN. For the development environment, the caching times (min_ttl, default_ttl and max_ttl) will be set to 0 seconds. That gives us the possibility to debug the SPA without having to deal with caching issues. The CDN can be configured to have specific cache behaviours for paths like /* as a default route or /starwars-character for requests to the backend. Below you can find the default cache behaviour.
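A sketch of that default cache behaviour inside the aws_cloudfront_distribution resource – the origin ID is an assumption:

```hcl
# Default cache behaviour of the development distribution; caching is
# disabled via the TTLs to avoid debugging stale resources
default_cache_behavior {
  allowed_methods        = ["GET", "HEAD"]
  cached_methods         = ["GET", "HEAD"]
  target_origin_id       = "frontend-s3-origin" # assumed origin ID
  viewer_protocol_policy = "redirect-to-https"

  min_ttl     = 0
  default_ttl = 0
  max_ttl     = 0

  forwarded_values {
    query_string = true
    cookies {
      forward = "all"
    }
  }
}
```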

Within the CloudFront CDN, we will create two origins – one for the frontend S3 bucket and another one for the Elastic Beanstalk backend. The cache behaviours map defined URL paths to these origins. We will create the following behaviours:

  • Default: mapping /* to the S3 frontend
  • Static resources: mapping /static/* to the S3 frontend
  • Backend: mapping /starwars-character to the Elastic Beanstalk backend
  • Backend: mapping /greeting to the Elastic Beanstalk backend

Remember that we configured the S3 bucket to be private. Therefore, we need to add a bucket policy that allows the CloudFront CDN to access the static resources hosted in the bucket. An origin access identity allows the CDN to access the S3 bucket. The snippet below shows the origin access identity. The S3 bucket itself gets a bucket policy assigned, which can be found in the module in infrastructure/modules/roles-and-permissions.
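The origin access identity itself is a small resource; a sketch (the comment text is an assumption):

```hcl
# Identity that the CloudFront distribution uses to read from the
# private frontend S3 bucket
resource "aws_cloudfront_origin_access_identity" "frontend" {
  comment = "Access identity for the project-cloud frontend bucket"
}
```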

The default cache behaviour routes all requests with unknown paths to the S3 bucket. AWS S3 then tries to look up static resources that are not present and returns an HTTP status 403 or 404. Those responses will be caught by CloudFront and redirected to index.html. We have to do this to be able to work with the React Router of the SPA. The snippet below shows the error pages configuration.
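A sketch of that configuration inside the aws_cloudfront_distribution resource:

```hcl
# Redirect S3's 403/404 responses to index.html so the React Router
# can handle the requested path on the client side
custom_error_response {
  error_code         = 403
  response_code      = 200
  response_page_path = "/index.html"
}

custom_error_response {
  error_code         = 404
  response_code      = 200
  response_page_path = "/index.html"
}
```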

The S3 bucket and CloudFront resources can be found in the Terraform script in infrastructure/modules/frontend/main.tf.

Finally – Going Live

Similar to the account setup, we now want to apply our Terraform script to create the hosting resources for our frontend and backend. Make sure you build the backend and the frontend before running the Terraform commands.
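The standard Terraform workflow covers this step; run it from the folder that contains the environment’s Terraform files:

```sh
terraform init    # download providers and configure the remote backend
terraform plan    # review the planned changes
terraform apply   # create the resources in your AWS account
```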


Wrap-up

In the first part of this series, we developed a simple application that shows us random Star Wars characters. It’s made up of a React Single-Page Application frontend and a Spring Boot backend. We created an AWS infrastructure that follows the AWS reference architecture for web application hosting in the cloud. Terraform automates our infrastructure setup and deploys our application to that infrastructure. Among other AWS services, we used Elastic Beanstalk to host a scalable backend and attached a frontend hosted in an S3 bucket. The S3 bucket uses CloudFront as a content delivery network.

In the following articles, we will further advance our Continuous Integration / Continuous Deployment (CI/CD) automation. At the moment, the application is built locally, and the Terraform scripts copy those locally built artifacts to the AWS cloud environment. In one of the next steps, we will set up an automated build pipeline in AWS. To do so, we will add several roles and permissions provided by AWS Identity and Access Management. Furthermore, we will be working on a database cluster for data persistence and come across some helpful hints and tricks for working with Terraform and infrastructure.

Stay tuned and thank you for reading!


References

For further reading on the AWS reference architectures, check the following two links:
AWS Architectures
AWS Reference Architectures – Web Application Hosting

For further details on the Spring Boot application and RESTful web services with Spring Boot Actuator, you can review the following resources, which I used while writing this article:
Building an Application with Spring Boot
Building a RESTful Web Service with Spring Boot Actuator

The following articles and tutorials have been used throughout the setup of the Single-Page Application and can be used for detailed reference:
Create React App
React tutorial
React HTTP requests
React backend integration

The links below can be used to make yourself familiar with Terraform:
Terraform getting started

Marco Berger

Marco has been working as a Software Developer and IT Consultant for codecentric AG in Stuttgart since October 2018. He holds a degree in Business Information Systems and, before joining codecentric, spent more than four years in the IT business. Of those, he worked two years as a Consultant and Specialist for Content Management Systems and gained two years of experience as a System Integration Engineer in the mobility sector.

He focuses on Infrastructure as Code and running applications in the cloud. Thereby, he appreciates the simplicity of software products and likes to put different components together so that new solutions with high value for his customers are created. One of his favourite occupations is deleting unused code.
