Ian Rodrigues

DevOps Engineer at Oowlish, working for Petco. I enjoy learning new things and sharing knowledge.


Deploying containers to production with Terraform and AWS Fargate – Introduction

#aws #docker #containers #terraform #python

With the advent of Docker, it’s pretty straightforward to set up a local development environment to build our applications. It would be even better if we could deploy that same container to production with the same ease.

Not only is that possible, but you’ll learn how to do it. Today, I’ll start a series of posts showing you how to provision the necessary infrastructure on AWS to get your containerized application up and running. In this first part, I’ll explain a few key concepts, like what AWS EC2 Container Service and AWS Fargate are, and how to write infrastructure code using Terraform.

Let’s get started!

First of all, what is ECS?

Well, EC2 Container Service (ECS) is an AWS service responsible for orchestrating a cluster of Docker containers. It’s a high-performance and compelling alternative to Docker Swarm and Kubernetes.

With it, you don’t have to install or manage orchestration software, nor manage or scale virtual machines or containers, as it abstracts all of this for you.

Some key concepts surrounding ECS are:

  • Cluster: a group of EC2 instances where your containers run;
  • Task Definition: a JSON file that describes one or more containers;
  • Task: an instantiation of a Task Definition within a Cluster;
  • Service: it’s responsible for instantiating and maintaining a given number of Tasks running on your cluster.
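To make the Task Definition concept concrete, here is a minimal sketch of one (the family name, container name, and image are hypothetical placeholders):

```json
{
  "family": "webapp",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "nginx:latest",
      "memory": 512,
      "essential": true,
      "portMappings": [{ "containerPort": 80 }]
    }
  ]
}
```

A Task launched from this definition would run a single nginx container; a Service would keep a desired number of such Tasks running for you.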

“Serverless” containers with Fargate

Initially, you needed to set up a cluster of EC2 instances where your containers ran. You had to select the instance types, manage auto-scaling, and optimize cluster utilization. However, things have changed.

In 2017, AWS introduced Fargate, a compute engine that enables you to run containers without worrying about the underlying infrastructure. Now you can set up your ECS cluster without managing servers: with Fargate, you no longer have to deal with EC2 instances or their configuration, and you can focus only on the application itself.
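As a rough sketch of what this looks like in Terraform (names, image, and the subnet ID are hypothetical placeholders; a real setup also needs networking and IAM resources), a Fargate-backed service boils down to a cluster, a task definition marked FARGATE-compatible, and a service with the FARGATE launch type:

```hcl
resource "aws_ecs_cluster" "app" {
  name = "app-cluster"
}

resource "aws_ecs_task_definition" "app" {
  family                   = "app"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc" # required for Fargate tasks
  cpu                      = 256     # 0.25 vCPU
  memory                   = 512     # 512 MiB

  container_definitions = jsonencode([
    {
      name         = "app"
      image        = "nginx:latest"
      essential    = true
      portMappings = [{ containerPort = 80 }]
    }
  ])
}

resource "aws_ecs_service" "app" {
  name            = "app"
  cluster         = aws_ecs_cluster.app.id
  task_definition = aws_ecs_task_definition.app.arn
  desired_count   = 2
  launch_type     = "FARGATE" # no EC2 instances to manage

  network_configuration {
    subnets          = ["subnet-0123456789abcdef0"] # placeholder
    assign_public_ip = true
  }
}
```

Note that there is no EC2 instance resource anywhere: AWS provisions the compute for each Task on demand.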

Infrastructure-as-Code and Terraform

At this point, you may have noticed that ECS/Fargate is made up of a lot of small parts. You also need to provision the whole underlying environment around them. This is where Terraform comes in.

Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. It can manage existing, popular service providers as well as custom in-house solutions. With it, you describe your infrastructure in configuration files, with the advantage of using the same Terraform configuration to set up identical staging, QA, and production environments.
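To illustrate that reuse across environments, here is one common pattern: parameterize the configuration with a variable and let each environment supply its own value (the variable name, instance sizes, and tag are illustrative assumptions, not a prescribed layout):

```hcl
variable "environment" {
  description = "Deployment environment (staging, qa, production)"
  type        = string
}

resource "aws_instance" "webserver" {
  ami = "ami-2757f631"

  # Same code, different sizing per environment.
  instance_type = var.environment == "production" ? "t2.medium" : "t2.micro"

  tags = {
    Environment = var.environment
  }
}
```

Running `terraform apply -var="environment=staging"` and `terraform apply -var="environment=production"` (typically against separate state, e.g. via workspaces) then produces identical environments from a single source of truth.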

Basic Hello World!

The best way to explain how Terraform works is to show it in action. Let’s provision a t2.micro EC2 instance on AWS using Terraform.

I’ll assume that you already have Terraform installed. If you don’t, check their website.

First, create a file called main.tf (you can use whatever name you want) in a directory of your choice (it is your working directory) with the following content:

provider "aws" {
  profile = "default"   # credentials profile from ~/.aws/credentials
  region  = "us-east-1"
}

resource "aws_instance" "webserver" {
  ami           = "ami-2757f631" # AMI ID, valid in us-east-1
  instance_type = "t2.micro"
}

With the file in place, you need to run terraform init to initialize the working directory. It is usually the first step after writing new Terraform code or cloning a repository, as it downloads the provider plugins necessary to provision your infrastructure.

$ terraform init

Initializing the backend...

Initializing provider plugins...
- Checking for available provider plugins...
- Downloading plugin for provider "aws" (terraform-providers/aws) 2.18.0...

Terraform has been successfully initialized!

Great, Terraform has initialized your working directory, and you’re ready to go. The next step is to create an execution plan, that is, Terraform will analyze the “current” state of your infrastructure and then determine what is necessary to achieve the “desired” state. It will print all these required actions on the screen for you to review.

$ terraform plan

...

Terraform will perform the following actions:

  # aws_instance.webserver will be created
  + resource "aws_instance" "webserver" {
      + ami                          = "ami-2757f631"
      + instance_type                = "t2.micro"
      ...

Plan: 1 to add, 0 to change, 0 to destroy.

After reviewing the execution plan, it’s time to apply those changes. Terraform will print the required actions once again and ask if you want to proceed and perform those actions. Type yes and let it go.

$ terraform apply

...

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

Congratulations! You’ve provisioned your first resource using Terraform. It is a pretty straightforward example, but imagine provisioning a cluster of EC2 instances, databases, and cache clusters. Or rather, imagine provisioning those same resources for staging, QA, and production environments using the same code. It will surely save you a lot of time.
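As a finishing touch, you can expose useful attributes of what you just created with an output block in the same main.tf (the output name here is my own choice):

```hcl
output "public_ip" {
  description = "Public IP address of the provisioned instance"
  value       = aws_instance.webserver.public_ip
}
```

After the next terraform apply, `terraform output public_ip` prints the address. And since this instance will incur charges if left running, remember that `terraform destroy` tears down everything the configuration created.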

Conclusion

AWS EC2 Container Service and AWS Fargate are practical tools that help you deploy your dockerized application, and provisioning that infrastructure becomes a pretty easy task when you write Terraform configuration files.

In the next blog post, you will see the application I’ll deploy and how I have structured it to run both locally and in the cloud. See you next time!

Thanks for reading, and if you have enjoyed it, feel free to share.
