Ian Rodrigues


Deploying containers to production with Terraform and AWS Fargate – Containerization

#aws #docker #containers #terraform #python

In the previous post, I covered some of the services available to run containers on AWS, and I introduced Terraform, a tool to declare and provision those services as code. This time, I’m gonna set up the project to run on Docker both locally and in production, using almost the same configuration.

Though I have more experience with PHP, I’m gonna provision a REST API written in Python (Django), since I’ve been working with it a lot lately. Please don’t focus on the application itself; it’s written only for the sake of this post. Also, I’m sure you can apply the same principles for any language or framework. So, let’s get our hands dirty!

Setting up the local environment

First, let’s set up the local environment to run with Docker. This way, we’ll end up with a development environment identical to the production one. For this, we’ll use Docker Compose. Create a new file called docker-compose.yml in the root directory:

version: '3.7'

volumes:
  postgres-data:
    driver: local

services:
  django:
    build: .
    volumes:
      - .:/usr/local/app
    ports:
      - 8000:8000
    depends_on:
      - database

  database:
    image: postgres:11-alpine
    volumes:
      - postgres-data:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: wannajob
      POSTGRES_PASSWORD: wannajob
      POSTGRES_DB: wannajob

I won’t get into details of the docker-compose.yml file. If you wanna learn more about it, check this out. Let’s now create a Dockerfile, which Docker Compose will use to build the Django container:

FROM python:3.6-slim-stretch

ENV PYTHONUNBUFFERED 1

WORKDIR /usr/local/app

RUN pip install --upgrade pip safety bandit pylint flake8 pytest coverage

COPY requirements.txt /usr/local/app/requirements.txt
RUN pip install -r requirements.txt

CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]

Again, I won’t get into details of the Dockerfile. If you wanna learn more about it, check this out. Everything’s set up; it’s time to get the project up and running:

$ docker-compose up -d

Finally, run the migrations:

$ docker-compose run --rm --no-deps django python manage.py migrate

See? The project is ready for local development using Docker. Now it’s easy to start the project anywhere: you can share it with your entire team, and everyone will have the same environment. No more “but it works on my machine!”
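For migrate to actually reach the Compose database, Django’s DATABASES setting has to point at the database service rather than localhost. Here is a minimal sketch of that setting, using the credentials from docker-compose.yml (the exact settings module layout is an assumption):

```python
# Sketch: a DATABASES entry matching the Compose services above.
# "database" is the Compose service name; Docker's embedded DNS resolves it
# to the Postgres container from inside the django container.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'wannajob',
        'USER': 'wannajob',
        'PASSWORD': 'wannajob',
        'HOST': 'database',  # the Compose service name, not localhost
        'PORT': 5432,
    }
}
```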

Preparing for production

Now that the project is running locally, we can make a few adjustments to prepare it for production. Locally we’re using the runserver command – a lightweight development web server – to serve the application, but in production we’ll need a “real” web server. This is where Nginx comes in. However, Nginx can’t execute Python by itself: we need a WSGI server that Nginx forwards requests to; it runs the Python code and hands the response back to Nginx, which returns it to the client. For this, we’ll use Gunicorn.
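To make that handoff concrete, this is the WSGI interface Gunicorn speaks, as a hand-rolled minimal sketch – Django generates an equivalent application callable in wannajob/wsgi.py for you, so this is purely illustrative:

```python
# A minimal WSGI application: the callable Gunicorn invokes once per request.
# Django's wannajob/wsgi.py exposes an equivalent "application" object.
def application(environ, start_response):
    # environ carries the request data (path, headers Nginx forwarded, etc.)
    body = f"Hello from {environ['PATH_INFO']}".encode()
    start_response('200 OK', [('Content-Type', 'text/plain')])
    # WSGI responses are iterables of bytes
    return [body]
```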

The first thing to do is to update the Dockerfile. We will make use of the Docker Multi-Stage Build, see:

##############
# Base Stage
##############
FROM python:3.6-slim-stretch AS base

ENV PYTHONUNBUFFERED 1

WORKDIR /usr/local/app

RUN pip install --upgrade pip gunicorn

COPY requirements.txt /usr/local/app/requirements.txt
RUN pip install -r requirements.txt

CMD ["gunicorn", "--bind", "0.0.0.0:8000", "--log-level", "warning", "wannajob.wsgi:application"]


#####################
# Development Stage
#####################
FROM base AS development

RUN pip install --upgrade safety bandit pylint flake8 pytest coverage

# Auto-reload on code changes is only wanted during development
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "--log-level", "warning", "--reload", "wannajob.wsgi:application"]


####################
# Production Stage
####################
FROM base AS production

ENV DJANGO_SETTINGS_MODULE 'wannajob.settings.production'

COPY . /usr/local/app


#################
# Statics Stage
#################
FROM production AS statics

RUN python manage.py collectstatic --no-input --clear


############################
# Production (Nginx) Stage
############################
FROM nginx:1.17-alpine AS nginx

COPY ./docker/nginx/django.conf /etc/nginx/conf.d/default.conf

WORKDIR /usr/local/app

RUN mkdir static
COPY --chown=nginx:nginx --from=statics /usr/local/app/static /usr/local/app/static
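Both the collectstatic step and the Nginx COPY assume Django’s static files settings point at /usr/local/app/static. A sketch of the relevant production settings (the settings module name is an assumption):

```python
# Static files settings matching the container paths used above:
# /usr/local/app/static is where collectstatic writes its output and
# where the Nginx stage copies the files from.
STATIC_URL = '/static/'
STATIC_ROOT = '/usr/local/app/static'
```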

Then, create the docker/nginx/django.conf file:

server {
  listen 80;

  location = /favicon.ico {
    access_log    off;
    log_not_found off;
  }

  location /static/ {
    alias /usr/local/app/static/;
  }

  location / {
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_pass http://django:8000;
  }
}

And finally, update the docker-compose.yml, to use Nginx:

version: '3.7'

volumes:
  postgres-data:
    driver: local

services:
  django:
    build:
      context: .
      target: development
    volumes:
      - .:/usr/local/app
    depends_on:
      - database

  nginx:
    image: nginx:1.17-alpine
    volumes:
      - ./docker/nginx:/etc/nginx/conf.d
      - ./static:/usr/local/app/static
    ports:
      - 8000:80
    depends_on:
      - django

  database:
    image: postgres:11-alpine
    volumes:
      - postgres-data:/var/lib/postgresql
    environment:
      POSTGRES_USER: wannajob
      POSTGRES_PASSWORD: wannajob
      POSTGRES_DB: wannajob

Let’s rebuild the application:

$ docker-compose up -d --build

Now the project is served through Nginx locally – no different from what we’ll be doing in production later. Also, thanks to Docker multi-stage builds, we have all the necessary development tools locally, but we can leave them out when running in production, which results in a more lightweight image.
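As a side note, each stage can also be built on its own with plain docker build; for example (the tag names simply mirror the image shown in this project):

```shell
# Build a single stage; Docker builds only that stage and its ancestors,
# so the development-only tools never reach the production image.
docker build --target production -t ianrodrigues/wannawork-app:production .
docker build --target development -t ianrodrigues/wannawork-app:development .
```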

Have a look at the size of the development and production images:

$ docker images --filter reference=ianrodrigues/wannawork-app
REPOSITORY                   TAG                 IMAGE ID            CREATED             SIZE
ianrodrigues/wannawork-app   production          29a4de730436        3 minutes ago       196MB
ianrodrigues/wannawork-app   development         cf3a9bf4ca10        2 hours ago         223MB

Pretty cool, huh?!

As you can see, it’s simple to set up Docker for a project to run locally, and with a few modifications you can make the same setup production-ready. Thanks to Docker multi-stage builds, you can create flexible Dockerfiles that result in more lightweight images, which is well suited for production.

The source code used in this post is available on GitHub. If you have questions or suggestions, please let me know in the comments. In the next post, I’ll set up the environment on AWS using Terraform. Stay tuned!

Thanks for reading, and if you enjoyed it, feel free to share.
