How to Dockerize Django and Postgres

Docker is an invaluable tool that makes setting up your local development environment, continuous integration, and deployment a breeze. It is best set up early in an application’s life, but it is never too late to adopt. With the help of Docker Compose, we are going to set up a Django application so that it can install and deploy itself on any machine with a single command.

It should be noted that Docker Compose has a quickstart tutorial on this very subject. However, it does not provide a solution for running migrations automatically, and I think the solution I provide here is cleaner and more extensible.

Getting Started

For this tutorial, we are going to be Dockerizing the application we built in Combining Inherited Django Forms in the Same FormView. If you don’t already have them installed, install Docker and Docker Compose for your platform. These are the only two dependencies you need to deploy our application.

If you would like to follow along, you can check out the code on GitHub and check out this commit as a starting point.

Configuring Django for Postgres

For this tutorial, we are going to be using the Postgres Docker image on DockerHub with no modifications. We just need to ensure that the following database configuration is present in our Django project’s settings.py.

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'postgres',
        'USER': 'postgres',
        'HOST': 'db',
        'PORT': '5432',
    }
}

This tells Django to use the postgres database and the postgres user, both of which the postgres image creates automatically by default. This is the easiest option, as we don’t have to extend the postgres Docker image or provide additional environment variables. However, if you don’t want this behavior or need to perform some other custom initialization, you can read about how to do so on the Postgres DockerHub page linked above.
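If you later want to override these values per environment without editing settings.py, a common pattern is to read them from environment variables while keeping the same defaults. A minimal sketch (the DB_* variable names are my own, not part of this tutorial’s code):

```python
import os

# Hypothetical variant of the settings.py block above: read connection
# details from environment variables, falling back to the defaults the
# postgres image provides automatically. The DB_NAME/DB_USER/DB_HOST/
# DB_PORT names are illustrative.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': os.environ.get('DB_NAME', 'postgres'),
        'USER': os.environ.get('DB_USER', 'postgres'),
        'HOST': os.environ.get('DB_HOST', 'db'),
        'PORT': os.environ.get('DB_PORT', '5432'),
    }
}
```

With nothing set in the environment, this behaves exactly like the hard-coded block above.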

Creating the Application Image

Next, we need to create a custom image for our application. For this, we need a Dockerfile. Create the directory docker/app at the root of your project and create two files in it: Dockerfile and startup.sh. Dockerfile should look like the following:

FROM python:3.7.1

COPY requirements.txt /code/
WORKDIR /code/
RUN pip install -r requirements.txt

COPY . /code/

CMD [ "./docker/app/startup.sh" ]

Let’s break this down step by step.

FROM python:3.7.1

This tells Docker we want to extend the python:3.7.1 base image. We use this image because we require Python to run Django. Be sure to specify a specific version in the tag (in our case, 3.7.1). It is always a good idea to pin specific versions for application dependencies so that upgrades don’t unexpectedly break things.

COPY requirements.txt /code/
WORKDIR /code/
RUN pip install -r requirements.txt

Here, we copy requirements.txt into a new /code/ directory and make /code/ the working directory for future commands. Then, we install the application requirements. We copy only requirements.txt here to leverage the Docker cache and avoid having to reinstall these requirements each time a code change occurs.

COPY . /code/

This block copies the directory specified by the build context (on the host) into the /code/ directory in the image. This is how the project code gets into the image.
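One caveat: COPY . /code/ copies everything in the build context, including things like .git and local virtual environments, which bloats the image and needlessly invalidates the Docker cache. A .dockerignore file at the project root keeps those out; a sketch (entries depend on your project):

```text
# .dockerignore (illustrative; adjust to your project)
.git
__pycache__/
*.pyc
```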

CMD [ "./docker/app/startup.sh" ]

Lastly, CMD defines behavior that occurs when the container starts up. In this case, we want to run the startup.sh script. That script should look like the following:

#!/bin/bash

python3 manage.py runserver 0.0.0.0:8000

This runs the Django development server when the container starts up. Be sure to make startup.sh executable (on Linux, chmod +x docker/app/startup.sh) or you will run into problems when running your container.

Creating the Migration Image

We could simply run migrations in our application image before starting the server. That would mean less code and a smaller memory footprint. However, a Docker container should do only one thing in order to maintain maximum flexibility. For this reason, we are going to create a separate image for running Django migrations against our database. This way, we aren’t forced to run migrations every time the application container starts.

Create another subdirectory in docker named db-init and create another Dockerfile and startup.sh in it, as well as wait-for-postgres.sh. Don’t forget to set the appropriate permissions on both scripts like before.

The Dockerfile is somewhat similar to the one created before:

FROM python:3.7.1

RUN apt-get update && \
    apt-get install -y postgresql-client && \
    rm -rf /var/lib/apt/lists/*

COPY requirements.txt /code/
WORKDIR /code/
RUN pip install -r requirements.txt

COPY . /code/

CMD [ "./docker/db-init/wait-for-postgres.sh", "db", "./docker/db-init/startup.sh" ]

The main difference here is that we’re installing and using the Postgres client to ensure that Postgres is up and accepting connections before we try to run migrations. For this, we’ll also need to fill out the wait-for-postgres.sh script as follows:

#!/bin/sh

set -e

host="$1"
shift
cmd="$@"

until PGPASSWORD=$POSTGRES_PASSWORD psql -h "$host" -U "postgres" -c '\q'; do
  >&2 echo "Postgres is unavailable - sleeping"
  sleep 1
done

>&2 echo "Postgres is up - executing command"
exec $cmd

This script takes a host as its first argument, treats the remaining arguments as a command, and only runs that command once a psql connection to the host succeeds.
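The same wait-loop idea can be sketched in Python for readers who prefer it (an illustrative alternative, not the tutorial’s script; note that a bare TCP check only proves the port is open, while the psql check above also proves Postgres is accepting connections):

```python
import socket
import time

def wait_for_port(host, port, timeout=30.0, interval=1.0):
    """Keep trying to open a TCP connection to host:port, sleeping between
    attempts, until it succeeds or the timeout elapses.
    Returns True on success, False on timeout."""
    deadline = time.monotonic() + timeout
    while True:
        try:
            # Connection succeeded: the port is open, so stop waiting.
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            if time.monotonic() >= deadline:
                return False
            time.sleep(interval)
```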

Next, the actual initialization script, startup.sh:

#!/bin/bash

python3 manage.py migrate --no-input

For this simple application, we’re only running migrations, but other database initialization steps could be placed here as well.
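For example, a project might also load fixtures or collect static files at this point. A hypothetical expanded version of the script (the commented-out commands are illustrative, not part of this tutorial) might look like:

```bash
#!/bin/bash
set -e

python3 manage.py migrate --no-input
# Illustrative extras -- only if your project needs them:
# python3 manage.py loaddata initial_data.json
# python3 manage.py collectstatic --no-input
```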

It may seem wasteful to have a container just for initializing the database, but as a project grows larger and more complex, it becomes increasingly valuable to keep initialization overhead out of the main application container.

Orchestrating the Containers

Now we have a total of three containers that need to be initialized and linked to run our application. It is an arduous process to build the images, run the containers, and link them, involving long commands that are tedious to type out and remember. Docker Compose allows us to create a configuration file and handle this entire process with short, convenient commands.

At the root of the project directory, create docker-compose.yml. It should look like the following:

version: '3'

services:
  db:
    image: postgres:11.1
  db-init:
    build:
      context: .
      dockerfile: docker/db-init/Dockerfile
    depends_on:
      - db
  app:
    build:
      context: .
      dockerfile: docker/app/Dockerfile
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db

This file defines three “services” (AKA containers): db, which pulls the official postgres:11.1 image, and db-init and app, which are both dependent on db. Let’s take a closer look at the latter two services.

  db-init:
    build:
      context: .
      dockerfile: docker/db-init/Dockerfile
    depends_on:
      - db

The build object defines how the docker build command is executed for this image. context defines the build context for the image: the set of files the Docker image can see while it is being built. dockerfile defines the location of the Dockerfile; since Docker looks for a file named Dockerfile at the root of the build context by default, we need to tell it where to look instead. Lastly, the depends_on configuration ensures that db-init doesn’t start until db has started. You can read more about startup order, and the need for wait-for-postgres.sh, here.
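One related note: wait-for-postgres.sh reads $POSTGRES_PASSWORD, which nothing in this compose file sets (the pinned postgres:11.1 image permits connections without a password by default). If you do want a password, the postgres image documents a POSTGRES_PASSWORD environment variable you could pass to both services; a sketch, not part of the tutorial’s file:

```yaml
  db:
    image: postgres:11.1
    environment:
      - POSTGRES_PASSWORD=example-password
  db-init:
    build:
      context: .
      dockerfile: docker/db-init/Dockerfile
    environment:
      - POSTGRES_PASSWORD=example-password
    depends_on:
      - db
```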

  app:
    build:
      context: .
      dockerfile: docker/app/Dockerfile
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db

The app service defines a couple of extra configurations. The volumes configuration maps a directory on the host system to a directory in the container, so a change on either side is immediately visible on the other. For Django specifically, this means any code change triggers the development server’s auto-reload inside the container, just as it would outside of one. Lastly, the ports configuration exposes and maps port 8000 in the container to port 8000 on the host system, which is required to access the application.

Deploying

Now that we’re all set up, you can simply run docker-compose up from the project’s root directory to build the images and deploy the containers appropriately. Once it finishes, you should be able to access the application at localhost:8000/job-applications/submit/.

Here are a few other useful Docker Compose commands that I use frequently.

  • docker-compose up -d: Start containers detached (in the background).
  • docker-compose down: Stop and remove containers.
  • docker-compose build --no-cache: Build images without using the cache. Useful for development purposes.
  • docker-compose ps: View containers (active or inactive) started with up.

Of course, Docker Compose creates Docker containers and images, so any Docker commands will work as well.

Closing Thoughts

Now that you’ve Dockerized your application with Docker Compose, you can easily do the following:

  • Set up a new development environment
  • Implement or improve continuous integration or deployment
  • Deploy your application to any machine

This gives your application the flexibility and mobility it needs so that you can focus on improving it.

For full example code, check out the repository on GitHub.
