It is cool that we can create encapsulated instantiations of the software services that we’ve created. In theory, we can publish these images to Docker repositories, and then launch the containers on any server we want. For example, our task in Chapter 10, Deploying Node.js Applications to Linux Servers, would be greatly simplified with Docker. We could simply install Docker Engine on the Linux host and then deploy our containers on that server, and not have to deal with all those scripts and the PM2 application.
But we haven’t properly automated the process. The promise was to use the Dockerized application for deployment on cloud services. In other words, we need to take all this learning and apply it to the task of simplifying deployment.
We’ve demonstrated that, with Docker, Notes can be built using four containers that have a high degree of isolation from each other and from the outside world.
There is a glaring problem: our process in the previous section was partly manual, partly automated. We created scripts to launch each portion of the system, which is good practice. However, we did not automate the entire process to bring up Notes and the authentication services, nor is this solution scalable beyond one machine.
Let's start with the last issue first: scalability. Within the Docker ecosystem, several Docker orchestrator services are available. An orchestrator automatically deploys and manages Docker containers over a group of machines. Some examples of Docker orchestrators are Docker Swarm, Kubernetes, CoreOS Fleet, and Apache Mesos. These are powerful systems that can automatically scale resources up or down as needed, move containers from one host to another, and more. We mention these systems for you to study further as your needs grow. In Chapter 12, Deploying a Docker Swarm to AWS EC2 with Terraform, we will build on the work we're about to do in order to deploy Notes in a Docker Swarm cluster built on AWS EC2 infrastructure.
Docker Compose (https://docs.docker.com/compose/overview/) will solve the other problems we've identified. It lets us easily define and run several Docker containers together as a complete application. It uses a YAML file, docker-compose.yml, to describe the containers, their dependencies, the virtual networks, and the volumes. While we'll be using it to describe deployment on a single host machine, Docker Compose can also be used for multi-machine deployments. Indeed, Docker Swarm directly uses Compose files to describe the services you launch in a swarm. In any case, learning about Docker Compose will give you a head start on understanding the other systems.
Before proceeding, ensure that Docker Compose is installed. If you’ve installed Docker for Windows or Docker for Mac, everything that is required is installed. On Linux, you must install it separately by following the instructions in the links provided earlier.
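A quick way to confirm the installation is to ask the tool for its version (the version string shown will of course vary by installation):

```shell
# Verify that the docker-compose command is on the PATH.
# This only prints a version banner; it does not need a running daemon.
docker-compose --version
```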
1. Docker Compose file for the Notes stack
We just talked about Docker orchestration services, but Docker Compose is not itself such a service. Instead, Docker Compose uses a specific YAML file structure to describe how to deploy Docker containers. With a Docker Compose file, we can describe one or more containers, networks, and volumes involved in launching a Docker-based service.
Let's start by creating a directory, compose-local, as a sibling to the users and notes directories. In that directory, create a file named docker-compose.yml:
version: '3'
services:
  db-userauth:
    image: "mysql/mysql-server:8.0"
    container_name: db-userauth
    command: [ "mysqld",
        "--character-set-server=utf8mb4",
        "--collation-server=utf8mb4_unicode_ci",
        "--bind-address=0.0.0.0",
        "--socket=/tmp/mysql.sock" ]
    expose:
      - "3306"
    networks:
      - authnet
    volumes:
      - db-userauth-data:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: "w0rdw0rd"
      MYSQL_USER: userauth
      MYSQL_PASSWORD: userauth
      MYSQL_DATABASE: userauth
  svc-userauth:
    build: ../users
    container_name: svc-userauth
    depends_on:
      - db-userauth
    networks:
      - authnet
    # DO NOT EXPOSE THIS PORT ON PRODUCTION
    ports:
      - "5858:5858"
    restart: always
  db-notes:
    image: "mysql/mysql-server:8.0"
    container_name: db-notes
    command: [ "mysqld",
        "--character-set-server=utf8mb4",
        "--collation-server=utf8mb4_unicode_ci",
        "--bind-address=0.0.0.0",
        "--socket=/tmp/mysql.sock" ]
    expose:
      - "3306"
    networks:
      - frontnet
    volumes:
      - db-notes-data:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: "w0rdw0rd"
      MYSQL_USER: notes
      MYSQL_PASSWORD: notes12345
      MYSQL_DATABASE: notes
  svc-notes:
    build: ../notes
    container_name: svc-notes
    depends_on:
      - db-notes
    networks:
      - frontnet
    ports:
      - "3000:3000"
    restart: always
networks:
  frontnet:
    driver: bridge
  authnet:
    driver: bridge
volumes:
  db-userauth-data:
  db-notes-data:
That's the description of the entire Notes deployment. It sits at a fairly high level of abstraction, roughly equivalent to the options in the command-line tools we've used so far. It's fairly succinct and self-explanatory, and, as we'll see, the docker-compose command makes these files a convenient way to manage Docker services.
The version line says that this is a version 3 Compose file. The version number is inspected by the docker-compose command so that it can correctly interpret its content. The full documentation is worth reading at https://docs.docker.com/compose/compose-file/.
There are three major sections used here: services, volumes, and networks. The services section describes the containers being used, the networks section describes the networks, and the volumes section describes the volumes. The content of each section matches the containers we created earlier. The configuration we’ve already dealt with is all here, just rearranged.
There are the two database containers—db-userauth and db-notes—and the two service containers—svc-userauth and svc-notes. The service containers are built from a Dockerfile located in the directory named in the build attribute. The database containers are instantiated from images downloaded from Docker Hub. Both correspond directly to what we did previously, using the docker run command to create the database containers and using docker build to generate the images for the services.
The container_name attribute is equivalent to the --name option and specifies a user-friendly name for the container. Docker uses the container name as the container's hostname, so we must specify it in order to get Docker-style service discovery.
The networks attribute lists the networks to which this container must be connected and is exactly equivalent to the --net argument. Even though the docker command doesn't support multiple --net options, we can list multiple networks in the Compose file. In this case, the networks are bridge networks. As we did earlier, the networks themselves must be created separately and, in a Compose file, this is done in the networks section.
The ports attribute declares the ports that are to be published and the mapping to container ports. In the ports declaration, we have two port numbers, the first being the published port number and the second being the port number inside the container. This is exactly equivalent to the -p option used earlier.
The depends_on attribute lets us control the start up order. A container that depends on another will wait to start until the depended-on container is running.
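Be aware that depends_on waits only for the depended-on container to start, not for the service inside it (here, MySQL) to be ready to accept connections. Newer Compose versions support a stricter form; the following is a sketch only, assuming your Compose version accepts the condition syntax:

```yaml
# Sketch: the "condition" form of depends_on is not accepted by every
# version 3 Compose file format. The mysql/mysql-server image ships a
# healthcheck, which is why docker-compose ps reports "Up (healthy)".
svc-notes:
  depends_on:
    db-notes:
      condition: service_healthy
```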
The volumes attribute describes mappings of a container directory to a host directory. In this case, we've defined two volume names, db-userauth-data and db-notes-data, and then used them for the volume mapping. However, when we deploy to Docker Swarm on AWS EC2, we'll need to change how this is implemented.
Notice that we haven’t defined a host directory for the volumes. Docker will assign a directory for us, which we can learn about by using the docker volume inspect command.
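For example, Compose prefixes volume names with the project name, which defaults to the directory containing docker-compose.yml, so on this laptop the volume would likely be named compose-local_db-notes-data (a hypothetical session; the exact name depends on your project name):

```shell
# List the volumes Compose created, then inspect one to find the
# host directory Docker assigned (the Mountpoint field).
docker volume ls
docker volume inspect compose-local_db-notes-data
```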
The restart attribute controls what happens if or when the container dies. When a container starts, it runs the program named in the CMD instruction, and when that program exits, the container exits. But what if that program is meant to run forever; shouldn’t Docker know that it should restart the process? We could use a background process supervisor, such as Supervisord or PM2. However, the Docker restart option takes care of it.
The restart attribute can take one of the following four values:
- no: Do not restart.
- on-failure:count: Restart on failure, up to count times.
- always: Always restart.
- unless-stopped: Start the container unless it was explicitly stopped.
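The count form looks like this (a sketch; five retries is an arbitrary choice for illustration):

```yaml
svc-notes:
  # Restart a crashed container up to five times, then give up
  restart: on-failure:5
```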
In this section, we’ve learned how to build a Docker Compose file by creating one that describes the Notes application stack. With that in hand, let’s see how to use this tool to launch the containers.
2. Building and running the Notes application with Docker Compose
With the Docker Compose CLI tool, we can manage any set of Docker containers described in a docker-compose.yml file. We can build the containers, bring them up and down, view the logs, and more. On Windows, we can run the commands in this section unchanged.
Our first task is to create a clean slate by running these commands:
$ docker stop db-notes svc-userauth db-auth svc-notes
db-notes
svc-userauth
db-auth
svc-notes
$ docker rm db-notes svc-userauth db-auth svc-notes
db-notes
svc-userauth
db-auth
svc-notes
We first needed to stop and delete any existing containers left over from our previous work. We can also use the scripts in the frontnet and authnet directories to do this. docker-compose.yml used the same container names, so we need the ability to launch new containers with those names.
To get started, use this command:
$ docker-compose build
db-userauth uses an image, skipping
db-notes uses an image, skipping
Building svc-userauth
Step 1/12 : FROM node:13.8
 ---> 07e774543bdf
...
Successfully built f714b877dbec
Successfully tagged compose-local_svc-userauth:latest
Building svc-notes
Step 1/26 : FROM node:13.8
 ---> 07e774543bdf
...
Successfully built 36b358e3dd0e
Successfully tagged compose-local_svc-notes:latest
This builds the images listed in docker-compose.yml. Note that the image names we end up with all start with compose-local, which is the name of the directory containing the file. Because this is the equivalent of running docker build in each of the directories, it only builds the images.
Having built the containers, we can start them all at once using either docker-compose up or docker-compose start:
$ docker-compose start
Starting db-userauth … done
Starting svc-userauth … done
Starting db-notes … done
Starting svc-notes … done
$ docker-compose stop
Stopping svc-notes … done
Stopping svc-userauth … done
Stopping db-notes … done
Stopping db-userauth … done
We can use docker-compose stop to shut down the containers. With docker-compose start, the containers run in the background.
We can also run docker-compose up to get a different experience:
$ docker-compose up
Recreating db-notes … done
Starting db-userauth … done
Starting svc-userauth … done
Recreating svc-notes … done
Attaching to db-userauth, db-notes, svc-userauth, svc-notes
db-userauth | [Entrypoint] MySQL Docker Image 8.0.19-1.1.15
db-userauth | [Entrypoint] Starting MySQL 8.0.19-1.1.15
db-notes | [Entrypoint] MySQL Docker Image 8.0.19-1.1.15
db-notes | [Entrypoint] Starting MySQL 8.0.19-1.1.15
If necessary, docker-compose up will first build the containers. In addition, it keeps the containers all in the foreground so that we can see the logging. It combines the log output for all the containers together in one output, with the container name shown at the beginning of each line. For a multi-container system such as Notes, this is very helpful.
We can check the status using this command:
$ docker-compose ps
Name          Command                         State          Ports
db-notes      /entrypoint.sh mysqld --ch ...  Up (healthy)   3306/tcp, 33060/tcp
db-userauth   /entrypoint.sh mysqld --ch ...  Up (healthy)   3306/tcp, 33060/tcp
svc-notes     docker-entrypoint.sh /bin/ ...  Up             0.0.0.0:3000->3000/tcp
svc-userauth  docker-entrypoint.sh /bin/ ...  Up             0.0.0.0:5858->5858/tcp
This is related to running docker ps, but the presentation is a little different and more compact.
In docker-compose.yml, we insert the following declaration for svc-userauth:
# DO NOT EXPOSE THIS PORT ON PRODUCTION
ports:
  - "5858:5858"
This means that the REST service port for svc-userauth was published. Indeed, in the status output, we see that the port is published. That violates our security design, but it does let us run the tests with users/cli.mjs from our laptop. That is, we can add users to the database as we’ve done so many times before.
This security violation is acceptable so long as it stays on our laptop. The compose-local directory is named specifically to be used with Docker Compose on our laptop.
Alternatively, we can run commands inside the svc-userauth container just as before:
$ docker exec -it svc-userauth node cli.mjs list-users
[
  ...
]
$ docker-compose exec svc-userauth node cli.mjs list-users
[
  ...
]
We started the Docker containers using docker-compose, and we can use the docker-compose command to interact with the containers. In this case, we demonstrated using both the docker-compose and docker commands to execute a command inside one of the containers. While there are slight differences in the command syntax, it’s the same interaction with the same results.
Another test is to go into the containers and explore:
$ docker-compose exec svc-notes bash
…
$ docker-compose exec svc-userauth bash
…
From there, we can try pinging each of the containers to see which containers can be reached. That will serve as a simplistic security audit to ensure that what we’ve created fits the security model we desired.
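A minimal audit might look like the following (note that ping is not installed in every image; if it is missing, `getent hosts <name>` at least verifies which hostnames resolve on the attached networks):

```shell
# From svc-userauth: db-userauth should respond. If db-notes also
# responds, the container has more network access than we intended.
docker-compose exec svc-userauth ping -c 1 db-userauth
docker-compose exec svc-userauth ping -c 1 db-notes
```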
While doing this, we find that svc-userauth can ping every container, including db-notes. This violates the security plan and has to be changed.
Fortunately, this is easy to fix. Simply by changing the configuration, we can add a new network named svcnet to docker-compose.yml:
services:
  ..
  svc-userauth:
    ..
    networks:
      - authnet
      - svcnet
  ..
  svc-notes:
    ..
    networks:
      - frontnet
      - svcnet
  ..
networks:
  frontnet:
    driver: bridge
  authnet:
    driver: bridge
  svcnet:
    driver: bridge
svc-userauth is no longer connected to frontnet, which is how we could ping db-notes from svc-userauth. Instead, svc-userauth and svc-notes are both connected to a new network, svcnet, which is meant to connect the service containers. Therefore, both service containers have exactly the required access to match the goals outlined at the beginning.
That’s an advantage of Docker Compose. We can quickly reconfigure the system without rewriting anything other than the docker-compose.yml configuration file. Furthermore, the new configuration is instantly reflected in a file that can be committed to our source repository.
When you’re done testing the system, simply type CTRL + C in the terminal:
^CGracefully stopping… (press Ctrl+C again to force)
Stopping db-userauth … done
Stopping userauth … done
Stopping db-notes … done
Stopping notes … done
As shown here, this stops the whole set of containers. Occasionally, Ctrl+C will instead return you to the shell while the containers are still running. In that case, you'll have to use an alternative method to shut down the containers:
$ docker-compose down
Stopping db-userauth … done
Stopping userauth … done
Stopping db-notes … done
Stopping notes … done
The docker-compose subcommands start, stop, and restart all serve as ways to manage the containers as background tasks. The default mode for the docker-compose up command is, as we've seen, to start the containers in the foreground.
However, we can also run docker-compose up with the -d option, which says to detach the containers from the terminal to run in the background.
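In detached mode, docker-compose logs lets us view the output we would otherwise have seen in the foreground, and docker-compose down tears the stack back down:

```shell
docker-compose up -d               # build if needed, then run detached
docker-compose logs -f svc-notes   # follow one container's log output
docker-compose down                # stop and remove containers and networks
```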
We’re getting closer to our end goal. In this section, we learned how to take the Docker containers we’ve designed and create a system that can be easily brought up and down as a unit by running the docker-compose command.
While preparing to deploy this to Docker Swarm on AWS EC2, we will find a horizontal scaling issue that we can fix on our laptop. With Docker Compose files, it is fairly easy to run multiple svc-notes instances to see whether we can scale Notes for higher traffic loads. Let's take a look at that before deploying to the swarm.
Source: Herron David (2020), Node.js Web Development: Server-side web development made easy with Node 14 using practical examples, Packt Publishing.