Deploying Node.js: Setting up the user authentication service in Docker

With all that theory spinning around in our heads, it’s time to do something practical.

Let’s start by setting up the user authentication service. We’ll call this AuthNet, and it comprises a MySQL instance to store the user database, the authentication server, and a private subnet to connect them.

It is best for each container to focus on providing one service. Having one service per container is a useful architectural decision because we can optimize each container for a specific purpose. Another rationale has to do with scaling: each service has different requirements to satisfy the traffic it serves. In our case, depending on the traffic load, we might need a single MySQL instance and ten user authentication instances.

The Docker environment lets us not only define and instantiate Docker containers but also the networking connections between containers. That’s what we meant by a private subnet earlier. With Docker, we not only manage containers, but we can also configure subnets, data storage services, and more.

In the next few sections, we’ll carefully dockerize the user authentication service infrastructure. We’ll learn how to set up a MySQL container for Docker and launch a Node.js service in Docker.

Let’s start by learning how to launch a MySQL container in Docker.

1. Launching a MySQL container in Docker

Among the publicly available Docker images, there are over 11,000 available for MySQL. Fortunately, the image provided by the MySQL team, mysql/mysql-server, is easy to use and configure, so let's use that.

A Docker image name can be specified along with a tag, which is usually the software version number. In this case, we'll use mysql/mysql-server:8.0, where mysql/mysql-server is the image repository and name, and 8.0 is the tag. The MySQL 8.x release train is the current version as of the time of writing. As with many projects, the MySQL project tags the Docker images with the version number.

Download the image, as follows:

$ docker pull mysql/mysql-server:8.0
8.0.13: Pulling from mysql/mysql-server
e64f6e679e1a: Pull complete
799d60100a25: Pull complete
85ce9d0534d0: Pull complete
d3565df0a804: Pull complete
Digest: sha256:59a5854dca16488305aee60c8dea4d88b68d816aee627de022b19d9bead48d04
Status: Downloaded newer image for mysql/mysql-server:8.0.13
docker.io/mysql/mysql-server:8.0.13

The docker pull command retrieves an image from a Docker repository and is conceptually similar to the git pull command, which retrieves changes from a git repository.

This downloaded four image layers in total because this image is built on top of three other images. We’ll see later how that works when we learn how to build a Dockerfile.

We can query which images are stored on our laptop with the following command:

$ docker images
REPOSITORY           TAG      IMAGE ID       CREATED        SIZE
mysql/mysql-server   8.0      716286be47c6   8 days ago     381MB
hello-world          latest   bf756fb1ae65   4 months ago   13.3kB

There are two images currently available—the mysql-server image we just downloaded and the hello-world image we ran earlier.

We can remove unwanted images with the following command:

$ docker rmi hello-world
Untagged: hello-world:latest
Untagged: hello-world@sha256:8e3114318a995a1ee497790535e7b88365222a21771ae7e53687ad76563e8e76
Deleted: sha256:bf756fb1ae65adf866bd8c456593cd24beb6a0a061dedf42b26a993176745f6b
Deleted: sha256:9c27e219663c25e0f28493790cc0b88bc973ba3b1686355f221c38a36978ac63

Notice that the actual delete operation works with the SHA256 image identifier.

A container can be launched with the image, as follows:

$ docker run --name=mysql --env MYSQL_ROOT_PASSWORD=w0rdw0rd mysql/mysql-server:8.0

[Entrypoint] MySQL Docker Image 8.0.13-1.1.8
[Entrypoint] Initializing database
2020-02-17T00:08:15.685715Z 0 [System] [MY-013169] [Server] /usr/sbin/mysqld (mysqld 8.0.13) initializing of server in progress as process 25
2020-02-17T00:08:44.490724Z 0 [System] [MY-013170] [Server] /usr/sbin/mysqld (mysqld 8.0.13) initializing of server has completed
[Entrypoint] Database initialized
2020-02-17T00:08:48.625254Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.13) starting as process 76
…
[Entrypoint] MySQL init process done. Ready for start up.
[Entrypoint] Starting MySQL 8.0.13-1.1.8
2020-02-17T00:09:14.611614Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.13) starting as process 1

The docker run command takes an image name, along with various arguments, and launches it as a running container.

We started this service in the foreground, and there is a tremendous amount of output as MySQL initializes its container. Because of the --name option, the container name is mysql. Using an environment variable, we tell the container to initialize the root password.

Since we have a running server, let’s use the MySQL CLI to make sure it’s actually running. In another window, we can run the MySQL client inside the container, as follows:

$ docker exec -it mysql mysql -u root -p
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 14
Server version: 8.0.13 MySQL Community Server - GPL

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

The docker exec command lets you run programs inside the container. The -it option says the command is run interactively on an assigned terminal. In this case, we used the mysql command to run the MySQL client so that we could interact with the database. Substitute bash for mysql, and you will land in an interactive bash command shell.
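For example, the following command drops you into a root shell inside the mysql container (a sketch; the exact shell prompt will vary):

$ docker exec -it mysql bash
bash-4.2#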

This mysql command instance is running inside the container. The container is configured by default to not expose any external ports, and it has a default my.cnf file.

Docker containers are meant to be ephemeral, created and destroyed as needed, while databases are meant to be permanent, with lifetimes sometimes measured in decades. A very important discussion on this point and how it applies to database containers is presented in the next section.

It is cool that we can easily install and launch a MySQL instance. However, there are several considerations to be made:

  • Access to the database from other software, specifically from another container
  • Storing the database files outside the container for a longer lifespan
  • Custom configuration, because database admins love to tweak the settings
  • A way to connect the MySQL container to the AuthNet network that we’ll be creating

Before proceeding, let’s clean up. In a terminal window, type the following:

$ docker stop mysql
mysql
$ docker rm mysql
mysql

This closes out and cleans up the container we created. To reiterate the point made earlier, the database in that container went away. If that database contained critical information, you just lost it, with no chance of recovering the data.

Before moving on, let’s discuss how this impacts the design of our services.

2. The ephemeral nature of Docker containers

Docker containers are designed to be easy to create and easy to destroy. In the course of kicking the tires, we’ve already created and destroyed three containers.

In the olden days (a few years ago), setting up a database required the provisioning of specially configured hardware, hiring a database admin with special skills, and carefully optimizing everything for the expected workload. In the space of a few paragraphs, we just instantiated and destroyed three database instances. What a brave new world this is!

In terms of databases and Docker containers, the database is relatively eternal, and the Docker container is ephemeral. Databases are expected to last for years, or perhaps even decades. In computer years, that’s practically immortal. By contrast, a Docker container that is used and then immediately thrown away is merely a brief flicker of time compared to the expected lifetime of a database.

Those containers can be created and destroyed quickly, and this gives us a lot of flexibility. For example, orchestration systems, such as Kubernetes or AWS ECS, can automatically increase or decrease the number of containers to match traffic volume, restart containers that crash, and more.

But where does the data in a database container live? With the commands we ran in the previous section, the database data directory lives inside the container. When the container was destroyed, the data directory was destroyed, and any data in our database was vaporized. Obviously, this is not compatible with the life cycle requirements of the data we store in a database.

Fortunately, Docker allows us to attach a variety of mass storage services to a Docker container. The container itself might be ephemeral, but we can attach eternal data to the ephemeral container. It’s just a matter of configuring the database container so that the data directory is on the correct storage system.
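For example, Docker named volumes are one such storage service. The following is a minimal sketch, assuming a volume named userauth-data; in the next sections, we will instead use a bind mount of a host directory, which achieves the same goal:

$ docker volume create userauth-data
$ docker run --mount type=volume,src=userauth-data,dst=/var/lib/mysql … mysql/mysql-server:8.0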

Enough theory; let’s now do something. Specifically, let’s create the infrastructure for the authentication service.

3. Defining the Docker architecture for the authentication service

Docker supports the creation of virtual bridge networks between containers. Remember that a Docker container has many of the features of an installed Linux OS. Each container can have its own IP address and exposed ports. Docker supports the creation of what amounts to a virtual Ethernet segment, called a bridge network.

These networks live solely within the host computer and, by default, are not reachable by anything outside the host computer.

A Docker bridge network, therefore, has strictly limited access.

Any Docker containers attached to a bridge network can communicate with other containers attached to that network and, by default, that network does not allow external traffic. The containers find each other by hostname, and Docker includes an embedded DNS server to set up the hostnames required. That DNS server is configured to not require dots in domain names, meaning the DNS/hostname of each container is simply the container name. We’ll find later that the hostname of the container is actually container-name.network-name, and that the DNS configuration lets you skip using the network-name portion of the hostname. This policy of using hostnames to identify containers is Docker’s implementation of service discovery.
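For instance, anticipating the containers we will create later in this chapter, both of the following commands, run against a container attached to authnet, would resolve the database container (a sketch, not a transcript of a real session):

$ docker exec -it svc-userauth ping db-userauth
$ docker exec -it svc-userauth ping db-userauth.authnet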

Create a directory named authnet as a sibling to the users and notes directories. We’ll be working on authnet in that directory.

In that directory, create a file—package.json—which we’ll use solely to record commands for managing AuthNet:

{
  "name": "authnet",
  "version": "1.0.0",
  "description": "Scripts to define and manage AuthNet",
  "scripts": {
    "build-authnet": "docker network create --driver bridge authnet"
  },
  "license": "ISC"
}

We’ll be adding more scripts to this file. The build-authnet command builds a virtual network using the bridge driver, as we just discussed. The name for this network is authnet.

Having created authnet, we can attach containers to it so that the containers can communicate with one another.

Our goal for the Notes application stack is to use private networking between containers to implement a security firewall around the containers. The containers will be able to communicate with one another, but the private network is not reachable by any other software and is, therefore, more or less safe from intrusion.

Type the following command:

$ npm run build-authnet

> authnet@1.0.0 build-authnet /home/david/Chapter10/authnet
> docker network create --driver bridge authnet

876232c4f2268c5fb192702cd2a339036dc2e74fe777d863620dded498fc56d0
$ docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
876232c4f226   authnet   bridge    local

This creates a Docker bridge network. The long hex string is the full identifier for this network. The docker network ls command lists the existing networks in the current Docker system. In addition to the short hex ID, the network has the name we specified.

Look at details regarding the network with this command:

$ docker network inspect authnet

… much JSON output 

At the moment, this won’t show any containers attached to authnet. The output shows the network name, the IP range of this network, the default gateway, and other useful network configuration information. Since nothing is connected to the network yet, let’s also see how to remove a network before we start building the required containers:

$ docker network rm authnet
authnet
$ docker network ls
NETWORK ID   NAME   DRIVER   SCOPE

This command lets us remove a network from the Docker system. However, since we need this network, rerun npm run build-authnet to recreate it.

We have explored setting up a bridge network, and so our next step is to populate it with a database server.

4. Creating the MySQL container for the authentication service

Now that we have a network, we can start connecting containers to that network. In addition to attaching the MySQL container to a private network, we’ll be able to control the username and password used with the database, and we’ll also give it external storage. That will correct the issues we named earlier.

To create the container, we can run the following command:

$ docker run --name db-userauth \
    --env MYSQL_USER=userauth \
    --env MYSQL_PASSWORD=userauth \
    --env MYSQL_DATABASE=userauth \
    --mount type=bind,src=`pwd`/userauth-data,dst=/var/lib/mysql \
    --network authnet -p 3306:3306 \
    --env MYSQL_ROOT_PASSWORD=w0rdw0rd \
    mysql/mysql-server:8.0 \
    --bind_address=0.0.0.0 \
    --socket=/tmp/mysql.sock

This does several useful things all at once. It initializes an empty database configured with the named users and passwords, it mounts a host directory as the MySQL data directory, it attaches the new container to authnet, and it exposes the MySQL port to connections from outside the container.

The docker run command is used only the first time the container is started: it combines creating the container from the image with running it for the first time. With the MySQL container, that first run is when the database is initialized, so the options passed to this docker run command are meant to tailor the database initialization.

The --env option sets environment variables inside the container. The scripts driving the MySQL container look to these environment variables to determine the user IDs, passwords, and database to create.

In this case, we configured a password for the root user, and we configured a second user—userauth—with a matching password and database name.

The --network option attaches the container to the authnet network.

The -p option exposes a TCP port from inside the container so that it is visible outside the container. By default, containers do not expose any TCP ports. This means we can be very selective about what to expose, limiting the attack surface for any miscreants seeking to gain illicit access to the container.

The --mount option is meant to replace the older --volume option. It is a powerful tool for attaching external data storage to a container. In this case, we are attaching a host directory, userauth-data, to the /var/lib/mysql directory inside the container. This ensures that the database files are stored outside the container and will last beyond the lifetime of the container. For example, while creating this example, we deleted this container several times to fine-tune the command line, and it kept using the same data directory.

We should also mention that the --mount option requires the src= parameter to be a full pathname to the file or directory that is mounted. We are using `pwd` to determine the full path. However, this is, of course, specific to Unix-like OSes. If you are on Windows, the command should be run in PowerShell, where you can use the $PSScriptRoot variable. Alternatively, you can hardcode an absolute pathname.

It is possible to inject a custom my.cnf file into the container by adding this option to the docker run command:

--mount type=bind,src=`pwd`/my.cnf,dst=/etc/my.cnf

In other words, Docker lets you mount not only a directory but also a single file. The command line follows this pattern:

$ docker run \
    docker run options \
    mysql/mysql-server:8.0 \
    mysqld options

So far, we have talked about the options for the docker run command. Those options configure the characteristics of the container. Next on the command line is the image name, in this case, mysql/mysql-server:8.0. Any command-line tokens appearing after the image name are passed into the container. In this case, they are interpreted as arguments to the MySQL server, meaning we can configure this server using any of the extensive set of command-line options it supports. While we could mount a my.cnf file in the container, most configuration settings can be achieved through these options instead.

The first of these options, --bind_address, tells the server to listen for connections from any IP address.

The second, --socket=/tmp/mysql.sock, serves two purposes. One is security, to ensure that the MySQL Unix domain socket is accessible only from inside the container. By default, the scripts inside the MySQL container put this socket in the /var/lib/mysql directory, and when we attach the data directory, the socket is suddenly visible from outside the container.

On Windows, if this socket were in /var/lib/mysql, attaching a host data directory to the container would put the socket in a Windows directory. Since Windows does not support Unix domain sockets, the MySQL container would mysteriously fail to start, giving a misleadingly obtuse error message. The --socket option ensures that the socket is instead on a filesystem that supports Unix domain sockets, avoiding this failure.

When experimenting with different options, it is important to delete the mounted data directory each time you recreate the container to try a new setting. If the MySQL container sees a populated data directory, it skips most of the container initialization, and the new settings will not take effect. A common mistake when trying different MySQL configuration options is to rerun docker run without deleting the data directory: since the initialization doesn’t run, nothing will have changed, and it won’t be clear why the behavior isn’t changing.

Therefore, to try a different set of MySQL options, execute the following command:

$ rm -rf userauth-data

$ mkdir userauth-data

$ docker run … options … mysql/mysql-server:8.0 … 

This will ensure that you are starting with a fresh database each time, as well as ensuring that the container initialization runs.

This also suggests an administrative pattern to follow. Any time you wish to update to a later MySQL release, simply stop the container, leaving the data directory in place. Then, delete the container and re-execute the docker run command with a new mysql/mysql-server tag. That will cause Docker to recreate the container using a different image, but using the same data directory. Using this technique, you can update the MySQL version by pulling down a newer image.
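A sketch of that pattern, assuming a hypothetical newer tag such as 8.0.20, looks like this:

$ docker stop db-userauth
$ docker rm db-userauth
$ docker run --name db-userauth … same options as before … mysql/mysql-server:8.0.20 …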

Once you have the MySQL container running, type this command:

$ docker ps

This shows the current container status. If we use docker ps -a, we see that the PORTS column says 0.0.0.0:3306->3306/tcp, 33060/tcp. That says that the container is listening for access from anywhere (0.0.0.0) on port 3306, and this traffic will connect to port 3306 inside the container. Additionally, there is a port 33060 that is available, but it is not exposed outside the container.

Even though it is configured to listen to the whole world, the container is attached to authnet, which limits where connections can come from. Limiting the scope of processes that can attach to the database is a good thing. However, since we used the -p option, the database port is exposed to the host, and it’s not as secure as we want. We’ll fix this later.

4.1. Security in the database container

A question to ask is whether setting the root password like this is a good idea.

The root user has broad access to the entire MySQL server, where other users, such as userauth, have limited access to the given database. Since one of our goals is security, we must consider whether this has created a secure or insecure database container.

We can log in as the root user with the following command:

$ docker exec -it db-userauth mysql -u root -p
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 115
Server version: 8.0.19 MySQL Community Server - GPL

This executes the MySQL CLI client inside the newly created container. There are a few commands we can run to check the status of the root and userauth user IDs. These include the following:
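For example, we can query the mysql.user table; the output below is abbreviated to the two rows we care about, omitting the built-in MySQL accounts:

mysql> SELECT user, host FROM mysql.user;
+----------+-----------+
| user     | host      |
+----------+-----------+
| userauth | %         |
| root     | localhost |
+----------+-----------+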

A connection to a MySQL server includes a user ID, a password, and the source of the connection. This connection might come from inside the same computer, or it might come over a TCP/IP socket from another computer. To approve the connection, the server looks in the mysql.user table for a row matching the user, host (source of connection), and password fields. The username and password are matched as a simple string comparison, but the host value is a more complex comparison. Local connections to the MySQL server are matched against rows where the host value is localhost.

For remote connections, MySQL compares the IP address and domain name of the connection against entries in the host column. The host column can contain IP addresses, hostnames, or wildcard patterns. The wildcard character for SQL is %. A single % character matches any connection source, while a pattern of 172.% matches any IP address where the first IPv4 octet is 172, and 172.20.%.% matches any IP address in the 172.20.x.x range.

Therefore, since the only row for userauth specifies a host value of %, we can use userauth from anywhere. By contrast, the root user can only be used with a localhost connection.

The next task is to examine the access rights for the userauth and root user IDs:
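One way to do this is with the SHOW GRANTS command; the following output is abbreviated:

mysql> SHOW GRANTS FOR 'userauth'@'%';
+--------------------------------------------------------+
| Grants for userauth@%                                  |
+--------------------------------------------------------+
| GRANT USAGE ON *.* TO `userauth`@`%`                   |
| GRANT ALL PRIVILEGES ON `userauth`.* TO `userauth`@`%` |
+--------------------------------------------------------+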

This says that the userauth user has full access to the userauth database.

The root user, on the other hand, has full access to every database, with so many permissions that the output does not fit here. Fortunately, the root user is only allowed to connect from localhost.

To verify this, try connecting from different locations using these commands:

$ docker exec -it db-userauth mysql -u userauth -p
Enter password:
Server version: 8.0.19 MySQL Community Server - GPL

$ docker run -it --rm --network authnet mysql/mysql-server:8.0 \
    mysql -u userauth -h db-userauth -p
Enter password:
Server version: 8.0.19 MySQL Community Server - GPL

$ docker run -it --rm --network authnet mysql/mysql-server:8.0 \
    mysql -u root -h db-userauth -p
Enter password:
ERROR 1045 (28000): Access denied for user 'root'@'172.20.0.4' (using password: YES)

We’ve demonstrated four modes of accessing the database, showing that indeed, the userauth ID can be accessed either from the same container or from a remote container, while the root ID can only be used from the local container.

Using docker run -it --rm … image-name … starts a container, runs the command associated with it, and then, when the command exits, automatically stops and deletes the container.

Therefore, with those last two commands, we created a separate mysql/mysql-server:8.0 container, connected to authnet, to run the mysql CLI program.

The mysql arguments are to connect using the given username (root or userauth) to the MySQL server on the host named db-userauth. This demonstrates connecting to the database from a separate connector and shows that we can connect remotely with the userauth user, but not with the root user.

Then, the final access experiment involves leaving off the --network option:

$ docker run -it --rm mysql/mysql-server:8.0 mysql -u userauth -h db-userauth -p
[Entrypoint] MySQL Docker Image 8.0.19-1.1.15
Enter password:
ERROR 2005 (HY000): Unknown MySQL server host 'db-userauth' (0)

This demonstrates that if the container is not attached to authnet, it cannot access the MySQL server because the db-userauth hostname is not even known.

Where did the db-userauth hostname come from? We can find out by inspecting a few things:

$ docker network inspect authnet
…
"Config": [ {
    "Subnet": "172.20.0.0/16",
    "Gateway": "172.20.0.1"
} ]
…
"Containers": {
    "7c3836505133fc145743cd74b7220be72fd53ddd408227e961392e881d3b81b8": {
        "Name": "db-userauth",
        "EndpointID": "6005381b72caed482c699a3b00cf2e0019ce4edd666b45e35be2afc6192314e4",
        "MacAddress": "02:42:ac:14:00:02",
        "IPv4Address": "172.20.0.2/16",
        "IPv6Address": ""
    }
},
…

In other words, the authnet network has the 172.20.0.0/16 network number, and the db-userauth container was assigned the 172.20.0.2 IP address. This level of detail is rarely important, but it is useful on the first occasion to carefully examine the setup so that we understand what we’re dealing with.
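If all you want is the container-to-address mapping, docker network inspect also accepts a Go template through its --format option. For example (output illustrative):

$ docker network inspect authnet --format '{{range .Containers}}{{.Name}} {{.IPv4Address}} {{end}}'
db-userauth 172.20.0.2/16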

There is a gaping security issue that violates our design. Namely, the database port is visible to the host, and therefore, anyone with access to the host can access the database. This happened because we used -p 3306:3306 in a misguided belief that this was required so that svc-userauth, which we’ll build in the next section, can access the database. We’ll fix this later by removing that option.

Now that we have the database instance set up for the authentication service, let’s see how to Dockerize it.

5. Dockerizing the authentication service

The word Dockerize means to create a Docker image for a piece of software. The Docker image can then be shared with others or be deployed to a server. In our case, the goal is to create a Docker image for the user authentication service. It must be attached to authnet so that it can access the database server we just configured in the db-userauth container.

We’ll name this new container svc-userauth to indicate that this is the user authentication REST service, while the db-userauth container is the database.

Docker images are defined using Dockerfiles, which describe the installation of an application on a server. They document the setup of the Linux OS, the installed software, and the configuration required in the Docker image. This is literally a file named Dockerfile, containing the commands that describe how the image is constructed.

5.1. Creating the authentication service Dockerfile

In the users directory, create a file named Dockerfile containing the following content:

FROM node:14

RUN apt-get update -y \
    && apt-get upgrade -y \
    && apt-get -y install curl python build-essential git ca-certificates

ENV DEBUG="users:*"
ENV PORT="5858"
ENV SEQUELIZE_CONNECT="sequelize-docker-mysql.yaml"
ENV REST_LISTEN="0.0.0.0"

RUN mkdir -p /userauth
COPY package.json *.yaml *.mjs /userauth/
WORKDIR /userauth
RUN npm install --unsafe-perm

EXPOSE 5858
CMD [ "node", "./user-server.mjs" ]

The FROM command specifies a pre-existing image, called the base image, from which to derive a given image. Frequently, you define a Docker image by starting from an existing image. In this case, we’re using the official Node.js Docker image (https://hub.docker.com/_/node/), which, in turn, is derived from debian.

Because the base image, node, is derived from the debian image, the commands available are what are provided on a Debian OS. Therefore, we use apt-get to install more packages.

The RUN commands are where we run the shell commands required to build the container. The first one installs required Debian packages, such as the build-essential package, which brings in compilers required to install native-code Node.js packages.

It’s recommended that you always combine apt-get update, apt-get upgrade, and apt-get install in the same command line like this because of the Docker build cache. Docker saves each step of the build to avoid rerunning steps unnecessarily. When rebuilding an image, Docker starts with the first changed step. Therefore, if the set of Debian packages to install changes, we want all three of those commands to run.

Combining them into a single command ensures that this will occur. For a complete discussion, refer to the documentation at https://docs.docker.com/develop/develop-images/dockerfile_best-practices/.

The ENV commands define environment variables. In this case, we’re using the same environment variables that were defined in the package.json script for launching the user authentication service.

Next, we have a sequence of lines to create the /userauth directory and to populate it with the source code of the user authentication service. The first line creates the /userauth directory. The COPY command, as its name implies, copies the files for the authentication service into that directory. The WORKDIR command changes the working directory to /userauth. This means that the last RUN command, npm install, is executed in /userauth, and therefore, it installs the packages described in /userauth/package.json in /userauth/node_modules.

There is a new SEQUELIZE_CONNECT configuration file mentioned: sequelize-docker-mysql.yaml. This will describe the Sequelize configuration required to connect to the database in the db-userauth container.

Create a new file named users/sequelize-docker-mysql.yaml containing the following:

dbname: userauth
username: userauth
password: userauth
params:
    host: db-userauth
    port: 3306
    dialect: mysql

The difference is that instead of localhost as the database host, we use db-userauth. Earlier, we explored the db-userauth container and determined that this was the hostname of the container. By using db-userauth in this file, the authentication service will use the database in the container.

The EXPOSE command informs Docker that the container listens on the named TCP port. This does not expose the port beyond the container. The -p flag is what exposes a given port outside the container.

Finally, the CMD command documents the process to launch when the container is executed. The RUN commands are executed while building the container, while CMD says what’s executed when the container starts.

We could have installed PM2 in the container, and then used a PM2 command to launch the service. However, Docker is able to fulfill the same function because it automatically supports restarting a container if the service process dies.
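That restart behavior is controlled by Docker restart policies. For example, adding a flag like the following to a docker run command (not something we do in the scripts below) tells Docker to restart the container whenever the process dies:

$ docker run --detach --restart unless-stopped … svc-userauth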

5.2. Building and running the authentication service Docker container

Now that we’ve defined the image in a Dockerfile, let’s build it.

In users/package.json, add the following line to the scripts section:

"docker-build": "docker build -t svc-userauth ."

As has been our habit, this is an administrative task that we can record in package.json, making it easier to automate this task.

We can build the authentication service as follows:

$ npm run docker-build

> user-auth-server@1.0.0 docker-build /home/david/Chapter10/users
> docker build -t svc-userauth .

Sending build context to Docker daemon 32.03MB
Step 1/12 : FROM node:14
 ---> 07e774543bdf
Step 2/12 : RUN apt-get update -y && apt-get upgrade -y && apt-get -y install curl python build-essential git ca-certificates
 ---> Using cache
 ---> eb28eaee8517
Step 3/12 : ENV DEBUG="users:*"
 ---> Using cache
 ---> 99ae7f4bde83
Step 4/12 : ENV PORT="5858"
 ---> Using cache
 ---> e7f7567a0ce4
… more output

The docker build command builds an image from a Dockerfile. Notice that the build executes one step at a time, and that the steps correspond exactly to the commands in the Dockerfile.

Each step is stored in a cache so that it doesn’t have to be rerun. On subsequent builds, the only steps executed are the step that changed and all subsequent steps.
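If you ever need to defeat the cache, for instance, to pick up updated Debian packages in the apt-get step, Docker provides a flag for that:

$ docker build --no-cache -t svc-userauth .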

In authnet/package.json, we require quite a few scripts to manage the user authentication service:

{
  "name": "authnet",
  "version": "1.0.0",
  "description": "Scripts to define and manage AuthNet",
  "scripts": {
    "build-authnet": "docker network create --driver bridge authnet",
    "prebuild-db-userauth": "mkdir userauth-data",
    "build-db-userauth": "docker run --detach --name db-userauth --env MYSQL_USER=userauth --env MYSQL_PASSWORD=userauth --env MYSQL_DATABASE=userauth --mount type=bind,src=`pwd`/userauth-data,dst=/var/lib/mysql --network authnet --env MYSQL_ROOT_PASSWORD=w0rdw0rd --env DATABASE_HOST=db-userauth mysql/mysql-server:8.0 --bind_address=0.0.0.0 --socket=/tmp/mysql.sock",
    "stop-db-userauth": "docker stop db-userauth",
    "start-db-userauth": "docker start db-userauth",
    "build-userauth": "cd ../users && npm run docker-build",
    "postbuild-userauth": "docker run --detach --name svc-userauth --network authnet svc-userauth",
    "start-userauth": "docker start svc-userauth",
    "stop-userauth": "docker stop svc-userauth",
    "start-user-service": "npm run start-db-userauth && npm run start-userauth",
    "stop-user-service": "npm run stop-db-userauth && npm run stop-userauth"
  },
  "license": "ISC"
}

This is the set of commands that were found to be useful to manage building the images, starting the containers, and stopping the containers.

Look carefully and you will see that we’ve added --detach to the docker run commands. So far, we’ve used docker run without that option, and the container remained in the foreground. While this was useful to see the logging output, it’s not so useful for deployment. With the --detach option, the container becomes a background task.

On Windows, for the --mount option, we need to change the src= parameter (as discussed earlier) to use a Windows-style hard-coded path. That means it should read:

--mount type=bind,src=C:/Users/path/to/Chapter11/authnet/userauth-data,dst=/var/lib/mysql

This option requires absolute pathnames and specifying the path this way works on Windows.

Another thing to notice is the absence of the -p 3306:3306 option. It was determined that this was not necessary for two reasons. First, the option exposed the database in db-userauth to the host, when our security model required otherwise, and so removing the option got us the desired security. Second, svc-userauth was still able to access the db-userauth database after this option was removed.

With these commands, we can now type the following to build and then run the containers:

$ npm run build-authnet

$ npm run build-db-userauth

$ npm run build-userauth 

These commands build the pieces required for the user authentication service. As a side effect, the containers are automatically executed and will launch as background tasks.

Once it is running, you can test it using the cli.mjs script as before. You can shell into the svc-userauth container and run cli.mjs there; or, since the port is visible to the host computer, you can run it from outside the container.
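For example, running it inside the container, without starting a full shell, would look like this (the list-users command is demonstrated in the next section):

$ docker exec -it svc-userauth node cli.mjs list-users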

Afterward, we can manage the whole service as follows:

$ npm run stop-user-service

$ npm run start-user-service 

This stops and starts both containers making up the user authentication service.

We have created the infrastructure to host the user authentication service, plus a collection of scripts to manage the service. Our next step is to explore what we’ve created and learn a few things about the infrastructure Docker creates for us.

6. Exploring AuthNet

Remember that AuthNet is the connection medium for the authentication service. To understand whether this network provides the security gains we’re looking for, let’s explore what we just created:

$ docker network inspect authnet 

This prints out a large JSON object describing the network, along with its attached containers, which we’ve looked at before. If everything went well, we will see that there are now two containers attached to authnet where there’d previously have just been one.

Let’s go into the svc-userauth container and poke around:

$ docker exec -it svc-userauth bash
root@ba75699519ef:/userauth# ls
cli.mjs  node_modules  package.json  package-lock.json  sequelize-docker-mysql.yaml  user-server.mjs  users-sequelize.mjs

The /userauth directory is inside the container and contains the files placed in the container using the COPY command, plus the installed files in node_modules:

root@ba75699519ef:/userauth# node cli.mjs list-users
[
  {
    id: 'me',
    username: 'me',
    provider: 'local',
    familyName: 'Einarsdottir',
    givenName: 'Ashildr',
    middleName: null,
    emails: [ 'me@stolen.tardis' ],
    photos: []
  },
  {
    id: 'snuffy-smith',
    username: 'snuffy-smith',
    provider: 'local',
    familyName: 'Smith',
    givenName: 'John',
    middleName: 'Snuffy',
    emails: [ 'snuffy@example.com' ],
    photos: []
  }
]

We can run the cli.mjs script to test and administer the service. To get these database entries set up, use the add command with the appropriate options:
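The exact option names depend on how cli.mjs defines them; a hypothetical invocation, assuming options matching the user fields shown above, looks something like this:

root@ba75699519ef:/userauth# node cli.mjs add --family-name Einarsdottir --given-name Ashildr --email me@stolen.tardis me

We can also inspect the processes running inside the container: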

root@4996275c4030:/userauth# ps -eafw
UID    PID   PPID  C  STIME  TTY    TIME      CMD
root     1      0  2  00:08  ?      00:00:01  node ./user-server.mjs
root    19      0  0  00:09  pts/0  00:00:00  bash
root    27     19  0  00:09  pts/0  00:00:00  ps -eafw

root@ba75699519ef:/userauth# ping db-userauth
PING db-userauth (172.20.0.3) 56(84) bytes of data.
64 bytes from db-userauth.authnet (172.20.0.3): icmp_seq=1 ttl=64 time=0.163 ms
^C
--- db-userauth ping statistics ---
1 packet transmitted, 1 received, 0% packet loss, time 1003ms

root@ba75699519ef:/userauth# ping svc-userauth
PING svc-userauth (172.20.0.2) 56(84) bytes of data.
64 bytes from ba75699519ef (172.20.0.2): icmp_seq=1 ttl=64 time=0.073 ms
^C
--- svc-userauth ping statistics ---
1 packet transmitted, 1 received, 0% packet loss, time 2051ms

The process listing is interesting to study. Process PID 1 is the node ./user-server.mjs command in the Dockerfile. The format we used for the CMD line ensured that the node process ended up as process 1. This is important so that process signals are handled correctly, allowing Docker to manage the service process correctly. The tail end of the following blog post has a good discussion of the issue:

https://www.docker.com/blog/keep-nodejs-rockin-in-docker/
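This is why the Dockerfile uses the JSON (exec) form of CMD rather than the shell form. A comparison sketch:

# exec form: node runs as PID 1 and receives signals such as SIGTERM directly
CMD [ "node", "./user-server.mjs" ]

# shell form: /bin/sh -c would be PID 1, with node as a child process,
# so termination signals might never reach the node process
# CMD node ./user-server.mjs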

The ping commands prove that the two containers are reachable by hostnames matching the container names. Next, let's try pinging those same IP addresses from the host system, outside the containers:

$ ping 172.20.0.2
PING 172.20.0.2 (172.20.0.2): 56 data bytes
Request timeout for icmp_seq 0
^C
--- 172.20.0.2 ping statistics ---
2 packets transmitted, 0 packets received, 100.0% packet loss

$ ping 172.20.0.3
PING 172.20.0.3 (172.20.0.3): 56 data bytes
Request timeout for icmp_seq 0
^C
--- 172.20.0.3 ping statistics ---
2 packets transmitted, 0 packets received, 100.0% packet loss

From outside the containers, on the host system, we cannot ping the containers. That’s because they are attached to authnet and are not reachable.

We have successfully Dockerized the user authentication service in two containers—db-userauth and svc-userauth. We’ve poked around the insides of a running container and found some interesting things. However, our users need the fantastic Notes application to be running, and we can’t afford to rest on our laurels.

Since this was our first time setting up a Docker service, we went through a lot of details. We started by launching a MySQL database container, and what is required to ensure that the data directory is persistent. We then set up a Dockerfile for the authentication service and learned how to connect containers to a common Docker network and how containers can communicate with each other over the network. We also studied the security benefits of this network infrastructure, since we can easily wall off the service and its database from intrusion.

Let’s now move on and Dockerize the Notes application, making sure that it is connected to the authentication server.

Source: Herron David (2020), Node.js Web Development: Server-side web development made easy with Node 14 using practical examples, Packt Publishing.
