Unit Testing and Functional Testing: Using Docker Swarm to deploy test infrastructure

We had a great experience using Docker Compose and Swarm to orchestrate the Notes application deployment on both our laptop and our AWS infrastructure. The whole system, with five independent services, is easily described in compose-local/docker-compose.yml and compose-swarm/docker-compose.yml. What we'll do is duplicate the Stack file, then make a couple of small changes required to support test execution in a local swarm.

To configure the Docker installation on our laptop for swarm mode, simply type the following:

$ docker swarm init 

As before, this will print a message containing the join token. If you have multiple computers in your office, it might be interesting to experiment with setting up a multi-node swarm, but that's not important for this exercise, because we can do everything required with a single-node swarm.

This isn’t a one-way street, meaning that when you’re done with this exercise, it is easy to turn off swarm mode. Simply shut down anything deployed to your local Swarm and run the following command:

$ docker swarm leave --force

Normally, this is used for a host that you wish to detach from an existing swarm. If there is only one host remaining in a swarm, the effect will be to shut down the swarm.

Now that we know how to initialize swarm mode on our laptop, let’s set about creating a stack file suitable for use on our laptop.

Create a new directory, compose-stack-test-local, as a sibling to the notes, users, and compose-local directories. Copy compose-swarm/docker-compose.yml to that directory. We'll be making several small changes to this file and no changes to the existing Dockerfiles. As far as possible, it is important to test the same containers that are used in the production deployment. This means it's acceptable to inject test files into the containers, but not to modify them.

Make every deploy tag look like this:

deploy:
    replicas: 1

This deletes the placement constraints we declared for use on AWS EC2 and sets one replica for each service. On a single-node cluster, placement is not a concern, and there is no need for more than one instance of any service.

For the database services, remove the volumes tag. This tag is required when the database's data directory must persist beyond the container's lifetime. For test infrastructure, the data directory is unimportant and can be thrown away at will. Likewise, remove the top-level volumes tag.
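After those edits, a database service entry might look like the following sketch. The image and environment values shown here are illustrative placeholders, not the exact settings from the stack file; keep the ones you already have. The point is what is absent: no volumes tag, no placement constraints, a single replica.

```yaml
# Sketch of a trimmed database service for the local test stack.
# Image and environment values are illustrative; retain your own.
db-userauth:
  image: mysql/mysql-server:8.0
  networks:
    - authnet
  environment:
    MYSQL_RANDOM_ROOT_PASSWORD: "true"
    MYSQL_DATABASE: userauth
  deploy:
    replicas: 1     # one instance is plenty on a single-node swarm
```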

For the svc-notes and svc-userauth services, make these changes:

services:
    svc-userauth:
        image: compose-stack-test-local/svc-userauth
        ports:
            - "5858:5858"
        environment:
            SEQUELIZE_CONNECT: sequelize-docker-mysql.yaml
            SEQUELIZE_DBHOST: db-userauth

    svc-notes:
        image: compose-stack-test-local/svc-notes
        volumes:
            - type: bind
              source: ../notes/test
              target: /notesapp/test
            - type: bind
              source: ../notes/models/schema-sqlite3.sql
              target: /notesapp/models/schema-sqlite3.sql
        ports:
            - "3000:3000"
        environment:
            TWITTER_CALLBACK_HOST: "http://localhost:3000"
            SEQUELIZE_CONNECT: models/sequelize-docker-mysql.yaml
            SEQUELIZE_DBHOST: db-notes
            NOTES_MODEL: sequelize

This injects the files required for testing into the svc-notes container. Obviously, this is the test directory that we created in the previous section for the Notes service. Those tests also require the SQLite3 schema file since it is used by the corresponding test script. In both cases, we can use bind mounts to inject the files into the running container.

The Notes test suite follows a normal practice for Node.js projects of putting test files in the test directory. When building the container, we obviously don’t include the test files because they’re not required for deployment. But running tests requires having that directory inside the running container. Fortunately, Docker makes this easy. We simply mount the directory into the correct place.

The bottom line is this approach gives us the following advantages:

  • The test code is in notes/test, where it belongs.
  • The test code is not copied into the production container.
  • In test mode, the test directory appears where it belongs.

For Docker (using docker run) and Docker Compose, the volume is mounted from a directory on the local host. In swarm mode with a multi-node swarm, the container could be deployed on any host matching the placement constraints we declare, and bind mounts like the ones shown here will try to mount from a directory on whichever host the container lands on. But we are not using a multi-node swarm; we are using a single-node swarm. Therefore, the container will mount the named directories from our laptop, and all will be fine. As soon as we decide to run testing on a multi-node swarm, however, we'll need a different strategy for injecting these files into the container.

We’ve also changed the ports mappings. For svc-userauth, we’ve made its port visible to give ourselves the option of testing the REST service from the host computer. For the svc-notes service, this will make it appear on port 3000. In the environment section, make sure you did not set a PORT variable. Finally, we adjust TWITTER_CALLBACK_HOST so that it uses localhost:3000 since we’re deploying on the localhost.

For both services, we’re changing the image tag from the one associated with the AWS ECR repository to one of our own designs. We won’t be publishing these images to an image repository, so we can use any image tag we like.

For both services, we are using the Sequelize data model, using the existing MySQL-oriented configuration file, and setting the SEQUELIZE_DBHOST variable to refer to the container holding the database.

We’ve defined a Docker Stack file that should be useful for deploying the Notes application stack in a Swarm. The difference between the deployment on AWS EC2 and here is simply the configuration. With a few simple configuration changes, we’ve mounted test files into the appropriate container, reconfigured the volumes and the environment variables, and changed the deployment descriptors so that they’re suitable for a single-node swarm running on our laptop.

Let’s deploy this and see how well we did.

1. Executing tests under Docker Swarm

We've repurposed our Docker Stack file so that it describes deploying to a single-node swarm, ensuring the containers are set up to be useful for testing. Our next step is to deploy the Stack to a swarm and execute the tests inside the Notes container.

To set it up, run the following commands:

$ docker swarm init

… ignore the output showing the docker swarm join command 

$ printf '…' | docker secret create TWITTER_CONSUMER_SECRET -

$ printf '…' | docker secret create TWITTER_CONSUMER_KEY -

We run swarm init to turn on swarm mode on our laptop, then add the two TWITTER secrets to the swarm. Since it is a single-node swarm, we don’t need to run a docker swarm join command to add new nodes to the swarm.

Then, in the compose-stack-test-local directory, we can run these commands:

$ docker-compose build

Building svc-userauth

Successfully built 876860f15968

Successfully tagged compose-stack-test-local/svc-userauth:latest

Building svc-notes

Successfully built 1c4651c37a86

Successfully tagged compose-stack-test-local/svc-notes:latest

 

$ docker stack deploy --compose-file docker-compose.yml notes

Ignoring unsupported options: build, restart

Creating network notes_authnet

Creating network notes_svcnet

Creating network notes_frontnet

Creating service notes_db-userauth

Creating service notes_svc-userauth

Creating service notes_db-notes

Creating service notes_svc-notes

Creating service notes_redis 

Because a Stack file is also a Compose file, we can run docker-compose build to build the images. Because of the image tags, this will automatically tag the images so that they match the image names we specified.

Then, we use docker stack deploy, as we did when deploying to AWS EC2. Unlike the AWS deployment, we do not need to push the images to repositories, which means we do not need the --with-registry-auth option. This will behave almost identically to the swarm we deployed on EC2, so we can explore the deployed services in the same way:

$ docker service ls

… output of current services

$ docker service ps notes_svc-notes

… status information for the named service

$ docker ps

… running container list for local host 

Because this is a single-host swarm, we don’t need to use SSH to access the swarm nodes, nor do we need to set up remote access using docker context. Instead, we run the Docker commands, and they act on the Docker instance on the localhost.

The docker ps command will tell us the precise container name for each service. With that knowledge, we can run the following to gain access:

$ docker exec -it notes_svc-notes.1.c8ojirrbrv2sfbva9l505s3nv bash

root@265672675de1:/notesapp#

root@265672675de1:/notesapp# cd test

root@265672675de1:/notesapp/test# apt-get -y install sqlite3

root@265672675de1:/notesapp/test# rm -rf node_modules/

root@265672675de1:/notesapp/test# npm install

 

Because, in swarm mode, the containers have unique names, we have to run docker ps to get the container name, then paste it into this command to start a Bash shell inside the container.
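That lookup can be scripted rather than pasted by hand. The following sketch extracts the first matching task name from `docker ps --format '{{.Names}}'` output; a captured sample stands in for live output here so the snippet runs without a swarm.

```shell
# In a live swarm you would pipe real output instead:
#   SVC_NOTES=$(docker ps --format '{{.Names}}' | grep '^notes_svc-notes\.' | head -n 1)
# The sample below is a stand-in so the snippet runs standalone.
sample_names='notes_db-notes.1.aaaa
notes_svc-notes.1.c8ojirrbrv2sfbva9l505s3nv
notes_redis.1.bbbb'
SVC_NOTES=$(printf '%s\n' "$sample_names" | grep '^notes_svc-notes\.' | head -n 1)
echo "$SVC_NOTES"
```

With the name captured in a variable, `docker exec -it "$SVC_NOTES" bash` opens the shell with no copying and pasting.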

Inside the container, we see the test directory is there as expected. But we have a couple of setup steps to perform. The first is to install the SQLite3 command-line tools since the scripts in package.json use that command. The second is to remove any existing node_modules directory because we don’t know if it was built for this container or for the laptop. After that, we need to run npm install to install the dependencies.

Having done this, we can run the tests:

root@265672675de1:/notesapp/test# npm run test-all

The tests should execute as they did on our laptop, but they’re running inside the container instead. However, the MySQL test won’t have run because the package.json scripts are not set up to run that one automatically. Therefore, we can add this to package.json:

"test-notes-sequelize-mysql": "cross-env NOTES_MODEL=sequelize SEQUELIZE_CONNECT=../models/sequelize-docker-mysql.yaml SEQUELIZE_DBHOST=db-notes mocha test-model"

This is the command that’s required to execute the test suite against the MySQL database.

Then, we can run the tests against MySQL, like so:

root@265672675de1:/notesapp/test# npm run test-notes-sequelize-mysql

 

The tests should execute correctly against MySQL.

To automate this, we can create a file named run.sh containing the following code:

#!/bin/sh
SVC_NOTES=$1
# docker exec -it ${SVC_NOTES} apt-get -y install sqlite3
docker exec -it --workdir /notesapp/test -e DEBUG= ${SVC_NOTES} \
    rm -rf node_modules
docker exec -it --workdir /notesapp/test -e DEBUG= ${SVC_NOTES} \
    npm install
docker exec -it --workdir /notesapp/test -e DEBUG= ${SVC_NOTES} \
    npm run test-notes-memory
docker exec -it --workdir /notesapp/test -e DEBUG= ${SVC_NOTES} \
    npm run test-notes-fs
docker exec -it --workdir /notesapp/test -e DEBUG= ${SVC_NOTES} \
    npm run test-level
docker exec -it --workdir /notesapp/test -e DEBUG= ${SVC_NOTES} \
    npm run test-notes-sqlite3
docker exec -it --workdir /notesapp/test -e DEBUG= ${SVC_NOTES} \
    npm run test-notes-sequelize-sqlite
docker exec -it --workdir /notesapp/test -e DEBUG= ${SVC_NOTES} \
    npm run test-notes-sequelize-mysql
# docker exec -it --workdir /notesapp/test -e DEBUG= ${SVC_NOTES} \
#     npm run test-notes-mongodb

The script executes each script in notes/test/package.json individually. If you prefer, you can replace these with a single line that executes npm run test-all.

This script takes a command-line argument for the name of the container holding the svc-notes service. Since the tests are located in that container, that's where they must be run. The script can be executed like so:

$ sh run.sh notes_svc-notes.1.c8ojirrbrv2sfbva9l505s3nv 

This runs the preceding script, which will run each test combination individually and also make sure the DEBUG variable is not set. This variable is set in the Dockerfile and causes debugging information to be printed among the test results output. Inside the script, the --workdir option sets the command's working directory to the test directory to simplify running the test scripts.
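The repeated docker exec lines invite a loop. Here is a hypothetical refactor of run.sh that iterates over the npm script names from notes/test/package.json; a stub function stands in for the docker CLI so the sketch runs without a swarm (delete the stub to use the real command):

```shell
#!/bin/sh
# Stub so this sketch runs without Docker; remove it for real use.
docker() { echo "would run: docker $*"; }

# ${1:-...} supplies a demo default here so the sketch runs standalone.
SVC_NOTES=${1:-notes_svc-notes.1.example}
for T in test-notes-memory test-notes-fs test-level \
         test-notes-sqlite3 test-notes-sequelize-sqlite \
         test-notes-sequelize-mysql
do
  # Same flags as the original script: run in /notesapp/test with
  # DEBUG cleared so debug output does not pollute the test results.
  docker exec -it --workdir /notesapp/test -e DEBUG= "$SVC_NOTES" \
      npm run "$T"
done
```

The behavior is identical; adding or removing a test combination becomes a one-word change to the list.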

Of course, this script won't execute as-is on Windows. To convert it for use with PowerShell, save the text starting at the second line into run.ps1, change the first assignment to $SVC_NOTES = $args[0], and join each backslash-continued command onto a single line (or replace the trailing \ with PowerShell's backtick). The ${SVC_NOTES} references are already valid PowerShell variable syntax; the %SVC_NOTES% form belongs to cmd batch files, not PowerShell.

We have succeeded in semi-automating test execution for most of our test matrix. However, there is a glaring hole in the test matrix, namely the lack of testing on MongoDB. Plugging that hole will let us see how we can set up MongoDB under Docker.

1.1. MongoDB setup under Docker and testing Notes against MongoDB

In Chapter 7, Data Storage and Retrieval, we developed MongoDB support for Notes. Since then, we’ve focused on Sequelize. To make up for that slight, let’s make sure we at least test our MongoDB support. Testing on MongoDB simply requires defining a container for the MongoDB database and a little bit of configuration.

Visit https://hub.docker.com/_/mongo/ for the official MongoDB container. You’ll be able to retrofit this in order to deploy the Notes application running on MongoDB.

Add the following code to compose-stack-test-local/docker-compose.yml:

# Uncomment this for testing MongoDB
db-notes-mongo:
    image: mongo:4.2
    container_name: db-notes-mongo
    networks:
        - frontnet
    # volumes:
    #     - ./db-notes-mongo:/data/db

That’s all that’s required to add a MongoDB container to a Docker Compose/Stack file. We’ve connected it to frontnet so that the database is accessible by svc-notes. If we wanted the svc-notes container to use MongoDB, we’d need some environment variables (MONGO_URL, MONGO_DBNAME, and NOTES_MODEL) to tell Notes to use MongoDB.
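For reference, those environment settings would look like the following sketch. The URL and database name match the values used by the MongoDB test script later in this section; we do not actually apply them to svc-notes here, for the reason explained next.

```yaml
# Hypothetical sketch: environment entries svc-notes would need in
# order to run against MongoDB. Not applied in this exercise.
svc-notes:
    environment:
        NOTES_MODEL: mongodb
        MONGO_URL: "mongodb://db-notes-mongo/"
        MONGO_DBNAME: chap13-test
```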

But we’d also run into a problem that we created for ourselves in Chapter 9, Dynamic Client/Server Interaction with Socket.IO. In that chapter, we created a messaging subsystem so that our users can leave messages for each other. That messaging system is currently implemented to store messages in the same Sequelize database where the Notes are stored. But to run Notes with no Sequelize database would mean a failure in the messaging system. Obviously, the messaging system can be rewritten, for instance, to allow storage in a MongoDB database, or to support running both MongoDB and Sequelize at the same time.

Because we were careful, we can execute code in models/notes-mongodb.mjs without it being affected by other code. With that in mind, we’ll simply execute the Notes test suite against MongoDB and report the results.

Then, in notes/test/package.json, we can add a line to facilitate running tests on MongoDB:

"test-notes-mongodb": "cross-env MONGO_URL=mongodb://db-notes-mongo/ MONGO_DBNAME=chap13-test NOTES_MODEL=mongodb mocha --no-timeouts test-model"

We simply added the MongoDB container to frontnet, making the database available at the URL shown here. Hence, it’s simple to now run the test suite using the Notes MongoDB model.

The --no-timeouts option was necessary to avoid a spurious error while running the suite against MongoDB. It instructs Mocha not to fail a test case simply because its execution takes too long.

The final requirement is to add the following line to run.sh (or run.ps1 for Windows):

docker exec -it --workdir /notesapp/test -e DEBUG= ${SVC_NOTES} \
    npm run test-notes-mongodb

This ensures MongoDB can be tested alongside the other test combinations. But when we run this, an error might crop up:

(node:475) DeprecationWarning: current Server Discovery and Monitoring engine is deprecated, and will be removed in a future version. To use the new Server Discover and Monitoring engine, pass option { useUnifiedTopology: true } to the MongoClient constructor.

The problem is that the initializer for the MongoClient object has changed slightly. Therefore, we must modify notes/models/notes-mongodb.mjs with this new connectDB function:

const connectDB = async () => {
    if (!client) {
        client = await MongoClient.connect(process.env.MONGO_URL, {
            useNewUrlParser: true, useUnifiedTopology: true
        });
    }
}

This adds a pair of useful configuration options, including the option explicitly named in the error message. Otherwise, the code is unchanged.

To make sure the container is running with the updated code, rerun the docker-compose build and docker stack deploy steps shown earlier. Doing so rebuilds the images and then updates the services. Because the svc-notes container will relaunch, you'll need to install the Ubuntu sqlite3 package again.

Once you’ve done that, the tests will all execute correctly, including the MongoDB combination.

We can now report the final test results matrix to the manager:

  • models-fs: PASS
  • models-memory: PASS
  • models-levelup: one failure, now fixed, PASS
  • models-sqlite3: two failures, now fixed, PASS
  • models-sequelize with SQLite3: one failure, now fixed, PASS
  • models-sequelize with MySQL: PASS
  • models-mongodb: PASS

The manager will tell you “good job” and then remember that the models are only a portion of the Notes application. We’ve left two areas completely untested:

  • The REST API for the user authentication service
  • Functional testing of the user interface

In this section, we’ve learned how to repurpose a Docker Stack file so that we can launch the Notes stack on our laptop. It took a few simple reconfigurations of the Stack file and we were ready to go, and we even injected the files that are useful for testing. With a little bit more work, we finished testing against all configuration combinations of the Notes database modules.

Our next task is to handle testing the REST API for the user authentication service.

Source: Herron David (2020), Node.js Web Development: Server-side web development made easy with Node 14 using practical examples, Packt Publishing.
