Deploying the Notes stack file to the swarm

We have prepared all the elements required to set up a Docker Swarm on the AWS EC2 infrastructure, run the scripts required to set up that infrastructure, and created the stack file required to deploy Notes to the swarm.

What’s required next is to run docker stack deploy from our laptop, to deploy Notes on the swarm. This will give us the chance to test the stack file created earlier. You should still have the Docker context configured for the remote server, making it possible to remotely deploy the stack. However, there are four things to handle first, as follows:

  1. Install the secrets in the newly deployed swarm.
  2. Update the svc-notes environment configuration for the IP address of notes-public.
  3. Update the Twitter application for the IP address of notes-public.
  4. Log in to the ECR instance.

Let’s take care of those things and then deploy the Notes stack.

1. Preparing to deploy the Notes stack to the swarm

We are ready to deploy the Notes stack to the swarm that we’ve launched. However, we have realized that we have a couple of tasks to take care of.

The environment variables for svc-notes configuration require a little adjustment. Have a look at the following code block:

# DEBUG: notes:*,express:*
TWITTER_CALLBACK_HOST: "http://PUBLIC-DNS-NAME"
SEQUELIZE_CONNECT: models/sequelize-docker-mysql.yaml
NOTES_MODEL: sequelize

Our primary requirement is to adjust the TWITTER_CALLBACK_HOST variable. The domain name for the notes-public instance changes every time we deploy the AWS infrastructure. Therefore, TWITTER_CALLBACK_HOST must be updated to match.

Similarly, we must go to the Twitter developers’ dashboard and update the URLs in the application settings. As we already know, this is required every time we have hosted Notes on a different IP address or domain name. To use the Twitter login, we must change the list of URLs recognized by Twitter.

Updating TWITTER_CALLBACK_HOST and the Twitter application settings will let us log in to Notes using a Twitter account.

While here, we should review the other variables and ensure that they’re correct as well.
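This adjustment can also be scripted. The following is a sketch, not part of the book's scripts: the NEW_HOST value is a hypothetical placeholder, and it assumes the variable appears on a single line in docker-compose.yml.

```shell
# Hypothetical helper: point TWITTER_CALLBACK_HOST at the current deployment.
# NEW_HOST is whatever "terraform apply" reported for the public instance.
NEW_HOST="ec2-XX-XX-XX-XX.us-west-2.compute.amazonaws.com"
if [ -f docker-compose.yml ]; then
  # Rewrite the TWITTER_CALLBACK_HOST line in place, keeping a backup copy.
  sed -i.bak \
    -e "s|TWITTER_CALLBACK_HOST:.*|TWITTER_CALLBACK_HOST: \"http://${NEW_HOST}\"|" \
    docker-compose.yml
fi
```

Remember that the stack must be redeployed after changing the file for the new value to take effect.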

The last preparatory step is to log in to the ECR repository. To do this, simply execute the following commands:

$ cd ../ecr

$ sh ./

This has to be rerun every so often, since the tokens that are downloaded time out after a few hours.

We only need to run that script, and none of the other scripts in the ecr directory.
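For reference, the heart of such a login script is typically a single pipeline like the following sketch; the account ID and region shown are hypothetical placeholders, not values from this deployment:

```shell
# Hypothetical values -- substitute your own AWS account ID and region.
AWS_ACCOUNT=123456789012
AWS_REGION=us-west-2
ECR_REGISTRY="${AWS_ACCOUNT}.dkr.ecr.${AWS_REGION}.amazonaws.com"

# The token returned by get-login-password is what times out after a few
# hours, forcing the periodic re-login. Only attempt the login when the
# Docker CLI is present and AWS credentials are configured.
if command -v docker >/dev/null 2>&1 && aws sts get-caller-identity >/dev/null 2>&1; then
  aws ecr get-login-password --region "$AWS_REGION" \
    | docker login --username AWS --password-stdin "$ECR_REGISTRY" \
    || echo "ECR login failed; check credentials and registry address"
fi
```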

In this section, we prepared to run the deployment. We should now be ready to deploy Notes to the swarm, so let’s do it.

2. Deploying the Notes stack to the swarm

We just did the final preparation for deploying the Notes stack to the swarm. Take a deep breath, yell out Smoke Test, and type the following command:

$ cd ../compose-stack

$ docker stack deploy --with-registry-auth --compose-file docker-compose.yml notes

Creating network notes_svcnet

Creating network notes_frontnet

Creating network notes_authnet

Creating service notes_svc-userauth

Creating service notes_db-notes

Creating service notes_svc-notes

Creating service notes_redis

Creating service notes_db-userauth 

This deploys the services, and the swarm responds by attempting to launch each service. The --with-registry-auth option sends the Docker Registry authentication to the swarm so that it can download container images from the ECR repositories. This is why we had to log in to the ECR first.

2.1. Verifying the correct launch of the Notes application stack

It will be useful to monitor the startup process using these commands:

$ docker service ls

ID           NAME               MODE       REPLICAS IMAGE PORTS

l7up46slg32g notes_db-notes    replicated 1/1 mysql/mysql-server:8.0

ufw7vwqjkokv notes_db-userauth replicated 1/1 mysql/mysql-server:8.0

45p6uszd9ixt notes_redis       replicated 1/1 redis:5.0

smcju24hvdkj notes_svc-notes   replicated 1/1


iws2ff265sqb notes_svc-userauth replicated 1/1 

$ docker service ps notes_svc-notes     # And.. for other service names


nt5rmgv1cf0q notes_svc-notes.1 notes-public Running Running 18 seconds ago

The service ls command lists the services, with a high-level overview. Remember that the service is not the running container; instead, the services are declared by entries in the services tag of the stack file. In our case, we declared one replica for each service, but we could have specified a different number. If so, the swarm will attempt to distribute that number of containers across the nodes in the swarm.

Notice that the pattern for service names is the name of the stack that was given in the docker stack deploy command, followed by the service name listed in the stack file. When running that command, we named the stack notes; so, the services are notes_db-notes, notes_svc-userauth, notes_redis, and so on.
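The pattern is mechanical enough to illustrate with a short loop; nothing here is specific to Docker, it simply shows how the names above are composed:

```shell
# Compose service names the way "docker stack deploy" does: <stack>_<service>.
STACK=notes
for svc in db-notes db-userauth redis svc-notes svc-userauth; do
  echo "${STACK}_${svc}"
done
```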

The service ps command lists information about the tasks deployed for the service. Remember that a task is essentially the same as a running container. We see here that one instance of the svc-notes container has been deployed, as expected, on the notes-public host.

Sometimes, the notes_svc-notes service doesn’t launch, and instead, we’ll see the following message:

$ docker service ps notes_svc-notes



nt5rmgv1cf0q notes_svc-notes.1 Running Pending 9 minutes ago
"no suitable node (scheduling …"
The error, no suitable node, means that the swarm was not able to find a node that matches the placement criteria. In this case, the type=public label might not have been properly set.
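For context, the placement constraint responsible for this behavior looks something like the following in the stack file (a sketch; the surrounding service definition is abbreviated):

```yaml
services:
  svc-notes:
    deploy:
      placement:
        constraints:
          - node.labels.type == public
```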

The following command is helpful:

$ docker node inspect notes-public
[
    {
        ...
        "Spec": {
            "Labels": {},
            "Role": "manager",
            "Availability": "active"
        },
        ...
    }
]

Notice that the Labels entry is empty. In such a case, you can add the label by running this command:

$ docker node update --label-add type=public notes-public


As soon as this is run, the swarm will place the svc-notes service on the notes-public node.

If this happens, it may be useful to add the following command to the user_data script for aws_instance.public, just ahead of setting the type=public label:

“sleep 20”,

It would appear that this provides a small window of opportunity to allow the swarm to establish itself.
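In context, the user_data command list would then look roughly like this sketch; the surrounding entries and the exact label-setting command are assumptions, not taken from the actual script:

```
user_data = [
  ...
  "sleep 20",
  "docker node update --label-add type=public notes-public",
  ...
]
```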

2.2. Diagnosing a failure to launch the database services

Another possible deployment problem is that the database services might fail to launch, and the notes-private-db1 node might become Unavailable. Refer back to the docker node ls output and you will see a column marked Status. Normally, this column says Reachable, meaning that the swarm can reach and communicate with the swarm agent on that node. But with the deployment as it stands, this node might instead show an Unavailable status, and in the docker service ls output, the database services might never show as having deployed.

With remote access from our laptop, we can run the following command:

$ docker service ps notes_db-notes 

The output will tell you the current status, such as any error in deploying the service. However, to investigate connectivity with the EC2 instances, we must log in to the notes-public instance as follows:

$ ssh ubuntu@PUBLIC-IP-ADDRESS 

That gets us access to the public EC2 instance. From there, we can try to ping the notes-private-db1 instance, as follows:

ubuntu@notes-public:~$ ping PRIVATE-IP-ADDRESS

PING ( 56(84) bytes of data.

64 bytes from icmp_seq=1 ttl=64 time=0.481 ms


This should work, but the output from docker node ls may show the node as Unreachable. Ask yourself: what happens if a computer runs out of memory? Then, recognize that we’ve deployed two database instances to an EC2 instance that has only 1 GB of memory, the memory capacity of t2.micro EC2 instances as of the time of writing. Ask yourself whether it is possible that the services you’ve deployed to a given server have overwhelmed that server.

To test that theory, make the following change to the Terraform definition of the database instance:

resource "aws_instance" "private-db1" {
  ...
  instance_type = "t2.medium" // var.instance_type
  ...
}
This changes the instance type from t2.micro to t2.medium, or even t2.large, thereby giving the server more memory.

To implement this change, run terraform apply to update the configuration. If the swarm does not automatically correct itself, then you may need to run terraform destroy and then run through the setup again, starting with terraform apply.

Once the notes-private-db1 instance has sufficient memory, the databases should successfully deploy.

In this section, we deployed the Notes application stack to the swarm cluster on AWS. We also talked a little about how to verify the fact that the stack deployed correctly, and how to handle some common problems.

Next, we have to test the deployed Notes stack to verify that it works on AWS.

3. Testing the deployed Notes application

Having set up everything required to deploy Notes to AWS using Docker Swarm, we have done so. That means our next step is to put Notes through its paces. We’ve done enough ad hoc testing on our laptop to have confidence that it works, but the Docker swarm deployment might turn up some issues.

In fact, the deployment we just made very likely has one or two problems. We can learn a lot about AWS and Docker Swarm by diagnosing those problems together.

The first test is obviously to open the Notes application in the browser. Among the outputs from running terraform apply was a value labeled ec2-public-dns. This is the domain name for the notes-public EC2 instance. If we paste that domain name into our browser, the Notes application should appear.

However, we cannot do anything because there are no user IDs available to log in with.

3.1. Logging in with a regular account on Notes

Obviously, in order to test Notes, we must log in and add some notes, make some comments, and so forth. It will be instructive to log in to the user authentication service and use cli.mjs to add a user ID.

The user authentication service is on one of the private EC2 instances, and its port is purposely not exposed to the internet. We could change the configuration to expose its port and then run cli.mjs from our laptop, but that would be a security problem and we need to learn how to access the running containers anyway.

We can find out which node the service is deployed on by using the following command:

$ docker service ps notes_svc-userauth



b8jf5q8xlbs5 notes_svc-userauth.1 notes-private-svc1 Running

Running 31 minutes ago 

The notes_svc-userauth task has been deployed to notes-private-svc1, as expected.

To run cli.mjs, we must get shell access inside the container. Since it is deployed on a private instance, this means that we must first SSH to the notes-public instance; from there, SSH to the notes-private-svc1 instance; and from there, run the docker exec command to launch a shell in the running container, as illustrated in the following code block:

$ ssh ubuntu@PUBLIC-IP-ADDRESS

ubuntu@notes-public:~$ ssh -i notes-app-key-pair.pem ubuntu@PRIVATE-IP-ADDRESS

ubuntu@notes-private-svc1:~$ docker ps | grep userauth


userauth:latest "docker-entrypoint.s…" 37 minutes ago Up 37 minutes
5858/tcp notes_svc-userauth.1.b8jf5q8xlbs5b8xk7qpkz9a3w

ubuntu@notes-private-svc1:~$ docker exec -it notes_svc-userauth.1.b8jf5q8xlbs5b8xk7qpkz9a3w bash


We SSH’d to the notes-public server and, from there, SSH’d to the notes-private-svc1 server. On that server, we ran docker ps to find out the name of the running container. Notice that Docker generated a container name that includes a coded string, called a nonce, which guarantees that the container name is unique. With that container name, we ran docker exec -it … bash to get a root shell inside the container.
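As an aside, OpenSSH can collapse this two-hop login into a single command via ProxyJump. A hypothetical ~/.ssh/config fragment follows; the host aliases are invented, and it assumes the key pair file is also available on the laptop rather than only on the public instance:

```
# ~/.ssh/config (hypothetical aliases)
Host notes-public
    HostName PUBLIC-IP-ADDRESS
    User ubuntu

Host notes-private-svc1
    HostName PRIVATE-IP-ADDRESS
    User ubuntu
    IdentityFile ~/notes-app-key-pair.pem
    ProxyJump notes-public
```

With that in place, ssh notes-private-svc1 reaches the private instance directly, tunneling through notes-public.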

Once there, we can run the following command:

root@e7398953b808:/userauth# node cli.mjs add --family-name Einarsdottir --given-name Ashildr --email me@stolen.tardis --password w0rd me

Created {
  id: 'me',
  username: 'me',
  provider: 'local',
  familyName: 'Einarsdottir',
  givenName: 'Ashildr',
  middleName: null,
  emails: [ 'me@stolen.tardis' ],
  photos: []
}


This verifies that the user authentication server works and that it can communicate with the database. To verify this even further, we can access the database instance, as follows:

ubuntu@notes-public:~$ ssh -i notes-app-key-pair.pem ubuntu@

ubuntu@notes-private-db1:~$ docker exec -it notes_db-userauth.1.0b274ges82otektamyq059x7w mysql -u userauth -p --socket


Enter password:

From there, we can explore the database and see that, indeed, Ashildr’s user ID exists.

With this user ID set up, we can now use our browser to visit the Notes application and log in with that user ID.

3.2. Diagnosing an inability to log in with Twitter credentials

The next step will be to test logging in with Twitter credentials. Remember that earlier, we said to ensure that the TWITTER_CALLBACK_HOST variable has the domain name of the EC2 instance, and likewise that the Twitter application configuration does as well.

Even with those settings in place, we might run into a problem. Instead of logging in, we might get an error page with a stack trace, starting with the message: Failed to obtain request token.

There are a number of possible issues that can cause this error. For example, it can occur if the Twitter authentication tokens were not deployed. However, if you followed the directions, they will be in place.

In notes/appsupport.mjs, there is a function, basicErrorHandler, which will be invoked by this error. In that function, add this line of code:

debug('basicErrorHandler err= ', err);

This will print the full error, including the originating error that caused the failure. You may see the following message printed: getaddrinfo EAI_AGAIN. That may be puzzling, because the domain name in question is certainly available. However, it might not be resolvable inside the svc-notes container, due to the DNS configuration.

From the notes-public instance, we will be able to ping that domain name, as follows:

ubuntu@notes-public:~$ ping

PING ( 56(84) bytes of data.

64 bytes from icmp_seq=1 ttl=38 time=22.1 ms 

However, if we attempt this inside the svc-notes container, this might fail, as illustrated in the following code snippet:

ubuntu@notes-public:~$ docker exec -it notes_svc-notes.1.et3b1obkp9fup5tj7bdco3188 bash
root@c2d002681f61:/notesapp# ping
… possible failure

Ideally, this will work from inside the container as well. If this fails inside the container, it means that the Notes service cannot reach Twitter to handle the OAuth dance required to log in with Twitter credentials.

The problem is that, in this case, Docker set up an incorrect DNS configuration, and the container was unable to make DNS queries for many domain names. The Docker Compose documentation suggests using the following code in the service definition:
dns:
  - "8.8.8.8"
  - "8.8.4.4"
These two DNS servers are operated by Google, and indeed this solves the problem. Once this change has been made, you should be able to log in to Notes using Twitter credentials.

In this section, we tested the Notes application and discussed how to diagnose and remedy a couple of common problems. While doing so, we learned how to navigate our way around the EC2 instances and the Docker Swarm.

Let’s now see what happens if we change the number of instances for our services.

4. Scaling the Notes instances

By now, we have deployed the Notes stack to the cluster on our EC2 instances. We have tested everything and know that we have a correctly functioning system deployed on AWS. Our next task is to increase the number of instances and see what happens.

To increase the instances for svc-notes, edit compose-swarm/docker-compose.yml as follows:

services:
  svc-notes:
    ...
    deploy:
      replicas: 2

This increases the number of replicas. Because of the existing placement constraints, both instances will deploy to the node with a type label of public. To update the services, it’s just a matter of rerunning the following command:

$ docker stack deploy --with-registry-auth --compose-file docker-compose.yml notes

Ignoring unsupported options: build, restart

Updating service notes_svc-userauth (id: wjugeeaje35v3fsgq9t0r8t98)

Updating service notes_db-notes (id: ldfmq3na5e3ofoyypub3ppth6)

Updating service notes_svc-notes (id: pl94hcjrwaa1qbr9pqahur5aj)

Updating service notes_redis (id: lrjne8uws8kqocmr0ml3kw2wu)

Updating service notes_db-userauth (id: lkbj8ax2cj2qzu7winx4kbju0)


Earlier, this command described its actions with the word Creating, and this time it used the word Updating. This means that the services are being updated with whatever new settings are in the stack file.

After a few minutes, you may see this:

$ docker service ls

ID           NAME              MODE    REPLICAS IMAGE PORTS

ldfmq3na5e3o notes_db-notes replicated 1/1 mysql/mysql-server:8.0

lkbj8ax2cj2q notes_db-userauth replicated 1/1 mysql/mysql-server:8.0

lrjne8uws8kq notes_redis replicated 1/1 redis:5.0

pl94hcjrwaa1 notes_svc-notes   replicated 2/2 *:80->3000/tcp

wjugeeaje35v notes_svc-userauth replicated 1/1 

And indeed, it shows two instances of the svc-notes service. The 2/2 notation says that two instances are currently running out of the two instances that were requested.

To view the details, run the following command:

$ docker service ps notes_svc-notes


As we saw earlier, this command lists to which swarm nodes the service has been deployed. In this case, we’ll see that both instances are on notes-public, due to the placement constraints.

Another useful command is the following:

$ docker ps



notes:latest "docker-entrypoint.s…" 7 minutes ago Up 7 minutes
3000/tcp notes_svc-notes.2.zo2mdxk9fuy33ixe0245y7uii

notes:latest "docker-entrypoint.s…" 15 minutes ago Up 15 minutes
3000/tcp notes_svc-notes.1.cc34q3yfeumx0b57y1mnpskar

Ultimately, each service deployed to a Docker swarm contains one or more running containers.

You’ll notice that this shows svc-notes listening on port 3000. In the environment setup, we did not set the PORT variable, so svc-notes defaults to listening on port 3000. Refer back to the output of docker service ls, and you should see this: *:80->3000/tcp, meaning that Docker is mapping port 80 to port 3000.
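The fallback to 3000 follows the common pattern of defaulting when an environment variable is unset. Sketched in shell (the real default lives in the Notes startup code, not in a shell script):

```shell
# If PORT is unset or empty, fall back to 3000 -- mirroring the default
# behavior of svc-notes described above.
PORT="${PORT:-3000}"
echo "listening on port ${PORT}"
```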

That is due to the following setting in docker-swarm/docker-compose.yml:

services:
  svc-notes:
    ...
    ports:
      - "80:3000"

This says to publish port 80 and to map it to port 3000 on the containers.

In the Docker documentation, we learned that services deployed in a swarm are reachable via the so-called routing mesh. Connecting to a published port routes the connection to one of the containers handling that service. As a result, Docker acts as a load balancer, distributing traffic among the service instances you configure.
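Compose files can make the routing-mesh choice explicit through the long-form ports syntax. The following sketch is equivalent to the short "80:3000" form; the comments reflect the documented meanings of each key:

```yaml
ports:
  - target: 3000     # port inside the container
    published: 80    # port exposed on the swarm nodes
    protocol: tcp
    mode: ingress    # the routing-mesh default; "host" would bypass the mesh
```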

In this section, we have—finally—deployed the Notes application stack to a cloud hosting environment we built on AWS EC2 instances. We created a Docker swarm, configured the swarm, created a stack file with which to deploy our services, and we deployed to that infrastructure. We then tested the deployed system and saw that it functioned well.

With that, we can wrap up this chapter.

Source: Herron David (2020), Node.js Web Development: Server-side web development made easy with Node 14 using practical examples, Packt Publishing.
