Setting up ECR repositories for Notes Docker images

We have created Docker images to encapsulate the services making up the Notes application. So far, we’ve used those images to instantiate Docker containers on our laptop. Deploying containers on the AWS infrastructure requires the images to be hosted in a Docker image repository.

This requires a build procedure by which the svc-notes and svc-userauth images are correctly pushed to the container repository on the AWS infrastructure. We will go over the commands required and create a few shell scripts to record those commands.

A site such as Docker Hub is what’s known as a Docker Registry. Registries are web services that store Docker images by hosting Docker image repositories. When we used the redis or mysql/mysql-server images earlier, we were using Docker image repositories located on the Docker Hub Registry.

AWS offers a Docker image registry, the Elastic Container Registry (ECR). An ECR instance is available for each account in each AWS region. All we have to do is log in to the registry, create repositories, and push images to the repositories.

Because it is important to not run Docker build commands on the Swarm infrastructure, execute this command:

$ docker context use default 

This command switches the Docker context to the local system.

To hold the scripts and other files related to managing AWS ECR repositories, create a directory named ecr as a sibling to notes, users, and terraform-swarm.

There are several commands required for a build process to create Docker images, tag them, and push them to a remote repository. To simplify things, let’s create a few shell scripts, as well as PowerShell scripts, to record those commands.

The first task is to connect with the AWS ECR service. To this end, create a file named login.sh containing the following:

aws ecr get-login-password --profile $AWS_PROFILE --region $AWS_REGION \
    | docker login --username AWS \
        --password-stdin $AWS_USER.dkr.ecr.$AWS_REGION.amazonaws.com

This command, and others, can be found in the ECR dashboard. If you navigate to that dashboard and then create a repository there, a button labeled View push commands lists this and other useful commands. We have substituted a few variable names to make the commands configurable.

If you are instead using Windows PowerShell, AWS recommends the following:

(Get-ECRLoginCommand).Password | docker login --username AWS --password-stdin $env:AWS_USER.dkr.ecr.$env:AWS_REGION.amazonaws.com


This relies on the AWS Tools for PowerShell package, which appears to offer some powerful tools that are useful with AWS services. In testing, however, this command was not found to work very well.

Instead, the following command was found to work much better, which you can put in a file named login.ps1:

aws ecr get-login-password --region $env:AWS_REGION | docker login --username AWS --password-stdin $env:AWS_USER.dkr.ecr.$env:AWS_REGION.amazonaws.com

This is the same command as is used for Unix-like systems, but with Windows-style references to environment variables.

Several environment variables are being used, but just what are those variables, and how do we set them?

3. Using environment variables for AWS CLI commands

Look carefully and you will see that some environment variables are being used. The AWS CLI commands know about those environment variables and will use them instead of command-line options. The environment variables we’re using are the following:

  • AWS_PROFILE: The AWS profile to use with this project.
  • AWS_REGION: The AWS region to deploy the project to.
  • AWS_USER: The numeric user ID for the account being used. This ID is available on the IAM dashboard page for the account.

The AWS command-line tools will use those environment variables in place of the command-line options. Earlier, we discussed using the AWS_PROFILE variable instead of the --profile option. The same holds true for other command-line options.

This means that we need an easy way to set those variables. These Bash commands can be recorded in a shell script like this, which you could store as env-us-west-2:

export AWS_REGION=us-west-2

export AWS_PROFILE=notes-app

export AWS_USER=09E1X6A8MPLE

This script is, of course, following the syntax of the Bash shell. For other command environments, you must transliterate it appropriately. To set these variables in the Bash shell, run the following command:

$ chmod +x env-us-west-2

$ . ./env-us-west-2 

For other command environments, again transliterate appropriately. For example, in Windows and in PowerShell, the variables can be set with these commands:

$env:AWS_USER = "09E1X6A8MPLE"

$env:AWS_PROFILE = "notes-app"

$env:AWS_REGION = "us-west-2"

These should be the same values, just in a syntax recognized by Windows.
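Before running any of the ECR scripts, it can help to verify that all three variables are actually set. The following is a hypothetical sanity-check helper, not one of the book's scripts; the export lines reproduce the placeholder values from env-us-west-2:

```shell
# env-check: fail fast if any variable the ECR scripts rely on is unset.
# The export lines repeat the placeholder values shown above.
export AWS_REGION=us-west-2
export AWS_PROFILE=notes-app
export AWS_USER=09E1X6A8MPLE

for name in AWS_PROFILE AWS_REGION AWS_USER; do
    # Indirectly read the variable named by $name.
    eval "value=\$$name"
    if [ -z "$value" ]; then
        echo "missing environment variable: $name" >&2
        exit 1
    fi
done
echo "environment ok"
```

Running this after sourcing the environment script prints "environment ok", or names the first missing variable and exits non-zero.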

We have defined the environment variables being used. Let’s now get back to defining the process to build Docker images and push them to the ECR.

4. Defining a process to build Docker images and push them to the AWS ECR

We were exploring a build procedure for pushing Docker containers to ECR repositories until we started talking about environment variables. Let’s return to the task at hand, which is to easily build Docker images, create ECR repositories, and push the images to the ECR.

As mentioned at the beginning of this section, make sure to switch to the default Docker context. We must do so because it is a policy with Docker Swarm to not use the swarm hosts for building Docker images.

To build the images, let’s add a file named build.sh containing the following:

( cd ../notes && npm run docker-build )

( cd ../users && npm run docker-build )

This handles running docker build commands for both the Notes and user authentication services. It is expected to be executed in the ecr directory and takes care of executing commands in both the notes and users directories.
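The docker-build step invoked by this script is an npm script defined in each service's package.json. As a sketch (the exact entry may differ in your copy of the sources), the notes entry corresponds to the docker build command visible in the build output shown later in this section:

```json
{
  "scripts": {
    "docker-build": "docker build -t svc-notes ."
  }
}
```

The users package would carry the same entry with svc-userauth as the image name.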

Let’s now create scripts to create and delete a pair of repositories to hold our images. We have two images to upload to the ECR, and therefore we create two repositories.

Create a file named create.sh containing the following:

aws ecr create-repository --repository-name svc-notes --image-scanning-configuration scanOnPush=true

aws ecr create-repository --repository-name svc-userauth --image-scanning-configuration scanOnPush=true

Also, create a companion file named delete.sh containing the following:

aws ecr delete-repository --force --repository-name svc-notes

aws ecr delete-repository --force --repository-name svc-userauth

Between these scripts, we can create and delete the ECR repositories for our Docker images. These scripts are directly usable on Windows; simply change the filenames to create.ps1 and delete.ps1.

In aws ecr delete-repository, the --force option means to delete the repositories even if they contain images.

The scripts we’ve written so far are executed in the following order:

$ sh login.sh
Login Succeeded

$ sh create.sh


{
    "repository": {
        "repositoryArn": "arn:aws:ecr:us-west-2:098106984154:repository/svc-notes",
        "registryId": "098106984154",
        "repositoryName": "svc-notes",
        "repositoryUri": "098106984154.dkr.ecr.us-west-2.amazonaws.com/svc-notes",
        "createdAt": "2020-06-07T12:34:03-07:00",
        "imageTagMutability": "MUTABLE",
        "imageScanningConfiguration": {
            "scanOnPush": true
        }
    }
}
{
    "repository": {
        "repositoryArn": "arn:aws:ecr:us-west-2:098106984154:repository/svc-userauth",
        "registryId": "098106984154",
        "repositoryName": "svc-userauth",
        "repositoryUri": "098106984154.dkr.ecr.us-west-2.amazonaws.com/svc-userauth",
        "createdAt": "2020-06-07T12:34:05-07:00",
        "imageTagMutability": "MUTABLE",
        "imageScanningConfiguration": {
            "scanOnPush": true
        }
    }
}
The aws ecr create-repository command outputs these descriptors for the image repositories. The important piece of data to note is the repositoryUri value. This will be used later in the Docker stack file to name the image to be retrieved.
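Because the repositoryUri follows a fixed pattern (account ID, region, and repository name), it can also be derived from our environment variables rather than copied out of the JSON. A small sketch, using the account ID from the output above:

```shell
# Derive the ECR repository URI from the same variables the scripts use.
# The account ID here is the registryId from the create-repository output.
AWS_USER=098106984154
AWS_REGION=us-west-2
REPO_URI="$AWS_USER.dkr.ecr.$AWS_REGION.amazonaws.com/svc-notes"
echo "$REPO_URI"
```

This prints 098106984154.dkr.ecr.us-west-2.amazonaws.com/svc-notes, matching the repositoryUri value.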

The create.sh script only needs to be executed once. Beyond creating the repositories, the workflow is as follows:

  • Build the images, for which we’ve already created a script named build.sh.
  • Tag the images with the ECR repository Uniform Resource Identifier (URI).
  • Push the images to the ECR repository.

For the latter two steps, we still have some scripts to create. Create a file named tag.sh containing the following:

docker tag svc-notes:latest $AWS_USER.dkr.ecr.$AWS_REGION.amazonaws.com/svc-notes:latest

docker tag svc-userauth:latest $AWS_USER.dkr.ecr.$AWS_REGION.amazonaws.com/svc-userauth:latest

The docker tag command we have here takes svc-notes:latest, or svc-userauth:latest, and adds what’s called a target image to the local image storage area. The target image name we’ve used is the same as what will be stored in the ECR repository.

For Windows, you should create a file named tag.ps1 using the same commands, but with Windows-style environment variable references.

Then, create a file named push.sh containing the following:

docker push $AWS_USER.dkr.ecr.$AWS_REGION.amazonaws.com/svc-notes:latest

docker push $AWS_USER.dkr.ecr.$AWS_REGION.amazonaws.com/svc-userauth:latest

The docker push command causes the target image to be sent to the ECR repository. And again, for Windows, create a file named push.ps1 containing the same commands but with Windows-style environment variable references.

In both the tag and push scripts, we are using the repository URI value, but have plugged in the two environment variables. This will make it generalized in case we deploy Notes to another AWS region.
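With all the pieces in place, the individual scripts can be chained in order. The following wrapper is a hypothetical convenience, not one of the book's scripts; for illustration it defaults to a dry run that prints each step rather than executing it:

```shell
# release: run the ECR workflow scripts in order.
# DRY_RUN defaults to 1 for illustration; set DRY_RUN=0 to really run.
DRY_RUN="${DRY_RUN:-1}"
STEPS=""
run() {
    STEPS="$STEPS $1"
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: sh $1"
    else
        sh "$1"
    fi
}
run login.sh
run build.sh
run tag.sh
run push.sh
```

In dry-run mode this prints the four steps in order, which makes the intended sequence explicit without touching AWS.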

We have the workflow implemented as scripts, so let’s see now how it is run, as follows:

$ sh -x build.sh

+ cd ../notes

+ npm run docker-build 

> notes@0.0.0 docker-build /Users/David/Chapter12/notes

> docker build -t svc-notes . 

Sending build context to Docker daemon 84.12MB

Step 1/25 : FROM node:14
 ---> a5a6a9c32877
Step 2/25 : RUN apt-get update -y && apt-get -y install curl python build-essential git ca-certificates
 ---> Using cache
 ---> 7cf57f90c8b8
Step 3/25 : ENV DEBUG="notes:*,messages:*"
 ---> Using cache
 ---> 291652c87cce

Successfully built e2f6ec294016
Successfully tagged svc-notes:latest

+ cd ../users

+ npm run docker-build 

> user-auth-server@1.0.0 docker-build /Users/David/Chapter12/users

> docker build -t svc-userauth . 

Sending build context to Docker daemon 11.14MB

Successfully built 294b9a83ada3
Successfully tagged svc-userauth:latest

This builds the Docker images. When we run docker build, it stores the built image in an area on our laptop where Docker maintains images. We can inspect that area using the docker images command, like this:

$ docker images svc-userauth

REPOSITORY     TAG      IMAGE ID       CREATED       SIZE
svc-userauth   latest   b74f92629ed1   3 hours ago   1.11GB


The docker build command automatically adds the latest tag if we do not specify one.

Then, to push the images to the ECR repositories, we execute these commands:

$ sh tag.sh

$ sh push.sh

The push refers to repository [098106984154.dkr.ecr.us-west-2.amazonaws.com/svc-notes]

6005576570e9: Pushing 18.94kB

cac3b3d9d486: Pushing 7.014MB/96.89MB

107afd8db3a4: Pushing 14.85kB

df143eb62095: Pushing 17.41kB

6b61442be5f8: Pushing 3.717MB

0c719438462a: Waiting

8c98a57451eb: Waiting

latest: digest: sha256:1ea31c507e9714704396f01f5cdad62525d9694e5b09e2e7b08c3cb2ebd6d6ff size: 4722

The push refers to repository [098106984154.dkr.ecr.us-west-2.amazonaws.com/svc-userauth]

343a794bb161: Pushing 9.12MB/65.13MB

51f07622ae50: Pushed

b12bef22bccb: Pushed


Since the images are rather large, it will take a long time to upload them to the AWS ECR. We should add a task to the backlog to explore ways to trim Docker image sizes. In any case, expect this to take a while.
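On that note about image sizes, one common technique worth putting on the backlog is a multi-stage build, where dependencies are installed in a build stage and only the results are copied into a smaller runtime image. The following is only a sketch; the stage layout, paths, and start command are assumptions, not the chapter's actual Dockerfile:

```dockerfile
# Build stage: install dependencies with the full toolchain available.
# (Hypothetical paths and start command, for illustration only.)
FROM node:14 AS build
WORKDIR /notesapp
COPY package*.json ./
RUN npm install --production
COPY . .

# Runtime stage: copy the built app into a slimmer base image.
FROM node:14-slim
WORKDIR /notesapp
COPY --from=build /notesapp /notesapp
EXPOSE 3000
CMD [ "node", "./app.mjs" ]
```

Because only the second stage ends up in the pushed image, the build toolchain never reaches the ECR repository.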

After a period of time, the images will be uploaded to the ECR repositories, and you can inspect the results on the ECR dashboard.

Once the Docker images are pushed to the AWS ECR repository, we no longer need to stay with the default Docker context. You will be free to run the following command at any time:

$ docker context use ec2

Remember that swarm hosts are not to be used for building Docker images. At the beginning of this section, we switched to the default context so that builds would occur on our laptop.

In this section, we learned how to set up a build procedure to push our Docker images to repositories on the AWS ECR service. This included recording the commands as shell scripts and package.json scripts so the procedure is easy to repeat.

Our next step is learning how to use Docker compose files to describe deployment on Docker Swarm.

Source: Herron David (2020), Node.js Web Development: Server-side web development made easy with Node 14 using practical examples, Packt Publishing.
