Deploy Node JS application using Docker



In this post, I'll show you how to easily create a Docker container for your Node.js app. But first, what is Docker?

Docker is a platform that allows you to create, run, and manage applications using containers. Containers are isolated environments that contain everything an application needs to run, such as code, libraries, and configuration files. Containers are portable, scalable, and efficient, making them ideal for developing distributed applications that work in different environments. You can use Docker to build, test, and deploy your applications faster and easier.

Once you've containerized your application code, you can deploy it anywhere you want, whether that's an EC2 instance, a serverless platform, or Kubernetes. In this post, I'll show you a basic deployment on a local system or a Linux-based EC2 instance.

Let's get started!

First, you need a functional Node.js app, with the docker and docker-compose CLIs installed on the host machine. Then you need to create two files in the root of your project directory: Dockerfile (without any extension) and docker-compose.yml
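Before going further, you can sanity-check that the required CLIs are present on the host. A quick sketch (the exact install steps depend on your distro):

```shell
# Check that the Docker tooling is available before proceeding.
for cmd in docker docker-compose; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "$cmd: installed"
  else
    echo "$cmd: missing"
  fi
done
```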

The Dockerfile will look like this:

# Dockerfile
FROM node:16.14.2-alpine AS base

WORKDIR /app

COPY [ "package.json", "yarn.lock*", "./" ]

FROM base AS dev
ENV NODE_ENV=dev
RUN yarn install --frozen-lockfile
COPY . .

RUN yarn global add pm2

CMD [ "pm2-runtime", "index.js" ]

Our Dockerfile begins by pulling a base image of Node.js version 16.14.2 running on Alpine Linux. We name this stage of our build base.

Next, we set the working directory in our Docker image to /app. All subsequent commands in our Dockerfile will be run from this directory.

We then copy our package.json and yarn.lock files into our Docker image. These files list our application’s dependencies.

We start a new stage of our build, which we name dev. This stage is based on our base image.

We set an environment variable NODE_ENV to dev. This can be used by our application to determine the current environment and adjust its behavior accordingly.

We run yarn install to install our application’s dependencies. The --frozen-lockfile option ensures that yarn doesn’t update the yarn.lock file and fails if an update would be needed. This is important for ensuring that the installed dependencies match exactly those specified in the yarn.lock file.

We copy all the files from our local directory into our Docker image. This includes our application code and any other files in our local directory.
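Since COPY . . copies everything in the directory, it's a good idea to also add a .dockerignore file so that node_modules and other local artifacts don't end up in the image. A typical example (adjust to your project):

```
# .dockerignore
node_modules
npm-debug.log
.git
```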

We install pm2 globally in our Docker image. PM2 is a process manager for Node.js applications and provides features like keeping applications alive forever and reloading them without downtime.

Finally, we specify the command to run when our Docker container starts. In this case, we’re using pm2-runtime to run our index.js file.

Now, we need to define how we run this Docker image in a container. We'll use a docker-compose.yml file. Let's assume the Node.js app depends on MongoDB to store its data; we can run that in a container too. Bundling the database alongside the app isn't good practice for production, but I want to show you how we can combine multiple services in a single Compose file. So, here's the docker-compose.yml file:

version: "3.9"
services:
  mongodb:
    image: mongo
    container_name: mongodb
    restart: always
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: example
    volumes:
      - mongodb_data:/data/db
    networks:
      - app-network
    ports:
      - 27017:27017

  app:
    build:
      context: .
      dockerfile: Dockerfile
      target: dev
    ports:
      - 8001:8001
    volumes:
      - ./src:/app/src
    depends_on:
      - mongodb
    env_file:
      - .env.${NODE_ENV:-development}
    networks:
      - app-network

volumes:
  mongodb_data:

networks:
  app-network:



Let’s break down this docker-compose.yml file:


The version field specifies the version of the Docker Compose file format. This version (3.9) supports most recent features.

The services section is where we define the services or containers to be created. In our example, we have two services: mongodb and app.

Mongo service

The mongodb service is based on the mongo Docker image. The container_name field specifies the name of the container when it is launched. The restart: always configuration ensures that the container will always restart if it stops.

The environment section is used to set environment variables in the container. In this case, we’re setting the root username and password for MongoDB.

The volumes field maps the volume mongodb_data to /data/db in the container, which is where MongoDB stores its data.

The networks field indicates that this service is part of the app-network network.

Finally, the ports field maps port 27017 of the container to port 27017 of the host.

App service

The app service is built using a Dockerfile in the current directory, targeting the dev stage of a multi-stage build.

The ports field maps port 8001 of the container to port 8001 of the host.

The volumes field maps the src directory from the host to the /app/src directory in the container.

The depends_on field indicates that this service depends on the mongodb service.

The env_file field specifies a file from which to read environment variables.
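With NODE_ENV unset, Compose reads .env.development. As a hypothetical example (the variable names here are placeholders; use whatever your app actually reads):

```
# .env.development
PORT=8001
MONGO_HOST=mongodb
MONGO_USER=root
MONGO_PASS=example
```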

Finally, the networks field indicates that this service is part of the app-network network.

Volume & Network

The volumes section defines the volumes used in services. In this case, we have one volume: mongodb_data.

The networks section defines the networks used in services. In this case, we have one network: app-network.
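One thing worth noting: inside the app container, the database is reachable by its Compose service name (mongodb) on app-network, not by localhost. A sketch of building the connection string in the app (the environment variable names are placeholders, and the credentials assume the root user set in docker-compose.yml):

```javascript
// Build a MongoDB connection URI from the environment.
// Inside the Compose network, the hostname is the service name "mongodb".
const user = process.env.MONGO_USER || 'root';
const pass = process.env.MONGO_PASS || 'example';
const host = process.env.MONGO_HOST || 'mongodb';
const uri = `mongodb://${user}:${pass}@${host}:27017/app?authSource=admin`;

console.log(uri);
// pass `uri` to your MongoDB client of choice
```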

Last step

Now that we have these two essential files, we just need to build and run the Docker images on the host machine. To do that, push all your code to the target machine (e.g. an EC2 instance), go to the root of the project directory in a terminal, and run the commands below:

docker-compose build
docker-compose up -d # add -d to run the containers in detached mode

To check the application logs, write this command first to list the running containers:

docker ps

Find the desired container from the list of containers, and copy the container id, then write this command to get the container logs:

docker container logs -f <container-id> # add -f to follow the logs continuously

To stop the running container, write this command from the root of the project directory:

docker-compose down

PS: you may need to prefix these commands with sudo, depending on your system and Docker permissions.

That's how you can easily deploy a Node.js application to a host machine using Docker.
