In this post, I'll show you how to easily create a Docker container for your Node.js app. But first, what is Docker? In short, Docker is a platform that packages your application code, together with its dependencies, into a portable, isolated unit called a container. Once you've containerized your application code, you can deploy it anywhere you want, whether that's an EC2 instance, a serverless platform, or Kubernetes. In this post, I'll show you a basic deployment on a local system or a Linux-based EC2 instance.
Let's get started!
First, you need a functional Node.js app, and the docker and docker-compose CLIs installed on the host machine. Then create two files in the root of your project directory: Dockerfile (without any extension) and docker-compose.yml. The Dockerfile will look like this:
# Dockerfile
FROM node:16.14.2-alpine AS base
WORKDIR /app
COPY [ "package.json", "yarn.lock*", "./" ]
FROM base AS dev
ENV NODE_ENV=dev
RUN yarn install --frozen-lockfile
COPY . .
RUN yarn global add pm2
CMD [ "pm2-runtime", "index.js" ]
Our Dockerfile begins by pulling a base image of Node.js version 16.14.2 running on Alpine Linux. We name this stage of our build base.
Next, we set the working directory in our Docker image to /app. All subsequent commands in our Dockerfile will be run from this directory.
We then copy our package.json and yarn.lock files into our Docker image. These files list our application's dependencies. Copying them before the rest of the code lets Docker cache the dependency-installation layer, so it is only rebuilt when the dependencies change.
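For context, a hypothetical package.json for such an app might look like this (the name and dependency versions are illustrative, not prescriptive):
{
  "name": "node-docker-app",
  "version": "1.0.0",
  "main": "index.js",
  "dependencies": {
    "mongodb": "^4.5.0"
  }
}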
We start a new stage of our build, which we name dev. This stage is based on our base image.
We set an environment variable NODE_ENV to dev. This can be used by our application to determine the current environment and adjust its behavior accordingly.
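For example, here's a minimal sketch of how the app might branch on that variable (the file name and settings are purely illustrative):
// config.js - choose settings based on the NODE_ENV set in the Dockerfile
const env = process.env.NODE_ENV || 'dev';

const settings = {
  dev: { logLevel: 'debug' },
  production: { logLevel: 'error' },
};

module.exports = settings[env] || settings.dev;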
We run yarn install to install our application's dependencies. The --frozen-lockfile option ensures that yarn doesn't generate a new yarn.lock file and instead uses the existing one. This is important for ensuring that the installed dependencies match those specified in the yarn.lock file.
We copy all the files from our local directory into our Docker image. This includes our application code and any other files in our local directory.
We install pm2 globally in our Docker image. PM2 is a process manager for Node.js applications and provides features like keeping applications alive forever and reloading them without downtime.
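If you need more control over how PM2 runs your app, it can also be driven by an ecosystem file instead of a bare script name. A minimal sketch (the app name and instance count are assumptions for illustration, not part of our setup):
// ecosystem.config.js - optional PM2 process file
module.exports = {
  apps: [
    {
      name: 'node-app',   // hypothetical app name
      script: 'index.js',
      instances: 2,       // run two instances in cluster mode
      exec_mode: 'cluster',
    },
  ],
};
You would then point the startup command at the file instead: pm2-runtime ecosystem.config.js.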
Finally, we specify the command to run when our Docker container starts. In this case, we're using pm2-runtime to run our index.js file.
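For reference, here's a minimal sketch of what that index.js could look like; the handler and response are assumptions for illustration, but the port matches the one we'll map in docker-compose:
// index.js - a minimal HTTP server using Node's built-in http module
const http = require('http');

const port = process.env.PORT || 8001;

const server = http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify({ status: 'ok', env: process.env.NODE_ENV }));
});

server.listen(port, () => {
  console.log(`Server listening on port ${port}`);
});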
Now we need to define how to run this Docker image as a container. We'll use a docker-compose.yml file for that. Let's assume the Node.js app depends on MongoDB to store its data; we can containerize that too. Running your database alongside the app like this is not good practice for production, but I want to show you how we can combine multiple containers in a single Compose file. So, here's the docker-compose.yml file:
version: "3.9"
services:
mongodb:
image: mongo
container_name: mongodb
restart: always
environment:
MONGO_INITDB_ROOT_USERNAME: root
MONGO_INITDB_ROOT_PASSWORD: 123456
volumes:
- mongodb_data:/data/db
networks:
- app-network
ports:
- 27017:27017
app:
build:
context: .
dockerfile: Dockerfile
target: dev
ports:
- 8001:8001
volumes:
- ./src:/app/src
depends_on:
- mongodb
env_file:
- .env.${NODE_ENV:-development}
networks:
- app-network
volumes:
mongodb_data:
networks:
app-network:
Let's break down this docker-compose.yml file:
Version
The version field specifies the version of the Docker Compose file format. Version 3.9 supports the most recent features.
Services
The services section is where we define the services, or containers, to be created. In our example, we have two services: mongodb and app.
Mongo service
The mongodb service is based on the mongo Docker image. The container_name field specifies the name of the container when it is launched. The restart: always configuration ensures that the container will always restart if it stops.
The environment section is used to set environment variables in the container. In this case, we're setting the root username and password for MongoDB.
The volumes field maps the volume mongodb_data to /data/db in the container, which is where MongoDB stores its data.
The networks field indicates that this service is part of the app-network network.
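Because both services share this network, the app container can reach MongoDB using the service name as the hostname. A minimal sketch using the official mongodb driver (the file and database names are hypothetical placeholders):
// db.js - connect to the mongodb service over the shared Compose network
const { MongoClient } = require('mongodb');

// "mongodb" resolves to the mongodb container on app-network
const url = 'mongodb://root:123456@mongodb:27017';
const client = new MongoClient(url);

async function connect() {
  await client.connect();
  return client.db('app'); // hypothetical database name
}

module.exports = { connect };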
Finally, the ports field maps port 27017 of the container to port 27017 of the host.
App service
The app service is built using the Dockerfile in the current directory, targeting the dev stage of our multi-stage build.
The ports field maps port 8001 of the container to port 8001 of the host.
The volumes field maps the src directory from the host to the /app/src directory in the container, so code changes on the host are reflected inside the container without rebuilding the image.
The depends_on field indicates that this service depends on the mongodb service.
The env_file field specifies a file from which to read environment variables; here that is .env.development by default, or .env.<NODE_ENV> if NODE_ENV is set in the shell when you run Compose.
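For example, a hypothetical .env.development could look like this (the variable names and values are illustrative):
# .env.development - example environment file
PORT=8001
MONGO_URL=mongodb://root:123456@mongodb:27017/app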
Finally, the networks field indicates that this service is part of the app-network network.
Volume & Network
The volumes section defines the volumes used in the services. In this case, we have one volume: mongodb_data.
The networks section defines the networks used in the services. In this case, we have one network: app-network.
Last step
Now that we have these two essential files, we just need to build and run the Docker images on the host machine. To do that, push all your code to the desired machine (e.g. an EC2 instance), go to the root project directory in a terminal, and run the commands below:
docker-compose build
docker-compose up -d   # add -d to run the containers in detached (daemon) mode
To check the application logs, write this command first to list the running containers:
docker ps
Find the desired container from the list of containers, and copy the container id, then write this command to get the container logs:
docker container logs -f <container-id>   # add -f to follow the logs continuously
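Alternatively, since the services are named in the Compose file, you can tail logs by service name without looking up the container id:
docker-compose logs -f app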
To stop the running containers, write this command from the root of the project directory:
docker-compose down
PS: you may need to add sudo before these commands, depending on the system and your Docker permissions.
That's how you can easily deploy a Node.js application to the host machine using Docker.