Docker is an open-source platform that automates the deployment, scaling, and management of applications in lightweight, portable containers. Containers package an application and its dependencies together, ensuring consistency across different environments.
Docker simplifies the process of creating, deploying, and running applications by using containers, which allow developers to package up an application with all the parts it needs, such as libraries and other dependencies, and ship it all out as one package.
Containerization is a lightweight form of virtualization that involves encapsulating an application and its dependencies into a container. Unlike traditional virtual machines, containers share the host system’s kernel but run in isolated user spaces. This approach provides several advantages: containers start in seconds, consume far fewer resources than full virtual machines, and behave identically wherever they run.
Docker’s architecture is based on a client-server model, which includes several key components: the Docker Engine, the Docker Daemon (`dockerd`), the Docker Client (`docker`), Docker Images, and Docker Containers.
Docker Engine is the core component of Docker, providing the runtime environment for containers. It consists of several smaller components, chiefly the Docker Daemon (`dockerd`) and the Docker Client (`docker`).

The Docker Daemon (`dockerd`) runs on the host machine and is responsible for building images, creating and managing containers, and serving the Docker API.

The Docker Client (`docker`) is the primary way users interact with Docker. Key commands include:
- `docker build`: Builds an image from a Dockerfile.
- `docker pull`: Downloads an image from a registry.
- `docker run`: Creates and starts a container from an image.
- `docker ps`: Lists running containers.
- `docker stop`: Stops a running container.

Docker Images are templates used to create containers. Key concepts include:
common Dockerfile instructions such as `FROM` (base image), `RUN` (execute a command), and `COPY` (copy files into the image).

Docker Containers are instances of Docker Images. Key concepts include:
- `docker build` to create an image from the Dockerfile.
- `docker run` to start a container from the image.
- `docker ps`, `docker stop`, and `docker rm` to manage the container’s lifecycle.

Docker images are created from a series of instructions written in a Dockerfile. These instructions define what goes into the image, including the operating system, application code, dependencies, and any other necessary files.
Building an image with `docker build`: the `docker build` command creates an image from a Dockerfile.

Syntax: `docker build -t <image_name>:<tag> <path_to_dockerfile>`

Example: `docker build -t myapp:latest .`

Images are tagged at build time with the `-t` option: `docker build -t myapp:v1.0 .`
A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image. By default, Docker reads this file to automate the steps for creating a Docker image. Every Dockerfile begins with a `FROM` instruction naming the base image:

```dockerfile
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y python3
COPY . /app
WORKDIR /app
EXPOSE 5000
CMD ["python3", "app.py"]
```
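A Dockerfile like the one above is just a list of `INSTRUCTION arguments` lines. As an illustration only (not Docker's real parser, which also handles parser directives and heredocs), a few lines of Python can split a Dockerfile into its instructions:

```python
def parse_dockerfile(text):
    """Split Dockerfile text into (instruction, arguments) pairs.

    Simplified sketch: skips blank lines and comments, and joins
    lines that end in a backslash (line continuations).
    """
    joined = text.replace("\\\n", " ")  # fold continuation lines
    steps = []
    for line in joined.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        instruction, _, args = line.partition(" ")
        steps.append((instruction.upper(), args.strip()))
    return steps

example = """\
FROM ubuntu:20.04
RUN apt-get update && \\
    apt-get install -y python3
COPY . /app
WORKDIR /app
EXPOSE 5000
CMD ["python3", "app.py"]
"""

print([i for i, _ in parse_dockerfile(example)])
# → ['FROM', 'RUN', 'COPY', 'WORKDIR', 'EXPOSE', 'CMD']
```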
A complete example Dockerfile:

```dockerfile
# Use an official Python runtime as a parent image
FROM python:3.8-slim-buster

# Set the working directory in the container
WORKDIR /app

# Copy the current directory contents into the container at /app
COPY . /app

# Install any needed packages specified in requirements.txt
RUN pip install --no-cache-dir -r requirements.txt

# Make port 80 available to the world outside this container
EXPOSE 80

# Define environment variable
ENV NAME=World

# Run app.py when the container launches
CMD ["python", "app.py"]
```
Best practices for writing Dockerfiles:

- Prefer a small, specific base image, e.g. `FROM python:3.8-slim-buster`.
- Combine related `RUN` commands into a single command to reduce the number of layers:
```dockerfile
RUN apt-get update && apt-get install -y \
    package1 \
    package2 && \
    rm -rf /var/lib/apt/lists/*
```

- Copy dependency manifests before the rest of the source so the dependency-installation layer stays cached across builds:

```dockerfile
COPY requirements.txt /app/
RUN pip install --no-cache-dir -r /app/requirements.txt
```
The `.dockerignore` file: similar to `.gitignore`, this file tells Docker which files and directories to ignore when building an image. This reduces the image size and build time. Example `.dockerignore`:

```
.git
node_modules
Dockerfile
.dockerignore
```
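To see the effect of those patterns, here is a rough Python approximation of how ignore rules filter the build context before it is sent to the daemon (Docker's real matching follows Go's `filepath.Match` plus `**` and `!` exception rules, so this is only a sketch):

```python
from fnmatch import fnmatch

def is_ignored(path, patterns):
    """Rough approximation of .dockerignore matching."""
    for pattern in patterns:
        # A pattern matches the path itself or anything under it
        if fnmatch(path, pattern) or path.startswith(pattern.rstrip("/") + "/"):
            return True
    return False

patterns = [".git", "node_modules", "Dockerfile", ".dockerignore"]
context = [".git/config", "node_modules/express/index.js",
           "Dockerfile", "server.js"]
sent_to_daemon = [p for p in context if not is_ignored(p, patterns)]
print(sent_to_daemon)  # → ['server.js']
```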
- Remove packages that were only needed at build time:

```dockerfile
RUN apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false \
    package1 \
    package2
```
- Multi-stage builds use multiple `FROM` statements in your Dockerfile, reducing the final image size by copying only the necessary artifacts into the last stage:

```dockerfile
# Builder stage
FROM golang:1.16 as builder
WORKDIR /app
COPY . .
# CGO_ENABLED=0 produces a static binary that runs on alpine,
# which does not ship glibc
RUN CGO_ENABLED=0 go build -o myapp

# Final stage
FROM alpine:latest
COPY --from=builder /app/myapp /app/myapp
CMD ["/app/myapp"]
```
- Pin an exact base-image version when builds must be reproducible: `FROM python:3.8.5-slim-buster`
Running containers is the primary way to use Docker images. You create and start a container from an image using the `docker run` command.
The `docker run` command creates and starts a container from a specified image.

Syntax: `docker run [OPTIONS] IMAGE [COMMAND] [ARG...]`

Example: `docker run -d --name mycontainer nginx`

- `-d`: Run the container in detached mode (in the background).
- `--name`: Assign a name to the container.
- `nginx`: The image to use.
- `-p`: Map container ports to host ports: `docker run -d -p 8080:80 nginx`
- `-v`: Mount host directories or volumes into the container: `docker run -d -v /host/path:/container/path nginx`
- `-e`: Set environment variables in the container: `docker run -d -e MY_ENV_VAR=value nginx`

To work inside a container interactively, use the `-it` option: `docker run -it ubuntu /bin/bash`

- `-i`: Keep STDIN open even if not attached.
- `-t`: Allocate a pseudo-TTY (terminal).

Managing containers involves starting, stopping, restarting, and removing them. Docker provides various commands to handle these operations.
- Start a stopped container with the `docker start` command: `docker start mycontainer`
- Stop a running container with the `docker stop` command: `docker stop mycontainer`
- Restart a container with the `docker restart` command: `docker restart mycontainer`
- Remove a stopped container with the `docker rm` command: `docker rm mycontainer`. To force-remove a running container, add the `-f` option: `docker rm -f mycontainer`
- List running containers with the `docker ps` command: `docker ps`. Add the `-a` option to list all containers, including stopped ones: `docker ps -a`
- View the logs of a container with the `docker logs` command: `docker logs mycontainer`
The container lifecycle includes the various states a container can be in, from creation to removal. Understanding these states helps in managing containers effectively.
- Created: the container exists but has not started. Create one with the `docker create` command: `docker create --name mycontainer nginx`
- Running: started via the `docker run` or `docker start` commands.
- Paused: processes frozen with the `docker pause` command: `docker pause mycontainer`. Resume with `docker unpause`: `docker unpause mycontainer`
- Stopped: halted with the `docker stop` command.
- Removed: deleted with the `docker rm` command.
command.docker run -d --name webserver -p 8080:80 nginx
docker ps
docker stop webserver
docker start webserver
docker rm webserver
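The allowed transitions can be sketched as a tiny state machine (an illustration of the lifecycle above, not Docker's internal model):

```python
# Valid container state transitions, mirroring the lifecycle above
TRANSITIONS = {
    ("created", "start"):   "running",
    ("running", "pause"):   "paused",
    ("paused",  "unpause"): "running",
    ("running", "stop"):    "exited",
    ("exited",  "start"):   "running",
    ("exited",  "rm"):      "removed",
}

def apply(state, command):
    """Return the next state, or raise on an invalid transition."""
    try:
        return TRANSITIONS[(state, command)]
    except KeyError:
        raise ValueError(f"cannot '{command}' a {state} container")

state = "created"
for cmd in ["start", "pause", "unpause", "stop", "rm"]:
    state = apply(state, cmd)
print(state)  # → removed
```

Note, for example, that `rm` is only valid from the exited state, which is why the workflow above stops the container before removing it.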
Docker networking allows containers to communicate with each other and with the outside world. Docker provides several networking drivers to manage container networks, each with different characteristics and use cases.
Docker provides several commands to manage and inspect networks. These commands allow you to create, inspect, and manage networks for your containers.
- List all networks with the `docker network ls` command: `docker network ls`
- Create a custom network with the `docker network create` command: `docker network create my_bridge_network`. Pass a driver explicitly for other network types: `docker network create --driver overlay my_overlay_network`
- Get detailed information about a network with the `docker network inspect` command: `docker network inspect my_bridge_network`
- Connect a container to a network with the `docker network connect` command: `docker network connect my_bridge_network mycontainer`
- Disconnect a container from a network with the `docker network disconnect` command: `docker network disconnect my_bridge_network mycontainer`
- Remove a network with the `docker network rm` command: `docker network rm my_bridge_network`
Example: containers on the same user-defined bridge network can reach each other by name (both containers must be attached to the network; the default bridge does not provide name-based discovery):

```shell
docker network create my_bridge_network
docker run -d --name container1 --network my_bridge_network nginx
docker run -d --name container2 --network my_bridge_network nginx
docker exec -it container2 ping container1
```
Host networking removes network isolation between the container and the Docker host; the container shares the host’s network stack directly, so no `-p` port mapping is needed:

```shell
docker run -d --name host_container --network host nginx
docker run -d --network host --name fast_network_app my_network_app
```
Overlay networks span multiple Docker hosts and are used for Swarm services:

```shell
docker network create --driver overlay my_overlay_network
docker service create --name web_service --network my_overlay_network nginx
```
Macvlan networks assign each container its own MAC address, making it appear as a physical device on the network:

```shell
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth0 my_macvlan_network
```
The `none` driver disables networking entirely, which is useful for fully isolated workloads: `docker run --network none --name isolated_container busybox`
Example: working with a custom bridge network:

```shell
docker network create my_custom_bridge
docker run -d --name container1 --network my_custom_bridge nginx
docker run -d --name container2 --network my_custom_bridge httpd
docker network inspect my_custom_bridge
# give container3 a long-running command so it stays up
docker run -d --name container3 alpine sleep infinity
docker network connect my_custom_bridge container3
docker exec -it container3 ping container1
# disconnect or remove the attached containers first, then:
docker network rm my_custom_bridge
```
Volumes are the preferred mechanism for persisting data generated and used by Docker containers. They are managed by Docker and can be used to share data between containers or between a container and the host system.
Common volume commands:

```shell
docker volume create my_volume
docker volume ls
docker volume inspect my_volume
docker volume rm my_volume
```

Mount a volume into a container with the `-v` or `--mount` option:

```shell
docker run -d --name my_container -v my_volume:/data nginx
```

The same mount with the more explicit `--mount` option:

```shell
docker run -d --name my_container --mount source=my_volume,target=/data nginx
```
Bind mounts allow you to mount a directory or file from the host filesystem into a container. Unlike volumes, bind mounts are not managed by Docker and provide more flexibility at the cost of portability.
```shell
docker run -d --name my_container -v /host/path:/container/path nginx
```

The same bind mount with the `--mount` option:

```shell
docker run -d --name my_container --mount type=bind,source=/host/path,target=/container/path nginx
```
Persisting data in Docker containers ensures that data is not lost when containers are stopped or removed. There are several strategies for managing data persistence in Docker.
- Use named volumes for data that should outlive any single container: `docker run -d --name my_container -v my_volume:/data nginx`
- Use bind mounts when the data must live at a known host path: `docker run -d --name my_container -v /host/path:/container/path nginx`
- Back up a volume with a throwaway `docker run` that archives its contents to a tar file:

```shell
docker run --rm -v my_volume:/data -v /host/backup:/backup busybox tar cvf /backup/my_volume_backup.tar /data
```

- Restore a volume from a tar file the same way, by unpacking the archive into the volume:

```shell
docker run --rm -v my_volume:/data -v /host/backup:/backup busybox tar xvf /backup/my_volume_backup.tar -C /data
```
Create a volume with the `docker volume create` command; the new volume can then be used by containers: `docker volume create my_volume`

Use the `docker volume inspect` command to get detailed information about a volume, including its mount point on the host system: `docker volume inspect my_volume`

Remove a volume with the `docker volume rm` command; note that you cannot remove a volume that is in use by a container: `docker volume rm my_volume`
Bind mounts likewise use the `-v` or `--mount` option: `docker run -d --name my_container -v /host/path:/container/path nginx`, or with the more verbose `--mount` syntax: `docker run -d --name my_container --mount type=bind,source=/host/path,target=/container/path nginx`

In summary:

```shell
docker run -d --name my_container -v my_volume:/data nginx
docker run -d --name my_container -v /host/path:/container/path nginx
docker run --rm -v my_volume:/data -v /host/backup:/backup busybox tar cvf /backup/my_volume_backup.tar /data
docker run --rm -v my_volume:/data -v /host/backup:/backup busybox tar xvf /backup/my_volume_backup.tar -C /data
```
Example: create a volume, use it, and back it up:

```shell
docker volume create my_data_volume
docker run -d --name my_app -v my_data_volume:/app/data nginx
docker volume ls
docker volume inspect my_data_volume
docker run --rm -v my_data_volume:/data -v /host/backup:/backup busybox tar cvf /backup/my_data_backup.tar /data
docker run --rm -v my_data_volume:/data -v /host/backup:/backup busybox tar xvf /backup/my_data_backup.tar -C /data
```
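The busybox `tar` one-liners above simply archive and unpack a directory. The same backup/restore logic, sketched in Python with the standard `tarfile` module (the demo paths are throwaway stand-ins for a volume's mount point):

```python
import os
import tarfile
import tempfile

def backup_volume(data_dir, archive_path):
    """Archive the contents of data_dir into a tar file
    (what the busybox container does with `tar cvf`)."""
    with tarfile.open(archive_path, "w") as tar:
        tar.add(data_dir, arcname="data")

def restore_volume(archive_path, target_dir):
    """Unpack the archive into target_dir (`tar xvf ... -C`)."""
    with tarfile.open(archive_path) as tar:
        tar.extractall(target_dir)

# Demonstration with temporary directories standing in for the volume
root = tempfile.mkdtemp()
data_dir = os.path.join(root, "volume")
os.makedirs(data_dir)
with open(os.path.join(data_dir, "hello.txt"), "w") as f:
    f.write("persisted data")

backup_volume(data_dir, os.path.join(root, "backup.tar"))
restore_volume(os.path.join(root, "backup.tar"), os.path.join(root, "restore"))

with open(os.path.join(root, "restore", "data", "hello.txt")) as f:
    print(f.read())  # → persisted data
```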
Docker Compose is a tool for defining and running multi-container Docker applications. It uses a YAML file to configure the application’s services, networks, and volumes, making it easy to manage complex applications with multiple interconnected containers.
Applications are described in a `docker-compose.yml` file, the core configuration file used by Docker Compose. A minimal file defines the services and declares any shared networks and volumes:

```yaml
version: '3.8'
services:
  web:
    image: nginx
    ports:
      - "80:80"
  db:
    image: postgres
    environment:
      POSTGRES_DB: exampledb
      POSTGRES_USER: exampleuser
      POSTGRES_PASSWORD: examplepass
networks:
  frontend:
  backend:
volumes:
  db_data:
```
A fuller example wiring services to networks and volumes:

```yaml
version: '3.8'
services:
  web:
    image: nginx
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    networks:
      - frontend
  db:
    image: postgres
    environment:
      POSTGRES_DB: exampledb
      POSTGRES_USER: exampleuser
      POSTGRES_PASSWORD: examplepass
    volumes:
      - db_data:/var/lib/postgresql/data
    networks:
      - backend
networks:
  frontend:
  backend:
volumes:
  db_data:
```
Start the application with the `docker-compose up` command: `docker-compose up`

Stop the application and remove the containers with the `docker-compose down` command: `docker-compose down`

Add the `-d` flag to `docker-compose up` to run the containers in the background: `docker-compose up -d`

View the logs of the running services with the `docker-compose logs` command: `docker-compose logs`

Execute commands in a running service with the `docker-compose exec` command: `docker-compose exec web bash`
Example: create this `docker-compose.yml`:

```yaml
version: '3.8'
services:
  web:
    image: nginx
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    networks:
      - frontend
  db:
    image: postgres
    environment:
      POSTGRES_DB: exampledb
      POSTGRES_USER: exampleuser
      POSTGRES_PASSWORD: examplepass
    volumes:
      - db_data:/var/lib/postgresql/data
    networks:
      - backend
networks:
  frontend:
  backend:
volumes:
  db_data:
```

Then manage the stack:

```shell
docker-compose up -d
docker-compose ps
docker-compose logs web
docker-compose down
```
Docker Compose provides a variety of commands to manage multi-container applications. Here are some of the essential commands and their usage:

- `docker-compose up`: Creates and starts all services defined in the `docker-compose.yml` file. Add the `-d` flag to run them in the background: `docker-compose up -d`
- `docker-compose down`: Stops and removes everything created by `up`.
- `docker-compose logs`: Shows service logs; name a service to filter, e.g. `docker-compose logs web`
- `docker-compose ps`: Lists the containers of the current project.
- `docker-compose scale`: Sets the number of containers for a service, e.g. `docker-compose scale web=3`. This command is deprecated; prefer the `--scale` flag with `up`: `docker-compose up -d --scale web=3`
- `docker-compose exec`: Runs a command in a running service, e.g. `docker-compose exec web bash`
- `docker-compose build`: Builds the images defined in the `docker-compose.yml` file.
- `docker-compose pull`: Pulls the service images from their registries.
- `docker-compose stop`: Stops services without removing their containers.
- `docker-compose rm`: Removes stopped service containers.
Environment variables can be used to customize service configuration and control various aspects of the application’s behavior.
Variables can be set directly in the `docker-compose.yml` file under the `environment` key:

```yaml
services:
  web:
    image: nginx
    environment:
      - NGINX_HOST=localhost
      - NGINX_PORT=80
```

Using a `.env` file: place a `.env` file in the same directory as the `docker-compose.yml` file; Docker Compose automatically reads it:

```
NGINX_HOST=localhost
NGINX_PORT=80
```

Reference the variables in the `docker-compose.yml` file using the `${VARIABLE_NAME}` syntax:

```yaml
services:
  web:
    image: nginx
    environment:
      - NGINX_HOST=${NGINX_HOST}
      - NGINX_PORT=${NGINX_PORT}
```

Use the `--env-file` option with `docker-compose` commands to specify a different file of environment variables: `docker-compose --env-file ./custom.env up`
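Compose's `.env` handling boils down to reading `KEY=VALUE` lines; a simplified Python sketch (real Compose additionally handles quoting, `export` prefixes, and interpolation):

```python
def parse_env_file(text):
    """Parse KEY=VALUE lines, skipping blanks and # comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

dotenv = """\
# defaults for docker-compose.yml
NGINX_HOST=localhost
NGINX_PORT=80
"""
print(parse_env_file(dotenv))
# → {'NGINX_HOST': 'localhost', 'NGINX_PORT': '80'}
```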
Docker Compose provides powerful networking capabilities to define and manage communication between services.
Networks are declared at the top level of the `docker-compose.yml` file, and services can communicate with each other using their service names as hostnames:

```yaml
networks:
  frontend:
  backend:
services:
  web:
    image: nginx
    networks:
      - frontend
  db:
    image: postgres
    networks:
      - backend
```

Network aliases give a service extra hostnames on a network:

```yaml
services:
  web:
    image: nginx
    networks:
      frontend:
        aliases:
          - webserver
  db:
    image: postgres
    networks:
      backend:
        aliases:
          - database
```

`network_mode` attaches a service to `bridge`, `host`, `none`, or a custom network:

```yaml
services:
  web:
    image: nginx
    network_mode: bridge
  db:
    image: postgres
    network_mode: host
```
A complete example combining networks and environment variables. `docker-compose.yml`:

```yaml
version: '3.8'
services:
  web:
    image: nginx
    ports:
      - "80:80"
    networks:
      - frontend
    environment:
      - NGINX_HOST=${NGINX_HOST}
      - NGINX_PORT=${NGINX_PORT}
  db:
    image: postgres
    environment:
      POSTGRES_DB: exampledb
      POSTGRES_USER: exampleuser
      POSTGRES_PASSWORD: examplepass
    volumes:
      - db_data:/var/lib/postgresql/data
    networks:
      - backend
networks:
  frontend:
  backend:
volumes:
  db_data:
```

`.env` file:

```
NGINX_HOST=localhost
NGINX_PORT=80
```

Manage the stack:

```shell
docker-compose up -d
docker-compose ps
docker-compose logs
docker-compose exec web bash
docker-compose down
```
Docker Hub is a cloud-based repository where you can find, share, and store container images. It provides a centralized resource for container image discovery, distribution, and management.
The `docker pull` command is used to download an image from Docker Hub:

```shell
docker pull <image-name>
docker pull nginx
```

To share an image, tag it with your Docker Hub username and upload it with the `docker push` command:

```shell
docker tag <local-image> <username>/<repository>:<tag>
docker push <username>/<repository>:<tag>
```

For example:

```shell
docker tag myapp:latest mouhamaddev/myapp:latest
docker push mouhamaddev/myapp:latest
```

Pushing requires authenticating first with `docker login`. A full session:

```shell
docker login
docker pull nginx
docker tag myapp:latest mouhamaddev/myapp:latest
docker push mouhamaddev/myapp:latest
```

Private repositories work the same way:

```shell
docker tag myapp:latest mouhamaddev/private-repo:latest
docker login
docker push mouhamaddev/private-repo:latest
docker pull mouhamaddev/private-repo:latest
```
A private Docker registry allows you to host your own images securely, providing control over who can access and interact with them. Here’s how to set up a private Docker registry:
Docker publishes an official `registry` image that you can use to run a private registry:

```shell
docker run -d -p 5000:5000 --name registry registry:2
```

Tag images with the registry address `localhost:5000`, then push and pull as usual:

```shell
docker tag myapp:latest localhost:5000/myapp:latest
docker push localhost:5000/myapp:latest
docker pull localhost:5000/myapp:latest
```
Once the registry is running, you can configure and manage it to meet your requirements.
The registry is configured through a `config.yml` file, where you can specify options such as the storage backend, authentication, and TLS settings: `docker run -d -p 5000:5000 --name registry -v /path/to/config.yml:/etc/docker/registry/config.yml registry:2`

To enable TLS, generate a certificate and start the registry with it:

```shell
mkdir -p certs
openssl req -newkey rsa:4096 -nodes -sha256 -keyout certs/domain.key -x509 -days 365 -out certs/domain.crt
docker run -d -p 5000:5000 --name registry \
  -v /path/to/certs:/certs \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
  -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
  registry:2
```

Basic authentication uses an `.htpasswd` file. Install the `htpasswd` utility, create the password file, and run the registry with authentication enabled:

```shell
apt-get install apache2-utils
htpasswd -cB /path/to/auth/htpasswd myuser
docker run -d -p 5000:5000 --name registry \
  -v /path/to/auth:/auth \
  -e "REGISTRY_AUTH=htpasswd" \
  -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \
  -e "REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd" \
  registry:2
```
To store images in S3, configure the storage backend in `config.yml`:

```yaml
version: 0.1
storage:
  s3:
    accesskey: <your-access-key>
    secretkey: <your-secret-key>
    region: us-east-1
    bucket: <your-bucket-name>
```

List the repositories in the registry through its HTTP API: `curl -X GET http://localhost:5000/v2/_catalog`
To serve the registry behind a TLS-terminating reverse proxy, an nginx server block can forward requests to it:

```nginx
server {
    listen 443 ssl;
    server_name myregistrydomain.com;

    ssl_certificate /etc/nginx/certs/domain.crt;
    ssl_certificate_key /etc/nginx/certs/domain.key;

    location / {
        proxy_pass http://localhost:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```
Example: run a registry, push and pull an image:

```shell
docker run -d -p 5000:5000 --name registry registry:2
docker tag myapp:latest localhost:5000/myapp:latest
docker push localhost:5000/myapp:latest
docker pull localhost:5000/myapp:latest
```

With TLS:

```shell
mkdir -p certs
openssl req -newkey rsa:4096 -nodes -sha256 -keyout certs/domain.key -x509 -days 365 -out certs/domain.crt
docker run -d -p 5000:5000 --name registry \
  -v $(pwd)/certs:/certs \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
  -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
  registry:2
```

With basic authentication, create the `.htpasswd` file and run the registry with authentication:

```shell
apt-get install apache2-utils
htpasswd -cB /path/to/auth/htpasswd myuser
docker run -d -p 5000:5000 --name registry \
  -v /path/to/auth:/auth \
  -e "REGISTRY_AUTH=htpasswd" \
  -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \
  -e "REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd" \
  registry:2
```
Docker Swarm is Docker’s native clustering and orchestration tool, allowing you to manage a cluster of Docker nodes as a single virtual system. It enables you to deploy and manage multi-container applications across multiple Docker hosts.
Orchestration involves automating the deployment, scaling, and management of containerized applications. Docker Swarm provides built-in orchestration capabilities, allowing you to deploy services across a cluster of nodes, scale them up or down, and recover from node failures automatically.

Initialize a swarm on the manager node:

```shell
docker swarm init
```

Join worker nodes using the token printed by `docker swarm init`:

```shell
docker swarm join --token <worker-token> <manager-ip>:2377
```

Manage nodes from a manager:

```shell
docker node ls
docker node promote <node-id>
docker node demote <node-id>
docker node rm <node-id>
```

Create a service with the `docker service create` command:

```shell
docker service create --name <service-name> --replicas <number-of-replicas> <image>
docker service create --name web --replicas 3 nginx
```

Inspect services:

```shell
docker service ls
docker service inspect <service-name>
```

Scale a service either by updating its replica count or with `docker service scale`:

```shell
docker service update --replicas 5 web
docker service scale web=10
```

Remove a service with `docker service rm <service-name>`.

Example workflow:

```shell
docker swarm init
docker swarm join --token <worker-token> <manager-ip>:2377
docker service create --name web --replicas 3 nginx
docker service scale web=5
docker service update --image nginx:latest web
docker service rm web
```
Securing Docker is crucial to protecting your containerized applications and the underlying host system. Docker provides several features and best practices to help you secure your containers and their interactions.
- Use minimal base images (e.g. `alpine`) to reduce the attack surface.
- Run containers as a non-root user: use the `USER` directive in your Dockerfile to specify a non-root user (`USER nonrootuser`), or override the user at runtime with the `--user` flag: `docker run --user nonrootuser myapp`
- Drop all Linux capabilities and add back only those the application needs: `docker run --cap-drop=ALL --cap-add=NET_ADMIN myapp`
- Isolate container traffic with Docker networks such as `bridge`, `overlay`, or `macvlan`.
- Enable Docker Content Trust so that only signed images are pulled: `export DOCKER_CONTENT_TRUST=1`
User namespaces provide an additional layer of security by mapping container users to different users on the host system. This prevents container processes from having direct access to the host’s user and group IDs.
Enable user namespaces in the Docker daemon configuration file (`/etc/docker/daemon.json`):

```json
{
  "userns-remap": "default"
}
```

Restart the daemon for the change to take effect: `sudo systemctl restart docker`

To remap to a specific user instead of the default `dockremap` user:

```json
{
  "userns-remap": "myuser"
}
```
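The remapping itself is just an offset: the host reserves a range of subordinate UIDs (listed in `/etc/subuid`) and adds the container UID to the start of that range. Illustrative arithmetic, assuming a typical base of 100000:

```python
SUBUID_BASE = 100000   # typical first entry in /etc/subuid for the remap user
SUBUID_COUNT = 65536   # size of the reserved range

def host_uid(container_uid):
    """Map a UID inside the container to the UID the host actually sees."""
    if not 0 <= container_uid < SUBUID_COUNT:
        raise ValueError("uid outside the remapped range")
    return SUBUID_BASE + container_uid

# root (uid 0) inside the container is an unprivileged uid on the host
print(host_uid(0))     # → 100000
print(host_uid(1000))  # → 101000
```

This is why a process running as root in a remapped container cannot touch host files owned by the real uid 0.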
Docker provides mechanisms to securely manage and distribute secrets, such as passwords and API keys, used by applications running in containers.
Create a secret (this requires Swarm mode): `echo "mysecretpassword" | docker secret create my_secret -`

Grant a service access to the secret: `docker service create --name myservice --secret my_secret myimage`

Inside the container, secrets are mounted as files under `/run/secrets/`. Read `my_secret` with:

```shell
cat /run/secrets/my_secret
```

List and remove secrets:

```shell
docker secret ls
docker secret rm my_secret
```

Secrets can also be declared in a `docker-compose.yml` file:

```yaml
version: '3.8'
services:
  web:
    image: myimage
    secrets:
      - my_secret
secrets:
  my_secret:
    file: ./my_secret.txt
```
Deploy with `docker-compose up`. Example session:

```shell
echo "mysecretpassword" | docker secret create my_secret -
docker service create --name myservice --secret my_secret myimage
cat /run/secrets/my_secret
```

And to enable user-namespace remapping, set `daemon.json`:

```json
{
  "userns-remap": "default"
}
```

then restart the daemon: `sudo systemctl restart docker`
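Inside a service, the application typically reads the secret file at startup. A common pattern (a sketch; the env-var fallback is a convention of this example, not part of Docker) is to prefer `/run/secrets/<name>` and fall back to an environment variable for local development:

```python
import os

def read_secret(name, secrets_dir="/run/secrets"):
    """Return a Docker secret's value, or fall back to an
    environment variable of the same (uppercased) name —
    handy when running outside Swarm, e.g. in development."""
    path = os.path.join(secrets_dir, name)
    try:
        with open(path) as f:
            return f.read().strip()
    except FileNotFoundError:
        return os.environ.get(name.upper())

# e.g. password = read_secret("my_secret")
```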
Deploying Docker containers in production environments requires careful planning and implementation to ensure reliability, performance, and security. Here’s a guide to best practices, monitoring, logging, and performance tuning for Docker in production.
Use multi-stage builds to keep production images small:

```dockerfile
FROM node:14 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

FROM nginx:alpine
COPY --from=builder /app/build /usr/share/nginx/html
```
Add a `HEALTHCHECK` so the daemon can detect unhealthy containers:

```dockerfile
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s \
  CMD curl -f http://localhost/ || exit 1
```

Limit resources per container: `docker run --memory="500m" --cpus="1.0" myimage`

Ship logs to a centralized backend with a logging driver: `docker run --log-driver=syslog myimage`

Monitor live resource usage with `docker stats`, and run the daemon with the recommended storage driver: `dockerd --storage-driver=overlay2`
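The `curl`-based `HEALTHCHECK` above can equally be a small script shipped in the image, which avoids installing curl. A sketch using only Python's standard library (the URL is whatever endpoint your app serves):

```python
import urllib.request
import urllib.error

def is_healthy(url, timeout=5):
    """Return True if the endpoint answers with an HTTP success code."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except (urllib.error.URLError, OSError):
        return False

# In the image: HEALTHCHECK CMD ["python", "healthcheck.py"]
# where healthcheck.py ends with:
#     sys.exit(0 if is_healthy("http://localhost/") else 1)
```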
Example: resource limits, the multi-stage build, logging, and monitoring together:

```shell
docker run --memory="1g" --cpus="2.0" myapp
```

```dockerfile
FROM node:14 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

FROM nginx:alpine
COPY --from=builder /app/build /usr/share/nginx/html
```

```shell
docker run --memory="512m" --cpus="1.0" myapp
docker run --log-driver=syslog myapp
docker stats
dockerd --storage-driver=overlay2
```
Hands-on projects are essential for solidifying your Docker skills and understanding real-world applications. Here are some practical exercises and projects that will help you apply what you’ve learned:
Building and Deploying a Web Application
Objective: Create a simple web application, containerize it with Docker, and deploy it to a local or cloud environment.
Steps:
Create a minimal Express server:

```javascript
// server.js
const express = require('express');
const app = express();
const PORT = process.env.PORT || 3000;

app.get('/', (req, res) => {
  res.send('Hello, World!');
});

app.listen(PORT, () => {
  console.log(`Server running on port ${PORT}`);
});
```

Containerize it with a Dockerfile:

```dockerfile
# Use the official Node.js image
FROM node:14

# Set the working directory
WORKDIR /app

# Copy application files and install dependencies
COPY package*.json ./
RUN npm install
COPY . .

# Expose the port the app runs on
EXPOSE 3000

# Run the application
CMD ["node", "server.js"]
```

Build and run:

```shell
docker build -t my-web-app .
docker run -p 3000:3000 my-web-app
```
Multi-Container Applications with Docker Compose
Objective: Create a multi-container application using Docker Compose to define and run multiple services (e.g., a web server and a database).
Steps:
Write a Dockerfile for the Flask app:

```dockerfile
# Dockerfile for Flask App
FROM python:3.8
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
EXPOSE 5000
CMD ["python", "app.py"]
```

Create a `docker-compose.yml` file:

```yaml
version: '3.8'
services:
  web:
    build:
      context: ./web
    ports:
      - "5000:5000"
    depends_on:
      - db
    environment:
      - DATABASE_URL=postgresql://user:password@db/mydatabase
  db:
    image: postgres:13
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
      POSTGRES_DB: mydatabase
    volumes:
      - db_data:/var/lib/postgresql/data
volumes:
  db_data:
```

Start the stack with `docker-compose up` and visit the app at `http://localhost:5000`.

Setting Up a CI/CD Pipeline with Docker
Objective: Implement a CI/CD pipeline that automates the building, testing, and deployment of Dockerized applications.
Steps:
Create CI/CD Pipeline Configuration:
Example with GitHub Actions:
Create a `.github/workflows/ci-cd.yml` file:

```yaml
name: CI/CD Pipeline

on:
  push:
    branches:
      - main

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2

      - name: Build Docker image
        run: |
          docker build -t my-web-app .

      - name: Run tests
        run: |
          docker run my-web-app npm test

      - name: Push Docker image
        run: |
          docker tag my-web-app mydockerhubusername/my-web-app:latest
          # assumes DOCKERHUB_USERNAME and DOCKERHUB_TOKEN are configured
          # as repository secrets
          echo "${{ secrets.DOCKERHUB_TOKEN }}" | docker login -u "${{ secrets.DOCKERHUB_USERNAME }}" --password-stdin
          docker push mydockerhubusername/my-web-app:latest
```