Kubernetes is an open-source platform designed to automate deploying, scaling, and operating containerized applications. It helps us manage and orchestrate those containers at scale. Its core building blocks are pods, nodes, clusters, the control plane, services, and deployments:
A Pod is the smallest and most basic deployable unit in Kubernetes. It represents a single instance of a running process in your cluster. A pod can contain one or more containers that share the same network namespace and storage.
Let’s say you have a Django application and a Redis instance that must always run together. Instead of deploying them as separate pods, you could place them in a single pod. This way, they share the same IP address and can communicate with each other more efficiently.
apiVersion: v1
kind: Pod
metadata:
  name: django-redis-pod
spec:
  containers:
    - name: django-container
      image: your-django-image
      ports:
        - containerPort: 8000
    - name: redis-container
      image: redis
      ports:
        - containerPort: 6379
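To try this out, save the manifest to a file (pod.yaml here is just an example filename), apply it, and check the containers:
kubectl apply -f pod.yaml
kubectl get pods
kubectl logs django-redis-pod -c django-container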
Nodes are the physical or virtual machines that make up the Kubernetes cluster. Each node is responsible for running the pods assigned to it.
Components on a Node: Each node runs several key components: the kubelet, which talks to the control plane and manages the pods on that node; a container runtime (such as containerd), which actually runs the containers; and kube-proxy, which maintains the network rules that make Services work.
Types of Nodes: Control plane (master) nodes run the cluster's management components, while worker nodes run your application pods.
Suppose you have a Kubernetes cluster with three worker nodes. When you deploy your Django app, Kubernetes might spread the pods across these nodes. If one node fails, Kubernetes can automatically reschedule the pods on another available node, ensuring high availability.
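You can see which node each pod was scheduled onto with the wide output format:
kubectl get pods -o wide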
Definition:
A Kubernetes cluster is a set of nodes (master and worker) that are orchestrated and managed together to run your containerized applications.
The cluster is the overarching infrastructure that holds all the resources—pods, nodes, services, storage, etc. The control plane manages the cluster and ensures that the desired state of the system (as defined by the user) is maintained.
Multi-Zone Clusters: Kubernetes can manage clusters that span multiple availability zones (AZs) or regions, offering higher availability and fault tolerance.
If you have a Django application that needs to be highly available, you might deploy a Kubernetes cluster across three different availability zones. The cluster will ensure that if one zone goes down, your application continues running in the other zones.
The control plane is the central nervous system of a Kubernetes cluster. It manages and controls the cluster, making decisions about scheduling, scaling, and maintaining the desired state of the system.
When you deploy a Django app using a Deployment, the control plane’s scheduler determines which nodes have enough resources to run the pods. The API server processes your request, and the controller manager ensures the pods remain running as intended.
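A quick way to confirm that kubectl can reach the control plane and to see its API endpoint:
kubectl cluster-info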
A Service in Kubernetes is an abstraction that defines a logical set of pods and a policy by which to access them. Services provide a stable endpoint (IP address or DNS name) to access pods, even as the underlying pods might be changing due to scaling or updates.
Types of Services: The main types are ClusterIP, NodePort, and LoadBalancer; Ingress, a separate resource, is often used alongside them for HTTP routing. Each is covered in more detail later in this article.
You have a Django app running in a Kubernetes cluster. The pods can come and go as you scale up or down. By creating a Service, you provide a stable IP or DNS name that clients can use to access your app, regardless of the underlying pods.
apiVersion: v1
kind: Service
metadata:
  name: django-service
spec:
  selector:
    app: django
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8000
  type: LoadBalancer
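After applying the manifest, you can confirm which pods the Service has picked up through its label selector:
kubectl get service django-service
kubectl get endpoints django-service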
A Deployment in Kubernetes is a higher-level abstraction that manages a group of pods and ensures that your application runs correctly. It controls the creation and scaling of pods and handles updates to the application with rolling updates.
Suppose you have a Django app and you want to run three instances (pods) of it for high availability. You would create a Deployment that specifies this desired state. Kubernetes ensures that three pods are always running, even if one fails or is updated.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: django-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: django
  template:
    metadata:
      labels:
        app: django
    spec:
      containers:
        - name: django
          image: your-django-image
          ports:
            - containerPort: 8000
Creating a Deployment: A Deployment in Kubernetes is defined using a YAML manifest file. Here’s a simple example for a Django app:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: django-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: django
  template:
    metadata:
      labels:
        app: django
    spec:
      containers:
        - name: django
          image: my-django-app:latest
          ports:
            - containerPort: 8000
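Applying the manifest (deployment.yaml is an example filename) tells Kubernetes to create and maintain the three replicas:
kubectl apply -f deployment.yaml
kubectl get pods -l app=django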
Managing Deployments:
Scaling: You can scale a Deployment by changing the replicas field in the manifest or by using the kubectl scale command:
kubectl scale deployment django-deployment --replicas=5
Checking status: List your Deployments and their current replica counts:
kubectl get deployments
Rolling Updates: Kubernetes allows us to update our application without downtime. When we update a Deployment, Kubernetes incrementally replaces old pods with new ones. For example, changing the image version in the Deployment YAML file triggers a rolling update.
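For example, assuming a new image tag v2 has been pushed (the tag is just an illustration), the update can be triggered and watched like this:
kubectl set image deployment/django-deployment django=my-django-app:v2
kubectl rollout status deployment/django-deployment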
Rollback: If something goes wrong during an update, you can easily roll back to a previous version:
kubectl rollout undo deployment/django-deployment
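You can also inspect the revision history and roll back to a specific revision:
kubectl rollout history deployment/django-deployment
kubectl rollout undo deployment/django-deployment --to-revision=1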
Kubernetes Services expose your pods to external traffic and other services within the cluster.
ClusterIP (Default): Exposes the Service on an internal cluster IP, reachable only from inside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: django-service
spec:
  selector:
    app: django
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8000
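A quick way to test a ClusterIP Service is from a throwaway pod inside the cluster (the busybox pod here is purely for illustration):
kubectl run tmp --rm -it --image=busybox -- wget -qO- http://django-service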
NodePort: Exposes the Service on a static port on each node's IP, so it can be reached from outside the cluster at <NodeIP>:<NodePort>.
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 8000
      nodePort: 30007
LoadBalancer: Provisions an external load balancer through your cloud provider and routes traffic to the Service.
spec:
  type: LoadBalancer
Ingress: Routes external HTTP/HTTPS traffic to Services based on host and path rules. It is a separate resource from a Service and requires an ingress controller (such as the NGINX Ingress Controller) to be running in the cluster.
Example Ingress configuration for a Django app:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: django-ingress
spec:
  rules:
    - host: django.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: django-service
                port:
                  number: 80
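Once an ingress controller is installed and DNS for django.example.com points at it, you can apply and inspect the Ingress (ingress.yaml is an example filename):
kubectl apply -f ingress.yaml
kubectl get ingress django-ingress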
Managing configurations and secrets is crucial for deploying applications in different environments (development, staging, production).
ConfigMaps: Store non-sensitive configuration as key-value pairs that pods can consume as environment variables or mounted files.
apiVersion: v1
kind: ConfigMap
metadata:
  name: django-config
data:
  DJANGO_SETTINGS_MODULE: myproject.settings
Secrets: Store sensitive data such as passwords and API keys. Values are base64-encoded; cGFzc3dvcmQ= below is simply the encoding of "password".
apiVersion: v1
kind: Secret
metadata:
  name: django-secret
type: Opaque
data:
  POSTGRES_PASSWORD: cGFzc3dvcmQ=
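Instead of writing base64 by hand, both objects can also be created imperatively with kubectl:
kubectl create configmap django-config --from-literal=DJANGO_SETTINGS_MODULE=myproject.settings
kubectl create secret generic django-secret --from-literal=POSTGRES_PASSWORD=password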
ConfigMaps and Secrets can be mounted as environment variables or files inside the pod:
env:
  - name: DJANGO_SETTINGS_MODULE
    valueFrom:
      configMapKeyRef:
        name: django-config
        key: DJANGO_SETTINGS_MODULE
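Secrets are injected the same way, using secretKeyRef instead of configMapKeyRef; a minimal sketch referencing the django-secret defined above:
env:
  - name: POSTGRES_PASSWORD
    valueFrom:
      secretKeyRef:
        name: django-secret
        key: POSTGRES_PASSWORD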
Scaling is one of Kubernetes’ most powerful features, allowing your application to handle varying amounts of traffic.
Manual Scaling: Set the replica count directly with the kubectl scale command:
kubectl scale deployment django-deployment --replicas=10
Horizontal Pod Autoscaler (HPA): Automatically adjusts the number of replicas based on observed CPU utilization (or other metrics). CPU-based autoscaling typically requires the metrics-server add-on to be running in the cluster.
Example HPA configuration:
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: django-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: django-deployment
  minReplicas: 3
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50
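An equivalent autoscaler can also be created on the command line (note that kubectl names it after the Deployment), and its current status checked afterwards:
kubectl autoscale deployment django-deployment --cpu-percent=50 --min=3 --max=10
kubectl get hpa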
Before deploying your Django application, you need to set up a Kubernetes cluster.
Minikube: Ideal for local development, Minikube runs a single-node Kubernetes cluster on your local machine.
minikube start
K3s: A lightweight Kubernetes distribution that’s perfect for edge and IoT environments.
curl -sfL https://get.k3s.io | sh -
Cloud Providers (GKE, EKS): Use Google Kubernetes Engine (GKE) or Amazon Elastic Kubernetes Service (EKS) for a managed Kubernetes service. This is best for production environments.
gcloud container clusters create django-cluster --num-nodes=3
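After creating the cluster, fetch credentials so that kubectl points at it (add --zone or --region if you have no default configured):
gcloud container clusters get-credentials django-cluster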
Once your cluster is up, you can manage nodes using kubectl:
View Nodes:
kubectl get nodes
Add or Remove Nodes: Cloud providers allow you to scale the number of nodes in your cluster through their respective dashboards or CLI tools.
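For example, on GKE the node count of an existing cluster can be changed from the command line (the zone here is only a placeholder):
gcloud container clusters resize django-cluster --num-nodes=5 --zone=us-central1-a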
Integrating a CI/CD pipeline automates your build, test, and deployment processes, making your deployments more reliable and repeatable.
Jenkins: Define a pipeline (typically in a Jenkinsfile) with stages that build and push your Docker image and then run kubectl apply against the cluster.
GitLab CI/CD: Define your pipeline in a .gitlab-ci.yml file. Example:
stages:
  - build
  - deploy

build:
  stage: build
  script:
    - docker build -t my-django-app:latest .
    - docker push my-django-app:latest

deploy:
  stage: deploy
  script:
    - kubectl apply -f k8s/
GitHub Actions: Define a workflow file under .github/workflows/ in your repository. Example workflow:
name: CI/CD Pipeline
on:
  push:
    branches:
      - main
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build Docker image
        run: docker build -t my-django-app:latest .
      - name: Push Docker image
        run: docker push my-django-app:latest
  deploy:
    runs-on: ubuntu-latest
    needs: build            # run only after the image has been built and pushed
    steps:
      - uses: actions/checkout@v2   # needed so the k8s/ manifests are available
      - name: Deploy to Kubernetes
        run: kubectl apply -f k8s/
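As written, the deploy job assumes the runner is already authenticated against your cluster. One common approach is to store a kubeconfig as a repository secret and write it out before calling kubectl; a minimal sketch, where KUBECONFIG is a hypothetical secret name:
      - name: Configure kubeconfig
        run: |
          mkdir -p ~/.kube
          echo "${{ secrets.KUBECONFIG }}" > ~/.kube/config   # KUBECONFIG is a hypothetical repository secret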
Now, let’s create a full Django and DRF application, containerize it, and deploy it to your Kubernetes cluster.
Your Dockerfile should define the environment for your Django application. Here’s an example:
# Dockerfile
FROM python:3.10-slim
ENV PYTHONUNBUFFERED=1
WORKDIR /app
COPY requirements.txt /app/
RUN pip install -r requirements.txt
COPY . /app/
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "myproject.wsgi:application"]
requirements.txt: List all the necessary Python packages.
Django>=4.0,<5.0
djangorestframework
gunicorn
psycopg2-binary
Deployment Manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: django-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: django
  template:
    metadata:
      labels:
        app: django
    spec:
      containers:
        - name: django
          image: my-django-app:latest
          ports:
            - containerPort: 8000
Service Manifest:
apiVersion: v1
kind: Service
metadata:
  name: django-service
spec:
  selector:
    app: django
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8000
  type: LoadBalancer
Ingress Manifest:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: django-ingress
spec:
  rules:
    - host: my-django-app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: django-service
                port:
                  number: 80
Horizontal Pod Autoscaler (HPA):
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: django-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: django-deployment
  minReplicas: 3
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50
Kubernetes will automatically scale the number of pods based on CPU usage or other metrics defined in the HPA.
Managing a live application in Kubernetes involves rolling updates, handling database migrations, and preparing for disaster recovery.
Rolling Updates: Push a new image tag and update the Deployment (for example with kubectl set image deployment/django-deployment django=my-django-app:v2); Kubernetes replaces the old pods incrementally, so the application stays available throughout.
Handling Migrations: Django database migrations can be run inside one of the running pods (or as a one-off Job, as sketched after the command below):
kubectl exec -it <pod_name> -- python manage.py migrate
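A minimal sketch of the Job alternative, assuming the same my-django-app:latest image and that database settings are provided via the ConfigMap and Secret shown earlier:
apiVersion: batch/v1
kind: Job
metadata:
  name: django-migrate
spec:
  backoffLimit: 2
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: my-django-app:latest
          command: ["python", "manage.py", "migrate"]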
Backups: Schedule regular database backups with a CronJob, for example one that runs pg_dumpall every night at 02:00:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: backup-cron
spec:
  schedule: "0 2 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: backup
              image: my-backup-image
              args:
                - /bin/sh
                - -c
                - "pg_dumpall -c > /backups/all_databases.sql"
          restartPolicy: OnFailure
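You can list the schedule and trigger an ad-hoc run to verify that the backup works (the job name manual-backup is arbitrary):
kubectl get cronjobs
kubectl create job manual-backup --from=cronjob/backup-cron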
Disaster Recovery: Keep your manifests in version control, back up persistent volumes and the database regularly, and periodically test restoring the application and its data into a fresh cluster.