How to Deploy and Manage Microservices with Kubernetes on Debian 12 Bookworm

Learn how to deploy and manage microservices using Kubernetes on a Debian 12 Bookworm system.

Microservices architecture has become a standard in modern application development. It allows you to break down your application into smaller, loosely coupled services that can be developed, deployed, and scaled independently. Kubernetes (often abbreviated as K8s) is one of the most powerful and widely used platforms to manage these microservices in containerized environments.

In this article, we’ll walk through how to deploy and manage microservices using Kubernetes on a Debian 12 Bookworm system. We’ll explore the required installations, basic setup, and practical management techniques for maintaining a microservices architecture in production.


1. Why Use Kubernetes for Microservices

Kubernetes automates the deployment, scaling, and management of containerized applications. When it comes to microservices, it provides essential features such as:

  • Service discovery and load balancing
  • Automated rollouts and rollbacks
  • Self-healing
  • Storage orchestration
  • Secret and configuration management

Kubernetes helps reduce the complexity of managing multiple microservices by handling much of the orchestration automatically.
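As a small taste of the configuration-management feature, here is a minimal ConfigMap sketch (the name and keys are illustrative, not from any real application):

```yaml
# Hypothetical example: a ConfigMap holding non-secret settings
# that pods can consume as environment variables or mounted files.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config          # illustrative name
data:
  LOG_LEVEL: "info"
  API_TIMEOUT_SECONDS: "30"
```

A pod can then pull all of these keys in at once with an `envFrom: configMapRef` entry in its container spec, keeping configuration out of the image.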


2. Prerequisites

Before starting, ensure you have the following:

  • A system running Debian 12 Bookworm
  • At least 2 CPUs, 2 GB RAM, and 20 GB disk space (minimum per node)
  • Root or sudo privileges
  • Internet access to download packages

It’s also beneficial to have some knowledge of Docker and Linux networking concepts.
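Before proceeding, you can quickly check the hardware minimums above. This is a sketch; adjust the mount point in the last command if your disk layout differs:

```shell
# Pre-flight check against the minimums listed above.
cpus=$(nproc)
mem_mib=$(( $(awk '/MemTotal/ {print $2}' /proc/meminfo) / 1024 ))
echo "CPUs: ${cpus} (need >= 2)"
echo "RAM:  ${mem_mib} MiB (need >= 2048)"
# Disk space on the root filesystem (need >= 20 GB free).
df -h /
```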


3. Installing Kubernetes on Debian 12 Bookworm

Kubernetes requires a container runtime. We’ll use containerd as it’s lightweight and officially supported.

Step 1: Disable Swap

The kubelet refuses to run with swap enabled, so turn it off and comment out the swap entry in /etc/fstab so the change survives a reboot:

sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab
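The sed one-liner comments out any fstab line containing " swap ". If you want to see exactly what it will do first, you can dry-run it on a throwaway copy (the UUIDs below are made up):

```shell
# Dry-run the swap-commenting sed expression on a sample fstab copy.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
UUID=abcd-1234 /    ext4 errors=remount-ro 0 1
UUID=ef56-7890 none swap sw                0 0
EOF
sed -i '/ swap / s/^/#/' "$tmp"
cat "$tmp"   # the swap line is now commented; the root line is untouched
rm -f "$tmp"
```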

Step 2: Install containerd

sudo apt update
sudo apt install -y containerd

Configure containerd and switch it to the systemd cgroup driver (kubeadm-provisioned kubelets default to systemd cgroups, and the runtime must match, otherwise pods will crash-loop), then enable the service:

sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd
sudo systemctl enable containerd

Step 3: Configure Kernel Modules and sysctl

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

sudo sysctl --system
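To confirm the settings took effect, you can read the values back directly from /proc; on a correctly configured host each of these prints 1:

```shell
# Read each sysctl value back from /proc; the bridge entries only
# exist once the br_netfilter module is loaded.
for f in /proc/sys/net/ipv4/ip_forward \
         /proc/sys/net/bridge/bridge-nf-call-iptables \
         /proc/sys/net/bridge/bridge-nf-call-ip6tables; do
  printf '%s = %s\n' "$f" "$(cat "$f" 2>/dev/null || echo 'not set (module not loaded?)')"
done
```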

Step 4: Install Kubernetes Components

sudo apt update
sudo apt install -y apt-transport-https ca-certificates curl gpg

sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes.list

sudo apt update
sudo apt install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

4. Setting Up a Kubernetes Cluster

We’ll use kubeadm to initialize the cluster.

Step 1: Initialize the Control Plane

sudo kubeadm init --pod-network-cidr=10.244.0.0/16

The 10.244.0.0/16 range matches Flannel's default network configuration, which we install in the next step.

After a successful setup, copy the kubeconfig file:

mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Step 2: Install a Pod Network Add-on (Flannel)

kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

Now your Kubernetes control plane is ready. You can add worker nodes using the kubeadm join command printed at the end of initialization. If you are running everything on a single machine, also remove the control-plane taint so application pods can be scheduled on it:

kubectl taint nodes --all node-role.kubernetes.io/control-plane-


5. Deploying Microservices on Kubernetes

Now that we have a working Kubernetes cluster, let’s deploy a basic microservices example.

Example Architecture

We’ll deploy a sample e-commerce application with:

  • Frontend (React app)
  • Backend API (Node.js)
  • MongoDB (database)

Each service will be defined in its own Deployment and exposed through a Kubernetes Service.

Step 1: MongoDB Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
        - name: mongo
          image: mongo:6
          ports:
            - containerPort: 27017
---
apiVersion: v1
kind: Service
metadata:
  name: mongo
spec:
  selector:
    app: mongo
  ports:
    - port: 27017
      targetPort: 27017

Save the Deployment and the Service as mongo-deployment.yaml and mongo-service.yaml, then apply them:

kubectl apply -f mongo-deployment.yaml
kubectl apply -f mongo-service.yaml

Step 2: Backend API Deployment

Configure your backend to use mongo as the database host.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: backend
          image: myrepo/backend:latest
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend
  ports:
    - port: 3000
      targetPort: 3000
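Kubernetes DNS makes the database reachable at the Service name mongo (or mongo.default.svc.cluster.local in full). A sketch of how the backend container could pick that up; the variable name MONGO_URL is an assumption about the application, not something Kubernetes mandates:

```yaml
# Add under the backend container spec. MONGO_URL is a
# hypothetical variable your application would read; the
# "shop" database name is likewise illustrative.
env:
  - name: MONGO_URL
    value: "mongodb://mongo:27017/shop"
```

Apply the backend manifests the same way you applied the MongoDB ones.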

Step 3: Frontend Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: myrepo/frontend:latest
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  selector:
    app: frontend
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30007

Now, access your app at http://<Node-IP>:30007, where <Node-IP> is the address of any node in the cluster.


6. Managing Microservices on Kubernetes

Kubernetes gives you full control over microservices management:

Scaling Services

Scale your frontend or backend based on traffic:

kubectl scale deployment frontend --replicas=4

Rolling Updates

To update a backend image:

kubectl set image deployment/backend backend=myrepo/backend:v2

Kubernetes performs a rolling update by default.
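The rollout behavior can be tuned per Deployment. The defaults are 25% maxSurge and 25% maxUnavailable; a sketch with explicit, more conservative values:

```yaml
# Under spec: in the backend Deployment.
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 1        # at most 1 extra pod during the update
    maxUnavailable: 0  # never drop below the desired replica count
```

With maxUnavailable set to 0, Kubernetes only terminates an old pod once its replacement is ready, trading update speed for zero capacity loss.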

Rollback

If something goes wrong:

kubectl rollout undo deployment/backend

Resource Limits

Set CPU and memory requests and limits under each container in a Deployment spec:

resources:
  requests:
    memory: "256Mi"
    cpu: "250m"
  limits:
    memory: "512Mi"
    cpu: "500m"
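The self-healing mentioned in section 1 also depends on telling Kubernetes how to check a container's health. A sketch of liveness and readiness probes for the backend; the /healthz path is an assumption about the application:

```yaml
# Add under the backend container spec. The /healthz endpoint
# is hypothetical; use whatever health route your app exposes.
livenessProbe:
  httpGet:
    path: /healthz
    port: 3000
  initialDelaySeconds: 10
  periodSeconds: 15
readinessProbe:
  httpGet:
    path: /healthz
    port: 3000
  periodSeconds: 5
```

A failing liveness probe restarts the container; a failing readiness probe only removes the pod from Service endpoints until it recovers.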

7. Monitoring and Logging

Metrics Server

Install the metrics server to enable resource usage metrics (on kubeadm clusters with self-signed kubelet certificates, you may also need to add the --kubelet-insecure-tls flag to the metrics-server container args):

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

Use it with:

kubectl top nodes
kubectl top pods
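With metrics flowing, you can also let Kubernetes scale for you. A sketch of a HorizontalPodAutoscaler for the backend (the replica bounds and utilization target are illustrative; it requires the CPU requests set in section 6 to compute utilization):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: backend
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: backend
  minReplicas: 2
  maxReplicas: 6
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out above 70% of requested CPU
```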

Logging

Use kubectl logs to check logs:

kubectl logs deployment/backend

For centralized logging and cluster-wide monitoring, consider:

  • EFK Stack (Elasticsearch, Fluentd, Kibana)
  • Prometheus + Grafana for monitoring

8. Conclusion

Deploying and managing microservices on Debian 12 Bookworm with Kubernetes offers a scalable, resilient, and maintainable architecture for modern applications. From setting up the Kubernetes cluster to deploying microservices and managing them efficiently, Debian proves to be a robust environment for container orchestration.

Whether you’re experimenting locally or managing production environments, this combination provides a stable foundation. Going forward, consider integrating CI/CD pipelines, secrets management (via Vault or Kubernetes Secrets), and security policies to further enhance your deployments.