How to Configure Load Balancing with HAProxy on Debian 12 Bookworm

Learn how to set up load balancing with HAProxy on Debian 12 Bookworm. A detailed guide covering installation, configuration, testing, and optimization for efficient traffic management.

As web applications scale, distributing traffic efficiently across multiple backend servers becomes critical to ensure high availability and performance. One of the most popular and efficient open-source solutions for this task is HAProxy. Short for High Availability Proxy, HAProxy is a fast and reliable TCP/HTTP load balancer that excels in handling large volumes of traffic.

In this guide, we will walk through the process of installing and configuring HAProxy on Debian 12 Bookworm. You’ll learn how to set up HAProxy as a load balancer to distribute traffic across multiple backend web servers, enabling better load distribution and high availability.


1. Introduction to HAProxy

HAProxy is widely used for its robustness and versatility in high-performance environments. It supports various load-balancing algorithms and provides health checks, SSL termination, stickiness, and more.

Key Features

  • Layer 4 (TCP) and Layer 7 (HTTP) load balancing
  • SSL termination and passthrough
  • Session persistence (stickiness)
  • Health checks for backend servers
  • High performance and low latency

2. Prerequisites

To follow this tutorial, you’ll need:

  • A Debian 12 Bookworm system
  • Root or sudo privileges
  • At least two backend servers running a web service (e.g., Apache or Nginx)
  • Basic familiarity with the command line

3. Installing HAProxy on Debian 12

HAProxy is available in the official Debian 12 repositories, so installation is straightforward.

Step 1: Update Your System

sudo apt update && sudo apt upgrade -y

Step 2: Install HAProxy

sudo apt install haproxy -y

Once installed, verify the version:

haproxy -v

You should see output similar to:

HA-Proxy version 2.6.12 2023/06/13

4. Understanding HAProxy Configuration

HAProxy’s main configuration file is located at:

/etc/haproxy/haproxy.cfg

This file contains multiple sections, such as:

  • global: Global settings like logging and process limits.
  • defaults: Default settings inherited by frontend and backend sections.
  • frontend: Defines how clients connect to HAProxy.
  • backend: Defines how HAProxy connects to backend servers.

Before we dive into configuration, let’s prepare some backend servers.


5. Setting Up Backend Web Servers

For demonstration purposes, assume we have two backend servers with the following IP addresses:

  • Server 1: 192.168.1.101
  • Server 2: 192.168.1.102

Both servers should be running a basic web server. If you want to test locally, you can simulate this by using different ports on localhost.
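As the text notes, the two-backend setup can also be simulated on a single machine by serving on different localhost ports. A minimal sketch using Python's built-in http.server (the ports 8081/8082 and the /tmp paths are arbitrary choices for illustration):

```shell
# Create two one-page "sites" and serve each on its own localhost port.
mkdir -p /tmp/web1 /tmp/web2
echo "Web Server 1" > /tmp/web1/index.html
echo "Web Server 2" > /tmp/web2/index.html

# Start two throwaway servers in the background and remember their PIDs.
python3 -m http.server 8081 --directory /tmp/web1 >/dev/null 2>&1 &
PID1=$!
python3 -m http.server 8082 --directory /tmp/web2 >/dev/null 2>&1 &
PID2=$!
sleep 1  # give the servers a moment to bind

OUT1=$(curl -s http://127.0.0.1:8081/)
OUT2=$(curl -s http://127.0.0.1:8082/)
echo "$OUT1"   # Web Server 1
echo "$OUT2"   # Web Server 2

kill "$PID1" "$PID2"   # stop the throwaway servers
```

In the HAProxy configuration later in this guide, such local backends would be referenced as 127.0.0.1:8081 and 127.0.0.1:8082 instead of the two LAN addresses.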

Example for Server 1:

sudo apt install apache2 -y
echo "Web Server 1" | sudo tee /var/www/html/index.html

On Server 2:

sudo apt install apache2 -y
echo "Web Server 2" | sudo tee /var/www/html/index.html

Ensure both web servers are accessible from the HAProxy server.

Test access:

curl http://192.168.1.101
curl http://192.168.1.102

6. Configuring HAProxy for Load Balancing

Now, let’s edit the HAProxy configuration.

Step 1: Backup the Original Config

sudo cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak

Step 2: Edit the Configuration File

Open the file with a text editor:

sudo nano /etc/haproxy/haproxy.cfg

Replace or update the contents with:

global
    log /dev/log local0
    log /dev/log local1 notice
    daemon
    maxconn 2048
    user haproxy
    group haproxy

defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    timeout connect 5s
    timeout client  50s
    timeout server  50s
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

frontend http_front
    bind *:80
    default_backend http_back

backend http_back
    balance roundrobin
    option httpchk
    server web1 192.168.1.101:80 check
    server web2 192.168.1.102:80 check

Explanation:

  • balance roundrobin: Distributes requests across the servers in rotation.
  • option httpchk: Enables HTTP health checks; with no arguments, HAProxy sends an OPTIONS / request and treats a 2xx or 3xx response as healthy.
  • server web1 ... check: Declares a backend server; the check keyword enables periodic health checks against it.
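Round-robin can also be weighted. As a hypothetical variant of the backend above (not part of the main setup), giving web1 a weight of 2 makes HAProxy send it roughly twice as many requests as web2:

backend http_back
    balance roundrobin
    option httpchk
    server web1 192.168.1.101:80 check weight 2
    server web2 192.168.1.102:80 check weight 1

This is useful when one backend has more capacity than the other.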

Save and close the file.
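Before restarting the service in the next section, it is worth validating the syntax; HAProxy's -c flag parses the configuration and reports errors without starting anything:

```shell
# Parse and validate the config file only; exits non-zero and prints
# the offending line number if there is a syntax error.
haproxy -c -f /etc/haproxy/haproxy.cfg
```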


7. Starting and Enabling HAProxy

After configuring, restart the HAProxy service:

sudo systemctl restart haproxy

Enable it to start at boot:

sudo systemctl enable haproxy

Check status:

sudo systemctl status haproxy

8. Verifying Load Balancing Setup

Now open a browser and navigate to the IP of your HAProxy server:

http://<haproxy-server-ip>

You should see either Web Server 1 or Web Server 2. Refresh the page multiple times. If round-robin is working correctly, the server messages will alternate.

Alternatively, use curl repeatedly:

for i in {1..10}; do curl http://<haproxy-server-ip>; done
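To see the split at a glance, the loop's output can be tallied with sort | uniq -c. Since the host above is a placeholder, the canned printf below stands in for four real responses; against a live balancer you would pipe the curl loop into the same tally:

```shell
# Canned responses standing in for the curl loop's output; with a real
# balancer, replace the printf with the curl loop shown above.
printf 'Web Server 1\nWeb Server 2\nWeb Server 1\nWeb Server 2\n' \
  | sort | uniq -c
```

With perfect round-robin, each backend's count should be roughly equal; a skewed tally usually means a failed health check has taken one server out of rotation.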

9. Optional: Enabling HAProxy Statistics Page

HAProxy provides a useful web interface for monitoring.

Add the following section to the bottom of your /etc/haproxy/haproxy.cfg:

listen stats
    bind *:8404
    stats enable
    stats uri /stats
    stats refresh 10s
    stats auth admin:strongpassword

Restart HAProxy:

sudo systemctl restart haproxy

Access the stats page at:

http://<haproxy-server-ip>:8404/stats

Log in with the username and password you configured (be sure to replace strongpassword with a password of your own).
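The same credentials work for scripted checks: appending ;csv to the stats URI asks HAProxy for machine-readable CSV output instead of the HTML page (placeholder host and the example password from above):

```shell
# Fetch the stats as CSV for scripting or monitoring; quote the URL so
# the shell does not interpret the semicolon.
curl -s -u admin:strongpassword 'http://<haproxy-server-ip>:8404/stats;csv'
```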


10. Conclusion

In this guide, we explored how to set up HAProxy on Debian 12 Bookworm as a load balancer for web servers. HAProxy provides a powerful, flexible, and scalable way to handle increasing traffic loads and ensure high availability for your applications.

Summary of What We Did

  • Installed HAProxy on Debian 12
  • Set up backend web servers
  • Configured HAProxy for round-robin load balancing
  • Verified that traffic is properly distributed
  • Optionally enabled a statistics dashboard

This setup can be extended in many ways, including adding SSL termination, sticky sessions, rate limiting, and integration with monitoring systems.

Whether you’re managing a small cluster or a production-scale deployment, HAProxy remains a go-to tool in the sysadmin and DevOps toolkit.