Ben the DBA

Deploying Patroni with etcd and HAProxy for High Availability

This article explains how to set up a high-availability PostgreSQL cluster using Patroni, etcd and HAProxy.

Architecture Overview

Each PostgreSQL instance is managed by a local Patroni process. Patroni uses etcd for leader election and cluster state coordination. HAProxy queries the Patroni REST API to route traffic to the current leader.
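The routing decision is driven purely by HTTP status codes: Patroni's REST API answers 200 on /primary (or /leader) only on the current leader and 503 everywhere else, which is exactly what HAProxy's health check consumes. You can probe this by hand; the IPs below are the example addresses used later in this article:

```shell
# Ask each Patroni instance whether it is the leader.
# /primary returns HTTP 200 on the leader and 503 on replicas.
for node in 192.168.1.111 192.168.1.112 192.168.1.113; do
  code=$(curl -s -o /dev/null -w '%{http_code}' "http://$node:8008/primary")
  echo "$node -> $code"
done
```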

Infrastructure Overview

🔷 etcd nodes (dedicated): etcd-1 (192.168.1.101), etcd-2 (192.168.1.102), etcd-3 (192.168.1.103)

🟢 PostgreSQL nodes (with Patroni): pg-node1 (192.168.1.111), pg-node2 (192.168.1.112), pg-node3 (192.168.1.113)

🟣 HAProxy node (optional redundancy): haproxy01 (192.168.1.121)

Building etcd from Source with Go 1.23+

If you're on a system where etcd is not available via dnf/yum, you can compile it manually using the required version of Go.

1. 📦 Install Go 1.23+

Remove older version if installed via dnf:

sudo dnf remove -y golang

Download and install Go 1.23 manually:

cd /usr/local
sudo curl -LO https://go.dev/dl/go1.23.0.linux-amd64.tar.gz
sudo tar -C /usr/local -xzf go1.23.0.linux-amd64.tar.gz

Update your PATH:

export PATH=$PATH:/usr/local/go/bin

Make it permanent (optional):

echo 'export PATH=$PATH:/usr/local/go/bin' >> ~/.bashrc

Check version:

go version

You should see: go version go1.23.0 linux/amd64

2. 📁 Build etcd

Install git:

sudo dnf install -y git

Set up Go workspace:

mkdir -p ~/go/src/go.etcd.io/etcd
cd ~/go/src/go.etcd.io/etcd
git clone -b v3.5.21 https://github.com/etcd-io/etcd.git .

Run the build:

./build.sh

After success, binaries are in bin/

3. ✅ Make etcd usable system-wide

sudo cp bin/etcd bin/etcdctl /usr/local/bin/

Verify:

etcd --version
etcdctl version

You are now ready to configure etcd on your system.

Deploying the etcd Cluster

Example (on etcd-1). Create the unit file:

vi /etc/systemd/system/etcd.service

Paste this (adapting the name and IPs for each node):

[Unit]
Description=etcd key-value store
After=network.target

[Service]
ExecStart=/usr/local/bin/etcd \
  --name etcd-1 \
  --data-dir=/var/lib/etcd \
  --initial-advertise-peer-urls http://192.168.1.101:2380 \
  --listen-peer-urls http://192.168.1.101:2380 \
  --listen-client-urls http://192.168.1.101:2379,http://127.0.0.1:2379 \
  --advertise-client-urls http://192.168.1.101:2379 \
  --initial-cluster etcd-1=http://192.168.1.101:2380,etcd-2=http://192.168.1.102:2380,etcd-3=http://192.168.1.103:2380 \
  --initial-cluster-token patroni-cluster \
  --initial-cluster-state new
Restart=always
RestartSec=5s
LimitNOFILE=40000

[Install]
WantedBy=multi-user.target

Activate the service:

sudo systemctl daemon-reload
sudo systemctl enable etcd
sudo systemctl start etcd

Repeat on the other nodes with appropriate names and IPs.
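Once all three members are up, it is worth confirming quorum before moving on to Patroni. A quick check, using the example endpoints above (etcd 3.4+ speaks the v3 API by default):

```shell
# List cluster members and check the health of every endpoint
ENDPOINTS=http://192.168.1.101:2379,http://192.168.1.102:2379,http://192.168.1.103:2379
etcdctl --endpoints=$ENDPOINTS member list
etcdctl --endpoints=$ENDPOINTS endpoint health
```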

Install prerequisites on PostgreSQL/Patroni nodes

sudo dnf install -y git gcc python3-devel libpq-devel python3-pip
sudo pip3 install --upgrade pip
sudo pip3 install 'patroni[etcd,etcd3]' psycopg2-binary psycopg

Install PostgreSQL 13 on each pg-node

sudo dnf install -y https://download.postgresql.org/pub/repos/yum/reporpms/EL-9-x86_64/pgdg-redhat-repo-latest.noarch.rpm
sudo dnf -qy module disable postgresql
sudo dnf install -y postgresql13 postgresql13-server
sudo /usr/pgsql-13/bin/postgresql-13-setup initdb

Patroni Configuration

Sample /etc/patroni.yml (heads-up: etcd v3.5 disables the v2 API by default, so if Patroni cannot register using the etcd: section below, rename it to etcd3:, which the patroni[etcd3] extra supports):

scope: postgres-cluster
name: pg-node1

restapi:
  listen: 0.0.0.0:8008
  connect_address: 192.168.1.111:8008

etcd:
  hosts: 192.168.1.101:2379,192.168.1.102:2379,192.168.1.103:2379

postgresql:
  listen: 0.0.0.0:5432
  connect_address: 192.168.1.111:5432
  data_dir: /var/lib/pgsql/13/data
  bin_dir: /usr/pgsql-13/bin
  pg_hba:
    - host replication replicator 192.168.1.0/24 md5
    - host all all 192.168.1.0/24 md5
  authentication:
    replication:
      username: replicator
      password: replpass
    superuser:
      username: postgres
      password: mypassword

tags:
  nofailover: false
  noloadbalance: false
  clonefrom: false
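The commands below start and stop Patroni through systemd, so each pg-node also needs a unit file. A minimal sketch, assuming the pip install above placed the binary in /usr/local/bin (verify with `which patroni`); save it as /etc/systemd/system/patroni.service:

```ini
[Unit]
Description=Patroni PostgreSQL HA manager
After=network.target

[Service]
Type=simple
User=postgres
Group=postgres
ExecStart=/usr/local/bin/patroni /etc/patroni.yml
KillMode=process
Restart=on-failure
RestartSec=5s

[Install]
WantedBy=multi-user.target
```

Then run `sudo systemctl daemon-reload && sudo systemctl enable patroni` before starting it.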

Then clean up and restart:

sudo systemctl stop patroni
sudo rm -rf /var/lib/pgsql/13/data/*
sudo systemctl start patroni

Monitor the logs:

journalctl -u patroni -f

Expected output:

launching bootstrap script... ok
executing post-bootstrap init... ok
Lock owner: None; I am pg-node1
bootstrap in progress

It might also briefly show:

not healthy enough for leader race

Just wait a few seconds, and Patroni will eventually declare itself leader:

patronictl -c /etc/patroni.yml list

+ Cluster: postgres-cluster ---------+---------+-----+
| Member   | Host          | Role   | State   | ... |
+----------+---------------+--------+---------+-----+
| pg-node1 | 192.168.1.111 | Leader | running | ... |
+----------+---------------+--------+---------+-----+

You can now proceed with adding the replica nodes.
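Each replica reuses the same /etc/patroni.yml with only the node-specific fields changed; for pg-node2, for example (assuming the example IPs above), the differing lines are:

```yaml
name: pg-node2

restapi:
  connect_address: 192.168.1.112:8008

postgresql:
  connect_address: 192.168.1.112:5432
```

On first start each replica finds an existing leader in etcd and bootstraps itself with a base backup instead of running initdb.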

Setting Up HAProxy

On the HAProxy node (haproxy01), install the package:

sudo dnf install -y haproxy

Edit the configuration file:

sudo nano /etc/haproxy/haproxy.cfg

Paste the following configuration:

global
        maxconn 100
        log     127.0.0.1 local2

defaults
        log global
        mode tcp
        retries 2
        timeout client 30m
        timeout connect 4s
        timeout server 30m
        timeout check 5s

listen stats
    mode http
    bind *:7000
    stats enable
    stats uri /

listen postgres
    bind *:5000
    option httpchk
    http-check expect status 200
    default-server inter 3s fall 3 rise 2 on-marked-down shutdown-sessions
    server pg-node1 192.168.1.111:5432 check port 8008
    server pg-node2 192.168.1.112:5432 check port 8008
    server pg-node3 192.168.1.113:5432 check port 8008

Then enable and start HAProxy:

sudo systemctl enable haproxy
sudo systemctl start haproxy
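A quick way to confirm that HAProxy is forwarding to a live primary is pg_isready, shipped with the PostgreSQL client packages (the address is the HAProxy node from this article):

```shell
# Should report "accepting connections" when routing works
pg_isready -h 192.168.1.121 -p 5000
```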

You can now configure clients to connect to the HAProxy node at 192.168.1.121:5000. HAProxy will automatically route traffic to the current PostgreSQL leader managed by Patroni. To test connectivity:

psql -h 192.168.1.121 -p 5000 -U postgres
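You can also confirm that HAProxy really landed you on the leader: on the primary, pg_is_in_recovery() returns false:

```shell
# "f" means this connection reached the primary; "t" would indicate a replica
psql -h 192.168.1.121 -p 5000 -U postgres -tAc 'SELECT pg_is_in_recovery();'
```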

Switchover Test

If everything is healthy, run a controlled switchover as the postgres user on one of the nodes:

patronictl -c /etc/patroni.yml switchover
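patronictl prompts interactively for the current leader, a candidate, and a scheduled time; for scripting, these can be passed up front (flag names vary by Patroni version; older releases use --master instead of --leader):

```shell
# Non-interactive switchover: demote pg-node1 and promote pg-node2
patronictl -c /etc/patroni.yml switchover --leader pg-node1 --candidate pg-node2 --force
```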

Failover Test

Connect through HAProxy (psql -h 192.168.1.121 -p 5000 -U postgres) and create some test data:

CREATE DATABASE test;
\c test
CREATE TABLE customer(name text,age integer);
INSERT INTO CUSTOMER VALUES('john',30);
INSERT INTO CUSTOMER VALUES('dawson',35);

SELECT * FROM CUSTOMER;

With the data in place, simulate a failure by stopping Patroni on the primary server:

systemctl stop patroni

Check the cluster status:

patronictl -c /etc/patroni.yml list

Verify the data:

psql -h 192.168.1.121 -p 5000 -U postgres

\c test
SELECT * FROM CUSTOMER;