High-availability with HAProxy and keepalived on Ubuntu 12.04

‘Lo there!

Here is a little post on how you can easily set up a highly available HAProxy service on Ubuntu 12.04! I tend to use HAProxy more and more these days, adding more backends and connections to it. Then I thought: what if it goes down? How can I ensure high availability for that service?

Enter keepalived, which lets you set up a second HAProxy node to create an active/passive cluster. If the main HAProxy node goes down, the second one takes over.

In the following examples, I assume the following:

  • Master node address: 10.10.1.1
  • Slave node address: 10.10.1.2
  • Highly available HAProxy virtual address: 10.10.1.3

Install HAProxy

You’ll need to install it on both nodes:

$ sudo apt-get install haproxy

Now, edit the file /etc/default/haproxy and set the property ENABLED to 1.
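After the change, the relevant line in /etc/default/haproxy should read:

ENABLED=1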

Start the service, and you’re done 🙂

$ sudo service haproxy start

Install keepalived

Prerequisite

You’ll need to update your sysctl configuration to allow non-local addresses binding:

$ echo "net.ipv4.ip_nonlocal_bind = 1" | sudo tee -a /etc/sysctl.conf
$ sudo sysctl -p
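You can confirm that the setting is active:

$ sysctl net.ipv4.ip_nonlocal_bind
net.ipv4.ip_nonlocal_bind = 1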

Setup

Install the package:

$ sudo apt-get install keepalived

Create the configuration file /etc/keepalived/keepalived.conf on the master node:

/etc/keepalived/keepalived.conf

global_defs {
    # Keepalived process identifier
    lvs_id haproxy_KA
}

# Script used to check if HAProxy is running
vrrp_script check_haproxy {
    script "killall -0 haproxy"
    interval 2
    weight 2
}

# Virtual interface
vrrp_instance VIP_01 {
    state MASTER
    interface eth0
    virtual_router_id 7
    priority 101

    virtual_ipaddress {
        10.10.1.3
    }

    track_script {
        check_haproxy
    }
}
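The check script relies on killall -0, which sends no signal at all but exits with status 0 only if at least one haproxy process exists. You can try it by hand to see exactly what keepalived will observe (killall comes from the psmisc package, installed by default on Ubuntu):

$ killall -0 haproxy && echo "haproxy is running" || echo "haproxy is down"
haproxy is running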

Do the same on the slave node, with a few changes (a different lvs_id, the state set to BACKUP, and a lower priority):

/etc/keepalived/keepalived.conf

global_defs {
    # Keepalived process identifier
    lvs_id haproxy_KA_passive
}

# Script used to check if HAProxy is running
vrrp_script check_haproxy {
    script "killall -0 haproxy"
    interval 2
    weight 2
}

# Virtual interface
vrrp_instance VIP_01 {
    state BACKUP
    interface eth0
    virtual_router_id 7
    priority 100

    virtual_ipaddress {
        10.10.1.3
    }

    track_script {
        check_haproxy
    }
}

WARNING: Be sure the virtual_router_id you assign to this keepalived configuration is unique on the 10.10.1.0 subnet.
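If you are not sure whether an ID is already in use, one way to check is to listen for VRRP advertisements already flowing on the subnet (assuming tcpdump is available; IP protocol 112 is VRRP). Each advertisement carries the vrid of the instance that sent it, so any vrid you see in the output is already taken:

$ sudo tcpdump -i eth0 ip proto 112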

Last step: start the keepalived service on the master node first, and then on the slave.

$ sudo service keepalived start
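keepalived logs through syslog on Ubuntu, so a quick way to see which state each node ended up in is something like the following (the exact wording varies with the keepalived version):

$ grep -i vrrp /var/log/syslog | tail

On the master node the VIP_01 instance should report entering the MASTER state; on the slave node it should report the BACKUP state.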

You can check that the virtual IP address is created with the following command on the master node:

$ ip a | grep eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
 inet 10.10.1.1/25 brd 10.10.1.127 scope global eth0
 inet 10.10.1.3/32 scope global eth0

If you stop the HAProxy service on the master node or shut the node down, the virtual IP will be transferred to the passive node; you can run the same command there to verify that the VIP has moved.
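As a quick failover test, stop HAProxy on the master and check the addresses on the slave; within a few seconds the VIP should show up there:

master$ sudo service haproxy stop
slave$ ip a | grep eth0

The 10.10.1.3 address should now be listed on the slave's eth0, and it should move back to the master once HAProxy is running there again (the master's higher priority lets it reclaim the VIP).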

 


4 thoughts on “High-availability with HAProxy and keepalived on Ubuntu 12.04”

  1. Thanks for the instructions. I followed every step to set up my HAProxy cluster. However, my HAProxy is not able to fail over to the other one. Here is what I have at the last step on my master node:
    root@i-9fed:/etc/keepalived# ip a | grep eth0
    2: eth0: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    inet 10.109.29.47/27 brd 10.109.29.63 scope global eth0
    inet 10.109.29.60/32 scope global eth0

    Here is what I have at the last step of my slave node:
    root@i-3eb7:/home/cloudadmin# ip a | grep eth0
    2: eth0: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    inet 10.109.29.48/27 brd 10.109.29.63 scope global eth0
    inet 10.109.29.60/32 scope global eth0

    Then I did a ‘service haproxy stop’ on the master node. The slave node was not able to take over. Any clue what went wrong?

    1. Hey Susan,

      No idea, but it seems that the virtual IP you’ve set (10.109.29.60) is shared by both HAProxy nodes; it should only be present on the master node. Maybe an issue with the priority in the keepalived configuration: did you set a different priority for each node?

  2. This can happen if your virtual_router_id is NOT the same on both nodes. The virtual_router_id must match.
