Clustering
Functionality available
Baruwa is capable of running in a cluster.
Full Baruwa functionality is available from any member of a Baruwa cluster, and all cluster members have equal status. This allows you to provide round-robin access using either load balancers or DNS configuration, making the running of a cluster totally transparent to end users.
Cluster-wide as well as per-node status information is visible via the Global status and Scanner node status pages.
Requirements
Baruwa stores client session information in Memcached, so all the nodes in the cluster should be configured to use the same Memcached server.
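As a sketch, on a RHEL-style system the shared Memcached instance can be bound to a cluster-facing address so all nodes can reach it (the address below is an assumption for this example):

```shell
# /etc/sysconfig/memcached on the shared backend host.
# Bind to the cluster-facing address (192.168.1.10 is an example)
# instead of the default localhost-only listener.
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 192.168.1.10"
```

Each cluster node would then be pointed at 192.168.1.10:11211 as its session store.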
All nodes should be configured either to use a clustered MQ broker or to use the same MQ broker as the other nodes. The nodes should be aware of the other nodes' queues to enable them to submit tasks to those queues.
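If you opt for a clustered broker, joining a second RabbitMQ node to an existing one can be sketched as follows (the host names mq1 and mq2 are assumptions; run these on the joining node, mq2, with mq1 already running and both nodes sharing the same Erlang cookie):

```shell
# Stop the RabbitMQ application (the Erlang VM keeps running)
rabbitmqctl stop_app

# Join the cluster formed by the node running on host mq1
rabbitmqctl join_cluster rabbit@mq1

# Restart the application on this node
rabbitmqctl start_app

# Verify that both nodes now appear in the cluster
rabbitmqctl cluster_status
```
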
All the nodes within a cluster should be configured to write to a single database and to index data to a single or distributed Sphinx server.
The full requirements are:
- Shared Memcached server
- Shared PostgreSQL server
- Shared MQ broker or clustered broker
- Shared Sphinx server or distributed Sphinx servers
The recommended setup is to have Memcached, PostgreSQL, RabbitMQ and Sphinx running on a separate server. This is called the Distributed Backend Distributed Frontend topology.
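For the shared PostgreSQL server, client access for the cluster nodes is granted in pg_hba.conf. A minimal sketch, assuming the database and role are both named baruwa and the nodes sit on 192.168.1.0/24 (both are assumptions for this example):

```shell
# pg_hba.conf on the shared PostgreSQL server:
# allow the Baruwa cluster nodes to connect with password authentication.
# TYPE  DATABASE  USER    ADDRESS           METHOD
host    baruwa    baruwa  192.168.1.0/24    md5
```

Reload PostgreSQL after editing the file for the change to take effect.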
Note
If installed using the correct System Profiles, the correct ports will be open on the host firewall. You may however wish to make the firewall more restrictive by allowing only your cluster machines to connect to the ports.
The firewall on the server hosting the above shared services needs to be configured to allow the following connections from the cluster nodes:
- TCP 9312, 9306 - Sphinx
- TCP 5432 - PostgreSQL or TCP 6432 - Pgbouncer
- TCP 4369 - RabbitMQ EPMD
- TCP 11211 - Memcached
- TCP 1027 - Quarantine synchronization
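The port list above can be restricted to the cluster machines with firewalld, for example. A sketch assuming the cluster nodes live on 192.168.1.0/24 (the subnet is an assumption for this example):

```shell
# On the shared backend host: create a zone limited to the cluster subnet
firewall-cmd --permanent --new-zone=baruwa-cluster
firewall-cmd --permanent --zone=baruwa-cluster --add-source=192.168.1.0/24

# Open the shared-service ports (Sphinx, PostgreSQL/Pgbouncer, RabbitMQ EPMD,
# Memcached, quarantine synchronization) to that zone only
for port in 9312 9306 5432 6432 4369 11211 1027; do
    firewall-cmd --permanent --zone=baruwa-cluster --add-port=${port}/tcp
done

firewall-cmd --reload
```
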
Limitations
Host specific quarantines
Note
This limitation is not present when using a shared quarantine.
Quarantines are node specific, so messages quarantined on a failed node will not be accessible until the node is restored.
Management traffic
Given that the primary function of the Baruwa System is processing of email, full high availability is limited to the mail processing function.
In the event of backend server connectivity or functionality failure, email processing will NOT be disrupted and will continue functioning normally.
The management interface, however, will be inaccessible in the event of backend server connectivity or functionality failure.
When the backend server connectivity or functionality is restored, resynchronization of the system will take place and the management interface will return to normal functionality.
Load Balancers
Baruwa Enterprise Edition can be set up to use load balancers that support the PROXY protocol, the most popular being HAProxy.
To use Baruwa Enterprise Edition SMTP servers with these load balancers, you need to specify the load balancer IP addresses in the Load Balancer IP's field on the MTA Settings screen in baruwa-setup.
A sample HAProxy configuration with both HTTP and SMTP being load balanced is below.
global
    log 127.0.0.1 local0
    log 127.0.0.1 local1 notice
    maxconn 4096
    chroot /var/lib/haproxy
    user haproxy
    group haproxy
    daemon

defaults
    log global
    mode http
    option httplog
    option dontlognull
    option redispatch
    retries 3
    maxconn 2000
    timeout connect 5000
    timeout client 50000
    timeout server 50000

listen http :80
    mode tcp
    option tcplog
    balance roundrobin
    server web1 192.168.1.20:80 check
    server web2 192.168.1.23:80 check

listen https :443
    mode tcp
    option tcplog
    balance roundrobin
    server web1 192.168.1.20:443 check
    server web2 192.168.1.23:443 check

listen smtp :25
    mode tcp
    no option http-server-close
    option tcplog
    timeout server 1m
    timeout connect 5s
    balance roundrobin
    server smtp1 192.168.1.22:25 send-proxy
    server smtp2 192.168.1.24:25 send-proxy