Build RabbitMQ Server High Availability Cluster

Table of contents:

  • Preparations
  • Build RabbitMQ Server stand-alone version
  • Concepts related to RabbitMQ Server high availability cluster
  • Build a RabbitMQ Server high-availability cluster
  • Build HAProxy load balancer

Because the company's test server is temporarily unavailable, I rebuilt the RabbitMQ Server high-availability cluster on my own computer; this post records the process for future review.

For the RabbitMQ cluster on the company's test server I built three nodes. With limited space on my own machine, I build a two-node high-availability cluster here, using the Vagrant virtual machine management tool.

Environment introduction:

RabbitMQ node   IP address     Node type   Operating system
node1           192.168.1.50   DISK        CentOS 7.0 64-bit
node2           192.168.1.51   DISK        CentOS 7.0 64-bit

Overall structure: a two-node RabbitMQ cluster (node1 and node2) fronted by HAProxy as the load balancer.

1. Preparations

First, on the node1 server, edit the hostname with vi /etc/hostname:

node1

On the node2 server, do the same with vi /etc/hostname:

node2

Then on the node1 server, edit vi /etc/hosts:

192.168.1.50 node1
192.168.1.51 node2
127.0.0.1   node1
::1         node1

On the node2 server, edit vi /etc/hosts:

192.168.1.50 node1
192.168.1.51 node2
127.0.0.1   node2
::1         node2

Then verify with hostnamectl status; if the static hostname is not correct, set it again:

[root@node1 ~]# hostnamectl status
   Static hostname: node1
         Icon name: computer-vm
           Chassis: vm
        Machine ID: 241163503ce842c489360d0a48a606fc
           Boot ID: cdb59c025cb447e3afed7317af78979e
    Virtualization: oracle
  Operating System: CentOS Linux 7 (Core)
       CPE OS Name: cpe:/o:centos:centos:7
            Kernel: Linux 3.10.0-229.el7.x86_64
      Architecture: x86_64
[root@node1 ~]# hostnamectl --static set-hostname node1

For the downloads to go smoothly, it is also worth configuring an HTTP proxy:

[root@node1 ~]# export http_proxy=http://192.168.1.44:1087;export https_proxy=http://192.168.1.44:1087;
[root@node1 ~]# curl ip.cn
Current IP: 104.245.13.31  From: United States, Linost

2. Build RabbitMQ Server stand-alone version

The node1 server is used as the demonstration example below.

First, update packages and repositories:

[root@node1 ~]# yum -y update

Then install Erlang (the Erlang environment is required for RabbitMQ to run):

[root@node1 ~]# vi /etc/yum.repos.d/rabbitmq-erlang.repo
[rabbitmq-erlang]
name=rabbitmq-erlang
baseurl=https://dl.bintray.com/rabbitmq/rpm/erlang/20/el/7
gpgcheck=1
gpgkey=https://dl.bintray.com/rabbitmq/Keys/rabbitmq-release-signing-key.asc
repo_gpgcheck=0
enabled=1

[root@node1 ~]# yum -y install erlang socat

Then install RabbitMQ Server:

[root@node1 ~]# mkdir -p ~/download && cd ~/download
[root@node1 download]# wget https://www.rabbitmq.com/releases/rabbitmq-server/v3.6.10/rabbitmq-server-3.6.10-1.el7.noarch.rpm
[root@node1 download]# rpm --import https://www.rabbitmq.com/rabbitmq-release-signing-key.asc
[root@node1 download]# rpm -Uvh rabbitmq-server-3.6.10-1.el7.noarch.rpm

The command to uninstall RabbitMQ, should you ever need it:

[root@node1 ~]# rpm -e rabbitmq-server-3.6.10-1.el7.noarch

Once installed, you can start RabbitMQ Server:

[root@node1 download]# systemctl start rabbitmq-server

It can also be enabled as a system service so it starts at boot:

[root@node1 download]# systemctl enable rabbitmq-server
Created symlink from /etc/systemd/system/multi-user.target.wants/rabbitmq-server.service to /usr/lib/systemd/system/rabbitmq-server.service.

After the startup is successful, we can check the status of RabbitMQ Server:

[root@node1 download]# systemctl status rabbitmq-server
rabbitmq-server.service - RabbitMQ broker
   Loaded: loaded (/usr/lib/systemd/system/rabbitmq-server.service; disabled)
   Active: active (running) since Fri 2018-04-27 04:44:31 CEST; 3min 27s ago
  Process: 17216 ExecStop=/usr/sbin/rabbitmqctl stop (code=exited, status=0/SUCCESS)
 Main PID: 17368 (beam.smp)
   Status: "Initialized"
   CGroup: /system.slice/rabbitmq-server.service
           ├─17368 /usr/lib64/erlang/erts-9.3/bin/beam.smp -W w -A 64 -P 1048576 -t 5000000 -stbt db -zdbbl 32000 -K true -- -root /usr/lib64/erlang -progname erl -- -home /var/lib/rabbitmq -- -pa /usr...
           ├─17521 /usr/lib64/erlang/erts-9.3/bin/epmd -daemon
           ├─17655 erl_child_setup 1024
           ├─17675 inet_gethost 4
           └─17676 inet_gethost 4

Apr 27 04:44:30 node1 rabbitmq-server[17368]: RabbitMQ 3.6.10. Copyright (C) 2007-2017 Pivotal Software, Inc.
Apr 27 04:44:30 node1 rabbitmq-server[17368]: ##  ##      Licensed under the MPL.  See http://www.rabbitmq.com/
Apr 27 04:44:30 node1 rabbitmq-server[17368]: ##  ##
Apr 27 04:44:30 node1 rabbitmq-server[17368]: ##########  Logs: /var/log/rabbitmq/[email protected]
Apr 27 04:44:30 node1 rabbitmq-server[17368]: ######  ##        /var/log/rabbitmq/[email protected]
Apr 27 04:44:30 node1 rabbitmq-server[17368]: ##########
Apr 27 04:44:30 node1 rabbitmq-server[17368]: Starting broker...
Apr 27 04:44:31 node1 rabbitmq-server[17368]: systemd unit for activation check: "rabbitmq-server.service"
Apr 27 04:44:31 node1 systemd[1]: Started RabbitMQ broker.
Apr 27 04:44:31 node1 rabbitmq-server[17368]: completed with 0 plugins.

Then enable the RabbitMQ web management console plugin:

[root@node1 download]# rabbitmq-plugins enable rabbitmq_management
The following plugins have been enabled:
  amqp_client
  cowlib
  cowboy
  rabbitmq_web_dispatch
  rabbitmq_management_agent
  rabbitmq_management

Applying plugin configuration to rabbit@node1... started 6 plugins.

The default guest user of RabbitMQ Server can only log in from localhost, so we also need to create an administrative user:

[root@node1 download]# rabbitmqctl add_user admin admin123 && 
rabbitmqctl set_user_tags admin administrator && 
rabbitmqctl set_permissions -p / admin ".*" ".*" ".*"
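
To confirm that the user and its permissions were created, the standard listing commands can be used:

[root@node1 download]# rabbitmqctl list_users
[root@node1 download]# rabbitmqctl list_permissions -p /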

Then open the ports RabbitMQ uses in the firewall (4369 for epmd, 25672 for inter-node communication, 5671-5672 for AMQP, 15672 for the management UI, 61613-61614 for STOMP, 1883 and 8883 for MQTT):

[root@node1 download]# firewall-cmd --zone=public --permanent --add-port=4369/tcp && 
firewall-cmd --zone=public --permanent --add-port=25672/tcp && 
firewall-cmd --zone=public --permanent --add-port=5671-5672/tcp && 
firewall-cmd --zone=public --permanent --add-port=15672/tcp && 
firewall-cmd --zone=public --permanent --add-port=61613-61614/tcp && 
firewall-cmd --zone=public --permanent --add-port=1883/tcp && 
firewall-cmd --zone=public --permanent --add-port=8883/tcp
success

Restart the firewall:

[root@node1 download]# firewall-cmd --reload
success

With that, the deployment of the stand-alone RabbitMQ Server is complete, and we can open the management UI at http://192.168.1.50:15672 in the browser:

Repeat the above setup process on the node2 server.

3. Concepts related to RabbitMQ Server high availability cluster

Purpose of designing a cluster

  • Allows consumers and producers to continue running in the event of a RabbitMQ node crash.
  • Scale the throughput of message communication by adding more nodes.

Cluster configuration

  • cluster : does not span network segments and is used within a LAN on the same segment; nodes can be added or removed dynamically; all nodes must run the same versions of RabbitMQ and Erlang.
  • federation : used across WANs; it allows an exchange or queue on one broker to receive messages published to an exchange or queue on another broker (a single machine or a cluster). A federated queue is similar to a one-way point-to-point connection; messages are forwarded between federated queues any number of times until a consumer accepts them. Federation is typically used to link an intermediate broker on the internet, distributing messages by subscription or as work queues.
  • shovel : connects brokers in a way similar to federation, but works at a lower level. Also suitable for WANs. Both federation and shovel ship as plugins; see the sketch after this list.
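
A minimal sketch of enabling them per node (the plugin names below are the standard ones bundled with RabbitMQ 3.6):

[root@node1 ~]# rabbitmq-plugins enable rabbitmq_federation rabbitmq_federation_management
[root@node1 ~]# rabbitmq-plugins enable rabbitmq_shovel rabbitmq_shovel_management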

Node type

  • RAM node : The memory node stores all the metadata definitions of queues, exchanges, bindings, users, permissions and vhosts in memory. The benefit is that operations such as exchange and queue declarations can be made faster.
  • Disk node : Metadata is stored on disk. A single-node system only allows disk-type nodes to prevent system configuration information from being lost when RabbitMQ is restarted.

Problem description: RabbitMQ requires at least one disk node in the cluster; all other nodes can be RAM nodes. When a node joins or leaves the cluster, the change must be persisted to at least one disk node. If the only disk node in the cluster crashes, the cluster keeps running, but nothing can be created, deleted, or changed (queues, exchanges, users, permissions, and so on) until that node recovers.
Solution: set up two disk nodes, so that at least one of them is always available to persist metadata changes.
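
If a node was joined with the wrong type, it can be switched between disk and RAM later without rebuilding the cluster; a sketch using the standard rabbitmqctl subcommand:

[root@node2 ~]# rabbitmqctl stop_app
[root@node2 ~]# rabbitmqctl change_cluster_node_type ram    # or: disc
[root@node2 ~]# rabbitmqctl start_app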

The Erlang cookie is the key that lets nodes authenticate to each other; all nodes in a cluster must share the same Erlang cookie. It is stored at /var/lib/rabbitmq/.erlang.cookie.

Note: this follows from how the rabbitmqctl command works. RabbitMQ is built on Erlang, so rabbitmqctl starts its own Erlang node and uses the Erlang distribution protocol to connect to the RabbitMQ node by node name; when two Erlang nodes connect, they exchange Erlang cookies for authentication.

Mirrored queues

RabbitMQ clusters generally run in one of two modes: normal mode and mirrored mode.

  • Normal mode : the default cluster mode. Take two nodes (rabbit01 and rabbit02) as an example. For a given queue, the message entities live on only one of the nodes, say rabbit01; rabbit01 and rabbit02 share only the same metadata, i.e. the queue structure. When a message enters the queue on rabbit01 and a consumer consumes from rabbit02, RabbitMQ forwards the message on the fly from rabbit01 through rabbit02 to the consumer. Consumers should therefore connect to every node and consume from each; in other words, the same logical queue should have physical queues on multiple nodes, otherwise all traffic exits through rabbit01 no matter which node consumers connect to, and rabbit01 becomes a bottleneck. If the rabbit01 node fails, rabbit02 cannot reach the unconsumed message entities stored on rabbit01: if the messages were persisted, consumption must wait until rabbit01 recovers; if not, the messages are lost.
  • Mirrored mode : the queues to be consumed are turned into mirrored queues that exist on multiple nodes, which gives RabbitMQ high availability. Message entities are actively synchronised between the mirror nodes, instead of being fetched on demand at consume time as in normal mode. The downside is that this synchronisation traffic consumes a lot of network bandwidth.

Mirrored queues implement RabbitMQ high availability (HA). The mirroring policies are as follows:

ha-mode   ha-params    Behavior
all       (none)       The queue is mirrored to every node in the cluster; when a new node joins, a mirror is also created on it.
exactly   count        The queue is mirrored to count nodes. If the cluster has fewer than count nodes, the queue is mirrored to all of them; if it has more than count and a mirror crashes, a newly joining node will not become a new mirror.
nodes     node names   The queue is mirrored to the named nodes. A name that is not in the cluster does not raise an error; if none of the listed nodes is online, the queue is declared on the node the client is connected to.

An example (a Python, pika-style sketch):

queue_args("x-ha-policy":"all") //定义字典来设置额外的队列声明参数
channel.queue_declare(queue="hello-queue",argument=queue_args)

If you need to mirror on specific nodes (for example rabbit@localhost), pass one more parameter:

queue_args("x-ha-policy":"nodes",
           "x-ha-policy-params":["rabbit@localhost"])
channel.queue_declare(queue="hello-queue",argument=queue_args)
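
Note that the x-ha-policy queue argument shown above is the legacy, pre-3.0 interface. In the RabbitMQ 3.x line used in this article, mirroring is configured on the server side through policies instead; for example, the policies from the table above map onto rabbitmqctl set_policy like this:

$ rabbitmqctl set_policy ha-all "^" '{"ha-mode":"all"}'
$ rabbitmqctl set_policy ha-two "^hello" '{"ha-mode":"exactly","ha-params":2,"ha-sync-mode":"automatic"}'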

You can check each queue's mirrors, and which of them are synchronised, from the command line:

$ rabbitmqctl list_queues name slave_pids synchronised_slave_pids

The main reference for the above content: RabbitMQ distributed cluster architecture

4. Build a RabbitMQ Server high-availability cluster

After understanding the above concepts, it is very easy for us to build a RabbitMQ Server high-availability cluster.

The .erlang.cookie file is hidden by default and does not show up in a plain ls listing; you can locate it with find:

[root@node1 ~]# find / -name ".erlang.cookie"
/var/lib/rabbitmq/.erlang.cookie
[root@node1 ~]# cat /var/lib/rabbitmq/.erlang.cookie
LBOTELUJAMXDMIXNTZMB

Copy the .erlang.cookie file from the node1 server to the node2 server:

[root@node1 ~]# scp /var/lib/rabbitmq/.erlang.cookie root@node2:/var/lib/rabbitmq
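
The cookie file must be owned by the rabbitmq user and readable only by its owner, otherwise RabbitMQ will refuse to start. If the copy ends up with different ownership or mode on node2, restore them:

[root@node2 ~]# chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie
[root@node2 ~]# chmod 400 /var/lib/rabbitmq/.erlang.cookie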

Stop the running node first, then start RabbitMQ Server in detached (background) mode (execute on both node1 and node2):

[root@node1 ~]# rabbitmqctl stop
[root@node1 ~]# rabbitmq-server -detached

Then we use node1 as the cluster seed and run the join commands on node2 (the node joins as a disk node):

[root@node2 ~]# rabbitmqctl stop_app
[root@node2 ~]# rabbitmqctl reset
[root@node2 ~]# rabbitmqctl join_cluster rabbit@node1
# A node joins as a disk node by default; to join as a RAM node, add the --ram flag.
[root@node2 ~]# rabbitmqctl start_app

View the status of the cluster (it now contains the node1 and node2 nodes):

[root@node1 ~]# rabbitmqctl cluster_status
Cluster status of node rabbit@node1
[{nodes,[{disc,[rabbit@node1,rabbit@node2]}]},
 {running_nodes,[rabbit@node2,rabbit@node1]},
 {cluster_name,<<"rabbit@node1">>},
 {partitions,[]},
 {alarms,[{rabbit@node2,[]},{rabbit@node1,[]}]}]
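
For completeness, a node can also leave the cluster later. A sketch with the standard subcommands (reset is run on the node that is leaving; forget_cluster_node is the alternative, run from a remaining node, when the leaving node is already down):

[root@node2 ~]# rabbitmqctl stop_app
[root@node2 ~]# rabbitmqctl reset
[root@node2 ~]# rabbitmqctl start_app
[root@node1 ~]# rabbitmqctl forget_cluster_node rabbit@node2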

We can see cluster information from the RabbitMQ web management interface:

5. Build HAProxy Load Balancer

HAProxy is free load balancing software that runs on most mainstream Linux operating systems.

HAProxy provides both L4 (TCP) and L7 (HTTP) load balancing with a rich feature set. It has a very active community and a fast release cadence (the latest stable version, 1.7.2, was released on 2017/01/13). Most importantly, HAProxy offers performance and stability comparable to commercial load balancers. It is currently not only the first choice among free load balancing software, but almost the only choice.

Because RabbitMQ itself does not provide load balancing, let's build HAProxy as the load balancing of RabbitMQ cluster.

HAProxy is installed on the node1 server; the installation commands:

[root@node1 ~]# rpm -ivh http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-5.noarch.rpm
[root@node1 ~]# yum -y install haproxy

Configure HAProxy:

[root@node1 ~]# cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak
[root@node1 ~]# vi /etc/haproxy/haproxy.cfg

Add the following configuration to the /etc/haproxy/haproxy.cfg file:

global
    log     127.0.0.1  local0 info
    log     127.0.0.1  local1 notice
    daemon
    maxconn 4096

defaults
    log     global
    mode    tcp
    option  tcplog
    option  dontlognull
    retries 3
    option  abortonclose
    maxconn 4096
    timeout connect  5000ms
    timeout client  3000ms
    timeout server  3000ms
    balance roundrobin

listen private_monitoring
    bind    0.0.0.0:8100
    mode    http
    option  httplog
    stats   refresh  5s
    stats   uri  /stats
    stats   realm   Haproxy
    stats   auth  admin:admin

listen rabbitmq_admin
    bind    0.0.0.0:8102
    server  node1 node1:15672
    server  node2 node2:15672

listen rabbitmq_cluster
    bind    0.0.0.0:8101
    mode    tcp
    option  tcplog
    balance roundrobin
    server  node1  node1:5672  check  inter  5000  rise  2  fall  3
    server  node2  node2:5672  check  inter  5000  rise  2  fall  3
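
One caveat: the 3-second client/server timeouts in the defaults section above will cut idle AMQP connections, which are typically long-lived. A possible per-listener override (a tuning suggestion, not part of the original setup):

listen rabbitmq_cluster
    # keep the existing bind/mode/balance/server lines and raise the idle timeouts:
    timeout client  3h
    timeout server  3h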

Then start HAProxy:

[root@node1 ~]# haproxy -f /etc/haproxy/haproxy.cfg
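
Two optional refinements, assuming the stock haproxy package on CentOS 7: the configuration can be syntax-checked, and the bundled systemd unit can manage the service instead of launching the binary by hand:

[root@node1 ~]# haproxy -c -f /etc/haproxy/haproxy.cfg
[root@node1 ~]# systemctl enable haproxy && systemctl start haproxy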

For external access, the firewall is simply disabled here (alternatively, open ports 8100-8102 in firewalld as was done earlier):

[root@node1 ~]# systemctl disable firewalld.service
rm '/etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service'
rm '/etc/systemd/system/basic.target.wants/firewalld.service'
[root@node1 ~]# systemctl stop firewalld.service

HAProxy now exposes three addresses:

  • http://node1:8100/stats: the HAProxy statistics page; username/password: admin/admin.
  • node1:8101: the RabbitMQ AMQP service address (load balanced across node1 and node2).
  • http://node1:8102: the RabbitMQ web management interface (load balanced).
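
As an end-to-end check, a client can connect through the balancer instead of a specific node. A minimal sketch with the Python pika library (hypothetical client code; it reuses the admin/admin123 user created earlier and the 8101 listener from the HAProxy configuration above):

import pika

# Connect through HAProxy, which round-robins between node1 and node2.
credentials = pika.PlainCredentials("admin", "admin123")
params = pika.ConnectionParameters(host="node1", port=8101, credentials=credentials)
connection = pika.BlockingConnection(params)
channel = connection.channel()

# Declare a durable queue and publish one test message.
channel.queue_declare(queue="hello-queue", durable=True)
channel.basic_publish(exchange="", routing_key="hello-queue", body="hello")
connection.close()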

Visit http://node1:8100/stats to view the HAProxy load balancing information:
