[Microservices] Advanced RabbitMQ Deployment

1. Stand-alone deployment

We use Docker to install RabbitMQ on a CentOS 7 virtual machine.

1.1. Download the image

Method 1: pull online

docker pull rabbitmq:3.8-management

Method 2: load from a local file

After downloading the image archive and uploading it to the virtual machine, load the image with:

docker load -i mq.tar

1.2. Install MQ

Execute the following command to run the MQ container:

docker run \
 -e RABBITMQ_DEFAULT_USER=root \
 -e RABBITMQ_DEFAULT_PASS=root \
 -v mq-plugins:/plugins \
 --name mq \
 --hostname mq1 \
 -p 15672:15672 \
 -p 5672:5672 \
 -d \
 rabbitmq:3.8-management
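Once the container is up, a quick sanity check is possible from the host (a small sketch; the container name mq and port 15672 come from the run command above):

```shell
# Confirm the container is running
docker ps --filter "name=mq"

# The management UI should answer on port 15672 once startup finishes
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:15672
```

You can then log in to the management console at http://<host-ip>:15672 with the root/root credentials set via the environment variables.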

2. Install the DelayExchange plugin

The official installation guide address is: https://blog.rabbitmq.com/posts/2015/04/scheduling-messages-with-rabbitmq

That document installs RabbitMQ natively on Linux and then installs the plugin. Since we installed RabbitMQ with Docker, we will instead explain how to install the plugin in a Docker-based setup.

2.1. Download plugin

RabbitMQ has an official plugin community at: https://www.rabbitmq.com/community-plugins.html

It contains various plugins, including the DelayExchange plugin we are going to use.

You can download version 3.8.9 of the plugin from the corresponding GitHub release page: https://github.com/rabbitmq/rabbitmq-delayed-message-exchange/releases/tag/3.8.9. This version works with RabbitMQ 3.8.5 and above.
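If the virtual machine has internet access, the plugin can also be fetched directly with wget (the asset filename below is taken from the 3.8.9 GitHub release and may differ for other versions):

```shell
# Download the 3.8.9 delayed-message plugin (.ez file) from GitHub releases
wget https://github.com/rabbitmq/rabbitmq-delayed-message-exchange/releases/download/3.8.9/rabbitmq_delayed_message_exchange-3.8.9.ez
```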

2.2. Upload plugin

Because our installation is Docker-based, we first need to find the data volume that maps to the RabbitMQ plugin directory. If your container was not created with such a volume, refer to chapter 1 and recreate the container.

The data volume name of RabbitMQ we set before is mq-plugins, so we use the following command to view the data volume:

docker volume inspect mq-plugins

The output includes the volume's Mountpoint, the host directory that backs the container's /plugins directory.

Next, upload the plugin's .ez file to that mount point directory.
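The lookup-and-copy step can also be scripted: docker volume inspect supports a --format flag, so the mount point can be captured into a variable (this assumes the .ez file from the previous section sits in the current directory):

```shell
# Resolve the host directory behind the mq-plugins volume
PLUGIN_DIR=$(docker volume inspect mq-plugins --format '{{ .Mountpoint }}')

# Copy the plugin into it
cp rabbitmq_delayed_message_exchange-3.8.9.ez "$PLUGIN_DIR/"
```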

2.3. Install the plugin

Finally, install the plugin. This must be done from inside the MQ container. My container is named mq, so execute the following command:

docker exec -it mq bash

When executing, replace mq with your own container name.

After entering the container, execute the following command to enable the plugin:

rabbitmq-plugins enable rabbitmq_delayed_message_exchange

If the command succeeds, the output reports that the rabbitmq_delayed_message_exchange plugin was enabled.
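To double-check that the plugin is active, you can list the enabled plugins from outside the container:

```shell
# [E*] / [e*] markers denote explicitly enabled plugins
docker exec mq rabbitmq-plugins list | grep delayed
```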

3. Cluster deployment

Next, let's see how to install a RabbitMQ cluster.

3.1. Cluster classification

In the official documentation of RabbitMQ, two cluster configuration methods are described:

  • Normal mode: a normal-mode cluster does not synchronize queue data; each MQ node keeps its own queues and messages (metadata such as exchanges is synchronized). For example, with 2 nodes mq1 and mq2: if your message lives on mq1 and you are connected to mq2, then mq2 pulls the message from mq1 and returns it to you. If mq1 goes down, the message is lost.
  • Mirror mode: unlike normal mode, queues are synchronized between the mirror nodes of each MQ, so you can get the message from any node, and no data is lost if one node goes down. The trade-off is extra bandwidth consumed by data synchronization.

Let's first look at the normal mode cluster. Our plan is to deploy a 3-node mq cluster:

Host name   Console port     AMQP port
mq1         8081 -> 15672    8071 -> 5672
mq2         8082 -> 15672    8072 -> 5672
mq3         8083 -> 15672    8073 -> 5672

The default labels of nodes in the cluster are: rabbit@[hostname], so the names of the above three nodes are:

  • rabbit@mq1
  • rabbit@mq2
  • rabbit@mq3

3.2. Get the cookie

RabbitMQ is built on Erlang, and the Erlang VM is distribution-oriented and supports cluster mode out of the box. Nodes in a RabbitMQ cluster use a cookie to determine whether they are allowed to communicate with each other.

For two nodes to communicate, they must share the same secret, called the Erlang cookie: a string of up to 255 alphanumeric characters. Every node in the cluster must have the same cookie.

We first get a cookie value in the previously started mq container as the cluster cookie. Execute the following command:

docker exec -it mq cat /var/lib/rabbitmq/.erlang.cookie

You can see the cookie value as follows:

FXZMCVGLBIXZCDEMMVZQ

Next, stop and delete the current mq container; we will rebuild it as a cluster.

docker rm -f mq


3.3. Prepare the cluster configuration

Create a new configuration file rabbitmq.conf in the /tmp directory:

cd /tmp
# create the file
touch rabbitmq.conf

The content of the file is as follows:

loopback_users.guest = false
listeners.tcp.default = 5672
cluster_formation.peer_discovery_backend = rabbit_peer_discovery_classic_config
cluster_formation.classic_config.nodes.1 = rabbit@mq1
cluster_formation.classic_config.nodes.2 = rabbit@mq2
cluster_formation.classic_config.nodes.3 = rabbit@mq3

Create another file to hold the cookie:

cd /tmp
# create the cookie file
touch .erlang.cookie
# write the cookie
echo "FXZMCVGLBIXZCDEMMVZQ" > .erlang.cookie
# restrict the cookie file's permissions
chmod 600 .erlang.cookie

Prepare three directories, mq1, mq2, mq3:

cd /tmp
# create the directories
mkdir mq1 mq2 mq3

Then copy the rabbitmq.conf and .erlang.cookie files into mq1, mq2 and mq3:

# enter /tmp
cd /tmp
# copy
cp rabbitmq.conf mq1
cp rabbitmq.conf mq2
cp rabbitmq.conf mq3
cp .erlang.cookie mq1
cp .erlang.cookie mq2
cp .erlang.cookie mq3
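The six cp commands can be condensed into a loop, assuming the files from the previous steps are in /tmp:

```shell
cd /tmp
# copy the config and cookie into each node directory
for dir in mq1 mq2 mq3; do
    cp rabbitmq.conf .erlang.cookie "$dir"
done
```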

3.4. Start the cluster

Create a network:

docker network create mq-net

Then run a container for each node:

docker run -d --net mq-net \
-v ${PWD}/mq1/rabbitmq.conf:/etc/rabbitmq/rabbitmq.conf \
-v ${PWD}/.erlang.cookie:/var/lib/rabbitmq/.erlang.cookie \
-e RABBITMQ_DEFAULT_USER=root \
-e RABBITMQ_DEFAULT_PASS=1234 \
--name mq1 \
--hostname mq1 \
-p 8071:5672 \
-p 8081:15672 \
rabbitmq:3.8-management

docker run -d --net mq-net \
-v ${PWD}/mq2/rabbitmq.conf:/etc/rabbitmq/rabbitmq.conf \
-v ${PWD}/.erlang.cookie:/var/lib/rabbitmq/.erlang.cookie \
-e RABBITMQ_DEFAULT_USER=root \
-e RABBITMQ_DEFAULT_PASS=1234 \
--name mq2 \
--hostname mq2 \
-p 8072:5672 \
-p 8082:15672 \
rabbitmq:3.8-management

docker run -d --net mq-net \
-v ${PWD}/mq3/rabbitmq.conf:/etc/rabbitmq/rabbitmq.conf \
-v ${PWD}/.erlang.cookie:/var/lib/rabbitmq/.erlang.cookie \
-e RABBITMQ_DEFAULT_USER=root \
-e RABBITMQ_DEFAULT_PASS=1234 \
--name mq3 \
--hostname mq3 \
-p 8073:5672 \
-p 8083:15672 \
rabbitmq:3.8-management
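After the three containers have started, you can verify that they discovered each other through the classic peer discovery configuration:

```shell
# Should list rabbit@mq1, rabbit@mq2 and rabbit@mq3 as running nodes
docker exec mq1 rabbitmqctl cluster_status
```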

3.5. Testing

Add a queue, simple.queue, on the mq1 node's console.

The queue is also visible in the mq2 and mq3 consoles.

3.5.1. Data sharing test

Click the queue to open its management page, then use the console to send a message to it. The message can then be seen from the mq2 and mq3 consoles as well.

3.5.2. Availability test

We let one of the nodes mq1 go down:

docker stop mq1

Then log in to the console of mq2 or mq3: simple.queue is shown as unavailable. This confirms that its data was never copied to mq2 and mq3.

4. Mirror Mode

In the scenario above, once the host that created a queue goes down, the queue becomes unavailable: the cluster has no high availability. To solve this, we need the official mirrored-cluster solution.

Official document address: https://www.rabbitmq.com/ha.html

4.1. Features of mirror mode

By default, queues are only persisted on the node that created the queue. In the mirror mode, the node that creates the queue is called the master node of the queue , and the queue is also copied to other nodes in the cluster, also called the mirror node of the queue.

However, different queues can be created on any node in the cluster, so different queues can have different master nodes. Even, the master node of one queue may be the mirror node of another queue .

All requests sent by users to the queue, such as sending messages and message receipts, will be completed on the master node by default. If the request is received from the slave node, it will also be routed to the master node for completion. The mirror node only plays the role of backing up data .

When the master node receives the consumer's ACK, all mirrors delete their copy of the message.

Summarized as follows:

  • The mirrored-queue structure is one master and multiple slaves (the slaves are the mirrors)
  • All operations are performed by the master node and then synchronized to the mirror nodes
  • After the master goes down, a mirror node takes over as the new master (if the master fails before master-slave synchronization completes, data may be lost)
  • There is no load balancing, since all operations for a queue go through its master node (but different queues can have different master nodes, which can be used to improve throughput)

4.2. Configuration of mirror mode

There are 3 modes for mirror mode configuration:

  • exactly (ha-params: count): count is the total number of queue replicas in the cluster (master plus mirrors), i.e. count = number of mirrors + 1. A count of 1 means a single replica, the queue master; a count of 2 means 1 master and 1 mirror. If the cluster has fewer than count nodes, the queue is mirrored to all nodes. If the cluster has more than count nodes and a node holding a mirror fails, a new mirror is created on another node.
  • all (no ha-params): the queue is mirrored to every node in the cluster, including any newly joined node. This puts extra pressure on all cluster nodes (network I/O, disk I/O and disk space), so using exactly with a replica count of (N / 2 + 1) is recommended instead.
  • nodes (ha-params: node names): the queue is created only on the specified nodes. If none of the specified nodes exist, an error occurs. If the specified nodes exist in the cluster but are temporarily unavailable, the queue is created on the node the current client is connected to.

Here we use the rabbitmqctl command as an example to explain the configuration syntax.

Syntax example:

4.2.1. exactly mode

rabbitmqctl set_policy ha-two "^two\." '{"ha-mode":"exactly","ha-params":2,"ha-sync-mode":"automatic"}'
  • rabbitmqctl set_policy: fixed syntax
  • ha-two: policy name, chosen freely
  • "^two\.": regular expression matching queue names; the policy applies to matching queues, here any queue whose name starts with two.
  • '{"ha-mode":"exactly","ha-params":2,"ha-sync-mode":"automatic"}': policy content
    • "ha-mode":"exactly": policy mode, here exactly, i.e. specify the number of replicas
    • "ha-params":2: policy parameter, here 2 replicas: 1 master and 1 mirror
    • "ha-sync-mode":"automatic": synchronization strategy. The default is manual, meaning newly added mirror nodes do not synchronize old messages. With automatic, a newly added mirror synchronizes all messages on the master, at the cost of extra network overhead

4.2.2.all mode

rabbitmqctl set_policy ha-all "^all\." '{"ha-mode":"all"}'
  • ha-all: policy name, chosen freely
  • "^all\.": matches any queue whose name starts with all.
  • '{"ha-mode":"all"}': policy content
    • "ha-mode":"all": policy mode, here all, i.e. every node in the cluster becomes a mirror node

4.2.3.nodes mode

rabbitmqctl set_policy ha-nodes "^nodes\." '{"ha-mode":"nodes","ha-params":["rabbit@nodeA", "rabbit@nodeB"]}'
  • rabbitmqctl set_policy: fixed syntax
  • ha-nodes: policy name, chosen freely
  • "^nodes\.": regular expression matching queue names, here any queue whose name starts with nodes.
  • '{"ha-mode":"nodes","ha-params":["rabbit@nodeA", "rabbit@nodeB"]}': policy content
    • "ha-mode":"nodes": policy mode, here the nodes mode
    • "ha-params":["rabbit@nodeA", "rabbit@nodeB"]: policy parameter, the names of the nodes that should hold the replicas

4.3. Testing

We use mirroring in exactly mode, because the number of cluster nodes is 3, so the number of mirroring is set to 2.

Run the command below:

docker exec -it mq1 rabbitmqctl set_policy ha-two "^two\." '{"ha-mode":"exactly","ha-params":2,"ha-sync-mode":"automatic"}'
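You can confirm the policy was registered with:

```shell
# Lists the ha-two policy with its pattern and definition
docker exec mq1 rabbitmqctl list_policies
```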

Next, create a new queue whose name matches the policy, for example two.queue. The queue can then be seen from any mq console.

4.3.1. Test data sharing

Send a message to two.queue, then check the consoles of mq1, mq2 and mq3: the message is visible from all of them.

4.3.2. Testing High Availability

Now, we let the master node mq1 of two.queue go down:

docker stop mq1

Check the cluster status and the queue status from a surviving node's console: the queue is still healthy, and its master node has switched to rabbit@mq2.

5. Quorum queues

Starting with RabbitMQ 3.8, a new quorum queue type is available. It provides functionality similar to mirrored queues but is more convenient to use.

5.1. Add a quorum queue

To add a quorum queue, create a queue in any console and make sure to set the queue type to Quorum.

Viewing the queue list on any console, the quorum queue is shown with a +2 next to its node: the queue has 2 follower (mirror) replicas in addition to its leader.

The default replica count of a quorum queue is 5. If your cluster had 7 nodes, the queue would get 5 replicas; since our cluster has only 3 nodes, it gets 3 replicas, one per node.
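Instead of the console, a quorum queue can also be declared from the command line; a sketch using the rabbitmqadmin tool shipped with the management image (the queue name quorum.queue matches the later sections; quorum queues must be durable):

```shell
# Declare a durable queue with the quorum queue type
docker exec mq1 rabbitmqadmin -u root -p 1234 \
    declare queue name=quorum.queue durable=true \
    arguments='{"x-queue-type":"quorum"}'
```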

5.2. Testing

You can refer to the test on the mirrored cluster, the effect is the same.

5.3. Cluster expansion

5.3.1. Join the cluster

1) Start a new MQ container:

docker run -d --net mq-net \
-v ${PWD}/.erlang.cookie:/var/lib/rabbitmq/.erlang.cookie \
-e RABBITMQ_DEFAULT_USER=root \
-e RABBITMQ_DEFAULT_PASS=1234 \
--name mq4 \
--hostname mq4 \
-p 8074:5672 \
-p 8084:15672 \
rabbitmq:3.8-management

2) Enter the container console:

docker exec -it mq4 bash

3) Stop the mq process

rabbitmqctl stop_app

4) Reset the data in RabbitMQ:

rabbitmqctl reset

5) Join mq1:

rabbitmqctl join_cluster rabbit@mq1

6) Start the mq process again

rabbitmqctl start_app
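After start_app, the new node should appear in the cluster. Still inside the mq4 container, you can confirm:

```shell
# rabbit@mq4 should now be listed among the running nodes
rabbitmqctl cluster_status
```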


5.3.2. Add a quorum queue replica

Let's first check the current members of the quorum.queue queue. Enter the mq1 container:

docker exec -it mq1 bash

Execute the command:

rabbitmq-queues quorum_status "quorum.queue"

The output lists the queue's current members.

Now, let's add mq4:

rabbitmq-queues add_member "quorum.queue" "rabbit@mq4"


View the members again:

rabbitmq-queues quorum_status "quorum.queue"

rabbit@mq4 now appears in the list.

Checking the console, the replica indicator of quorum.queue has also changed from +2 to +3.

If anything here is lacking, corrections and suggestions are welcome.
To be continued; updates are coming.
Let's make progress together!


Origin blog.csdn.net/qq_40440961/article/details/128890540