Basic operation of RabbitMQ

Basic operation of RabbitMQ: https://blog.csdn.net/Michael_lcf/article/details/124677268
5 core concepts of RabbitMQ: https://blog.csdn.net/Michael_lcf/article/details/126435452
RabbitMQ implements distributed WebSocket Communication: https://blog.csdn.net/Michael_lcf/article/details/126403772

Why use MQ: 1. Traffic peak shaving. 2. Application decoupling. 3. Asynchronous processing.

1. Install RabbitMQ on Linux

Link: https://download.csdn.net/download/Michael_lcf/86262885

=== Install RabbitMQ ===
rpm -ivh erlang-21.3-1.el7.x86_64.rpm
yum install socat -y
rpm -ivh rabbitmq-server-3.8.8-1.el7.noarch.rpm

=== Start RabbitMQ ===
systemctl enable rabbitmq-server
systemctl start rabbitmq-server
systemctl status rabbitmq-server
systemctl stop rabbitmq-server

=== Enable the web management plugin ===
rabbitmq-plugins enable rabbitmq_management
Accessing http://192.168.168.101:15672/ with the default account (guest/guest) fails with a permission error: by default the guest user may only log in from localhost.

=== Add a new user ===
# Create the account
rabbitmqctl add_user admin123 admin123
# Set the user role
rabbitmqctl set_user_tags admin123 administrator
# Set user permissions
# set_permissions [-p <vhostpath>] <user> <conf> <write> <read>
rabbitmqctl set_permissions -p "/" admin123 ".*" ".*" ".*"
The user admin123 now has configure, write, and read permissions on all resources in the "/" virtual host.
# List current users and their roles
rabbitmqctl list_users


# Create a vhost; vhost_name is the name of the vhost to create
rabbitmqctl add_vhost [vhost_name]
rabbitmqctl delete_vhost [vhost_name]
rabbitmqctl list_vhosts
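Putting the two steps together, creating a vhost named vhost1 (an illustrative name, not from the article) and granting the admin123 user full rights on it could look like:

```shell
# Create a vhost named vhost1 and give admin123 full
# configure/write/read permissions on every resource in it.
rabbitmqctl add_vhost vhost1
rabbitmqctl set_permissions -p vhost1 admin123 ".*" ".*" ".*"
# Verify: the new vhost and the user's permissions should be listed.
rabbitmqctl list_vhosts
rabbitmqctl list_permissions -p vhost1
```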

Accessing http://192.168.168.101:15672/ with the new account (admin123/admin123) now works normally.
=== Reset commands ===
# Stop the RabbitMQ application
rabbitmqctl stop_app
# Reset the node
rabbitmqctl reset
# Restart the application
rabbitmqctl start_app
# List all queues
rabbitmqctl list_queues

2. Install RabbitMQ on Windows

Install Erlang and RabbitMQ separately. Download link: https://download.csdn.net/download/Michael_lcf/85332315

Erlang_otp_win64_22.3.exe
rabbitmq-server-3.8.3.exe

Browser access: http://127.0.0.1:15672/ and log in with guest/guest.

3. Three-node RabbitMQ cluster

3.1. Set up a three-node RabbitMQ cluster

The prerequisite is that RabbitMQ has been installed on all three nodes.
1. Modify the host names of the three machines

Execute on machine 1:
echo node1 > /etc/hostname
Execute on machine 2:
echo node2 > /etc/hostname
Execute on machine 3:
echo node3 > /etc/hostname

2. Configure the hosts file of each node so that each node can recognize each other

cat >> /etc/hosts << EOF

192.168.168.171 node1
192.168.168.172 node2
192.168.168.173 node3
EOF

3. Ensure that the .erlang.cookie file on each node has the same value.
Execute the following remote copy commands on node1:

scp /var/lib/rabbitmq/.erlang.cookie root@node2:/var/lib/rabbitmq/.erlang.cookie
scp /var/lib/rabbitmq/.erlang.cookie root@node3:/var/lib/rabbitmq/.erlang.cookie
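After copying, the cookie on node2 and node3 must keep the ownership and permissions RabbitMQ expects, otherwise the node may refuse to start. A quick fix (run on node2 and node3; this step is an addition, not from the original article):

```shell
# Restore ownership and permissions of the copied Erlang cookie;
# RabbitMQ requires the cookie to be readable only by its owner.
chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie
chmod 400 /var/lib/rabbitmq/.erlang.cookie
```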

4. Start the RabbitMQ service; this also starts the Erlang virtual machine and the RabbitMQ application.
Execute the following command on each of the three nodes:

rabbitmq-server -detached

5. Execute on node2
Here node1 is treated as the anchor node, and node2 joins node1's cluster.

rabbitmqctl stop_app (stops only the RabbitMQ application)
(rabbitmqctl stop also shuts down the Erlang virtual machine; rabbitmqctl stop_app stops only the RabbitMQ application)
rabbitmqctl reset
rabbitmqctl join_cluster rabbit@node1
rabbitmqctl start_app (starts only the application)

6. Execute on node3
Here node3 joins the cluster through node2. (Joining through node1 would work equally well, since all members belong to the same cluster.)

rabbitmqctl stop_app
rabbitmqctl reset
rabbitmqctl join_cluster rabbit@node2
rabbitmqctl start_app

7. Cluster Status

rabbitmqctl cluster_status

8. Re-create the admin user (resetting a node wipes its users)

# Create the account
rabbitmqctl add_user admin123 admin123
# Set the user role
rabbitmqctl set_user_tags admin123 administrator
# Set user permissions
rabbitmqctl set_permissions -p "/" admin123 ".*" ".*" ".*"

9. Remove nodes from the cluster

Execute on node2 and node3 respectively:
rabbitmqctl stop_app
rabbitmqctl reset
rabbitmqctl start_app
rabbitmqctl cluster_status

Execute on node1:
rabbitmqctl forget_cluster_node rabbit@node2
rabbitmqctl forget_cluster_node rabbit@node3

3.2. Configure a cluster mirrored-queue policy

If the RabbitMQ cluster contains only one broker node, a failure of that node makes the whole service temporarily unavailable and may lose messages. Messages can be marked persistent and the durable attribute of the queue set to true, but this still cannot close the window caused by caching: there is a short but dangerous gap between a message being received and it being written to disk and flushed. The publisher confirm mechanism lets the client know which messages have been persisted to disk, but it is still generally undesirable for a single point of failure to take the whole service down.
RabbitMQ therefore provides the mirrored queue mechanism, which mirrors a queue to other broker nodes in the cluster. If the node hosting a queue fails, the queue automatically fails over to one of its mirrors on another node, keeping the service available.

  1. Start three cluster nodes.

  2. Add the policy on any one node (policies apply cluster-wide).

  3. Create a queue on node1 and send a message; a mirror of the queue is created on another node.

  4. After stopping node1, the mirror on node2 takes over as the queue's master.

  5. Even if only one machine is left in the entire cluster, the messages in the queue can still be consumed, which shows that the mirrored queue has replicated the messages to the surviving machine.
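The policy in step 2 can be added with rabbitmqctl set_policy. A sketch for RabbitMQ 3.8 classic mirrored queues, assuming queues whose names start with "mirror." should be kept on two nodes in total (the policy name, name pattern, and parameters are illustrative):

```shell
# Mirror every queue whose name starts with "mirror." to 2 nodes in
# total (1 master + 1 mirror), with automatic synchronisation.
rabbitmqctl set_policy ha-two "^mirror\." \
    '{"ha-mode":"exactly","ha-params":2,"ha-sync-mode":"automatic"}'
# Verify that the policy exists
rabbitmqctl list_policies
```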

4. HAProxy + Keepalived for high-availability load balancing

HAProxy provides high availability, load balancing, and proxying for TCP- and HTTP-based applications, and supports virtual hosts. It is a free, fast, and reliable solution used by many well-known Internet companies, including Twitter, Reddit, Stack Overflow, and GitHub. HAProxy implements an event-driven, single-process model that supports a very large number of concurrent connections.
For the differences between Nginx, LVS, and HAProxy, see: http://www.ha97.com/5646.html

4.1. Install HAProxy

1. Install HAProxy (on node1 and node2)

yum -y install haproxy

2. Modify haproxy.cfg of node1 and node2

vim /etc/haproxy/haproxy.cfg

backend app
    balance     roundrobin
    server  rabbitmq_node1 192.168.168.171:5672 check inter 5000 rise 2 fall 3 weight 1
    server  rabbitmq_node2 192.168.168.172:5672 check inter 5000 rise 2 fall 3 weight 1
    server  rabbitmq_node3 192.168.168.173:5672 check inter 5000 rise 2 fall 3 weight 1
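The stats pages on port 8888 used in step 4 below are served by sections not shown in the fragment above. A minimal sketch of the surrounding haproxy.cfg; the listening ports, timeouts, and the choice of 5670 for the frontend (to avoid clashing with the local broker's own 5672, since node1 and node2 also run RabbitMQ) are assumptions, not from the original article:

```
# /etc/haproxy/haproxy.cfg (fragment) -- assumed defaults, frontend,
# and stats sections surrounding the "backend app" block above
defaults
    mode    tcp
    timeout connect 5s
    timeout client  1m
    timeout server  1m

frontend rabbitmq_front
    bind *:5670               # clients connect here; 5670 avoids a
                              # clash with the local broker's 5672
    default_backend app

listen stats
    bind *:8888               # serves the /stats pages used in step 4
    mode http
    stats enable
    stats uri /stats
```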


3. Start haproxy on two nodes

haproxy -f /etc/haproxy/haproxy.cfg
ps -ef | grep haproxy

4. Access address

http://192.168.168.171:8888/stats
http://192.168.168.172:8888/stats

4.2. Keepalived for dual-machine (active/standby) hot backup

Imagine that the HAProxy host configured above suddenly crashes or its network card fails. Although the RabbitMQ cluster itself is still healthy, all external client connections are dropped, and the result is catastrophic. Ensuring the reliability of the load-balancing layer is therefore just as important. Keepalived solves this: through its own health checks and resource-takeover functions, it provides high availability (dual-machine hot backup) and automatic failover.

1. Install Keepalived

yum -y install keepalived

2. The node1 configuration file

vim /etc/keepalived/keepalived.conf

Replace the default keepalived.conf with the master-node configuration provided in the download package.

3. The node2 configuration file
Change the router_id in global_defs (for example, to nodeB),
change the state in the vrrp_instance from MASTER to "BACKUP",
and set the priority to a value less than 100.
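The master (node1) side of keepalived.conf might look like the sketch below; the VIP (192.168.168.200), the NIC name (ens33), the router_id, and the authentication password are assumptions, not from the original article:

```
! /etc/keepalived/keepalived.conf -- master (node1) sketch
global_defs {
    router_id nodeA
}

! Run the HAProxy check script (added in step 4) every 2 seconds
vrrp_script chk_haproxy {
    script "/etc/keepalived/haproxy_chk.sh"
    interval 2
}

vrrp_instance VI_1 {
    state MASTER            ! BACKUP on node2
    interface ens33         ! adjust to the actual NIC
    virtual_router_id 51
    priority 100            ! less than 100 on node2
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        chk_haproxy
    }
    virtual_ipaddress {
        192.168.168.200     ! hypothetical VIP
    }
}
```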

4. Add haproxy_chk.sh
Keepalived does not monitor HAProxy by itself: if the HAProxy service dies, Keepalived would keep holding the VIP without switching to the backup. A script is therefore needed to check the HAProxy service. When HAProxy dies, the script automatically restarts it; if the restart fails, it stops the Keepalived service so that the backup can take over.
vim /etc/keepalived/haproxy_chk.sh (you can also upload the file directly)
Make it executable: chmod 777 /etc/keepalived/haproxy_chk.sh
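A minimal sketch of haproxy_chk.sh, assuming HAProxy is started from /etc/haproxy/haproxy.cfg as above (the exact script in the article's download package may differ):

```shell
#!/bin/bash
# /etc/keepalived/haproxy_chk.sh
# If HAProxy is not running, try to restart it once; if it is still
# not up, stop Keepalived so the VIP fails over to the backup node.
if ! pgrep -x haproxy > /dev/null; then
    haproxy -f /etc/haproxy/haproxy.cfg
    sleep 2
    if ! pgrep -x haproxy > /dev/null; then
        systemctl stop keepalived
    fi
fi
```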

5. Start Keepalived (on node1 and node2)

systemctl start keepalived

6. Observe the Keepalived log

tail -f /var/log/messages -n 200

7. Observe the newly added VIP

ip add show

8. Simulate a Keepalived failure on node1

systemctl stop keepalived

9. Use the VIP address to access the RabbitMQ cluster
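As a quick check through the VIP (192.168.168.200 is a hypothetical address; since node1 and node2 also run RabbitMQ, the management API answers on whichever of them currently holds the VIP):

```shell
# Query the management API through the virtual IP with the account
# created earlier; a JSON cluster overview indicates the VIP works.
curl -s -u admin123:admin123 http://192.168.168.200:15672/api/overview
```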


Origin blog.csdn.net/Michael_lcf/article/details/124677268