ActiveMQ High Availability + Load Balancing

MQ is the message bus of a distributed system, so in production we must keep it running reliably. The usual approach is master-slave: if the master goes down, a slave takes over and the broker keeps working. But master-slave alone only provides high availability, not load balancing; if the load is too heavy the broker can still be overwhelmed, and master-slave cannot solve that, so load balancing must be configured as well.

ActiveMQ master-slave deployment modes

  1. Shared filesystem Master-Slave: multiple brokers share a data directory on a shared file system; whichever broker grabs the lock on the store becomes the master.
  2. Shared database Master-Slave: based on a shared database; similar to the first mode.
  3. Replicated LevelDB Store: based on ZooKeeper + LevelDB (Google's high-performance key-value store). This mode was only added after ActiveMQ 5.9. ZooKeeper coordinates the election of one node as master; only the elected master opens its transports and accepts client connections. If the master goes down, the slave with the most recent data becomes the new master. A node that recovers after a failure rejoins the network, connects to the master, and runs in slave mode.

The first two modes depend on the shared file system or database being highly available in order to keep MQ highly available, which is not always easy to guarantee, so here we use the third mode to build the master-slave cluster.

Environment

Since we only have two hosts, we set up a pseudo master-slave cluster (three brokers) on each host for testing, and then connect the two clusters to each other to get load balancing.

 

jdk : jdk8
zookeeper: 3.2
activemq: 5.15
Host 1: 192.168.0.103
Host 2: 192.168.0.104

Build a master-slave cluster

Steps:

  1. Download ActiveMQ (5.9 or later), unpack it, and make three copies; make sure each copy runs fine on its own (a sketch of this step follows).
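A minimal sketch of this step; the version number and the target directory are assumptions, not taken from the post:

# three independent copies of the same ActiveMQ distribution
tar -zxvf apache-activemq-5.15.0-bin.tar.gz
cp -r apache-activemq-5.15.0 /soft/activemqCluster/mq1
cp -r apache-activemq-5.15.0 /soft/activemqCluster/mq2
cp -r apache-activemq-5.15.0 /soft/activemqCluster/mq3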

2. Modify the persistence adapter of each ActiveMQ instance

 

vi  ACTIVEMQ_HOME/conf/activemq.xml

Change the default kahadb adapter to the following replicatedLevelDB configuration. I did not set up a ZooKeeper cluster (too much trouble); a single ZooKeeper node works fine for this test.

 

// directory: path where the LevelDB data is stored
// bind: port that the replication service binds to
// replicas: number of nodes in the master-slave cluster
// zkAddress: the IP and ZooKeeper client port of each of the three servers, in order
// zkPath: node path registered in ZooKeeper
// hostname: the IP address of this server; adjust it on each of the three servers
<persistenceAdapter>
     <replicatedLevelDB directory="${activemq.data}/leveldb"  replicas="3"
    bind="tcp://0.0.0.0:62621"
    zkAddress="localhost:2181,localhost:2182,localhost:2183"
    hostname="localhost"
    zkPath="/activemq/leveldb-stores"/> 
</persistenceAdapter>
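All three brokers sit on one host here, so it is common to give each instance its own replication bind port as well (the 62622 below is an assumed value for the second instance; the rest of the adapter is unchanged):

<persistenceAdapter>
     <!-- second instance: only the replication bind port differs -->
     <replicatedLevelDB directory="${activemq.data}/leveldb"  replicas="3"
    bind="tcp://0.0.0.0:62622"
    zkAddress="localhost:2181,localhost:2182,localhost:2183"
    hostname="localhost"
    zkPath="/activemq/leveldb-stores"/>
</persistenceAdapter>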

3. Modify the TCP transport port of each ActiveMQ instance
This is also configured in ACTIVEMQ_HOME/conf/activemq.xml. The default is 61616; instances on the same machine must not reuse a port or they will conflict. Here I use 61616, 61617 and 61618 (see the sketch below).
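For example, in the second instance's activemq.xml the openwire connector becomes roughly the following (the URI options after the port are the stock defaults):

<transportConnectors>
    <!-- second instance listens on 61617 instead of the default 61616 -->
    <transportConnector name="openwire" uri="tcp://0.0.0.0:61617?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
</transportConnectors>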
4. Modify the Jetty port
The ActiveMQ web console runs on Jetty, so edit ACTIVEMQ_HOME/conf/jetty.xml and make sure the three instances use different ports; here they are 8161, 8162 and 8163 in turn (see the sketch below).
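In ActiveMQ 5.15 the console port is set by the jettyPort bean in jetty.xml; for the second instance it would look roughly like this:

<bean id="jettyPort" class="org.apache.activemq.web.WebConsolePort" init-method="start">
    <!-- web console port of the second instance -->
    <property name="port" value="8162"/>
</bean>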

  5. Start ZooKeeper, then start the three ActiveMQ instances
  6. Verify:
    log in with the ZooKeeper client
    execute: ls /activemq/leveldb-stores
    you should see three child nodes under /activemq/leveldb-stores, one for each ActiveMQ instance



    View the data of each node: the one whose elected value is not null is the master; for the slaves it is null.
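A quick sketch of this check with the stock ZooKeeper CLI (the child node name below is only an example; the actual sequential names will differ):

bin/zkCli.sh -server localhost:2181
ls /activemq/leveldb-stores
get /activemq/leveldb-stores/00000000000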


You can also download a ZooKeeper visualization tool:
https://issues.apache.org/jira/secure/attachment/12436620/ZooInspector.zip
Run ZooInspector\build\zookeeper-dev-ZooInspector.jar to view the node data.

PS: only the master accepts requests; the slaves do not accept requests, and their admin consoles cannot be used.

Configure load balancing

  1. We have already built one cluster; copy the three ActiveMQ directories to the other host with the scp command to get the second cluster

 

# activemqCluster is the ActiveMQ cluster installation directory on my machine
scp -r activemqCluster <user>@192.168.0.104:/soft
  2. In cluster 2's activemq.xml, link to cluster 1 (place the configuration before the persistenceAdapter element):

 

<networkConnectors>
    <networkConnector uri="static:(tcp://192.168.0.103:61616,tcp://192.168.0.103:61617,tcp://192.168.0.103:61618)" duplex="false"/>
</networkConnectors>
  3. In cluster 1's activemq.xml, link to cluster 2 (again before the persistenceAdapter element):

 

<networkConnectors>
    <networkConnector uri="static:(tcp://192.168.0.104:61616,tcp://192.168.0.104:61617,tcp://192.168.0.104:61618)" duplex="false"/>
</networkConnectors>

This gives an ActiveMQ cluster that is both highly available and load balanced. Note that duplex="false" makes a network connector forward in one direction only, which is why each cluster defines its own connector pointing at the other.
The broker address configured in the project should then be:
failover:(tcp://192.168.0.103:61616,tcp://192.168.0.103:61617,tcp://192.168.0.103:61618)?randomize=false
or:
failover:(tcp://192.168.0.104:61616,tcp://192.168.0.104:61617,tcp://192.168.0.104:61618)?randomize=false
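For instance, if the client is a Spring application (an assumption; the post does not show its client code), the connection factory could be wired like this:

<bean id="connectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
    <!-- the failover transport retries the listed brokers; randomize=false keeps the given order -->
    <property name="brokerURL"
              value="failover:(tcp://192.168.0.103:61616,tcp://192.168.0.103:61617,tcp://192.168.0.103:61618)?randomize=false"/>
</bean>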

Test

1. Master-slave test: shut down the master (61616); the client reconnects automatically and 61617 takes over as master (data integrity is not guaranteed during the switch). Shut down the 61617 service as well: with more than half of the nodes down the service becomes unavailable, and the client blocks and can no longer send messages.

 


  2. Load-balancing test
    The final architecture is two master-slave clusters connected to each other, so the two clusters can consume each other's messages. However, if the cluster a client is connected to goes down completely, that client still cannot send messages; in other words, this ActiveMQ setup only load-balances consumption, while high availability comes from the master-slave arrangement inside each cluster.

     

     

    We start two consumer services listening on the same queue name,
    with their broker addresses configured as:

 

failover:(tcp://192.168.0.103:61616,tcp://192.168.0.103:61617,tcp://192.168.0.103:61618)?randomize=false
and:
failover:(tcp://192.168.0.104:61616,tcp://192.168.0.104:61617,tcp://192.168.0.104:61618)?randomize=false

After starting, I kept sending messages to cluster 104. Both consumers turn out to consume the messages sent to the 104 cluster, but their logs differ slightly: the consumer service configured against cluster 103 shows, via FailoverTransport, that it is connected to the master of 104.


 


So far, the high availability + load balancing setup is complete.
One final note: testing showed that messages consumed around a master-slave switchover may still run into problems; the setup is not foolproof.
 

