ActiveMQ: High Availability Cluster Solution


In systems with high concurrency and strict stability requirements, high availability is essential, and ActiveMQ has its own cluster solution. Starting with ActiveMQ 5.9, the traditional pure Master-Slave implementation was removed and a new Master-Slave implementation based on ZooKeeper + replicated LevelDB was added.

 


 

Related Articles:
Example Project: http://wosyingjun.iteye.com/blog/2312553
Simple and Practical ActiveMQ: http://wosyingjun.iteye.com/blog/2314681

1. High availability principle of ActiveMQ

All ActiveMQ Brokers register with a ZooKeeper cluster. Only one Broker at a time provides service and acts as the Master; the other Brokers stand by as Slaves. If the Master can no longer provide service, ZooKeeper elects a new Master from among the Slaves.
Slaves connect to the Master and synchronize its storage state; Slaves do not accept client connections. All storage operations are replicated to the Slaves connected to the Master. If the Master goes down, the Slave with the most recent updates becomes the Master. Once the failed node recovers, it rejoins the cluster, connects to the Master, and runs in Slave mode.
This is very similar to Redis Sentinel's master-slave high-availability scheme: ZooKeeper plays roughly the role here that Sentinel plays for Redis.

In addition, note this warning from the official documentation: replicated LevelDB does not support delayed or scheduled messages. Those messages are kept in a separate LevelDB store, so if you use delayed or scheduled delivery they are not replicated to the Slave Brokers, and high availability cannot be guaranteed for them.
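To make the warning concrete, a "scheduled" message is just a normal message carrying the scheduler header (the real constant is `org.apache.activemq.ScheduledMessage.AMQ_SCHEDULED_DELAY`; on a live connection you would call `message.setLongProperty(...)` on a JMS message instead). The broker-free sketch below only illustrates the header, using a plain map as a stand-in:

```java
import java.util.HashMap;
import java.util.Map;

public class ScheduledMessageSketch {
    // Header name used by ActiveMQ's scheduler
    // (org.apache.activemq.ScheduledMessage.AMQ_SCHEDULED_DELAY).
    static final String AMQ_SCHEDULED_DELAY = "AMQ_SCHEDULED_DELAY";

    // Stand-in for message.setLongProperty(...) on a javax.jms.Message,
    // so the sketch runs without a broker.
    static Map<String, Object> delayedHeaders(long delayMillis) {
        Map<String, Object> headers = new HashMap<>();
        headers.put(AMQ_SCHEDULED_DELAY, delayMillis);
        return headers;
    }

    public static void main(String[] args) {
        // A message carrying this header goes to the scheduler's own
        // LevelDB store -- which replicatedLevelDB does NOT replicate.
        System.out.println(delayedHeaders(60_000));
    }
}
```

Any message sent with this header set is therefore invisible to the Slaves and will be lost on failover.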

2. Persistence of ActiveMQ

ActiveMQ has three persistence methods (configurable in activemq.xml):
(1) Based on shared file system (KahaDB, default)

<persistenceAdapter>
    <kahaDB directory="${activemq.data}/kahadb"/>
</persistenceAdapter>

(2) Based on JDBC

<persistenceAdapter>
    <jdbcPersistenceAdapter dataSource="#MySQL-DS"/>
</persistenceAdapter>
<!-- Note: the mysql-connector-java jar must be added to ActiveMQ's lib directory -->
<bean id="MySQL-DS" class="org.apache.commons.dbcp2.BasicDataSource" destroy-method="close">
    <property name="driverClassName" value="com.mysql.jdbc.Driver"/>
    <property name="url" value="jdbc:mysql://127.0.0.1:3306/beautyssm_mq?useUnicode=true&amp;characterEncoding=UTF-8"/>
    <property name="username" value="root"/>
    <property name="password" value="xxxx"/>
</bean>

(3) Based on replicated LevelDB (commonly used in clusters)

<persistenceAdapter>
  <!-- directory: data storage path
       replicas:  number of nodes in the cluster
       bind:      address used for replication traffic between nodes
       zkPath:    where cluster metadata is stored in ZooKeeper -->
  <replicatedLevelDB
    directory="${activemq.data}/leveldb"
    replicas="3"
    bind="tcp://0.0.0.0:62621"
    zkAddress="localhost:2181,localhost:2182,localhost:2183"
    hostname="localhost"
    zkPath="/activemq/leveldb-stores"/>
</persistenceAdapter>

LevelDB is a high-performance key-value library developed by Google for persisting data. It is a library, not a service: users must implement the server side themselves. A single process can handle billions of key-value pairs with a small memory footprint.
Here we use the third method, which is also the one recommended by the official documentation.
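One practical consequence of the `replicas` setting above: replicated LevelDB needs a majority (quorum) of replicas online to elect a Master and stay writable. A minimal sketch of that arithmetic, assuming the standard majority-quorum rule:

```java
public class QuorumCheck {
    // Replicated LevelDB stays writable only while a majority of the
    // configured replicas are online: quorum = (replicas / 2) + 1.
    static int quorum(int replicas) {
        return replicas / 2 + 1;
    }

    public static void main(String[] args) {
        // replicas="3" from the config above: 2 nodes must stay up,
        // so the cluster tolerates exactly one failed broker.
        System.out.println(quorum(3)); // 2
        System.out.println(quorum(5)); // 3
    }
}
```

With `replicas="3"`, losing two brokers at once therefore stops the whole cluster, not just degrades it.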

3. High Availability Deployment

1. ActiveMQ's high-availability cluster is built on top of a ZooKeeper cluster, so the ZooKeeper cluster must be deployed first.

See: ZooKeeper High Availability Cluster Installation and Configuration

2. Configure the Web console port in conf/jetty.xml on each of the 3 ActiveMQ nodes:
Node 1:
<property name="port" value="8161"/>
Node 2:
<property name="port" value="8162"/>
Node 3:
<property name="port" value="8163"/>
3. Configure the persistence adapter in conf/activemq.xml on each of the 3 ActiveMQ nodes, giving each node its own bind port (e.g. 62621, 62622, 62623):
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost" dataDirectory="${activemq.data}">
    <persistenceAdapter>
      <replicatedLevelDB
        directory="${activemq.data}/leveldb"
        replicas="3"
        bind="tcp://0.0.0.0:62621"
        zkAddress="localhost:2181,localhost:2182,localhost:2183"
        hostname="localhost"
        zkPath="/activemq/leveldb-stores"/>
    </persistenceAdapter>
</broker>

Note: the brokerName of every ActiveMQ node must be the same, otherwise the node cannot join the cluster.

4. Modify the message (OpenWire) port on each node:
Node 1:
<transportConnector name="openwire" uri="tcp://0.0.0.0:61611?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
Node 2:
<transportConnector name="openwire" uri="tcp://0.0.0.0:61612?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
Node 3:
<transportConnector name="openwire" uri="tcp://0.0.0.0:61613?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
5. Start 3 ActiveMQ nodes in sequence:
$ /usr/local/activemq/activemq-01/bin/activemq start
$ /usr/local/activemq/activemq-02/bin/activemq start
$ /usr/local/activemq/activemq-03/bin/activemq start

Monitor log:

$ tail -f /usr/local/activemq/activemq-01/data/activemq.log
$ tail -f /usr/local/activemq/activemq-02/data/activemq.log
$ tail -f /usr/local/activemq/activemq-03/data/activemq.log

4. Cluster Deployment

The steps above give us a high-availability deployment of ActiveMQ, but a single high-availability cluster cannot by itself provide load balancing. With a little extra configuration, two clusters can be joined into a load-balanced broker network:

Link cluster 2 in cluster 1's activemq.xml (place this before the persistenceAdapter element):

<networkConnectors>
    <networkConnector uri="static:(tcp://192.168.2.100:61611,tcp://192.168.2.101:61612,tcp://192.168.2.102:61613)" duplex="false"/>
</networkConnectors>

Link cluster 1 in cluster 2's activemq.xml (place this before the persistenceAdapter element):

<networkConnectors>
    <networkConnector uri="static:(tcp://192.168.1.100:61611,tcp://192.168.1.101:61612,tcp://192.168.1.102:61613)" duplex="false"/>
</networkConnectors>

This gives ActiveMQ both high availability and load balancing across the two clusters.

5. Client Connection

ActiveMQ clients can only talk to the Master Broker; Slave Brokers do not accept connections. Clients should therefore connect using the failover protocol.
The broker URL should be:
failover:(tcp://192.168.1.100:61611,tcp://192.168.1.101:61612,tcp://192.168.1.102:61613)?randomize=false
or:
failover:(tcp://192.168.2.100:61611,tcp://192.168.2.101:61612,tcp://192.168.2.102:61613)?randomize=false
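The failover URI is simply a bracketed, comma-separated list of candidate brokers plus transport options; with randomize=false the client tries them in order, so on a Master failure it reconnects to whichever broker ZooKeeper promotes. A stdlib-only sketch of that structure (the real parsing lives inside the ActiveMQ client's failover transport; the IPs below follow the example addresses above):

```java
import java.util.Arrays;
import java.util.List;

public class FailoverUriSketch {
    // Extract the candidate broker URLs from a failover: URI. This only
    // illustrates the URI structure; the actual reconnect logic is
    // implemented by the ActiveMQ client's failover transport.
    static List<String> brokerList(String failoverUri) {
        int open = failoverUri.indexOf('(');
        int close = failoverUri.indexOf(')');
        return Arrays.asList(failoverUri.substring(open + 1, close).split(","));
    }

    public static void main(String[] args) {
        String uri = "failover:(tcp://192.168.1.100:61611,"
                   + "tcp://192.168.1.101:61612,"
                   + "tcp://192.168.1.102:61613)?randomize=false";
        // With randomize=false the client walks this list in order until
        // it finds the broker currently acting as Master.
        System.out.println(brokerList(uri));
    }
}
```

In real client code this URI string is passed straight to `new ActiveMQConnectionFactory(uri)`; no per-broker logic is needed on the application side.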
