CentOS 7: Install a ZooKeeper + ActiveMQ cluster

Principle

A Master-Slave deployment implemented with ZooKeeper is an effective way to make ActiveMQ highly available.

High-availability principle:

  • Register all ActiveMQ Brokers with ZooKeeper (the cluster).
  • Only one Broker provides services at a time (the Master node); the other Brokers stand by as Slaves.
  • If the Master fails and can no longer provide service, ZooKeeper's election mechanism promotes one of the Slave Brokers to become the new Master, which continues to provide services.

Preparing the Environment

Hostname   OS          IP address    ZK port   MQ version   MQ messaging port   MQ console port
node01     CentOS 7.5  172.16.1.11   2181      5.15.6       61616               8161
node02     CentOS 7.5  172.16.1.12   2181      5.15.6       61616               8161
node03     CentOS 7.5  172.16.1.13   2181      5.15.6       61616               8161

ZooKeeper cluster

Installing a ZooKeeper cluster was covered in an earlier article, CentOS 7 ZooKeeper introduction and cluster installation, and that environment is reused here.
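Before installing ActiveMQ it is worth confirming that the ZooKeeper ensemble is healthy. A quick sketch, assuming ZooKeeper is installed under /opt/zookeeper (adjust the path to your own installation):

/opt/zookeeper/bin/zkServer.sh status
# Expect "Mode: leader" on one node and "Mode: follower" on the other two

# Alternatively, query a node with the "stat" four-letter command
# (may need to be whitelisted on ZooKeeper 3.5+)
echo stat | nc 172.16.1.11 2181 | grep Mode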

ActiveMQ installation

Download link:
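If the archive is not already present on the servers, it can be fetched into the working directory used below; the URL here is an assumption (the Apache archive for the 5.15.6 release):

mkdir -p /opt/soft && cd /opt/soft
# Assumed download location for the 5.15.6 release
wget https://archive.apache.org/dist/activemq/5.15.6/apache-activemq-5.15.6-bin.tar.gz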

The following operations must be performed on all three servers:

cd /opt/soft/

# Unpack the release archive
tar xf apache-activemq-5.15.6-bin.tar.gz

# Move it under /opt and create a version-independent symlink
mv apache-activemq-5.15.6 /opt/activemq-5.15.6

ln -s /opt/activemq-5.15.6 /opt/activemq

ls -ld /opt/activemq*

# lrwxrwxrwx  1 root root  20 Mar  1 14:22 /opt/activemq -> /opt/activemq-5.15.6
# drwxr-xr-x 10 root root 193 Sep  4  2018 /opt/activemq-5.15.6

Configuration

Modification 1

In conf/activemq.xml, set the brokerName attribute on the following line to activemq-cluster (or any custom value); the other two nodes must use the same value so that they form one cluster.

<broker xmlns="http://activemq.apache.org/schema/core" brokerName="activemq-cluster" dataDirectory="${activemq.data}">
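To confirm the value is identical on all three nodes, a quick check (assuming the default configuration file location) is:

grep 'brokerName=' /opt/activemq/conf/activemq.xml
# All three nodes should print the same brokerName value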

Modification 2

Add the cluster-related configuration, including the ZooKeeper addresses, as follows:

You can delete the existing persistenceAdapter tag in the original file, paste in the content below, and adjust the port-related settings if needed.

Configure the remaining two nodes in the same way; the only difference is that the hostname value must be changed to match each machine's own name (a scripted way to do this is sketched after the block below).

        <persistenceAdapter>
            <!--<kahaDB directory="${activemq.data}/kahadb"/> -->
            <replicatedLevelDB
                directory="${activemq.data}/leveldb"
                replicas="3"
                bind="tcp://0.0.0.0:62222"
                zkAddress="172.16.1.11:2181,172.16.1.12:2181,172.16.1.13:2181"
                hostname="node01"
                sync="local_disk"
                zkPath="/activemq/leveldb-stores"
            />
        </persistenceAdapter>
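One way to apply the per-node difference without editing the file by hand, assuming the configuration lives in the default conf/activemq.xml, is a simple sed replacement (shown here for node02; use node03 on the third machine):

sed -i 's/hostname="node01"/hostname="node02"/' /opt/activemq/conf/activemq.xml

# Verify the change
grep 'hostname=' /opt/activemq/conf/activemq.xml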

Start and test

After all three servers have been configured as above, start them and test:

/opt/activemq/bin/activemq start

# INFO: Loading '/opt/activemq-5.15.6//bin/env'
# INFO: Using java '/opt/jdk/bin/java'
# INFO: Starting - inspect logfiles specified in logging.properties and log4j.properties to get details
# INFO: pidfile created : '/opt/activemq-5.15.6//data/activemq.pid' (pid '12405')

Note: after all three services have started and the cluster is healthy, only one machine provides services; the remaining two do not listen on any of the ports.

Check the listening ports:

node01:

[root@node01 conf]# netstat -lntup | egrep '61616|8161|62222'
tcp6       0      0 :::8161                 :::*                    LISTEN      12405/java          
tcp6       0      0 :::62222                :::*                    LISTEN      12405/java          
tcp6       0      0 :::61616                :::*                    LISTEN      12405/java

node02:

[root@node02 data]# netstat -lntup | egrep '61616|8161|62222'
[root@node02 data]#

node03:

[root@node03 data]# netstat -lntup | egrep '61616|8161|62222'
[root@node03 data]#
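To see failover in action, clients should connect through the failover transport rather than to a single node. A rough sketch using the producer tool bundled with ActiveMQ (command-line flags assumed) together with a manual Master shutdown:

# Send test messages through the failover transport; it follows whichever node is Master
/opt/activemq/bin/activemq producer \
    --brokerUrl 'failover:(tcp://172.16.1.11:61616,tcp://172.16.1.12:61616,tcp://172.16.1.13:61616)' \
    --messageCount 10

# Stop the current Master (node01 here), then check the other nodes after a few seconds;
# one of them should now be listening on 61616, 8161 and 62222
/opt/activemq/bin/activemq stop
netstat -lntup | egrep '61616|8161|62222'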
