Those Things About RocketMQ: Building a Cluster Environment

The getting-started article described the basics of RocketMQ. Before we can use it we need an environment, so today we talk about building a distributed RocketMQ cluster.

Foreword

The previous articles introduced the fundamentals of RocketMQ, so you should have a basic understanding by now; today we build a distributed cluster. For learning you could simply run the source code locally, but as a developer or operations engineer you need to understand how a distributed cluster is deployed, which also makes later cluster debugging and testing much easier.

Configuration parameters

The official documentation notes that the default configuration files are the simplest possible versions; for a production environment you need to adjust them to your own needs. Some of the configuration parameters are described below:

# Name of the cluster this broker belongs to
brokerClusterName=rocketmq-cluster
# Broker name; note that this differs between configuration files
brokerName=broker-a
# 0 means Master, >0 means Slave
brokerId=0
# NameServer addresses, separated by semicolons
namesrvAddr=rocketmq1:9876;rocketmq2:9876
# Number of queues created by default when a message is sent to a topic that does not yet exist on the server
defaultTopicQueueNums=4
# Whether the Broker may create topics automatically; recommended to disable in production
autoCreateTopicEnable=true
# Whether the Broker may create subscription groups automatically; recommended to disable in production
autoCreateSubscriptionGroup=true
# Port the Broker listens on for external services
listenPort=10911
# Time of day to delete files, defaults to 4 a.m.
deleteWhen=04
# File retention time, defaults to 72 hours
fileReservedTime=72
# Size of each commitLog file, defaults to 1 GB
mapedFileSizeCommitLog=1073741824
# Each ConsumeQueue file stores 300,000 entries by default; adjust according to your workload
mapedFileSizeConsumeQueue=300000
# Threshold for physical disk space usage checks
diskMaxUsedSpaceRatio=75
# Storage root path
storePathRootDir=/root/rocketmq/store
# commitLog storage path
storePathCommitLog=/root/rocketmq/store/commitlog
# Consume queue storage path
storePathConsumeQueue=/root/rocketmq/store/consumequeue
# Message index storage path
storePathIndex=/root/rocketmq/store/index
# checkpoint file path
storeCheckpoint=/root/rocketmq/store/checkpoint
# abort file path
abortFile=/root/rocketmq/store/abort
# Maximum message size, defaults to 4 MB
#maxMessageSize=4194304
# Broker role
#- ASYNC_MASTER: master with asynchronous replication
#- SYNC_MASTER: master with synchronous double write
#- SLAVE
brokerRole=SYNC_MASTER
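
As a side note, if you want to double-check which values a broker would actually pick up from a given file, the broker startup script can print its effective configuration; a quick check along these lines should work (run from an unpacked RocketMQ distribution directory; the -p flag prints the config items and exits):

sh bin/mqbroker -c conf/2m-2s-sync/broker-a.properties -p | grep -E 'brokerName|brokerId|brokerRole|listenPort'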

Cluster modes

The official setup documentation is here: https://github.com/apache/rocketmq/tree/master/docs/cn/operation.md

The stand-alone deployment described there is straightforward and will not be repeated; today the focus is on building a distributed cluster. Since the synchronous double-write mode is the one commonly used in corporate environments, we take that mode as the example to set up.

Concretely, we will build a 2-Master-2-Slave cluster using synchronous replication (double write) between master and slave, with asynchronous disk flush.

Preparing the Environment

Because my test environment does not have enough machines (...), I do this locally with two virtual machines: two CentOS 7 guests running in VMware. One reminder up front: you can fully deploy one machine, then clone it and only modify a few parameters to finish the build, so deploy machine A first. In a physical-machine environment you can likewise copy the compiled files over and just modify the parameters.

The overall cluster layout is as follows:

  • Machine A: 1 namesrv + 1 broker-master-a + 1 broker-slave-b
  • Machine B: 1 namesrv + 1 broker-master-b + 1 broker-slave-a

As you can see, the two machines back each other up. Locally we cannot do much more than this; if your company's test environment is also short on resources, it can be set up the same way.

To make it convenient to produce and consume messages and observe the cluster, we also need a console. For ease of deployment (I am too lazy to download it, adjust the configuration, and compile it), we run the console with Docker; more on that later.

Build process

Before the actual build, we need to install the basic environment required on CentOS 7:

  • jdk8
  • maven
  • A static IP

For setting a static IP on the virtual machine you can refer to: https://segmentfault.com/a/1190000017535131

One more reminder: set up the environment on machine A first! Then clone it and modify a few parameters to complete the build, instead of doing the work twice. This is based on a local virtual-machine setup; on physical machines or in an online environment, copy the whole compiled project to the other machine and deploy it the same way.

After configuration, the addresses of my local virtual machines are:

Machine A IP: 192.168.211.11
Machine B IP: 192.168.211.12 (cloned from machine A after its deployment is complete)

Download and compile

wget https://archive.apache.org/dist/rocketmq/4.5.2/rocketmq-all-4.5.2-source-release.zip
unzip rocketmq-all-4.5.2-source-release.zip
cd rocketmq-all-4.5.2-source-release
mvn -Prelease-all -DskipTests clean install -U
cd distribution/target/rocketmq-4.5.2

The distribution/target/rocketmq-4.5.2 directory contains the compiled files; this is what we will use to deploy the cluster environment.

Hosts file modification

To simplify the configuration files, we add host name mappings on both virtual machines. Modify the /etc/hosts file and add the following entries:

192.168.211.11 rocketmq1
192.168.211.12 rocketmq2
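
A quick sanity check that the names resolve on both machines (any similar check is fine):

ping -c 1 rocketmq1
ping -c 1 rocketmq2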

Broker configuration files

Before officially starting anything we need the configuration files the cluster requires. Following the official description we modify a few parameters; here we directly edit the files in the conf/2m-2s-sync directory.

The contents of broker-a.properties are modified as follows:

brokerClusterName=DefaultCluster
brokerName=broker-a
brokerId=0
deleteWhen=04
fileReservedTime=48
brokerRole=SYNC_MASTER
flushDiskType=ASYNC_FLUSH
# NameServer addresses, separated by semicolons
namesrvAddr=rocketmq1:9876;rocketmq2:9876
# Port the Broker listens on
listenPort=10911
# Storage root path
storePathRootDir=/root/rocketmq-m/store
# commitLog storage path
storePathCommitLog=/root/rocketmq-m/store/commitlog
# Consume queue storage path
storePathConsumeQueue=/root/rocketmq-m/store/consumequeue
# Message index storage path
storePathIndex=/root/rocketmq-m/store/index
# checkpoint file path
storeCheckpoint=/root/rocketmq-m/store/checkpoint
# abort file path
abortFile=/root/rocketmq-m/store/abort

The contents of broker-a-s.properties are modified as follows:

brokerClusterName=DefaultCluster
brokerName=broker-a
brokerId=1
deleteWhen=04
fileReservedTime=48
brokerRole=SLAVE
flushDiskType=ASYNC_FLUSH
# NameServer addresses, separated by semicolons
namesrvAddr=rocketmq1:9876;rocketmq2:9876
# Port the Broker listens on
listenPort=11911
# Storage root path
storePathRootDir=/root/rocketmq-s/store
# commitLog storage path
storePathCommitLog=/root/rocketmq-s/store/commitlog
# Consume queue storage path
storePathConsumeQueue=/root/rocketmq-s/store/consumequeue
# Message index storage path
storePathIndex=/root/rocketmq-s/store/index
# checkpoint file path
storeCheckpoint=/root/rocketmq-s/store/checkpoint
# abort file path
abortFile=/root/rocketmq-s/store/abort

The contents of broker-b.properties are modified as follows:

brokerClusterName=DefaultCluster
brokerName=broker-b
brokerId=0
deleteWhen=04
fileReservedTime=48
brokerRole=SYNC_MASTER
flushDiskType=ASYNC_FLUSH
# NameServer addresses, separated by semicolons
namesrvAddr=rocketmq1:9876;rocketmq2:9876
# Port the Broker listens on
listenPort=10911
# Storage root path
storePathRootDir=/root/rocketmq-m/store
# commitLog storage path
storePathCommitLog=/root/rocketmq-m/store/commitlog
# Consume queue storage path
storePathConsumeQueue=/root/rocketmq-m/store/consumequeue
# Message index storage path
storePathIndex=/root/rocketmq-m/store/index
# checkpoint file path
storeCheckpoint=/root/rocketmq-m/store/checkpoint
# abort file path
abortFile=/root/rocketmq-m/store/abort

The contents of broker-b-s.properties are modified as follows:

brokerClusterName=DefaultCluster
brokerName=broker-b
brokerId=1
deleteWhen=04
fileReservedTime=48
brokerRole=SLAVE
flushDiskType=ASYNC_FLUSH
# NameServer addresses, separated by semicolons
namesrvAddr=rocketmq1:9876;rocketmq2:9876
# Port the Broker listens on
listenPort=11911
# Storage root path
storePathRootDir=/root/rocketmq-s/store
# commitLog storage path
storePathCommitLog=/root/rocketmq-s/store/commitlog
# Consume queue storage path
storePathConsumeQueue=/root/rocketmq-s/store/consumequeue
# Message index storage path
storePathIndex=/root/rocketmq-s/store/index
# checkpoint file path
storeCheckpoint=/root/rocketmq-s/store/checkpoint
# abort file path
abortFile=/root/rocketmq-s/store/abort

Different values of brokerName, brokerId, and brokerRole let the same code, started with different configuration files, take on different roles in the cluster. Because we deploy two brokers on one machine here, the storage paths (and listen ports) must be kept distinct.
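
To double-check that a master/slave pair differs only where intended, a plain diff of the two files is enough (run from the conf/2m-2s-sync directory; with the contents above it should only show brokerId, brokerRole, listenPort, and the storage paths):

diff broker-a.properties broker-a-s.properties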

Copy the compiled files into /root/rocketmqbuild/namesrv, /root/rocketmqbuild/rocketmq-m (master broker), and /root/rocketmqbuild/rocketmq-s (slave broker), so that the executables are conveniently laid out as follows:

[root@rocketmq1 rocketmqbuild]# tree -L 2
.
├── namesrv
│   ├── benchmark
│   ├── bin
│   ├── conf
│   ├── lib
│   ├── LICENSE
│   ├── NOTICE
│   └── README.md
├── rocketmq-m
│   ├── benchmark
│   ├── bin
│   ├── conf
│   ├── lib
│   ├── LICENSE
│   ├── NOTICE
│   └── README.md
└── rocketmq-s
    ├── benchmark
    ├── bin
    ├── conf
    ├── lib
    ├── LICENSE
    ├── NOTICE
    └── README.md
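
For reference, a copy sequence roughly like this produces that layout (assuming you are back at the root of the unpacked source tree after the build; adjust the paths to wherever you built):

mkdir -p /root/rocketmqbuild
cp -r distribution/target/rocketmq-4.5.2 /root/rocketmqbuild/namesrv
cp -r distribution/target/rocketmq-4.5.2 /root/rocketmqbuild/rocketmq-m
cp -r distribution/target/rocketmq-4.5.2 /root/rocketmqbuild/rocketmq-s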

Logging Configuration

With that done, pay attention to the logback.*.xml configuration files under conf: the ${user.home} placeholder needs to be replaced with a directory of your choosing, otherwise the logs go to the default location. For convenience we keep each instance's logs under the same parent folder as its files, replacing the placeholder with sed:

cd /root/rocketmqbuild/namesrv/conf
sed -i 's#${user.home}#/root/namesrv#g' logback_namesrv.xml

cd /root/rocketmqbuild/rocketmq-m/conf
sed -i 's#${user.home}#/root/rocketmq-m#g' logback_broker.xml

cd /root/rocketmqbuild/rocketmq-s/conf
sed -i 's#${user.home}#/root/rocketmq-s#g' logback_broker.xml
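
A quick way to confirm the replacement took effect, using the master broker as an example (a non-zero count means the new path is in place):

grep -c '/root/rocketmq-m' /root/rocketmqbuild/rocketmq-m/conf/logback_broker.xml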

JVM parameters

After the log configuration, set the JVM parameters. A previous article already walked through part of these scripts; set them according to your own needs. Since everything runs locally here, I keep them small:

#/root/rocketmqbuild/namesrv/bin/runserver.sh
-server -Xms200m -Xmx200m -Xmn60m -XX:MetaspaceSize=40m -XX:MaxMetaspaceSize=80m
#/root/rocketmqbuild/rocketmq-m/bin/runbroker.sh
-server -Xms500m -Xmx500m -Xmn200m
#/root/rocketmqbuild/rocketmq-s/bin/runbroker.sh
-server -Xms500m -Xmx500m -Xmn200m
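
Concretely, in the 4.5.x scripts the heap settings sit on JAVA_OPT lines near the top of runserver.sh and runbroker.sh; after editing, those lines look roughly like this (other JAVA_OPT lines are left as shipped; the exact defaults may differ by version):

# bin/runserver.sh
JAVA_OPT="${JAVA_OPT} -server -Xms200m -Xmx200m -Xmn60m -XX:MetaspaceSize=40m -XX:MaxMetaspaceSize=80m"
# bin/runbroker.sh
JAVA_OPT="${JAVA_OPT} -server -Xms500m -Xmx500m -Xmn200m"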

Machine A - NameServer

The whole cluster needs the NameServer to be started first; the command is as follows:

nohup sh /root/rocketmqbuild/namesrv/bin/mqnamesrv &

Check the log to see whether it started successfully:

tail -f /root/namesrv/logs/rocketmqlogs/namesrv.log

2019-12-01 16:39:30 INFO main - tls.client.keyPassword = null
2019-12-01 16:39:30 INFO main - tls.client.certPath = null
2019-12-01 16:39:30 INFO main - tls.client.authServer = false
2019-12-01 16:39:30 INFO main - tls.client.trustCertPath = null
2019-12-01 16:39:31 INFO main - Using OpenSSL provider
2019-12-01 16:39:32 INFO main - SSLContext created for server
2019-12-01 16:39:32 INFO main - Try to start service thread:FileWatchService started:false lastThread:null
2019-12-01 16:39:32 INFO NettyEventExecutor - NettyEventExecutor service started
2019-12-01 16:39:32 INFO main - The Name Server boot success. serializeType=JSON
2019-12-01 16:39:32 INFO FileWatchService - FileWatchService service started

OK, success.
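
Optionally you can also confirm the process and its listening port directly (assuming jps and ss are available on the machine):

jps -l | grep NamesrvStartup
ss -lntp | grep 9876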

Machine A - broker-a master

Start the broker-a master with the following command:

nohup sh /root/rocketmqbuild/rocketmq-m/bin/mqbroker -c /root/rocketmqbuild/rocketmq-m/conf/2m-2s-sync/broker-a.properties >/dev/null 2>&1 &

View the boot log:

tail -f /root/rocketmq-m/logs/rocketmqlogs/broker.log

2019-12-01 18:06:47 INFO brokerOutApi_thread_2 - register broker[0]to name server rocketmq1:9876 OK
2019-12-01 18:06:48 WARN brokerOutApi_thread_1 - registerBroker Exception, rocketmq2:9876

OK, it registered with the NameServer on this machine, but the other one reports an error. That is expected, since the other machine has not been started yet; registration will be retried once it is up. At the same time, confirm in the log that the configuration parameters have taken effect and that the broker started successfully.

Machine A - broker-b slave

Start the broker-b slave with the following command:

nohup sh /root/rocketmqbuild/rocketmq-s/bin/mqbroker -c /root/rocketmqbuild/rocketmq-s/conf/2m-2s-sync/broker-b-s.properties >/dev/null 2>&1 &

View the boot log:

tail -f /root/rocketmq-s/logs/rocketmqlogs/broker.log

2019-12-01 18:39:14 INFO brokerOutApi_thread_1 - register broker[1]to name server rocketmq1:9876 OK
2019-12-01 18:39:17 WARN brokerOutApi_thread_2 - registerBroker Exception, rocketmq2:9876

With that, this machine is fully deployed. Let's also write a startup script so everything is easy to bring back up after the virtual machine is shut down and restarted.

Create startRocketmq.sh under /root with the following content:

#!/bin/bash
nohup sh /root/rocketmqbuild/namesrv/bin/mqnamesrv &
nohup sh /root/rocketmqbuild/rocketmq-m/bin/mqbroker -c /root/rocketmqbuild/rocketmq-m/conf/2m-2s-sync/broker-a.properties >/dev/null 2>&1 &
nohup sh /root/rocketmqbuild/rocketmq-s/bin/mqbroker -c /root/rocketmqbuild/rocketmq-s/conf/2m-2s-sync/broker-b-s.properties >/dev/null 2>&1 &
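
A matching stop script can be handy as well; a minimal sketch using the bundled mqshutdown script might look like this (note that mqshutdown broker stops every broker process on the machine, which is what we want here):

#!/bin/bash
sh /root/rocketmqbuild/rocketmq-m/bin/mqshutdown broker
sh /root/rocketmqbuild/namesrv/bin/mqshutdown namesrv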

Machine B

Now set up the second virtual machine: clone machine A (on physical machines, copy the compiled project instead), change the IP address, and modify the startup script so it uses the other pair of configuration files:

#!/bin/bash
nohup sh /root/rocketmqbuild/namesrv/bin/mqnamesrv &
nohup sh /root/rocketmqbuild/rocketmq-m/bin/mqbroker -c /root/rocketmqbuild/rocketmq-m/conf/2m-2s-sync/broker-b.properties >/dev/null 2>&1 &
nohup sh /root/rocketmqbuild/rocketmq-s/bin/mqbroker -c /root/rocketmqbuild/rocketmq-s/conf/2m-2s-sync/broker-a-s.properties >/dev/null 2>&1 &
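
Once both machines are running, you can ask the NameServers which brokers have registered; with the layout above, something like this should list broker-a and broker-b, each with a master (BID 0) and a slave (BID 1):

sh /root/rocketmqbuild/namesrv/bin/mqadmin clusterList -n 'rocketmq1:9876;rocketmq2:9876'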

Monitoring console

To make producing and consuming messages convenient and to observe the state of the cluster, we deploy a console. For ease of deployment we use Docker:

docker run -d -e "JAVA_OPTS=-Drocketmq.namesrv.addr=rocketmq1:9876;rocketmq2:9876 -Dcom.rocketmq.sendMessageWithVIPChannel=false" --net=host -t styletang/rocketmq-console-ng

Note that since we use the --net=host mode, port 8080 must be opened on the corresponding virtual machine. If you use port mapping instead, you have to think about the networking between the container and the brokers (the author could not be bothered and took the easy way). Open the port as follows:

  • Check the firewall status: firewall-cmd --state; if it is not running, start it first with systemctl start firewalld.service, otherwise go straight to the next step
  • Open port 8080: firewall-cmd --zone=public --add-port=8080/tcp --permanent
  • Restart the firewall: systemctl restart firewalld.service
  • Reload the configuration: firewall-cmd --reload

Open the console in a browser at http://192.168.211.11:8080 and observe the cluster information for the environment we just deployed.


Cluster Information
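
If the page does not show the cluster, checking the container log usually reveals whether the console reached the NameServers and brokers (the container ID is whatever docker ps reports for the console image):

docker ps
docker logs -f <container-id>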

Test

For testing we use the Consumer and Producer under org.apache.rocketmq.example.quickstart in the source code.

Producer

        DefaultMQProducer producer = new DefaultMQProducer("please_rename_unique_group_name");
        producer.setNamesrvAddr("192.168.211.11:9876;192.168.211.12:9876");
        producer.start();
        for (int i = 0; i < 1000; i++) {
            try {
                Message msg = new Message("TopicTest" /* Topic */,
                    "TagA" /* Tag */,
                    ("Hello RocketMQ " + i).getBytes(RemotingHelper.DEFAULT_CHARSET) /* Message body */
                );
                SendResult sendResult = producer.send(msg);

                System.out.printf("%s%n", sendResult);
            } catch (Exception e) {
                e.printStackTrace();
                Thread.sleep(1000);
            }
        }
        producer.shutdown();

Production Success

Some readers may have noticed a problem here: the messages were only sent to one of the brokers. This is related to the autoCreateTopicEnable=true configuration parameter; a later article will analyze it through the source code. The two extra messages are from an earlier test and can be ignored.


Production display

Consumer

        DefaultMQPushConsumer consumer = new DefaultMQPushConsumer("please_rename_unique_group_name_4");
        consumer.setNamesrvAddr("192.168.211.11:9876;192.168.211.12:9876");
        consumer.setConsumeFromWhere(ConsumeFromWhere.CONSUME_FROM_FIRST_OFFSET);
        consumer.subscribe("TopicTest", "*");
        consumer.registerMessageListener(new MessageListenerConcurrently() {

            @Override
            public ConsumeConcurrentlyStatus consumeMessage(List<MessageExt> msgs,
                ConsumeConcurrentlyContext context) {
                System.out.printf("%s Receive New Messages: %s %n", Thread.currentThread().getName(), msgs);
                return ConsumeConcurrentlyStatus.CONSUME_SUCCESS;
            }
        });
        consumer.start();
        System.out.printf("Consumer Started.%n");

Consumer Success

Consumer display

You can also take a look at the consumer configuration:


Consumer Configuration

Problems encountered

The build process itself raises no real questions and the official operation guide is fairly easy to follow, but a few problems still came up while building the cluster; they are recorded here:

1. Get the basic environment configured properly first, to avoid running into all kinds of problems midway.

2. When the configuration files were copied from my Windows 10 machine into the virtual machine, something that looked like a character-set problem appeared and the configuration had no effect. The -c part of the startup command only worked after I retyped it by hand on CentOS 7; with the copied version the broker started, but the log showed every configuration value at its default. It took half a day to trace the problem back to that hyphen in the command. The same kind of hyphen appears in the store directory settings, and the directories were created with garbled names, which is when I realized the character set had been mangled in the copy. After re-editing all the files everything worked. A bit embarrassing; I did not dig into the root cause and will look at it when I have time.

3. When starting the console with Docker, pay close attention to the networking: it may connect to the NameServer but fail to reach the brokers. Check the logs; to save trouble I simply used host network mode here.

4. The autoCreateTopicEnable and autoCreateSubscriptionGroup parameters (automatic topic creation and automatic subscription-group creation) are best set to false, with topics and groups managed through a dedicated admin page instead: configure first, then use, as sketched below.
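
If you do turn auto-creation off, topics and subscription groups have to be created up front. With the bundled mqadmin tool that can look roughly like this (the topic and group names are just the ones used in the test above; the queue counts are example values):

sh /root/rocketmqbuild/namesrv/bin/mqadmin updateTopic -n 'rocketmq1:9876;rocketmq2:9876' -c DefaultCluster -t TopicTest -r 4 -w 4
sh /root/rocketmqbuild/namesrv/bin/mqadmin updateSubGroup -n 'rocketmq1:9876;rocketmq2:9876' -c DefaultCluster -g please_rename_unique_group_name_4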

Summary

This article mainly described how to build a distributed RocketMQ cluster. It took me a good half day of hands-on work, most of it spent chasing problems, but there was something to be gained from it. With the hardware limits this is about as far as I can take it; building the distributed cluster is also preparation for reading and debugging the source code later, since a single-broker environment is a bit limiting. Just follow the steps, and simplify the networking parts where you can; I also feel I need to strengthen my own networking fundamentals.

If your test environment consists of two physical machines you can follow the same deployment process, with the JVM parameters adjusted. For a production environment, deploying on at least four physical machines is preferable. On the whole it is fairly simple and easy to understand, and I hope it helps.

If you spot any problems above, please point them out and I will verify and correct them promptly. Thank you.


Source: www.cnblogs.com/freeorange/p/12001320.html