moquette

    Message passing is used in many projects, and different scenarios place different requirements on it. In the Java world you can follow the JMS specification, and there is plenty of open source software that supports it.

    This article is about MQTT and moquette. Choosing MQTT middleware was a struggle, and I had little confidence in using a less popular open source framework. Fortunately the source code is available; after studying it and doing a lot of testing, the results were acceptable, so I can recommend it.


    During testing I found some problems with moquette and modified them. Some of them may just be a matter of interpretation, or a different starting point. In summary, the modifications are as follows:

  1. Changed the per-client message queue length to 32, to avoid publish errors once the original queue exceeded its maximum size
  2. Made the storage constructors more generic
  3. Changed the clientId check performed on every connection, for a client's first connection
  4. Fixed a null pointer exception when acknowledging offline messages
  5. Deprecated some rarely used modules
  6. Added a redis storage implementation
  7. Redis reuses the existing conf configuration mechanism
  8. Redesigned the session storage structure to make it easier to add sharding later
  9. Restructured the project: split out a common module, and built redis, mapdb, and broker on top of it
  10. Fixed a memory leak in publish (originally thought to be a netty leak). After two days of sleepless debugging, it turned out moquette was not releasing the messages (a sketch of this kind of fix follows this list).
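
For item 10, a minimal sketch of the kind of fix involved, assuming the leak came from netty's reference-counted payload buffers not being released after a publish was handled (an illustration only, not the actual patch):

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.util.ReferenceCountUtil;

// Illustrative fragment: always release a retained payload after it has been handled,
// otherwise its reference count never drops to zero and the buffer is never recycled.
public final class PublishPayloads {

    static void handlePublish(ByteBuf payload) {
        try {
            byte[] copy = new byte[payload.readableBytes()];
            payload.readBytes(copy);               // hand the bytes off to subscribers / storage
        } finally {
            ReferenceCountUtil.release(payload);   // decrement the reference count exactly once
        }
    }

    public static void main(String[] args) {
        handlePublish(Unpooled.copiedBuffer("hello".getBytes()));
    }
}
```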

Usage documentation:

 

1 Introduction

    Moquette is an open source message broker. The whole system is developed in Java on top of netty and fully implements the MQTT protocol.

Based on testing, moquette's client load capacity and message push speed are quite respectable, and it can also tolerate scenarios where large batches of clients connect and disconnect frequently over short periods.

The moquette code is completely open source. The problems found during testing have been fixed to some extent, and a redis-based storage mechanism has been added.

 

2. Use

2.1 Configuration file

The configuration files used by moquette are located in the config directory under its root directory and include the following:

  1. acl.conf: permission configuration
  2. hazelcast.xml: cluster configuration
  3. password_file.conf: user password configuration
  4. moquette.conf: main configuration

 

Each configuration file is explained in detail below.

Permission configuration file

The file-based permission configuration is relatively complex. The following is an example of the format, explained in detail afterwards.

user admin

topic write mqtt/log

pattern write mqtt/log/+

topic read mqtt/lost

user client

topic read mqtt/log

pattern read mqtt/log/%c

topic write mqtt/lost

 

 

       [user admin] declares a user named admin. The entries that follow define that user's read and write permissions on the related topics, until the next user entry begins.

       [topic write mqtt/log] grants write permission on the mqtt/log topic. The topic keyword specifies a concrete topic name, without wildcards.

       [pattern write mqtt/log/+] uses a wildcard to grant permission on every topic that matches the pattern; a minimal matching sketch is shown after the permission classification below.

 

       Permission classification:

              write

              read

              writeread
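
For reference, a minimal sketch of how a filter containing wildcards can be matched against a topic (an illustration only, not moquette's actual matcher):

```java
// Matches an MQTT topic against a filter containing + (one level) and # (remaining levels).
public class TopicMatcher {

    static boolean matches(String filter, String topic) {
        String[] f = filter.split("/");
        String[] t = topic.split("/");
        int i = 0;
        for (; i < f.length; i++) {
            if (f[i].equals("#")) {
                return true;                       // multi-level wildcard matches everything below
            }
            if (i >= t.length) {
                return false;                      // filter has more levels than the topic
            }
            if (!f[i].equals("+") && !f[i].equals(t[i])) {
                return false;                      // literal level must match exactly
            }
        }
        return i == t.length;                      // both fully consumed
    }

    public static void main(String[] args) {
        System.out.println(matches("mqtt/log/+", "mqtt/log/client1"));   // true
        System.out.println(matches("mqtt/log/+", "mqtt/log/a/b"));       // false: + matches one level only
    }
}
```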

Cluster configuration

    Moquette's clustering uses hazelcast. Hazelcast is a Java-based data synchronization tool; in moquette it is used to synchronize messages between the different nodes.

<network>

<public-address>IP1:5701</public-address>

<port>5701</port>

<join>

       <multicast enabled="false" />

       <tcp-ip enabled="true">

              <required-member>IP2:5701</required-member>

       </tcp-ip>

</join>

</network>

 

public-address: the IP and port of the current node.

required-member: the other nodes in the cluster.

Once each node's cluster mode is established, the nodes are peers; there is no master/slave distinction.

 

User management

This file defines the users that can log in to the system. An example of the format is as follows:

#*********************************************

# Each line define a user login in the format

#   <username>:sha256(<password>)

#*********************************************

#NB this password is sha256(passwd)

admin:8d969eef6ecad3c29a3a629280e686cf0c3f5d5a86aff3ca12020c923adc6c92

client:8d969eef6ecad3c29a3a629280e686cf0c3f5d5a86aff3ca12020c923adc6c92

 

The format of this file is very simple:

     Each line defines a user and its password, separated by a colon; the password is the result of a SHA-256 digest.
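
For reference, a minimal Java sketch that produces an entry in this format (the username and password values are just examples):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

// Prints a password_file.conf entry in the format <username>:sha256(<password>).
public class PasswordEntry {
    public static void main(String[] args) throws Exception {
        String username = "admin";            // example user
        String password = "passwd";           // example password
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        byte[] digest = md.digest(password.getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        System.out.println(username + ":" + hex);
    }
}
```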

 

As for the users used by clients (consumers): in most cases a client only needs its clientId to be distinguished, so the backend can create a user per business type and hand it out to clients; there is no need to create a user for every clientId.

 

Main configuration

The main configuration contains quite a lot of content, described below:

 

1. Ports

port 1883

websocket_port 8383

 

    port 1883: the broker's main port, defaulting to the MQTT protocol's standard port 1883.

The system also provides websocket support, so it can be used over websocket (this mode has not been tested).

 

2. SSL port and configuration

# ssl_port 8883

#jks_path serverkeystore.jks

#key_store_password passw0rdsrv

#key_manager_password passw0rdsrv

 

       For systems with higher security requirements, SSL support can be added.

 

3. IP binding restriction

#*********************************************************************

# The interface to bind the server

#  0.0.0.0 means "any"

#*********************************************************************

host 0.0.0.0

 

4. Storage settings

storage_class io.moquette.persistence.redis.RedisStorageService

    Because the performance of the different storage implementations varies considerably, moquette uses in-memory storage by default. This mode is very fast, but carries the risk of message loss if the single node crashes (using a load-balanced cluster can reduce the impact of this problem).

If durability matters more than performance, the redis-based storage implementation can be used; the bundled mapdb implementation has quite a few bugs.

If nothing is configured, the memory-based storage implementation is used by default.

 

5. Enabling access control

#*********************************************************************

# acl_file:

#    defines the path to the ACL file relative to moquette home dir

#    contained in the moquette.path system property

#*********************************************************************

acl_file config/acl.conf

 

With the above, the broker performs authorization and authentication based on the contents of acl.conf.

 

6. Whether to allow anonymous access

#*********************************************************************

# allow_anonymous is used to accept MQTT connections also from not

# authenticated clients.

#   - false to accept ONLY client connetions with credentials.

#   - true to accept client connection without credentails, validating

#       only against the password_file, the ones that provides.

#*********************************************************************

allow_anonymous false

 

The above means anonymous access is not allowed; a username and password are required to connect.

 

7. User password file configuration

#*********************************************************************

# password_file:

#    defines the path to the file that contains the credentials for

#    authenticated client connection. It's relative to moquette home dir

#    defined by the system property moquette.path

#*********************************************************************

password_file config/password_file.conf

 

The above means the broker uses the password file for authentication; if this is not needed, the line can be commented out.

 

8. Enabling epoll

#*********************************************************************

# Netty Configuration

#*********************************************************************

#

# Linux systems can use epoll instead of nio. To get a performance

# gain and reduced GC.

# http://netty.io/wiki/native-transports.html for more information

#

netty.epoll true

 

On Linux, the epoll mechanism allows the system to support more clients. The above enables epoll. On machines with good hardware, epoll mode brings a noticeable improvement.
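
As an illustration of what this switch typically controls (a hedged sketch of the common netty pattern, not moquette's actual bootstrap code), a netty server can choose the native epoll transport when it is available and fall back to NIO otherwise:

```java
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.ServerChannel;
import io.netty.channel.epoll.Epoll;
import io.netty.channel.epoll.EpollEventLoopGroup;
import io.netty.channel.epoll.EpollServerSocketChannel;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioServerSocketChannel;

// Picks the native epoll transport on Linux when available, otherwise plain NIO.
public class TransportChooser {
    public static ServerBootstrap newBootstrap(boolean epollEnabled) {
        boolean useEpoll = epollEnabled && Epoll.isAvailable();
        EventLoopGroup boss = useEpoll ? new EpollEventLoopGroup(1) : new NioEventLoopGroup(1);
        EventLoopGroup worker = useEpoll ? new EpollEventLoopGroup() : new NioEventLoopGroup();
        Class<? extends ServerChannel> channelClass =
                useEpoll ? EpollServerSocketChannel.class : NioServerSocketChannel.class;
        return new ServerBootstrap()
                .group(boss, worker)
                .channel(channelClass);
    }
}
```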

 

9. Cluster configuration

#hazelcast

#intercept.handler io.moquette.interception.HazelcastInterceptHandler

 

For a cluster deployment, the above configuration needs to be enabled; the prerequisite is that the hazelcast.xml file has already been configured.

 

10. Redis configuration

#redis storage

redis.host localhost

redis.port 6379

redis.password

redis.database 0

redis.prefix monitor:

 

When storage_class has been set to the redis implementation, the above parameters need to be configured. Since cluster mode uses hazelcast, the current redis-based implementation does not provide sharding, but the key design is already prepared for it.
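
For reference, a minimal sketch of how these parameters map to a redis connection, assuming the Jedis client library; the healthcheck key below is hypothetical and only verifies connectivity:

```java
import redis.clients.jedis.Jedis;

// Connects to redis with the values from moquette.conf and writes one test key.
public class RedisConfigCheck {
    public static void main(String[] args) {
        String host = "localhost";   // redis.host
        int port = 6379;             // redis.port
        String password = null;      // redis.password (empty in the example config)
        int database = 0;            // redis.database
        String prefix = "monitor:";  // redis.prefix

        try (Jedis jedis = new Jedis(host, port)) {
            if (password != null && !password.isEmpty()) {
                jedis.auth(password);
            }
            jedis.select(database);
            jedis.set(prefix + "healthcheck", "ok");   // hypothetical key, just to verify connectivity
            System.out.println(jedis.get(prefix + "healthcheck"));
        }
    }
}
```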

 

2.2 Startup

   The moquette code project is managed with maven. Running maven install produces a package that can run on Linux, laid out as follows:

 

The lib directory contains all of the library files used, grouped as:

       netty-related

       hazelcast

       logging-related

       libraries introduced by the redis storage implementation: in memory mode, the redis-related jars can be deleted to reduce the package size.

The bin directory:

Starting with moquette.sh on Linux:

       By default it does not run in the background; use the following command to run it:

 setsid  ./moquette.sh &

   Running it with nohup will not find the configured log output.

   On Windows, run it with the bat script.
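
Besides the scripts, moquette can also be embedded and started from Java code. A minimal sketch is below; note that the broker's package name has changed across versions (older releases use io.moquette.server.Server), so treat the exact class name as an assumption:

```java
import io.moquette.broker.Server;

// Starts an embedded broker with its default configuration and stops it on JVM shutdown.
public class EmbeddedBroker {
    public static void main(String[] args) throws Exception {
        final Server broker = new Server();
        broker.startServer();   // loads the default configuration (exact behavior varies by version)
        Runtime.getRuntime().addShutdownHook(new Thread(broker::stopServer));
        System.out.println("moquette broker started");
    }
}
```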

2.3 Clients

     There are many clients that implement the MQTT protocol. For this broker, testing currently uses the Eclipse Paho client, which is available in multiple language versions and is therefore convenient for different kinds of terminals.

Downloads for the various language versions can be found at:

http://www.eclipse.org/paho/downloads.php

       Different language versions offer different features; at present the broker does not implement anything beyond what is described in the MQTT protocol specification.

       Reconnection has to be implemented by the client itself, to avoid the situation where a disconnected client can only connect again after a restart. A sketch of a Paho client with automatic reconnect is shown below.
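
A minimal Paho Java sketch, assuming the broker runs locally on port 1883; the clientId, topic, and credentials below are examples only:

```java
import org.eclipse.paho.client.mqttv3.IMqttDeliveryToken;
import org.eclipse.paho.client.mqttv3.MqttCallback;
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
import org.eclipse.paho.client.mqttv3.MqttException;
import org.eclipse.paho.client.mqttv3.MqttMessage;
import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence;

// Connects to the broker, subscribes to a topic, and relies on Paho's automatic reconnect.
public class SampleSubscriber {
    public static void main(String[] args) throws MqttException {
        MqttClient client = new MqttClient("tcp://localhost:1883", "client-0001", new MemoryPersistence());

        MqttConnectOptions options = new MqttConnectOptions();
        options.setUserName("client");                 // example user from password_file.conf
        options.setPassword("123456".toCharArray());   // example password
        options.setCleanSession(false);                // keep the subscription across reconnects
        options.setKeepAliveInterval(60);              // 60 s heartbeat, as recommended in the tests below
        options.setAutomaticReconnect(true);           // reconnect after a dropped connection

        client.setCallback(new MqttCallback() {
            @Override
            public void connectionLost(Throwable cause) {
                System.out.println("connection lost: " + cause);
            }

            @Override
            public void messageArrived(String topic, MqttMessage message) {
                System.out.println(topic + " -> " + new String(message.getPayload()));
            }

            @Override
            public void deliveryComplete(IMqttDeliveryToken token) {
                // not used by a pure subscriber
            }
        });

        client.connect(options);
        client.subscribe("mqtt/log", 1);               // example topic from acl.conf, QoS 1
    }
}
```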

      

 

3. Testing

The broker went through multiple rounds of testing.

Test scenarios:

 

1. Machine configuration

The broker machine has 8 GB of memory and 4 CPU cores (not dedicated).

| Item | Value |
| --- | --- |
| Memory | 8 GB |
| CPU | 4 cores |

 

  The client machines are varied; each machine runs 5000 clients.

Publishing is done from ordinary Windows machines: two publishers, each with 5 sending threads, averaging 100 messages per second.

2. Message sending rate

With one broadcast message every 10 seconds, three rounds were tested.

       With 100 point-to-point messages per second, three rounds were tested.

       Each round ran for twenty to thirty-odd hours.

3. Client behavior

   When clients connect with clean session, the broker's memory usage is low, only around 400 MB.

Without clearing the session, memory usage is higher; with large numbers of clients repeatedly disconnecting and reconnecting, it reached 2 GB.

The heartbeat was set to 60 s; a heartbeat that is too short (under 30 s) is more than the broker can bear.

 

4. Cluster setup

  A two-node cluster was set up, with nginx providing TCP load balancing; 30,000 test clients were used.

3.1 Capacity test

1. The broker was tested at several levels: 8000, 15000, 18000, and 25000 clients. Without sending messages, clients could connect at all of these levels.

| Clients | 8000 | 15000 | 18000 | 25000 |
| --- | --- | --- | --- | --- |
| Result | OK | OK | OK | OK |

 

2. While sending messages:

| Clients | 8000 | 15000 | 18000 | 25000 |
| --- | --- | --- | --- | --- |
| Result | OK | OK | NG | NG |

 

With one broadcast message every 10 seconds, a single broker can sustain 15,000 clients without disconnections; at 18,000, a fairly large number of disconnections occur.

 

3.2 Message receiving speed

With one broadcast message every 10 seconds plus 100 point-to-point messages per second, messages are received within 1 second.

 

3.3 Memory usage

    Based on analysis of the implementation, memory usage comes mainly from messages accumulating for clients that connect without clean session and then go offline. In the original memory-based implementation, 1024 messages were kept per client, and exceeding 1024 messages caused errors on the publish side. The modified implementation keeps the latest 32 messages per offline client; anything beyond 32 is discarded.

In the redis-based implementation, messages currently have no discard or expiry mechanism.
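
As an illustration of the 32-message limit described above, a simplified sketch of a per-client bounded queue (not the actual moquette data structure):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Keeps at most the newest 32 offline messages per client, dropping the oldest ones.
public class BoundedOfflineQueue<T> {
    private static final int MAX_MESSAGES = 32;
    private final Deque<T> messages = new ArrayDeque<>();

    public synchronized void add(T message) {
        if (messages.size() == MAX_MESSAGES) {
            messages.pollFirst();                  // discard the oldest message
        }
        messages.addLast(message);
    }

    // Returns and clears the stored messages, e.g. when the client reconnects.
    public synchronized List<T> drain() {
        List<T> pending = new ArrayList<>(messages);
        messages.clear();
        return pending;
    }
}
```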

      

Memory analysis during testing:

For the broker on IP1, the total memory usage is as follows:

Memory stabilized at around 800 MB and remained steady.

 

 

3.4 Points to note

    The memory-based storage implementation currently keeps only 32 offline messages; beyond 32, the older ones are discarded.

The will message content must be ASCII; other characters are not allowed. The will message is mainly used for handling a client after it drops offline.

The client heartbeat must not be set too small, otherwise the broker's capacity drops sharply; 60 s or more is recommended.

 

Remaining issues:

       During the interval between heartbeats, testing found cases where the broker acknowledged messages incorrectly.

       In actual business use, this issue is worked around with application-level acknowledgements and similar measures, so that unreliable message delivery does not become a problem.

 
