【ActiveMQ】KahaDB

Production issue: the NAS file system used by the project raised an alert: "[ITRS] Alert on Managed Entity(Host) xxx - disk partition xxx - the disk usage is too high."

RCA: storage snapshots are enabled on the NAS by default.

One possible solution: ActiveMQ performance tuning.

Study notes:

1. Storage-Snapshot 

https://searchdatabackup.techtarget.com/definition/storage-snapshot

https://blog.qnap.com/snapshot-different-backup/

Comparison: backup

2. ActiveMQ KahaDB

https://cwiki.apache.org/confluence/display/ACTIVEMQ/KahaDB

http://activemq.apache.org/kahadb

http://www.idevnews.com/images/emailers/110127_ProgressFUSE/WhitePapers/ActiveMQinActionCH05.pdf

https://activemq.apache.org/why-do-kahadb-log-files-remain-after-cleanup

https://www.cnblogs.com/hapjin/p/5674257.html

https://www.cnblogs.com/kaka/archive/2012/07/24/2606570.html

https://blog.csdn.net/u012758088/article/details/78046108?utm_source=blogxgwz2

KahaDB is a file based persistence database that is local to the message broker that is using it. It has been optimised for fast persistence and is the default storage mechanism from ActiveMQ 5.4 onwards. KahaDB uses fewer file descriptors and provides faster recovery than its predecessor, the AMQ Message Store.

db-*.log files store the message content. For each message this includes not only the message data itself, but also information about destinations, subscriptions, transactions, and so on.

As the documentation puts it: "the data logs contain all of the message data and all of the information about destinations, subscriptions, transactions, etc."

The data log stores messages in journal form, and new data is always appended to the end of the current log file, so writing messages is fast. For example, with persistent messages the producer sends a message to the broker, the broker first writes it to disk (controlled by the enableJournalDiskSyncs option), and only then returns an acknowledgement to the producer. Append-only writes reduce, to some extent, the time it takes for the broker to return that acknowledgement.
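A minimal sketch of the producer side of this flow, assuming a local broker at tcp://localhost:61616 and a placeholder queue name: with persistent delivery, the synchronous send() only returns after the broker has acknowledged the message, which (with journal disk syncs enabled) happens after the append to the KahaDB journal.

```java
import javax.jms.Connection;
import javax.jms.DeliveryMode;
import javax.jms.MessageProducer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class PersistentProducer {
    public static void main(String[] args) throws Exception {
        // Broker URL and queue name are placeholders for illustration.
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer =
                session.createProducer(session.createQueue("TEST.QUEUE"));

        // PERSISTENT delivery: the synchronous send() blocks until the broker
        // acknowledges, i.e. after the message has been appended to the journal.
        producer.setDeliveryMode(DeliveryMode.PERSISTENT);
        producer.send(session.createTextMessage("hello kahadb"));

        connection.close();
    }
}
```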

journalMaxFileLength: defaults to 32MB. When a journal file reaches 32MB, a new file is created to hold subsequent messages. The right value depends on producer and consumer rates; for example, when producers are fast and consumers are slow, a larger value works better.

When broker throughput is very high, journal files fill up quickly, and the frequent closing and opening of files hurts performance. Increasing the file size reduces how often files are rotated and yields a modest performance improvement.
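These options are normally set on the kahaDB persistence adapter in activemq.xml; the following is only a programmatic sketch using an embedded broker, with the directory and sizes chosen purely for illustration.

```java
import java.io.File;
import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.store.kahadb.KahaDBPersistenceAdapter;

public class KahaDbTuning {
    public static void main(String[] args) throws Exception {
        KahaDBPersistenceAdapter kahaDb = new KahaDBPersistenceAdapter();
        kahaDb.setDirectory(new File("data/kahadb"));      // illustrative path
        kahaDb.setJournalMaxFileLength(64 * 1024 * 1024);  // larger journal files, fewer rotations
        kahaDb.setEnableJournalDiskSyncs(true);            // sync to disk before acking producers

        BrokerService broker = new BrokerService();
        broker.setBrokerName("tuning-demo");
        broker.setPersistenceAdapter(kahaDb);
        broker.addConnector("tcp://0.0.0.0:61616");
        broker.start();
        broker.waitUntilStopped();
    }
}
```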

KahaDB writes messages to the journal files sequentially in arrival order, but a file can only be deleted once every message in it has been consumed. In other words, even if only one message in a file remains unconsumed, the whole file's space is still held. If there are many queues and one of them falls behind in consumption, that queue may itself hold very little data yet pin a large amount of disk space that cannot be released. Giving each queue its own store greatly alleviates this (see the sketch below).
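A hedged sketch of per-destination stores using mKahaDB (MultiKahaDBPersistenceAdapter), assuming the programmatic setters (setPerDestination, setFilteredPersistenceAdapters) mirror the mKahaDB XML attributes; paths and sizes are illustrative.

```java
import java.io.File;
import java.util.Collections;
import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.store.kahadb.FilteredKahaDBPersistenceAdapter;
import org.apache.activemq.store.kahadb.KahaDBPersistenceAdapter;
import org.apache.activemq.store.kahadb.MultiKahaDBPersistenceAdapter;

public class PerQueueKahaDb {
    public static void main(String[] args) throws Exception {
        // Template store settings applied to each per-destination instance.
        KahaDBPersistenceAdapter template = new KahaDBPersistenceAdapter();
        template.setJournalMaxFileLength(32 * 1024 * 1024);

        // One KahaDB instance per destination, so a slow consumer only pins
        // the journal files of its own queue instead of the shared journal.
        FilteredKahaDBPersistenceAdapter filtered = new FilteredKahaDBPersistenceAdapter();
        filtered.setPerDestination(true);
        filtered.setPersistenceAdapter(template);

        MultiKahaDBPersistenceAdapter mKahaDb = new MultiKahaDBPersistenceAdapter();
        mKahaDb.setDirectory(new File("data/mkahadb"));  // illustrative path
        mKahaDb.setFilteredPersistenceAdapters(Collections.singletonList(filtered));

        BrokerService broker = new BrokerService();
        broker.setPersistenceAdapter(mKahaDb);
        broker.addConnector("tcp://0.0.0.0:61616");
        broker.start();
        broker.waitUntilStopped();
    }
}
```

The trade-off is more open files and more index overhead, since each destination maintains its own journal and index, so this is most useful when a small number of slow queues keep pinning shared journal files.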

Comparison: LevelDB Store (http://activemq.apache.org/leveldb-store)

Both KahaDB and the LevelDB store have to do periodic garbage collection cycles to determine which log files can be deleted. In the case of KahaDB, this can be quite expensive as you increase the amount of data stored and can cause read/write stalls while the collection occurs. The LevelDB store uses a much cheaper algorithm to determine when log files can be collected and avoids those stalls.

Reposted from www.cnblogs.com/cathygx/p/12626083.html