Kafka and message queues

https://downloads.apache.org/kafka/3.5.1/kafka_2.13-3.5.1.tgz
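The paths used later assume the archive is unpacked under /apps (a minimal sketch; adjust to your own layout):
mkdir -p /apps
tar -xf kafka_2.13-3.5.1.tgz -C /apps/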

Kafka dependencies: ZooKeeper (and a Java runtime) must be installed and running before the brokers are started.

Kafka configuration file (config/server.properties)

broker.id=1    # Unique ID of this broker in the cluster (a positive integer); must be different on every node
listeners=PLAINTEXT://192.168.74.70:9092  # Listen address
num.network.threads=3  # Number of threads used for network processing
num.io.threads=8  # Number of threads used for I/O operations
socket.send.buffer.bytes=102400  # Socket buffer size for sending data
socket.receive.buffer.bytes=102400  # Socket buffer size for receiving data
socket.request.max.bytes=104857600  # Maximum size of a single request, in bytes
log.dirs=/data/kafka  # Directory where Kafka stores its data; all messages are kept here
num.partitions=3  # Default number of partitions for newly created topics, usually the number of cluster nodes
num.recovery.threads.per.data.dir=1  # Threads per data directory used for log recovery at startup and flushing at shutdown
offsets.topic.replication.factor=1  # Replication factor for the offsets topic
transaction.state.log.replication.factor=1  # Replication factor for the transaction state topic (set higher for availability); internal topic creation will fail until the cluster size meets this requirement
transaction.state.log.min.isr=1  # Overrides min.insync.replicas for the transaction topic
log.retention.hours=168  # How long Kafka retains messages
log.retention.check.interval.ms=300000  # How often log segments are checked against the retention policy
zookeeper.connect=192.168.74.70:2181,192.168.74.71:2181,192.168.74.72:2181  # ZooKeeper connection addresses
zookeeper.connection.timeout.ms=18000  # Timeout for establishing a connection to ZooKeeper
group.initial.rebalance.delay.ms=0  # Initial consumer group rebalance delay, in milliseconds
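Only broker.id and listeners need to differ per node. For example, on the second broker (assuming it is 192.168.74.71, per the zookeeper.connect list above) the corresponding lines would look roughly like:
broker.id=2
listeners=PLAINTEXT://192.168.74.71:9092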

Start
/apps/kafka_2.13-3.5.1/bin/kafka-server-start.sh -daemon /apps/kafka_2.13-3.5.1/config/server.properties
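Because -daemon sends the broker to the background, it can help to tail the server log to confirm it started cleanly (log location assumed to be the default logs/ directory under the installation):
tail -n 50 /apps/kafka_2.13-3.5.1/logs/server.log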

Check the port
netstat -antp|grep 2181
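Port 2181 is ZooKeeper; to also check Kafka's own listener (9092, per the listeners setting above):
netstat -antp|grep 9092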


Create topic

/apps/kafka_2.13-3.5.1/bin/kafka-topics.sh --create --topic magedu --bootstrap-server 192.168.74.70:9092 --partitions 3 --replication-factor 2

List topics
root@ubuntu20:~# /apps/kafka_2.13-3.5.1/bin/kafka-topics.sh --bootstrap-server 192.168.74.70:9092 --list
magedu

Stop kafka
/apps/kafka_2.13-3.5.1/bin/kafka-server-stop.sh

Verify the topic
Description of the status output: there are three partitions, numbered 0, 1, and 2.
PartitionCount: 3 means the topic has 3 partitions.
ReplicationFactor: 2 means each partition has 2 replicas.
The leader of partition 0 is broker 1.

Replicas: the broker IDs that hold the replicas of the partition.
Isr: 1,3 means the in-sync (live) replicas are on brokers 1 and 3.

root@ubuntu20:~# /apps/kafka_2.13-3.5.1/bin/kafka-topics.sh  --describe   --bootstrap-server  192.168.74.70:9092
    Topic: magedu	TopicId: H4nV6WulTU-y_S4J2pHfOA	PartitionCount: 3	ReplicationFactor: 2	Configs: 
	Topic: magedu	Partition: 0	Leader: 1	Replicas: 1,3	Isr: 1,3
	Topic: magedu	Partition: 1	Leader: 1	Replicas: 2,1	Isr: 1
	Topic: magedu	Partition: 2	Leader: 3	Replicas: 3,2	Isr: 3

Broker 2 does not appear in any Isr above, which indicates node 2 is missing. Checking shows that Kafka on node 2 had not been started; after starting it, the output is normal:

Topic: magedu	TopicId: H4nV6WulTU-y_S4J2pHfOA	PartitionCount: 3	ReplicationFactor: 2	Configs: 
	Topic: magedu	Partition: 0	Leader: 1	Replicas: 1,3	Isr: 1,3
	Topic: magedu	Partition: 1	Leader: 1	Replicas: 2,1	Isr: 1,2
	Topic: magedu	Partition: 2	Leader: 3	Replicas: 3,2	Isr: 3,2

Describe a specific topic
/apps/kafka_2.13-3.5.1/bin/kafka-topics.sh --describe --bootstrap-server 192.168.74.70:9092 --topic luo

root@ubuntu20:~# ll /data/kafka/
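Each partition replica stored on this broker appears as its own sub-directory under log.dirs, named <topic>-<partition> (for example magedu-0); which directories are present depends on which replicas this node holds:
ls /data/kafka/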

Produce data
/apps/kafka_2.13-3.5.1/bin/kafka-console-producer.sh --broker-list 192.168.74.71:9092,192.168.74.70:9092 --topic magedu
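The console producer reads lines from stdin, so a few test messages can also be sent non-interactively (a small sketch reusing the same addresses):
echo "hello magedu" | /apps/kafka_2.13-3.5.1/bin/kafka-console-producer.sh --broker-list 192.168.74.71:9092,192.168.74.70:9092 --topic magedu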

Consume data
/apps/kafka_2.13-3.5.1/bin/kafka-console-consumer.sh --topic magedu --bootstrap-server 192.168.74.71:9092,192.168.74.70:9092 --from-beginning
--from-beginning: consume from the beginning of the log
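To consume as part of a named consumer group (so offsets are committed and partitions are shared among the group's members), the --group option can be added; the group name below is just an example:
/apps/kafka_2.13-3.5.1/bin/kafka-console-consumer.sh --topic magedu --bootstrap-server 192.168.74.70:9092 --group demo-group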

View the data with a GUI client
Create a connection.
Add the Kafka address configuration.
Change the key/value display type to string.
The data can then be viewed.

Summary

A message may be consumed once.
A message may be consumed multiple times -> used for data synchronization and distribution.
Most of the time a message is consumed only once.

Kafka usually runs as a cluster to achieve high availability.
topic: logically groups and stores records (records / logs).

Kafka partitions: to achieve high data availability, the data of a partition (for example partition 0) is distributed across different Kafka nodes. For each partition, one broker acts as the leader and another broker acts as the follower.

Sequential reads and writes.

Monitoring: check whether the port is open, and whether a curl to the URL returns 200.
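A minimal sketch of such a check from the shell (the host, port, and URL here are placeholders, not taken from the setup above):
nc -z 192.168.74.70 9092 && echo "port open"
curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:8161/    # expect 200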

ActiveMQ

Its AMQP port is the same as RabbitMQ's (5672).
Install
wget https://mirrors.tuna.tsinghua.edu.cn/apache/activemq/5.18.2/apache-activemq-5.18.2-bin.tar.gz
cp apache-activemq-5.18.2-bin.tar.gz /apps/
cd /apps/
tar -xf apache-activemq-5.18.2-bin.tar.gz
ln -s apache-activemq-5.18.2 activemq
vi activemq.xml    # no need to modify
cd activemq
./bin/linux-x86-64/activemq start    # start
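The same wrapper script should also be able to report whether the broker is running (status is one of the actions the bundled wrapper accepts):
./bin/linux-x86-64/activemq status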

netstat -antp|grep 8161
To make the web console reachable from other hosts, modify the bind address in jetty.xml to 0.0.0.0:
/usr/local/activemq/conf# vi jetty.xml
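One way to make that edit non-interactively (a sketch; it assumes jetty.xml still contains the default host value of 127.0.0.1, as shipped in recent ActiveMQ releases):
sed -i 's/127.0.0.1/0.0.0.0/' /usr/local/activemq/conf/jetty.xml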
Then visit:
http://120.77.146.92:8161/admin/


Origin blog.csdn.net/m0_37749659/article/details/132515963