10. Kafka Series: Kafka Connect in Practice

Kafka Connect is a tool for streaming data between Kafka and external systems. It provides a scalable, reliable, and efficient way to move data in and out of Kafka.

1. Key Advantages

1. Simple to use: Kafka Connect defines sources and sinks through configuration rather than custom code. It ships with many ready-made connectors, including JDBC, HDFS, and Elasticsearch, so data can be integrated into Kafka with little effort.

2. Scalability: Kafka Connect is extensible; new or custom connectors can be added to meet different needs. It also supports a distributed mode, so processing capacity can be scaled out easily.

3. Reliability: Kafka Connect provides rich error-handling and retry mechanisms to ensure data reliability and consistency. It also supports exactly-once semantics, ensuring each record is processed only once.

4. Efficiency: Kafka Connect uses Kafka itself as the transport layer, taking full advantage of Kafka's performance and scalability. It also supports incremental polling and batching, which improves throughput.

5. Manageability: Kafka Connect comes with rich monitoring and management tooling, making it easy to observe connector state and performance. It also exposes a REST API for managing connectors, as illustrated right after this list.
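As a quick illustration of that REST API, here are a few standard calls against a running Connect worker (the default REST port is 8083; the connector name jdbc-mysql-source is the one configured later in this post):

# List all deployed connectors
curl http://localhost:8083/connectors
# Check a connector's state and the state of its tasks
curl http://localhost:8083/connectors/jdbc-mysql-source/status
# Pause and resume a connector without deleting it
curl -X PUT http://localhost:8083/connectors/jdbc-mysql-source/pause
curl -X PUT http://localhost:8083/connectors/jdbc-mysql-source/resume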

2. Importing MySQL Data into Kafka

2.1 Download the kafka-connect-jdbc plugin: https://www.confluent.io/hub/confluentinc/kafka-connect-jdbc

Rename the extracted lib directory to libs.

One gotcha: the plugin does not bundle a MySQL 8 driver, so download mysql-connector-j-8.0.33.jar manually and place it in libs, as shown below.
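One way to fetch the driver is directly from Maven Central (this URL follows the standard repository layout for the com.mysql:mysql-connector-j:8.0.33 artifact):

curl -LO https://repo1.maven.org/maven2/com/mysql/mysql-connector-j/8.0.33/mysql-connector-j-8.0.33.jar
mv mysql-connector-j-8.0.33.jar libs/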

2.2 Write connect-mysql-source.properties
# Name of the connector
name=jdbc-mysql-source
# Connector plugin class to use
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
# MySQL connection URL
connection.url=jdbc:mysql://IP:3306/blog
connection.user=shenjian
connection.password=******
# Maximum number of tasks the connector may spawn
tasks.max=10
# mode=timestamp+incrementing
# bulk re-imports the whole table on every poll; incrementing and timestamp modes are also available
mode=bulk
# Table to read. Avoid names like t_blog: in my setup Kafka refused to create the resulting topic
table.whitelist=blog
# Name of the auto-increment column (used by the incrementing modes)
incrementing.column.name=id
# Prefix for the generated topic name (topic = prefix + table name)
topic.prefix=mysql-blog-
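For incremental capture instead of full-table reloads, the mode can be switched as sketched here. This assumes the blog table has the auto-increment id column from above plus a last-modified timestamp column; the column name update_time is an assumption for illustration, not something defined in this post:

# Fetch only rows whose auto-increment id or timestamp advanced since the last poll
mode=timestamp+incrementing
incrementing.column.name=id
# Hypothetical column name; replace with your table's actual timestamp column
timestamp.column.name=update_time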
2.3 Write connect-standalone.properties
# Change this to your own broker address
bootstrap.servers=192.168.1.6:30092
# Serialize record keys and values as JSON
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
# Embed the schema inside every JSON message
key.converter.schemas.enable=true
value.converter.schemas.enable=true
# Where standalone mode persists source offsets
offset.storage.file.filename=/tmp/connect.offsets
offset.flush.interval.ms=10000
# Directory containing the connector plugin and the MySQL driver
plugin.path=/opt/bitnami/kafka/libs
2.4 Write the Dockerfile

Initially I mounted libs and the .properties files into the container via K8S volume mounts, but that caused permission and other problems, so for now I bake them into the image instead:

FROM bitnami/kafka:3.3.1
# Bake the Connect configs and the plugin/driver jars into the image
COPY config/* /opt/bitnami/kafka/config/
COPY libs/* /opt/bitnami/kafka/libs/
# Switch to root temporarily to grant the non-root runtime user write access
USER 0
RUN chmod g+rwX /opt/bitnami

EXPOSE 9092
USER 1001
ENTRYPOINT [ "/opt/bitnami/scripts/kafka/entrypoint.sh" ]
CMD [ "/opt/bitnami/scripts/kafka/run.sh" ]

Build the image from the Dockerfile:

docker build -t shenjian/kafka:3.3.1 .

Update the image in kafka.yaml to:

image: shenjian/kafka:3.3.1

Then start Kafka and confirm it comes up without errors; a quick check is sketched below.
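A minimal sanity check, assuming the Deployment is named kafka in the middleware namespace as in the kubectl commands later in this post:

kubectl -n middleware get pods
kubectl -n middleware logs deploy/kafka --tail=50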

2.5 Start Connect
cd /opt/bitnami/kafka
bin/connect-standalone.sh config/connect-standalone.properties config/connect-mysql-source.properties
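Once the worker is up, the REST API shown earlier can confirm that the connector and its task are in the RUNNING state (standalone mode serves the API on port 8083 by default):

curl http://localhost:8083/connectors/jdbc-mysql-source/status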

2.6 Verify the data was written to the topic
$ kafka-topics.sh --describe --topic mysql-blog-blog --bootstrap-server localhost:9092
Topic: mysql-blog-blog  TopicId: rZg5-8B7TE2ekOVqglxdfg PartitionCount: 1       ReplicationFactor: 1    Configs: 
        Topic: mysql-blog-blog  Partition: 0    Leader: 1001    Replicas: 1001  Isr: 1001
kafka-console-consumer.sh --topic mysql-blog-blog --from-beginning --bootstrap-server localhost:9092
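Since schemas.enable=true is set for the JsonConverter, every consumed record is a JSON envelope with a schema part and a payload part. The output should look roughly like the line below; the field names depend on your blog table and are only illustrative here:

{"schema":{"type":"struct","fields":[{"type":"int64","optional":false,"field":"id"},{"type":"string","optional":true,"field":"title"}],"optional":false,"name":"blog"},"payload":{"id":1,"title":"..."}}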

3. Troubleshooting

1. After killing the Kafka process, the following error appeared: Failed to elect leader for partition __consumer_offsets-40 under strategy

[2023-05-07 09:57:50,400] ERROR [Controller id=1004 epoch=5] Controller 1004 epoch 5 failed to change state for partition __consumer_offsets-40 from OfflinePartition to OnlinePartition (state.change.logger)
kafka.common.StateChangeFailedException: Failed to elect leader for partition __consumer_offsets-40 under strategy OfflinePartitionLeaderElectionStrategy(false)
        at kafka.controller.ZkPartitionStateMachine.$anonfun$doElectLeaderForPartitions$7(PartitionStateMachine.scala:433)
        at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
        at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
        at kafka.controller.ZkPartitionStateMachine.doElectLeaderForPartitions(PartitionStateMachine.scala:430)
        at kafka.controller.ZkPartitionStateMachine.electLeaderForPartitions(PartitionStateMachine.scala:336)
        at kafka.controller.ZkPartitionStateMachine.doHandleStateChanges(PartitionStateMachine.scala:242)
        at kafka.controller.ZkPartitionStateMachine.handleStateChanges(PartitionStateMachine.scala:162)
        at kafka.controller.PartitionStateMachine.triggerOnlineStateChangeForPartitions(PartitionStateMachine.scala:76)
        at kafka.controller.PartitionStateMachine.triggerOnlinePartitionStateChange(PartitionStateMachine.scala:61)
        at kafka.controller.PartitionStateMachine.startup(PartitionStateMachine.scala:44)
        at kafka.controller.KafkaController.onControllerFailover(KafkaController.scala:270)
        at kafka.controller.KafkaController.elect(KafkaController.scala:1520)
        at kafka.controller.KafkaController.processStartup(KafkaController.scala:1427)
        at kafka.controller.KafkaController.process(KafkaController.scala:2601)
        at kafka.controller.QueuedEvent.process(ControllerEventManager.scala:52)
        at kafka.controller.ControllerEventManager$ControllerEventThread.process$1(ControllerEventManager.scala:130)
        at kafka.controller.ControllerEventManager$ControllerEventThread.$anonfun$doWork$1(ControllerEventManager.scala:133)
        at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
        at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:31)
        at kafka.controller.ControllerEventManager$ControllerEventThread.doWork(ControllerEventManager.scala:133)
        at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:96)

Cause: the newly added replica's offset is newer than the leader's, so leader election fails.
Fix: run the bundled preferred-replica-election script from the bin directory under the Kafka home path:

kafka-preferred-replica-election.sh --zookeeper zookeeper-pod.middleware:2181

The script was missing from this image, because it was deprecated and removed in Kafka 3.x (its replacement is sketched after the commands below). So I took the brute-force approach [for learning purposes only] and simply recreated everything:

kubectl delete deployment kafka -n middleware
kubectl delete deployment zookeeper-pod -n middleware
kubectl apply -f zookeeper.yaml
kubectl apply -f kafka.yaml 
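For completeness: on Kafka 3.x the old ZooKeeper-based script is replaced by kafka-leader-election.sh, which talks to the brokers directly. A sketch of triggering a preferred-leader election for all partitions:

bin/kafka-leader-election.sh --bootstrap-server localhost:9092 --election-type preferred --all-topic-partitions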

You are welcome to follow my WeChat official account 算法小生 to get in touch.


Reposted from blog.csdn.net/SJshenjian/article/details/130545983