Setting a Kafka username and password and verifying it with a Spring Boot test

Overview

The authentication configured in this article uses Kafka's SASL support, with the broker listening on the SASL_PLAINTEXT protocol (the comparison below notes which SASL mechanisms allow adding users dynamically).

Since version 0.9.0.0, the Kafka community has added a number of features to improve the security of Kafka clusters. Kafka offers two security mechanisms: SSL and SASL. The SSL approach is implemented mainly through CA certificates; this article focuses on the SASL approach.

1) SASL authentication mechanisms (a client-side configuration sketch follows this list):

Mechanism           Kafka version   Characteristics
SASL/PLAIN          0.10.0.0        Cannot add users dynamically
SASL/SCRAM          0.10.2.0        Can add users dynamically
SASL/Kerberos       0.9.0.0         Requires a separately deployed authentication (Kerberos) service
SASL/OAUTHBEARER    2.0.0           Requires implementing token creation and validation yourself, plus an external OAuth service

2) SSL encryption: use SSL to encrypt data transmitted between brokers and clients, between brokers, or between brokers and tools.
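
Whichever mechanism is chosen, a Kafka client always needs the same three settings: security.protocol, sasl.mechanism, and sasl.jaas.config. The sketch below uses the standard kafka-clients configuration keys; the class name, credentials, and broker address are illustrative placeholders, and the SCRAM branch additionally assumes the broker enables a SCRAM mechanism.

import java.util.Properties;

import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.common.config.SaslConfigs;

public class SaslClientProps {

    /**
     * Builds the three settings every SASL client needs.
     * useScram = false -> SASL/PLAIN  (users are fixed in the broker's JAAS file)
     * useScram = true  -> SASL/SCRAM  (users can be created at runtime with kafka-configs.sh,
     *                                  provided the broker also enables a SCRAM mechanism)
     */
    static Properties saslProps(String bootstrap, String user, String password, boolean useScram) {
        Properties props = new Properties();
        props.put(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG, bootstrap);
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
        props.put(SaslConfigs.SASL_MECHANISM, useScram ? "SCRAM-SHA-256" : "PLAIN");
        String loginModule = useScram
                ? "org.apache.kafka.common.security.scram.ScramLoginModule"
                : "org.apache.kafka.common.security.plain.PlainLoginModule";
        props.put(SaslConfigs.SASL_JAAS_CONFIG,
                loginModule + " required username=\"" + user + "\" password=\"" + password + "\";");
        return props;
    }
}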

Steps

1. Enter the Kafka directory and modify the Kafka configuration file config/server.properties, adding the following content.

vi config/server.properties

Modifications:

# ZooKeeper connection string used to manage Kafka
zookeeper.connect=localhost:2181

# Kafka listener (SASL_PLAINTEXT enables SASL user authentication; plain PLAINTEXT does not)
listeners=SASL_PLAINTEXT://:9092

# Address and port that Kafka registers with ZooKeeper; for remote access, change this to the public IP
# (SASL_PLAINTEXT enables SASL user authentication; plain PLAINTEXT does not)
advertised.listeners=SASL_PLAINTEXT://<your-public-ip>:9092

# Security protocol used for inter-broker communication
security.inter.broker.protocol=SASL_PLAINTEXT

# SASL mechanism
sasl.enabled.mechanisms=PLAIN
sasl.mechanism.inter.broker.protocol=PLAIN

# Authorizer class that enforces ACLs
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer

# Whether any operation is allowed when no ACL (access control list) entry is found
allow.everyone.if.no.acl.found=false

# Super users must be configured; here the "visitor" user is made a super user
super.users=User:visitor

The complete server.properties:

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from 
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
#listeners=PLAINTEXT://:9092

# Hostname and port the broker will advertise to producers and consumers. If not set, 
# it uses the value for "listeners" if configured.  Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9092

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600


############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/usr/local/kafka/kafka-logs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings  #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=localhost:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=18000


############################# Group Coordinator Settings #############################

# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0


# ZooKeeper connection string used to manage Kafka
zookeeper.connect=localhost:2181

# Kafka listener (SASL_PLAINTEXT enables SASL user authentication; plain PLAINTEXT does not)
listeners=SASL_PLAINTEXT://:9092

# Address and port that Kafka registers with ZooKeeper; for remote access, change this to the public IP
# (SASL_PLAINTEXT enables SASL user authentication; plain PLAINTEXT does not)
advertised.listeners=SASL_PLAINTEXT://<your-public-ip>:9092

# Security protocol used for inter-broker communication
security.inter.broker.protocol=SASL_PLAINTEXT

# SASL mechanism
sasl.enabled.mechanisms=PLAIN
sasl.mechanism.inter.broker.protocol=PLAIN

# Authorizer class that enforces ACLs
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer

# Whether any operation is allowed when no ACL (access control list) entry is found
allow.everyone.if.no.acl.found=false

# Super users must be configured; here the "visitor" user is made a super user
super.users=User:visitor
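
Once the broker is restarted with this configuration, a quick way to confirm that it both requires and accepts the visitor credentials is a small AdminClient program. This is only an illustrative sketch: it assumes the kafka-clients library is on the classpath and that the broker is reachable at localhost:9092, and it pre-creates the testTopic topic used in the console examples later (createTopics fails if the topic already exists).

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

public class SaslAdminCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // or the broker's public IP
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                        + "username=\"visitor\" password=\"qaz@123\";");

        try (AdminClient admin = AdminClient.create(props)) {
            // Creating a topic requires a successful SASL handshake, so this doubles as an
            // authentication check. "visitor" is the super user configured in server.properties.
            admin.createTopics(Collections.singleton(new NewTopic("testTopic", 1, (short) 1)))
                 .all().get();
            System.out.println("Topics: " + admin.listTopics().names().get());
        }
    }
}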

2. Create a new pass directory under the Kafka directory, create the configuration file in it, and fill in the following content.

Create a directory

mkdir pass

Create the kafka_server_jaas.conf file

vi pass/kafka_server_jaas.conf

Fill in the following content. The username and password entries are the credentials the broker itself uses for inter-broker connections, and each user_<name>="<password>" entry defines a client account, so this file defines a single user, visitor, with the password qaz@123.

KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="visitor"
    password="qaz@123"
    user_visitor="qaz@123";
};

3. In the Kafka directory, modify the Kafka startup script.

vi bin/kafka-server-start.sh

Add the following variable at the top of the file (after the license header)

export KAFKA_OPTS=" -Djava.security.auth.login.config=/usr/local/kafka2.12/pass/kafka_server_jaas.conf"

The complete script:

#!/bin/bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

export KAFKA_OPTS=" -Djava.security.auth.login.config=/usr/local/kafka2.12/pass/kafka_server_jaas.conf"

if [ $# -lt 1 ];
then
	echo "USAGE: $0 [-daemon] server.properties [--override property=value]*"
	exit 1
fi
base_dir=$(dirname $0)

if [ "x$KAFKA_LOG4J_OPTS" = "x" ]; then
    export KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/../config/log4j.properties"
fi

if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
    export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"
fi

EXTRA_ARGS=${EXTRA_ARGS-'-name kafkaServer -loggc'}

COMMAND=$1
case $COMMAND in
  -daemon)
    EXTRA_ARGS="-daemon "$EXTRA_ARGS
    shift
    ;;
  *)
    ;;
esac

exec $base_dir/kafka-run-class.sh $EXTRA_ARGS kafka.Kafka "$@"


Kafka producer and consumer client authentication

Note: if you only need to verify the Kafka connection from Spring Boot, this step can be skipped.

In the Kafka directory, create a kafka_client_jaas.conf file under the pass directory and write the following content.

KafkaClient {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="visitor"
    password="qaz@123";
};

Producer

1. Configure producer authentication: modify the console producer startup script and write the following content at the top.

vi bin/kafka-console-producer.sh

Write the following at the top

export KAFKA_OPTS=" -Djava.security.auth.login.config=/usr/local/kafka2.12/pass/kafka_client_jaas.conf"


2. Start the producer

bin/kafka-console-producer.sh --broker-list localhost:9092 --topic testTopic --producer-property security.protocol=SASL_PLAINTEXT --producer-property sasl.mechanism=PLAIN
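
The console producer above is equivalent to a plain Java producer that sets the same SASL properties in code instead of reading them from kafka_client_jaas.conf. A minimal sketch, assuming the kafka-clients dependency and a broker at localhost:9092 (the class name is illustrative):

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class SaslProducerDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        // The same three settings the console producer passes via --producer-property and the JAAS file.
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                        + "username=\"visitor\" password=\"qaz@123\";");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // If the credentials are wrong, the send fails with an authentication error.
            producer.send(new ProducerRecord<>("testTopic", "key", "hello from java"));
            producer.flush();
        }
    }
}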


Consumer

1. Configure consumer authentication: modify the console consumer startup script and write the following content at the top.

vi bin/kafka-console-consumer.sh

Write the following at the top

export KAFKA_OPTS=" -Djava.security.auth.login.config=/usr/local/kafka2.12/pass/kafka_client_jaas.conf"


2. Start the consumer

bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic testTopic --from-beginning --consumer-property security.protocol=SASL_PLAINTEXT --consumer-property sasl.mechanism=PLAIN
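
The matching Java consumer sets the same SASL properties; auto.offset.reset=earliest plays the role of --from-beginning for a new consumer group. Again a sketch only, with an assumed group id and broker address:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class SaslConsumerDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "sasl-demo-group");          // hypothetical consumer group
        props.put("auto.offset.reset", "earliest");        // same effect as --from-beginning
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                        + "username=\"visitor\" password=\"qaz@123\";");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singleton("testTopic"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
            }
        }
    }
}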


Verification in Spring Boot

1. Verify the username and password from Spring Boot. For the full Spring Boot project setup, refer to other blog posts.

Configuration file (application.yml)

#
#   Environment configuration
#
#   @author: tongyao
#
spring:
  kafka:
    bootstrap-servers: 82.157.190.245:1803

    # Username and password for SASL authentication
    properties:
      security.protocol: SASL_PLAINTEXT
      sasl.mechanism: PLAIN

      # SASL/PLAIN cannot add users dynamically
      #sasl.jaas.config: org.apache.kafka.common.security.plain.PlainLoginModule required username="visitor" password="qaz@123";

      # SASL/SCRAM can add users dynamically
      sasl.jaas.config: org.apache.kafka.common.security.scram.ScramLoginModule required username="visitor" password="qaz@123";

    producer:

      # Number of times a message is resent after an error
      retries: 0

      # When several messages are sent to the same partition, the producer puts them in the same batch.
      # This sets the amount of memory, in bytes, that a batch can use.
      batch-size: 16384

      # Size of the producer's memory buffer
      buffer-memory: 33554432

      # Key serializer
      key-serializer: org.apache.kafka.common.serialization.StringSerializer

      # Value serializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer

      # acks=0  : the producer does not wait for any response from the server before considering the write successful.
      # acks=1  : the producer receives a success response as soon as the partition leader has received the message.
      # acks=all: the producer receives a success response only after all in-sync replicas have received the message.
      acks: 1
    consumer:

      # Auto-commit interval. In Spring Boot 2.x this is a Duration and must use a format such as 1S, 1M, 2H, 5D
      auto-commit-interval: 1S

      # What the consumer does when it reads a partition with no committed offset, or an invalid offset:
      # latest (default): start from the newest records (those produced after the consumer started)
      # earliest: start from the beginning of the partition
      auto-offset-reset: earliest

      # Whether to commit offsets automatically (default true). To avoid duplicated or lost records,
      # set it to false and commit offsets manually.
      enable-auto-commit: false

      # Key deserializer
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer

      # Value deserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
    listener:

      # Number of threads running in the listener container
      concurrency: 5

      # The listener is responsible for acking; every acknowledgment is committed immediately
      ack-mode: manual_immediate
      missing-topics-fatal: false
server:
  port: 8899
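
To actually exercise this configuration, a small producer/listener pair is enough: sending one record proves the credentials are accepted, and the listener shows how the manual_immediate acknowledgment mode is used. This is a sketch rather than the original author's test code; the class and group names are made up, and it assumes spring-kafka is on the classpath and the testTopic topic exists on the broker.

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.CommandLineRunner;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.stereotype.Component;

@Component
public class KafkaSaslSmokeTest implements CommandLineRunner {

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    // Send one message at startup; if the username/password in application.yml is wrong,
    // the send fails with an authentication error instead of being delivered.
    @Override
    public void run(String... args) {
        kafkaTemplate.send("testTopic", "hello from spring boot");
    }

    // ack-mode is manual_immediate in application.yml, so the Acknowledgment parameter
    // is injected and each record must be acknowledged explicitly.
    @KafkaListener(topics = "testTopic", groupId = "springboot-demo")
    public void listen(String message, Acknowledgment ack) {
        System.out.println("received: " + message);
        ack.acknowledge();
    }
}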

Article reference: https://blog.csdn.net/guaotianxia/article/details/121094383
