Kafka 0.10.0.0 deployment with SASL_PLAINTEXT

The overall steps: add the Kafka JAAS configuration file, modify server.properties, and modify the startup .sh scripts.

1. security.inter.broker.protocol=SASL_PLAINTEXT must be configured

This property was not configured in 2.2, nor in 0.11, and both worked. In this version, Kafka cannot start without it, so add:

security.inter.broker.protocol=SASL_PLAINTEXT

The full set of SASL-related entries in server.properties:

listeners=SASL_PLAINTEXT://0.0.0.0:10092
sasl.enabled.mechanisms=PLAIN
sasl.mechanism.inter.broker.protocol=PLAIN
security.inter.broker.protocol=SASL_PLAINTEXT
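For the broker itself to pick up the JAAS file described in step (3) of the next section, kafka-server-start.sh also needs a KAFKA_OPTS export, mirroring the zookeeper-server-start.sh change shown later. A minimal sketch following the standard Kafka SASL setup (the path assumes the JAAS file sits in config/):

export KAFKA_OPTS="-Djava.security.auth.login.config=$base_dir/../config/kafka_server_jaas.conf"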

2. org.apache.kafka.common.KafkaException: Exception while loading Zookeeper JAAS login context 'Client'


In 0.11, the ZooKeeper client log showed a similar message:

WARN SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: './bin/…/config/kafka_server_jaas.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it. (org.apache.zookeeper.ClientCnxn)

but there it did not prevent Kafka from starting. 0.10.0.0 treats it as an error, and Kafka cannot start at all.

Solution: configure a username and password between Kafka and ZooKeeper.

  • (1) Add a JAAS configuration file for ZooKeeper (zk_server_jaas.conf):
Server {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    username="kafka"
    password="kafka"
    user_kafka="kafka";
};
  • (2) zookeeper-server-start.sh:
export KAFKA_OPTS="-Djava.security.auth.login.config=file:$base_dir/../config/zk_server_jaas.conf -Dzookeeper.authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider -Dzookeeper.requireClientAuthScheme=sasl"
  • (3) kafka_server_jaas.conf:
KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="admin"
    user_admin="admin";
};
Client {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    username="kafka"
    password="kafka";
};

Please note: both the ZooKeeper Server section and the Kafka Client section must use org.apache.zookeeper.server.auth.DigestLoginModule. The reason: ZooKeeper does not support SASL/PLAIN, but DIGEST-MD5 is quite similar. In these JAAS files, username/password are the credentials the process itself presents, while each user_<name>="<password>" entry defines an account that connecting clients may authenticate as.

3. java.io.IOException: Configuration Error: Line : expected [option key]

Cause: a missing semicolon at the end of the KafkaServer section. Every JAAS section, and the last option inside it, must be terminated with a semicolon.


The final kafka_server_jaas.conf:

KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="admin"
    user_admin="admin";
};
Client {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    username="kafka"
    password="kafka";
};

4. org.apache.zookeeper.KeeperException$InvalidACLException: KeeperErrorCode = InvalidACL for /brokers/ids

This happened because the following configuration had been added to server.properties. Reports found online say zookeeper.set.acl=true triggers this error, so these lines were commented out:

#super.users=User:kafka
#authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
#zookeeper.set.acl=true
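(These are the genuine 0.10 properties for enabling ACLs. Presumably zookeeper.set.acl=true failed here because it needs the broker-to-ZooKeeper SASL session to be fully working first, so leaving ACLs off until the rest of the setup is stable is a reasonable trade-off.)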

5. org.I0Itec.zkclient.exception.ZkException: org.apache.zookeeper.KeeperException$InvalidACLException: KeeperErrorCode = InvalidACL

This looks similar to the previous error. Searching online turned up the cause of the ZooKeeper log exceptions: they are not real errors, just user-level KeeperExceptions that Kafka does not treat as failures, and they can be ignored.

On the first startup after installation, the ZooKeeper log shows Error: KeeperErrorCode = NoNode for /config/topics/test. Kafka asks for that path before it exists, so ZooKeeper reports NoNode. Kafka then creates the topic and accesses the corresponding path again, and the ZooKeeper log reports NodeExists for /config/topics, because the path now already exists. In short, these messages are normal log noise and can be ignored.

https://stackoverflow.com/questions/43559328/got-user-level-keeperexception-when-processing

So I ignored it and continued. The topic was created successfully, but when I started a producer, the following problem appeared.

6. WARN Error while fetching metadata with correlation id 38 : {1001=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)

Error meaning: the partition leader is unavailable.
Analysis: this typically appears while topics are being deleted or while a leader election is still in progress.
Suggested fix: use the kafka-topics.sh script to check the leader information, verify that the brokers are alive, and try a restart (see the command sketch below).
Checking the leader with kafka-topics.sh showed nothing obviously wrong, but the logs contained leader-election failures, so I tried a restart.
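For reference, describing the topic in this version looks like this (host and port match this deployment):

./kafka-topics.sh --zookeeper 127.0.0.1:10181 --describe --topic 1001

The Leader column should show a live broker id for each partition; -1 means no leader has been elected yet.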
After the restart, the ZooKeeper log showed:

7. ERROR Missing AuthenticationProvider for sasl (org.apache.zookeeper.server.PrepRequestProcessor)

Set in zookeeper-server-start.sh:
-Dzookeeper.authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
-Dzookeeper.requireClientAuthScheme=sasl

The final export in zookeeper-server-start.sh:

export KAFKA_OPTS="-Djava.security.auth.login.config=file:$base_dir/../config/zk_server_jaas.conf -Dzookeeper.authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider -Dzookeeper.requireClientAuthScheme=sasl"

The error disappears and the leader is elected successfully.

8. consumer-property is not a recognized option

[root@aiot bin]# ./kafka-console-consumer-saal.sh --bootstrap-server 10.221.13.102:10092 --topic 1001 --consumer-property security.protocol=SASL_PLAINTEXT --consumer-property sasl.mechanism=PLAIN
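An aside on the script name: kafka-console-consumer-saal.sh is a locally modified copy of kafka-console-consumer.sh. For SASL it presumably exports a client JAAS file before launching the tool; a minimal sketch, where the file name kafka_client_jaas.conf and its contents are assumptions based on the standard Kafka SASL/PLAIN client setup:

export KAFKA_OPTS="-Djava.security.auth.login.config=$base_dir/../config/kafka_client_jaas.conf"

with kafka_client_jaas.conf containing:

KafkaClient {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="admin";
};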

The --consumer-property option works in 2.2; evidently this version does not support it, so check the official documentation for the corresponding release.

Official documentation:
https://kafka.apache.org/0100/documentation.html#quickstart_consume

Step 5: Start a consumer

Kafka also has a command line consumer that will dump out messages to standard output.

> bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning
This is a message
This is another message

If you have each of the above commands running in a different terminal then you should now be able to type messages into the producer terminal and see them appear in the consumer terminal. All of the command line tools have additional options; running the command with no arguments will display usage information documenting them in more detail.

You can view the parameters a script supports by running it without any arguments:

[root@aiot bin]# ./kafka-console-consumer.sh 
The console consumer is a tool that reads data from Kafka and outputs it to standard output.
Option                                  Description                            
------                                  -----------                            
--blacklist <blacklist>                 Blacklist of topics to exclude from    
                                          consumption.                         
--bootstrap-server <server to connect                                          
  to>                                                                          
--consumer.config <config file>         Consumer config properties file.       
--csv-reporter-enabled                  If set, the CSV metrics reporter will  
                                          be enabled                           
--delete-consumer-offsets               If specified, the consumer path in     
                                          zookeeper is deleted when starting up
--enable-systest-events                 Log lifecycle events of the consumer   
                                          in addition to logging consumed      
                                          messages. (This is specific for      
                                          system tests.)                       
--formatter <class>                     The name of a class to use for         
                                          formatting kafka messages for        
                                          display. (default: kafka.tools.      
                                          DefaultMessageFormatter)             
--from-beginning                        If the consumer does not already have  
                                          an established offset to consume     
                                          from, start with the earliest        
                                          message present in the log rather    
                                          than the latest message.             
--key-deserializer <deserializer for                                           
  key>                                                                         
--max-messages <Integer: num_messages>  The maximum number of messages to      
                                          consume before exiting. If not set,  
                                          consumption is continual.            
--metrics-dir <metrics directory>       If csv-reporter-enable is set, and     
                                          this parameter isset, the csv        
                                          metrics will be outputed here        
--new-consumer                          Use the new consumer implementation.   
--property <prop>                       The properties to initialize the       
                                          message formatter.                   
--skip-message-on-error                 If there is an error when processing a 
                                          message, skip it instead of halt.    
--timeout-ms <Integer: timeout_ms>      If specified, exit if no message is    
                                          available for consumption for the    
                                          specified interval.                  
--topic <topic>                         The topic id to consume on.            
--value-deserializer <deserializer for                                         
  values>                                                                      
--whitelist <whitelist>                 Whitelist of topics to include for     
                                          consumption.                         
--zookeeper <urls>                      REQUIRED: The connection string for    
                                          the zookeeper connection in the form 
                                          host:port. Multiple URLS can be      
                                          given to allow fail-over.    

Sure enough, it is not supported, but there is a --consumer.config parameter, so modify consumer.properties:

[root@at config]# cat consumer.properties 
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
# 
#    http://www.apache.org/licenses/LICENSE-2.0
# 
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# see kafka.consumer.ConsumerConfig for more details

# Zookeeper connection string
# comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002"
zookeeper.connect=127.0.0.1:10181

# timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000

#consumer group id
group.id=test-consumer-group

#consumer timeout
#consumer.timeout.ms=5000

security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN

The command becomes:
./kafka-console-consumer-saal.sh --zookeeper localhost:10181 --topic 1001 --consumer.config consumer.properties

Return: No brokers found in ZK.

9. No brokers found in ZK.

Someone online suggested:
Can you try the consumer without --zookeeper flag. If you are using the new consumer in 0.10.2 it's not needed. If you provide --zookeeper then it tries to use the old 0.8 consumer.

Simply removing --zookeeper will not work, because the usage output above marks it as REQUIRED. But the kafka-console-consumer.sh options also include --new-consumer, so the command becomes:

./kafka-console-consumer-saal.sh --bootstrap-server 127.0.0.1:10092 --topic 1001 --consumer.config …/config/consumer.properties --new-consumer

Finally, the consumer successfully receives the producer's messages.
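For completeness, a sketch of the matching producer side, launched through the same kind of SASL wrapper script (KAFKA_OPTS pointing at the client JAAS file). This assumes the version's console producer supports --producer.config (run the script with no arguments to verify, as above) and a producer.properties carrying the same two SASL entries as consumer.properties:

security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN

./kafka-console-producer.sh --broker-list 127.0.0.1:10092 --topic 1001 --producer.config ../config/producer.properties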

Official documentation: https://kafka.apachecn.org/documentation.html

Origin: blog.csdn.net/small_tu/article/details/109534634