Kafka SCRAM and PLAIN in Action

1. Overview

Kafka's ACL support currently works with multiple authentication mechanisms. Today I will cover the SCRAM and PLAIN mechanisms. The verification environment is as follows:

  • JDK:1.8
  • Kafka:2.3.0
  • Kafka Eagle:1.3.8

2. Content

2.1 PLAIN authentication

First, in the $KAFKA_HOME/config directory, create a text file named kafka_server_plain_jaas.conf with the following contents:

KafkaServer {
   org.apache.kafka.common.security.plain.PlainLoginModule required
   username="admin"
   password="admin-secret"
   user_admin="admin-secret"
   user_ke="ke";
};

Next, copy the startup script kafka-server-start.sh to kafka-server-plain-start.sh, and change its last line to:

# Add the authentication file
exec $base_dir/kafka-run-class.sh $EXTRA_ARGS -Djava.security.auth.login.config=$base_dir/../config/kafka_server_plain_jaas.conf kafka.Kafka "$@"

Then, copy server.properties to plain.properties, and edit the new broker configuration file plain.properties as follows:

# Protocol
listeners=SASL_PLAINTEXT://127.0.0.1:9092
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.enabled.mechanisms=PLAIN

# ACL
allow.everyone.if.no.acl.found=false
super.users=User:admin
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer

Finally, create a client authentication file, kafka_client_plain_jaas.conf, with the following contents:

KafkaClient {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="ke"
  password="ke";
};
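For a quick connectivity check, a client also needs the SASL protocol and mechanism set in its client configuration. A minimal sketch, assuming the file is saved as client_plain.properties (the file name is illustrative) and the JAAS file above is passed to the client JVM via KAFKA_OPTS:

```
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
```

For example, export KAFKA_OPTS="-Djava.security.auth.login.config=$KAFKA_HOME/config/kafka_client_plain_jaas.conf" first, then run ./kafka-console-producer.sh --broker-list 127.0.0.1:9092 --topic test_plain --producer.config ../config/client_plain.properties to produce as user ke.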

2.2 Starting the cluster with PLAIN authentication

2.2.1 Start Zookeeper

# Zookeeper configuration is relatively simple and is not covered here
zkServer.sh start

2.2.2 Start Kafka cluster

# In the bin directory of the Kafka installation
./kafka-server-plain-start.sh ../config/plain.properties &

2.2.3 Creating a topic

./kafka-topics.sh --create --zookeeper 127.0.0.1:2181/plain --replication-factor 1 --partitions 3 --topic test_plain 

2.2.4 Adding read and write permissions

# Add read permission
./kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=127.0.0.1:2181/plain --add --allow-principal User:ke --operation Read --topic test_plain
# Add write permission
./kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=127.0.0.1:2181/plain --add --allow-principal User:ke --operation Write --topic test_plain
# Add consumer group permission
./kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=127.0.0.1:2181/plain --add --allow-principal User:ke --operation Read --group g_plain_test
# List the ACLs
./kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=127.0.0.1:2181/plain --list

2.2.5 Results

2.3 SCRAM authentication

PLAIN authentication has one problem: users cannot be added dynamically. Every time a user is added, the running Kafka cluster must be restarted for the change to take effect, which makes this authentication method impractical for production business scenarios. SCRAM is different: with SCRAM authentication, users can be added dynamically, and a newly added user can authenticate without restarting the running Kafka cluster.

Create a new file, kafka_server_scram_jaas.conf, configured as follows:

KafkaServer {
  org.apache.kafka.common.security.scram.ScramLoginModule required
  username="admin"
  password="admin-secret";
};

Next, copy the startup script kafka-server-start.sh to kafka-server-scram-start.sh, and change its last line to:

# Add the authentication file
exec $base_dir/kafka-run-class.sh $EXTRA_ARGS -Djava.security.auth.login.config=$base_dir/../config/kafka_server_scram_jaas.conf kafka.Kafka "$@"

Then, in the $KAFKA_HOME/config directory, copy server.properties to scram.properties, and edit the new broker configuration file scram.properties as follows:

# Protocol
listeners=SASL_PLAINTEXT://dn1:9092
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256
sasl.enabled.mechanisms=SCRAM-SHA-256

# ACL
allow.everyone.if.no.acl.found=false
super.users=User:admin
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
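Clients of the SCRAM cluster need a matching client configuration. A minimal sketch, assuming the file name client_scram.properties (illustrative); note that since Kafka 0.10.2 the JAAS section can be supplied inline through the client-side sasl.jaas.config property, so a separate client JAAS file is not required:

```
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="ke" password="ke";
```

Pass this file to the console tools with --producer.config or --consumer.config when verifying the cluster.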

2.3.1 Start Zookeeper

# Zookeeper configuration is relatively simple and is not covered here
zkServer.sh start

2.3.2 Adding administrator and ordinary users

# Add the administrator user
./kafka-configs.sh --zookeeper 127.0.0.1:2181/scram --alter --add-config 'SCRAM-SHA-256=[password=admin-secret],SCRAM-SHA-512=[password=admin-secret]' --entity-type users --entity-name admin
# Add an ordinary user (ke)
./kafka-configs.sh --zookeeper 127.0.0.1:2181/scram --alter --add-config 'SCRAM-SHA-256=[iterations=8192,password=ke],SCRAM-SHA-512=[password=ke]' --entity-type users --entity-name ke
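The iterations parameter above controls how many PBKDF2 rounds are used when deriving the SCRAM credential: SCRAM (RFC 5802) defines SaltedPassword = Hi(password, salt, i), where Hi is PBKDF2 with HMAC as the PRF, and Kafka stores the derived keys (not the plaintext password) in ZooKeeper. A minimal Python sketch of the derivation; the salt here is made up for illustration, as Kafka generates a random salt per credential:

```python
import base64
import hashlib
import hmac

def scram_salted_password(password: str, salt: bytes, iterations: int,
                          algo: str = "sha256") -> bytes:
    # SCRAM's Hi() (RFC 5802) is PBKDF2 with HMAC-<algo> as the PRF.
    return hashlib.pbkdf2_hmac(algo, password.encode("utf-8"), salt, iterations)

def scram_keys(salted: bytes, algo: str = "sha256"):
    # ClientKey = HMAC(SaltedPassword, "Client Key"); StoredKey = H(ClientKey)
    client_key = hmac.new(salted, b"Client Key", algo).digest()
    stored_key = hashlib.new(algo, client_key).digest()
    # ServerKey = HMAC(SaltedPassword, "Server Key")
    server_key = hmac.new(salted, b"Server Key", algo).digest()
    return stored_key, server_key

salt = b"0123456789abcdef"  # illustrative only; Kafka picks a random salt
salted = scram_salted_password("ke", salt, 8192)
stored_key, server_key = scram_keys(salted)
print(base64.b64encode(stored_key).decode())
```

Because only StoredKey and ServerKey are persisted, adding or removing a user is just a metadata write, which is why no broker restart is needed.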

2.3.3 Starting the cluster with SCRAM authentication

./kafka-server-scram-start.sh ../config/scram.properties &

2.3.4 Creating a topic

./kafka-topics.sh --create --zookeeper 127.0.0.1:2181/scram --replication-factor 1 --partitions 3 --topic test_scram

2.3.5 Adding permissions

# Add read permission
./kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=127.0.0.1:2181/scram --add --allow-principal User:ke --operation Read --topic test_scram
# Add write permission
./kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=127.0.0.1:2181/scram --add --allow-principal User:ke --operation Write --topic test_scram
# Add consumer group permission
./kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=127.0.0.1:2181/scram --add --allow-principal User:ke --operation Read --group g_scram_test
# List the ACLs
./kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=127.0.0.1:2181/scram --list

2.3.6 Results

3. Kafka permission levels

Kafka permissions are scoped to four resource types: Topic, Group, Cluster, and TransactionalId. The operations available for each resource type are as follows:

Resource         Operations
Topic            Read, Write, Describe, Delete, DescribeConfigs, AlterConfigs, All
Group            Read, Describe, All
Cluster          Create, ClusterAction, DescribeConfigs, AlterConfigs, IdempotentWrite, Alter, Describe, All
TransactionalId  Describe, Write, All

For example, when collecting topic capacity statistics, if a "Cluster authorization failed" exception is thrown, the Describe operation has not been granted at the Cluster level. Grant it with the following command:

./kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=127.0.0.1:2181/scram --add --allow-principal User:ke --operation Describe --cluster 

4. Integrating Kafka Eagle with SCRAM authentication

So how do we use Kafka Eagle to monitor a Kafka cluster with SCRAM authentication enabled? Visit http://www.kafka-eagle.org/ , download the package, unpack it, and configure it as follows:

# Configuration for the SCRAM-authenticated Kafka cluster to be monitored (alias cluster1)
cluster1.kafka.eagle.sasl.enable=true
cluster1.kafka.eagle.sasl.protocol=SASL_PLAINTEXT
cluster1.kafka.eagle.sasl.mechanism=SCRAM-SHA-256
cluster1.kafka.eagle.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="ke" password="ke";
# The client id can be left empty if you do not need one
cluster1.kafka.eagle.sasl.client.id=

Then run ke.sh start to boot the Kafka Eagle monitoring system.

4.1 Topic Preview

4.2 KSQL Query Topic

Execute the following SQL statement:

select * from "test_scram" where "partition" in (0) limit 1

Execution results are as follows:

5. Summary

In a production environment, users may be added or removed as business needs change, so user management must be dynamic. Moreover, a production Kafka cluster cannot be restarted casually. This makes SCRAM authentication a very good fit for Kafka.

6. Conclusion

That is all for this post. If you have any questions while studying or doing research, you can join the discussion group or send me an email, and I will do my best to answer your questions. Let us encourage each other!

In addition, I have written the books "Kafka Is Not Hard to Learn" and "Hadoop Big Data Mining: From Beginner to Advanced Practice". Friends and classmates who like them can purchase them through the links on my bulletin board; thank you for your support. Follow the public account below and follow the prompts to get free instructional videos.


Origin www.cnblogs.com/smartloli/p/11404610.html