Kafka Configuration 3: Configure a Kafka Cluster under Windows

 

Kafka Configuration 1: Install and Configure Kafka in a Windows Environment

Kafka Configuration 2: Configure Kafka SASL-PLAIN Authentication under Windows

Kafka Configuration 3: Configure a Kafka Cluster under Windows

Kafka Configuration 4: Configure Kafka SSL Certificates under Windows

Kafka Configuration 5: Kafka Cluster + SASL + SSL under Windows

Kafka Configuration 6: Setting and Adding SASL Users and User Permissions under Windows

1. Zookeeper configuration
    Here we take configuring 3 Zookeeper instances on the same server as an example; install Zookeeper into 3 separate directories,
    such as:
        D:\Net_Program\Net_Zookeeper
        D:\Net_Program\Net_Zookeeper2
        D:\Net_Program\Net_Zookeeper3

    1.1. Edit zoo.cfg
        Open the zoo.cfg file (for the first instance) and modify or add the following configuration:

# Data directory
dataDir=D:/Net_Program/Net_Zookeeper/data-file
# Log directory
dataLogDir=D:/Net_Program/Net_Zookeeper/data-log
# Client port
clientPort=2181

# Cluster servers
server.1=192.168.2.200:2881:3881
server.2=192.168.2.200:2882:3882
server.3=192.168.2.200:2883:3883

        Note:
            The 1 in server.1 is the unique ID of that Zookeeper server; the value must be between 1 and 255
            192.168.2.200 is the IP address of each cluster server
            2881, 2882 and 2883 are the ports used for communication with the cluster leader
            3881, 3882 and 3883 are the ports used for leader election

        The configuration for the second and third servers is as follows:
            Second:

# Data directory
dataDir=D:/Net_Program/Net_Zookeeper2/data-file
# Log directory
dataLogDir=D:/Net_Program/Net_Zookeeper2/data-log
# Client port
clientPort=2182

# Cluster servers
server.1=192.168.2.200:2881:3881
server.2=192.168.2.200:2882:3882
server.3=192.168.2.200:2883:3883

                
            Third:

# Data directory
dataDir=D:/Net_Program/Net_Zookeeper3/data-file
# Log directory
dataLogDir=D:/Net_Program/Net_Zookeeper3/data-log
# Client port
clientPort=2183

# Cluster servers
server.1=192.168.2.200:2881:3881
server.2=192.168.2.200:2882:3882
server.3=192.168.2.200:2883:3883

                
        Format: server.A=B:C:D
            A: a number identifying the server
            B: the server's IP address
            C: the port this server uses to exchange information with the leader of the cluster
            D: the port used for leader election when the leader goes down
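            For example, in server.1=192.168.2.200:2881:3881, A is 1, B is 192.168.2.200, C is 2881 and D is 3881.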

    1.2. Create the myid file
        Because we are building a cluster, each instance needs a myid file.
        We have already specified the data directory in zoo.cfg, e.g. dataDir=D:/Net_Program/Net_Zookeeper/data-file,
        so we only need to create a file named myid in each data-file folder whose content is the 1, 2 or 3 from server.1, server.2, server.3 mentioned in 1.1,

        namely:
            Create a file named myid (no extension) in D:\Net_Program\Net_Zookeeper\data-file with content 1
            Create a file named myid (no extension) in D:\Net_Program\Net_Zookeeper2\data-file with content 2
            Create a file named myid (no extension) in D:\Net_Program\Net_Zookeeper3\data-file with content 3
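        For reference, one way to create these files from a CMD prompt is shown below (a sketch that assumes the data-file folders already exist; the parentheses keep a trailing space out of the file):

(echo 1)> D:\Net_Program\Net_Zookeeper\data-file\myid
(echo 2)> D:\Net_Program\Net_Zookeeper2\data-file\myid
(echo 3)> D:\Net_Program\Net_Zookeeper3\data-file\myid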

    1.3. Start the services
        Run CMD as administrator (open 3 windows, one for each Zookeeper service), change to the bin directory under each of the 3 Zookeeper installation directories, and run the zkServer command in each window to start the 3 Zookeeper services.
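        For reference, the three windows would run something like this (paths follow the installation directories above):

cd /d D:\Net_Program\Net_Zookeeper\bin
zkServer.cmd

cd /d D:\Net_Program\Net_Zookeeper2\bin
zkServer.cmd

cd /d D:\Net_Program\Net_Zookeeper3\bin
zkServer.cmd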
        
        Then we can use the following commands to test whether the 3 Zookeeper services are up; if the ports are open, the startup was successful:
        nc -vz 192.168.2.200 2181
        nc -vz 192.168.2.200 2182
        nc -vz 192.168.2.200 2183
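        Note that nc is not shipped with Windows by default; if it is unavailable, PowerShell's Test-NetConnection performs the same port check, for example:

Test-NetConnection -ComputerName 192.168.2.200 -Port 2181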

        
        Here we can also register the 3 Zookeeper services as Windows system services.

    Summary:
        The configuration process above is called "cluster mode", that is, one Zookeeper instance is configured on each of three servers (the example above is demonstrated on a single machine).
        There is another approach called "cluster pseudo-distributed mode", that is, 3 Zookeeper instances are configured from a single installation on the same server.

    1.4. Cluster pseudo-distributed mode
        1.4.1. Modify the configuration files
            Copy zoo.cfg three times into the same conf directory, name the copies zoo1.cfg, zoo2.cfg and zoo3.cfg, and modify the three files as follows:
                zoo1.cfg:

# Data directory
dataDir=D:/Net_Program/Net_Zookeeper/data-file/data1
# Log directory
dataLogDir=D:/Net_Program/Net_Zookeeper/data-log/log1
# Client port
clientPort=2181
# Cluster servers
server.1=192.168.2.200:2881:3881
server.2=192.168.2.200:2882:3882
server.3=192.168.2.200:2883:3883

                
                zoo2.cfg:

# Data directory
dataDir=D:/Net_Program/Net_Zookeeper/data-file/data2
# Log directory
dataLogDir=D:/Net_Program/Net_Zookeeper/data-log/log2
# Client port
clientPort=2182
# Cluster servers
server.1=192.168.2.200:2881:3881
server.2=192.168.2.200:2882:3882
server.3=192.168.2.200:2883:3883

                
                zoo3.cfg:

# Data directory
dataDir=D:/Net_Program/Net_Zookeeper/data-file/data3
# Log directory
dataLogDir=D:/Net_Program/Net_Zookeeper/data-log/log3
# Client port
clientPort=2183
# Cluster servers
server.1=192.168.2.200:2881:3881
server.2=192.168.2.200:2882:3882
server.3=192.168.2.200:2883:3883

                    
        1.4.2. Modify zkServer.cmd
            In the bin directory, copy zkServer.cmd three times into the same directory, name the copies zkServer1.cmd, zkServer2.cmd and zkServer3.cmd, and add the following line to each file:
                zkServer1.cmd:

set ZOOCFG=..\conf\zoo1.cfg

               
                zkServer2.cmd:

set ZOOCFG=..\conf\zoo2.cfg

                
                zkServer3.cmd:

set ZOOCFG=..\conf\zoo3.cfg
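        A note on placement (an assumption based on the stock zkServer.cmd, which derives %ZOOCFG% via zkEnv.cmd): the override only takes effect if it appears after the call to zkEnv.cmd, for example in zkServer1.cmd:

call "%~dp0zkEnv.cmd"
rem Added line: override the config file chosen by zkEnv.cmd
set ZOOCFG=..\conf\zoo1.cfg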




        1.4.3. Create the myid files
            Refer to 1.2; the content is very simple, as shown below:
                Create a file named myid (no extension) in D:\Net_Program\Net_Zookeeper\data-file\data1 with content 1
                Create a file named myid (no extension) in D:\Net_Program\Net_Zookeeper\data-file\data2 with content 2
                Create a file named myid (no extension) in D:\Net_Program\Net_Zookeeper\data-file\data3 with content 3

        1.4.4. Start the services
            Same as the steps described in 1.3, except that each window runs zkServer1.cmd, zkServer2.cmd or zkServer3.cmd respectively.

 

2. Kafka configuration
    Here we take configuring 3 Kafka instances on the same server as an example; install Kafka into 3 separate directories,
    such as:
        D:\Net_Program\Net_Kafka
        D:\Net_Program\Net_Kafka2
        D:\Net_Program\Net_Kafka3

        
    2.1. Edit server.properties
        Open the server.properties file (for the first broker) and modify or add the following configuration:

# Path where Kafka message data is stored
log.dirs=D:/Net_Program/Net_Kafka/kafka-data
# Unique broker ID
broker.id=0
host.name=192.168.2.200
# Listening port
port=9092
# IP addresses and ports of the 3 Zookeeper servers
zookeeper.connect=192.168.2.200:2181,192.168.2.200:2182,192.168.2.200:2183

            
        The second and third server configurations are as follows:
            Second:

# Path where Kafka message data is stored
log.dirs=D:/Net_Program/Net_Kafka2/kafka-data
# Unique broker ID
broker.id=1
host.name=192.168.2.200
# Listening port
port=9093
# IP addresses and ports of the 3 Zookeeper servers
zookeeper.connect=192.168.2.200:2181,192.168.2.200:2182,192.168.2.200:2183

            
            Third:

# Path where Kafka message data is stored
log.dirs=D:/Net_Program/Net_Kafka3/kafka-data
# Unique broker ID
broker.id=2
host.name=192.168.2.200
# Listening port
port=9094
# IP addresses and ports of the 3 Zookeeper servers
zookeeper.connect=192.168.2.200:2181,192.168.2.200:2182,192.168.2.200:2183


        Note:
            If the above 3 brokers are configured with SASL authentication, the listeners and their ports also need to be set as follows:
                First:
                    listeners=SASL_PLAINTEXT://192.168.2.200:9092
                    advertised.listeners=SASL_PLAINTEXT://192.168.2.200:9092

                Second:
                    listeners=SASL_PLAINTEXT://192.168.2.200:9093
                    advertised.listeners=SASL_PLAINTEXT://192.168.2.200:9093

                Third:
                    listeners=SASL_PLAINTEXT://192.168.2.200:9094
                    advertised.listeners=SASL_PLAINTEXT://192.168.2.200:9094

                    
            When starting one of the Kafka services you may get the error shown below. Because of the log.dirs setting, a meta.properties file is generated in the kafka-data folder,
            and it records broker.id=0, which does not match the broker.id=1 configured in server.properties for the second Kafka instance being started.
            The reason I hit this error is that during the test installation all the files in D:\Net_Program\Net_Kafka were copied directly to D:\Net_Program\Net_Kafka2,
            so a meta.properties file with broker.id=0 already existed in kafka-data from the start; with a brand-new configuration this error does not occur.
            The solution is simply to delete everything inside the kafka-data folder.

[2020-03-25 12:28:34,758] ERROR Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
kafka.common.InconsistentBrokerIdException: Configured broker.id 1 doesn't match stored broker.id 0 in meta.properties. If you moved your data, make sure your configured broker.id matches. If you intend to create a new broker, you should remove all data in your data directories (log.dirs).
        at kafka.server.KafkaServer.getOrGenerateBrokerId(KafkaServer.scala:762)
        at kafka.server.KafkaServer.startup(KafkaServer.scala:223)
        at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:44)
        at kafka.Kafka$.main(Kafka.scala:84)
        at kafka.Kafka.main(Kafka.scala)


                        
    2.2. Start the services
        Run CMD as administrator (open 3 windows, one for each Kafka service), change to each of the 3 Kafka installation directories, and run the command .\bin\windows\kafka-server-start.bat .\config\server.properties in each window to start the 3 Kafka services.
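        For reference, the three windows would run something like this:

cd /d D:\Net_Program\Net_Kafka
.\bin\windows\kafka-server-start.bat .\config\server.properties

cd /d D:\Net_Program\Net_Kafka2
.\bin\windows\kafka-server-start.bat .\config\server.properties

cd /d D:\Net_Program\Net_Kafka3
.\bin\windows\kafka-server-start.bat .\config\server.properties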
        
        Then we can use the following commands to test whether the 3 Kafka services are up; if the ports are open, the startup was successful:
        nc -vz 192.168.2.200 9092
        nc -vz 192.168.2.200 9093
        nc -vz 192.168.2.200 9094

        
        Here we can also register the 3 Kafka services as Windows system services.

 

3. Command configuration
    At this point, the Kafka cluster environment is set up.
    Since we are now operating on a cluster, the Kafka commands used previously change accordingly, for example:
        Create a topic:
            Stand-alone mode:

.\bin\windows\kafka-topics.bat --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic TestTopic1
or
kafka-topics --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic TestTopic1

            Cluster mode:

.\bin\windows\kafka-topics.bat --create --zookeeper localhost:2181,localhost:2182,localhost:2183 --replication-factor 1 --partitions 1 --topic TestTopic1
or
kafka-topics --create --zookeeper localhost:2181,localhost:2182,localhost:2183 --replication-factor 1 --partitions 1 --topic TestTopic1

        
        Query topics:
            Stand-alone mode:

.\bin\windows\kafka-topics.bat --zookeeper localhost:2181 --list
or
kafka-topics --zookeeper localhost:2181 --list

            Cluster mode:

.\bin\windows\kafka-topics.bat --zookeeper localhost:2181,localhost:2182,localhost:2183 --list
or
kafka-topics --zookeeper localhost:2181,localhost:2182,localhost:2183 --list

                
    Other commands (such as setting user read/write permissions, group permissions, etc.) work the same way, for example:
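    A hypothetical sketch, based on the SASL permission commands described in the earlier article in this series (the principal, topic and group names are the ones used in the test below); simply list all Zookeeper addresses:

.\bin\windows\kafka-acls.bat --authorizer-properties zookeeper.connect=localhost:2181,localhost:2182,localhost:2183 --add --allow-principal User:quber1 --operation Read --topic TestTopic1 --group TestGroup1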

 

4. Test

    Here we take C# as an example, using the Confluent.Kafka library to produce and consume Kafka messages.

    Start one producer and two consumers. The producer's SASL account and password are quber and quber123456 (quber has read and write permissions). The first consumer's SASL account and password are quber1 and quber123456 (quber1 has read-only permission and its group is TestGroup1). The second consumer's SASL account and password are quber2 and quber123456 (quber2 has read and write permissions and its group is TestGroup2). Both consumers consume the topic TestTopic1.

    Note that the two consumer accounts quber1 and quber2 must be in different groups when consuming the topic TestTopic1, otherwise the consumer that starts later will not receive any messages. This is also mentioned in the earlier article "Kafka Configuration 2: Configure Kafka SASL-PLAIN Authentication under Windows" and deserves special attention.
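    For reference, a minimal sketch of the producer and a consumer using Confluent.Kafka is shown below. The broker list, SASL settings, accounts, topic and groups follow the description above; the class and method names are illustrative assumptions, not the exact code used in the article's test.

using System;
using System.Threading;
using Confluent.Kafka;

class KafkaClusterTest
{
    // Brokers and credentials taken from the configuration above (SASL/PLAIN cluster assumed).
    const string Brokers = "192.168.2.200:9092,192.168.2.200:9093,192.168.2.200:9094";

    // Producer: account quber (read and write permissions).
    static void Produce()
    {
        var config = new ProducerConfig
        {
            BootstrapServers = Brokers,
            SecurityProtocol = SecurityProtocol.SaslPlaintext,
            SaslMechanism = SaslMechanism.Plain,
            SaslUsername = "quber",
            SaslPassword = "quber123456"
        };

        using var producer = new ProducerBuilder<Null, string>(config).Build();
        var result = producer.ProduceAsync("TestTopic1",
            new Message<Null, string> { Value = "Hello from quber" }).GetAwaiter().GetResult();
        Console.WriteLine($"Delivered to {result.TopicPartitionOffset}");
    }

    // Consumer: e.g. user = "quber1" with group = "TestGroup1", or user = "quber2" with group = "TestGroup2".
    static void Consume(string user, string group, CancellationToken token)
    {
        var config = new ConsumerConfig
        {
            BootstrapServers = Brokers,
            SecurityProtocol = SecurityProtocol.SaslPlaintext,
            SaslMechanism = SaslMechanism.Plain,
            SaslUsername = user,
            SaslPassword = "quber123456",
            GroupId = group,
            AutoOffsetReset = AutoOffsetReset.Earliest
        };

        using var consumer = new ConsumerBuilder<Ignore, string>(config).Build();
        consumer.Subscribe("TestTopic1");
        while (!token.IsCancellationRequested)
        {
            var cr = consumer.Consume(token);
            Console.WriteLine($"{user} received: {cr.Message.Value}");
        }
    }
}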

    The test results are shown in the figure below:

 

5. Reference documents
    Kafka cluster setup (Windows environment): https://www.cnblogs.com/lentoo/p/7785004.html


Origin: blog.csdn.net/qubernet/article/details/105094601