Rapid deployment of Kafka in a K8S environment (accessible from outside K8S)

How to deploy quickly

  1. With Helm, Kafka can be deployed in just a few steps;
  2. Both Kafka and ZooKeeper need persistent storage; if a StorageClass is prepared in advance, provisioning it is very simple.

Reference article

This walkthrough assumes K8S, Helm, NFS, and StorageClass are already available; please refer to the following articles for their installation and use:

  1. "Kubespray2.11 install kubernetes1.15"
  2. "Deploy and Experience Helm (Version 2.16.1)"
  3. "Install and use NFS in Ubuntu16 environment"
  4. "K8S uses NFS of Synology DS218 +"
  5. "K8S StorageClass combat (NFS)"

Environmental information

The version information of the actual operating system and software is as follows:

  1. Kubernetes: 1.15
  2. Kubernetes host: CentOS Linux release 7.7.1908
  3. NFS service: IP address 192.168.50.135, directory /volume1/nfs-storageclass-test
  4. Helm: 2.16.1
  5. Kafka: 2.0.1
  6. ZooKeeper: 3.5.5

Before starting, please have K8S, Helm, NFS, and a StorageClass ready;

Deployment steps

  1. Add the Helm repository that contains Kafka: helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
  2. Download the Kafka chart: helm fetch incubator/kafka
  3. After the download succeeds there is an archive in the current directory, kafka-0.20.8.tgz; unpack it: tar -zxvf kafka-0.20.8.tgz
  4. Enter the unpacked kafka directory and edit the values.yaml file; the specific changes are as follows:
  5. First, the Kafka service must be reachable from outside K8S, so set the value of external.enabled to true;
  6. Find configurationOverrides; two of the override entries there are commented out by default, so remove the comment markers. If you have set up cross-network access to Kafka before, you will understand why the K8S host IP is written into these overrides;
  7. Next set the data volume: find persistence, adjust the size as needed, and set the name of the StorageClass prepared earlier;
  8. Then configure ZooKeeper's data volume in the same way;
  9. With configuration complete, start the deployment; first create the namespace: kubectl create namespace kafka-test
  10. Run this in the kafka directory (note the trailing dot, which points at the current chart directory): helm install --name-template kafka -f values.yaml . --namespace kafka-test
  11. If the configuration above is correct, the console prints the release information for the deployment;
  12. Kafka depends on ZooKeeper to start, so the full startup takes several minutes; during this time you can watch the ZooKeeper and Kafka pods come up one by one with kubectl get pods -n kafka-test;
  13. View the services: kubectl get services -n kafka-test; the external NodePort services show that Kafka can be reached from outside at hostIP:31090, hostIP:31091, and hostIP:31092;
  14. Check the Kafka version: kubectl exec kafka-0 -n kafka-test -- sh -c 'ls /usr/share/java/kafka/kafka_*.jar'; the jar file name shows Scala version 2.11 and Kafka version 2.0.1;
  15. Now that Kafka has started successfully, let's verify that the service works properly.
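Putting steps 5 to 8 together, the relevant fragments of values.yaml look roughly like this. This is a sketch: the sizes, the StorageClass name managed-nfs-storage, and the host IP are this article's values, and the exact keys follow the incubator/kafka chart version 0.20.8, so other chart versions may differ:

```yaml
external:
  enabled: true   # expose brokers outside K8S via NodePorts 31090/31091/31092

configurationOverrides:
  # These two entries are commented out in the stock values.yaml; uncomment them
  # so external clients receive a reachable address (192.168.50.135 is the K8S host IP)
  "advertised.listener": |-
    EXTERNAL://192.168.50.135:$((31090 + ${KAFKA_BROKER_ID}))
  "listener.security.protocol.map": |-
    PLAINTEXT:PLAINTEXT,EXTERNAL:PLAINTEXT

persistence:
  enabled: true
  size: "1Gi"                          # adjust to your needs
  storageClass: "managed-nfs-storage"  # the StorageClass prepared in advance

zookeeper:
  persistence:
    enabled: true
    size: "1Gi"
    storageClass: "managed-nfs-storage"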

Exposing ZooKeeper

  1. To operate Kafka remotely it is sometimes necessary to connect to ZooKeeper, so ZooKeeper must be exposed as well;
  2. Create a file named zookeeper-nodeport-svc.yaml with the following content:
apiVersion: v1
kind: Service
metadata:
  name: zookeeper-nodeport
  namespace: kafka-test
spec:
  type: NodePort
  ports:
    - port: 2181
      nodePort: 32181
  selector:
    app: zookeeper
    release: kafka
  3. Apply it: kubectl apply -f zookeeper-nodeport-svc.yaml
  4. Check the services again; ZooKeeper is now reachable from outside at hostIP:32181.
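Once you have a Kafka distribution on hand (downloaded in the next section), a quick way to confirm the exposed ZooKeeper works is the bundled zookeeper-shell.sh; this sketch uses this article's host IP and NodePort:

```shell
# List the broker ids registered in ZooKeeper through the NodePort service;
# a healthy three-broker cluster should report ids 0, 1 and 2
./zookeeper-shell.sh 192.168.50.135:32181 ls /brokers/ids
```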

Verify the Kafka service

Find a computer, install the Kafka distribution on it, and you can use the bundled command-line tools to connect to and operate the Kafka running in K8S:

  1. Go to Kafka's download page: http://kafka.apache.org/downloads ; we just confirmed Scala version 2.11 and Kafka version 2.0.1, so download the matching release
  2. After downloading and unpacking, enter the directory kafka_2.11-2.0.1/bin
  3. List the current topics:
./kafka-topics.sh --list --zookeeper 192.168.50.135:32181

The list is empty at this point.
4. Create a topic:

./kafka-topics.sh --create --zookeeper 192.168.50.135:32181 --replication-factor 1 --partitions 1 --topic test001

After the creation succeeds, listing the topics again shows test001.
5. Describe the topic named test001:

./kafka-topics.sh --describe --zookeeper 192.168.50.135:32181 --topic test001
6. Start an interactive console producer:

./kafka-console-producer.sh --broker-list 192.168.50.135:31090 --topic test001

In interactive mode, type any string and press Enter; the current line is sent as a message.
7. Open another window and run the console consumer:

./kafka-console-consumer.sh --bootstrap-server 192.168.50.135:31090 --topic test001 --from-beginning
8. Open yet another window and list the consumer groups:

./kafka-consumer-groups.sh --bootstrap-server 192.168.50.135:31090 --list

In this run the group id is console-consumer-21022 (the console consumer generates a random group id, so yours will differ).
9. Describe the consumption state of the group console-consumer-21022:

./kafka-consumer-groups.sh --group console-consumer-21022 --describe --bootstrap-server 192.168.50.135:31090

With that, the basic functions of a remote Kafka connection, viewing topics and sending and receiving messages, all work as expected, which proves the deployment is successful.
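The manual steps above can also be sketched as a single smoke-test script. The host IP, ports, and topic naming are this article's values; adjust them to your own environment:

```shell
#!/bin/sh
# End-to-end smoke test against the externally exposed Kafka, run from kafka_2.11-2.0.1/bin
ZK=192.168.50.135:32181
BROKER=192.168.50.135:31090
TOPIC=smoke-test-$$   # a throwaway topic name unique to this run

./kafka-topics.sh --create --zookeeper "$ZK" --replication-factor 1 --partitions 1 --topic "$TOPIC"
echo "hello kafka" | ./kafka-console-producer.sh --broker-list "$BROKER" --topic "$TOPIC"
# --max-messages 1 makes the consumer exit after reading the message just produced
./kafka-console-consumer.sh --bootstrap-server "$BROKER" --topic "$TOPIC" \
  --from-beginning --max-messages 1
./kafka-topics.sh --delete --zookeeper "$ZK" --topic "$TOPIC"
```

If the consumer prints "hello kafka" and exits, produce and consume both work through the NodePorts.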

kafkacat connection

  1. kafkacat is a Kafka command-line client; I installed it with brew on a MacBook Pro;
  2. My K8S server IP is 192.168.50.135, so run this command to inspect the cluster: kafkacat -b 192.168.50.135:31090 -L . The output shows the broker metadata and the topics (one is test001, the other is the consumer offsets topic); changing the port to 31091 or 31092 connects through the other two brokers and returns the same information.
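Beyond listing metadata, kafkacat can also produce and consume; here is a sketch using this article's broker address and topic:

```shell
BROKER=192.168.50.135:31090

# Produce one message to test001 (-P is producer mode, reading from stdin)
echo "hello from kafkacat" | kafkacat -b "$BROKER" -t test001 -P

# Consume test001 from the beginning and exit at end of partition (-e)
kafkacat -b "$BROKER" -t test001 -C -o beginning -e
```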

Clean up resources

This walkthrough created quite a few resources: RBAC objects (clusterrole, role, and their bindings), a serviceaccount, pods, a deployment, and services; the following script cleans them all up (only the files on NFS are left in place):

helm del --purge kafka
kubectl delete service zookeeper-nodeport -n kafka-test
kubectl delete storageclass managed-nfs-storage
kubectl delete deployment nfs-client-provisioner -n kafka-test
kubectl delete clusterrolebinding run-nfs-client-provisioner
kubectl delete serviceaccount nfs-client-provisioner -n kafka-test
kubectl delete role leader-locking-nfs-client-provisioner -n kafka-test
kubectl delete rolebinding leader-locking-nfs-client-provisioner -n kafka-test
kubectl delete clusterrole nfs-client-provisioner-runner
kubectl delete namespace kafka-test

This completes deploying and verifying Kafka in a K8S environment; I hope it serves as a useful reference.

Welcome to follow my WeChat official account: programmer Xinchen




Origin blog.csdn.net/boling_cavalry/article/details/105466163