1. Environmental preparation
1.1 Environmental Description
This article builds a Kafka cluster on VMware virtual machines running CentOS 8, on top of a k8s cluster already built with kubeadm. The k8s node information is as follows:
| Server | IP address    |
| ------ | ------------- |
| master | 192.168.31.80 |
| node1  | 192.168.31.8  |
| node2  | 192.168.31.9  |
If you want to know how to build a k8s cluster, see my article "kubeadm deploys a k8s cluster".
1.2 Installation Instructions
This article demonstrates how to deploy a Kafka cluster on k8s. Once it is set up, Kafka can be accessed not only from inside k8s but also from outside the k8s cluster. We use a StatefulSet to build the ZooKeeper cluster, and Services plus Deployments to build the Kafka cluster.
2. Create NFS storage
NFS storage provides stable back-end storage for Kafka and ZooKeeper: when a Kafka or ZooKeeper Pod fails and restarts, or migrates to another node, it can still access its original data.
2.1 Install NFS
I chose to create the NFS storage on the master node. First, execute the following command to install NFS:
yum -y install nfs-utils rpcbind
2.2 Create NFS shared folder
mkdir -p /var/nfs/kafka/pv{1..3}
mkdir -p /var/nfs/zookeeper/pv{1..3}
2.3 Edit configuration file
vim /etc/exports
/var/nfs/kafka/pv1 *(rw,sync,no_root_squash)
/var/nfs/kafka/pv2 *(rw,sync,no_root_squash)
/var/nfs/kafka/pv3 *(rw,sync,no_root_squash)
/var/nfs/zookeeper/pv1 *(rw,sync,no_root_squash)
/var/nfs/zookeeper/pv2 *(rw,sync,no_root_squash)
/var/nfs/zookeeper/pv3 *(rw,sync,no_root_squash)
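For a lab environment the wildcard * above is fine, but on a less trusted network you may want to restrict each export to the cluster subnet instead. For example, using the 192.168.31.0/24 network from section 1.1 (adjust to your own subnet):
/var/nfs/kafka/pv1 192.168.31.0/24(rw,sync,no_root_squash)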
2.4 Make the configuration take effect
exportfs -r
2.5 View all shared directories
exportfs -v
2.6 Start the NFS service and enable it at boot
systemctl start nfs-server
systemctl enable nfs-server
systemctl start rpcbind
systemctl enable rpcbind
2.7 Install nfs-utils on the other nodes
yum -y install nfs-utils
2.8 View the NFS shared directories of the master node
showmount -e 192.168.31.80
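If showmount lists the six directories, the shares are visible. To confirm that a worker node can actually mount them, you can run a quick manual test from node1 or node2 (any empty mount point works; /mnt is just an example):
mount -t nfs 192.168.31.80:/var/nfs/kafka/pv1 /mnt
umount /mnt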
3. Create a Zookeeper cluster
3.1 Create the namespace
kubectl create ns kafka-cluster
3.2 Create the ZooKeeper PVs
cat > zookeeper-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: k8s-pv-zk01
labels:
app: zk
annotations:
volume.beta.kubernetes.io/storage-class: "anything"
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
nfs:
    server: 192.168.31.80 # change this to your NFS server address
    path: "/var/nfs/zookeeper/pv1" # change this to your PV directory
persistentVolumeReclaimPolicy: Recycle
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: k8s-pv-zk02
labels:
app: zk
annotations:
volume.beta.kubernetes.io/storage-class: "anything"
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
nfs:
server: 192.168.31.80
path: "/var/nfs/zookeeper/pv2"
persistentVolumeReclaimPolicy: Recycle
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: k8s-pv-zk03
labels:
app: zk
annotations:
volume.beta.kubernetes.io/storage-class: "anything"
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
nfs:
server: 192.168.31.80
path: "/var/nfs/zookeeper/pv3"
persistentVolumeReclaimPolicy: Recycle
3.3 Execute the command to create the PVs
kubectl apply -f zookeeper-pv.yaml
3.4 Execute the command to check whether the PVs were created successfully
kubectl get pv
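All three volumes should be listed with STATUS Available; they switch to Bound once the ZooKeeper PVCs created in section 3.7 claim them. To inspect a single volume in detail:
kubectl describe pv k8s-pv-zk01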
3.5 Create the ZooKeeper Services
cat > zookeeper-service.yaml
apiVersion: v1
kind: Service
metadata:
name: zk-hs
namespace: kafka-cluster
labels:
app: zk
spec:
selector:
app: zk
clusterIP: None
ports:
- name: server
port: 2888
- name: leader-election
port: 3888
---
apiVersion: v1
kind: Service
metadata:
name: zk-cs
namespace: kafka-cluster
labels:
app: zk
spec:
selector:
app: zk
type: NodePort
ports:
- name: client
port: 2181
nodePort: 31811
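Note that zk-hs is a headless service (clusterIP: None): it exists so the StatefulSet can give each ZooKeeper Pod a stable DNS name such as zk-0.zk-hs.kafka-cluster.svc.cluster.local. zk-cs exposes client port 2181 as NodePort 31811, so a machine outside the cluster that happens to have the ZooKeeper CLI installed could connect with, for example:
zkCli.sh -server 192.168.31.80:31811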
3.6 Execute the command to create the Services
kubectl apply -f zookeeper-service.yaml
3.7 Create a Zookeeper StatefulSet
cat > zookeeper-statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: zk
namespace: kafka-cluster
spec:
serviceName: "zk-hs"
replicas: 3 # by default is 1
selector:
matchLabels:
app: zk # has to match .spec.template.metadata.labels
updateStrategy:
type: RollingUpdate
podManagementPolicy: Parallel
template:
metadata:
labels:
app: zk # has to match .spec.selector.matchLabels
spec:
containers:
- name: zk
imagePullPolicy: Always
image: leolee32/kubernetes-library:kubernetes-zookeeper1.0-3.4.10
ports:
- containerPort: 2181
name: client
- containerPort: 2888
name: server
- containerPort: 3888
name: leader-election
command:
- sh
- -c
- "start-zookeeper \
--servers=3 \
--data_dir=/var/lib/zookeeper/data \
--data_log_dir=/var/lib/zookeeper/data/log \
--conf_dir=/opt/zookeeper/conf \
--client_port=2181 \
--election_port=3888 \
--server_port=2888 \
--tick_time=2000 \
--init_limit=10 \
--sync_limit=5 \
--heap=4G \
--max_client_cnxns=60 \
--snap_retain_count=3 \
--purge_interval=12 \
--max_session_timeout=40000 \
--min_session_timeout=4000 \
--log_level=INFO"
readinessProbe:
exec:
command:
- sh
- -c
- "zookeeper-ready 2181"
initialDelaySeconds: 10
timeoutSeconds: 5
livenessProbe:
exec:
command:
- sh
- -c
- "zookeeper-ready 2181"
initialDelaySeconds: 10
timeoutSeconds: 5
volumeMounts:
- name: datadir
mountPath: /var/lib/zookeeper
volumeClaimTemplates:
- metadata:
name: datadir
annotations:
volume.beta.kubernetes.io/storage-class: "anything"
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 1Gi
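Note: the start-zookeeper script above requests a 4G JVM heap (--heap=4G). If your lab VMs have less memory, consider lowering it (for example --heap=512M), otherwise the Pods may fail to schedule or be OOM-killed.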
3.8 Execute the command to create the StatefulSet
kubectl apply -f zookeeper-statefulset.yaml
3.9 Check whether everything was created successfully
kubectl get pod -n kafka-cluster -o wide
kubectl get service -n kafka-cluster -o wide
kubectl get StatefulSet -n kafka-cluster -o wide
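Once all three zk Pods are Running, you can sanity-check the ensemble by writing a value through one server and reading it back through another (this assumes zkCli.sh is on the PATH inside the container, as it is in the stock kubernetes-zookeeper image this manifest uses):
kubectl exec -n kafka-cluster zk-0 -- zkCli.sh create /hello world
kubectl exec -n kafka-cluster zk-1 -- zkCli.sh get /hello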
4. Create a Kafka cluster
4.1 Create the Kafka Services
cat > kafka-service.yaml
apiVersion: v1
kind: Service
metadata:
name: kafka-service-1
namespace: kafka-cluster
labels:
app: kafka-service-1
spec:
type: NodePort
ports:
- port: 9092
name: kafka-service-1
targetPort: 9092
nodePort: 30901
protocol: TCP
selector:
app: kafka-1
---
apiVersion: v1
kind: Service
metadata:
name: kafka-service-2
namespace: kafka-cluster
labels:
app: kafka-service-2
spec:
type: NodePort
ports:
- port: 9092
name: kafka-service-2
targetPort: 9092
nodePort: 30902
protocol: TCP
selector:
app: kafka-2
---
apiVersion: v1
kind: Service
metadata:
name: kafka-service-3
namespace: kafka-cluster
labels:
app: kafka-service-3
spec:
type: NodePort
ports:
- port: 9092
name: kafka-service-3
targetPort: 9092
nodePort: 30903
protocol: TCP
selector:
app: kafka-3
4.2 Execute the command to create the Services
kubectl apply -f kafka-service.yaml
4.3 Check whether the Services were created successfully
kubectl get service -n kafka-cluster -o wide
4.4 Create the Kafka Deployments
cat > kafka-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: kafka-deployment-1
namespace: kafka-cluster
spec:
replicas: 1
selector:
matchLabels:
app: kafka-1
template:
metadata:
labels:
app: kafka-1
spec:
containers:
- name: kafka-1
image: wurstmeister/kafka
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9092
env:
- name: KAFKA_ZOOKEEPER_CONNECT
value: zk-0.zk-hs.kafka-cluster.svc.cluster.local:2181,zk-1.zk-hs.kafka-cluster.svc.cluster.local:2181,zk-2.zk-hs.kafka-cluster.svc.cluster.local:2181
- name: KAFKA_BROKER_ID
value: "1"
- name: KAFKA_CREATE_TOPICS
value: mytopic:2:1
- name: KAFKA_LISTENERS
value: PLAINTEXT://0.0.0.0:9092
- name: KAFKA_ADVERTISED_PORT
value: "30901"
- name: KAFKA_ADVERTISED_HOST_NAME
valueFrom:
fieldRef:
fieldPath: status.hostIP
volumeMounts:
- name: datadir
mountPath: /var/lib/kafka
volumes:
- name: datadir
nfs:
server: 192.168.31.80
path: "/var/nfs/kafka/pv1"
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: kafka-deployment-2
namespace: kafka-cluster
spec:
replicas: 1
selector:
matchLabels:
app: kafka-2
template:
metadata:
labels:
app: kafka-2
spec:
containers:
- name: kafka-2
image: wurstmeister/kafka
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9092
env:
- name: KAFKA_ZOOKEEPER_CONNECT
value: zk-0.zk-hs.kafka-cluster.svc.cluster.local:2181,zk-1.zk-hs.kafka-cluster.svc.cluster.local:2181,zk-2.zk-hs.kafka-cluster.svc.cluster.local:2181
- name: KAFKA_BROKER_ID
value: "2"
- name: KAFKA_LISTENERS
value: PLAINTEXT://0.0.0.0:9092
- name: KAFKA_ADVERTISED_PORT
value: "30902"
- name: KAFKA_ADVERTISED_HOST_NAME
valueFrom:
fieldRef:
fieldPath: status.hostIP
volumeMounts:
- name: datadir
mountPath: /var/lib/kafka
volumes:
- name: datadir
nfs:
server: 192.168.31.80
path: "/var/nfs/kafka/pv2"
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: kafka-deployment-3
namespace: kafka-cluster
spec:
replicas: 1
selector:
matchLabels:
app: kafka-3
template:
metadata:
labels:
app: kafka-3
spec:
containers:
- name: kafka-3
image: wurstmeister/kafka
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9092
env:
- name: KAFKA_ZOOKEEPER_CONNECT
value: zk-0.zk-hs.kafka-cluster.svc.cluster.local:2181,zk-1.zk-hs.kafka-cluster.svc.cluster.local:2181,zk-2.zk-hs.kafka-cluster.svc.cluster.local:2181
- name: KAFKA_BROKER_ID
value: "3"
- name: KAFKA_LISTENERS
value: PLAINTEXT://0.0.0.0:9092
- name: KAFKA_ADVERTISED_PORT
value: "30903"
- name: KAFKA_ADVERTISED_HOST_NAME
valueFrom:
fieldRef:
fieldPath: status.hostIP
volumeMounts:
- name: datadir
mountPath: /var/lib/kafka
volumes:
- name: datadir
nfs:
server: 192.168.31.80
path: "/var/nfs/kafka/pv3"
4.5 Execute the command to create the Deployments
kubectl apply -f kafka-deployment.yaml
4.6 Check whether the Pods were created successfully
kubectl get pod -n kafka-cluster -o wide
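Because each broker advertises its node's IP (status.hostIP) together with its NodePort (30901-30903), Kafka should also be reachable from outside the cluster. As a quick check, assuming the kcat utility (formerly kafkacat) is installed on a machine outside k8s, you can ask any one NodePort for cluster metadata; it should list all three brokers:
kcat -b 192.168.31.80:30901 -L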
5. Test
5.1 Enter a Kafka container (via the k8s dashboard, or with kubectl exec)
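If you are not using the dashboard, you can do the same from the command line: look up a Kafka Pod name and exec into it (the placeholder <kafka-pod-name> below stands for whatever name kubectl prints; use sh if bash is unavailable in the image):
kubectl get pod -n kafka-cluster
kubectl exec -it -n kafka-cluster <kafka-pod-name> -- bash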
5.2 Create a topic
kafka-topics.sh --create --topic test_topic --zookeeper zk-0.zk-hs.kafka-cluster.svc.cluster.local:2181,zk-1.zk-hs.kafka-cluster.svc.cluster.local:2181,zk-2.zk-hs.kafka-cluster.svc.cluster.local:2181 --partitions 1 --replication-factor 1
5.3 Open a new terminal, enter a Kafka container, and start a producer
kafka-console-producer.sh --broker-list kafka-service-1:9092,kafka-service-2:9092,kafka-service-3:9092 --topic test_topic
5.4 Open another terminal, enter a Kafka container, and start a consumer
kafka-console-consumer.sh --bootstrap-server kafka-service-1:9092,kafka-service-2:9092,kafka-service-3:9092 --topic test_topic
Type "kafka" into the producer, and "kafka" appears in the consumer. Clearly our ZooKeeper cluster and Kafka cluster are working. This is the end of this tutorial!
If you think this article is helpful to you, please like + bookmark + follow!