k8s deployment of ZooKeeper + Kafka, with NFS as storage

Table of Contents

Note: reference link for the NFS storage

1. Pull the ZK image and retag it as your own image
2. Write the zookeeper.yaml file
3. Install zk and check its status
4. Verify the availability of the ZooKeeper cluster
5. Build a Kafka image of the matching version
6. Write the kafka.yaml file
7. Create Kafka and check its status
8. Verify the connectivity between zk and Kafka
9. Errors and solutions

Note: reference link for the NFS storage

K8s configures a Hadoop cluster with NFS as storage – Crazy Snail's blog – CSDN blog

1. Pull the ZK image and retag it as your own image

## The official image cannot be pulled; use the mirror below and retag it as your own image
docker pull mirrorgooglecontainers/kubernetes-zookeeper:1.0-3.4.10

docker tag mirrorgooglecontainers/kubernetes-zookeeper:1.0-3.4.10  registry.cn-beijing.aliyuncs.com/zhangxlei/kubernetes-zookeeper:1.0-3.4.10
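
Retagging only renames the image locally; for the kubelet on every node to be able to pull it from registry.cn-beijing.aliyuncs.com, it still has to be pushed there. A minimal sketch, assuming you have push access to that registry namespace:

# Log in to the target registry (credentials are assumed to exist)
docker login registry.cn-beijing.aliyuncs.com

# Push the retagged image so that every node can pull it
docker push registry.cn-beijing.aliyuncs.com/zhangxlei/kubernetes-zookeeper:1.0-3.4.10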

2. Write the zookeeper.yaml file

apiVersion: v1
kind: Service
metadata:
  name: zk-hs
  namespace: dev
  labels:
    app: zk
spec:
  ports:
  - port: 2888
    name: server
  - port: 3888
    name: leader-election
  clusterIP: None
  selector:
    app: zk
---
apiVersion: v1
kind: Service
metadata:
  name: zk-cs
  namespace: dev
  labels:
    app: zk
spec:
  ports:
  - port: 2181
    name: client
  selector:
    app: zk
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
  namespace: dev
spec:
  selector:
    matchLabels:
      app: zk
  maxUnavailable: 1
---
apiVersion: apps/v1
kind: StatefulSet	
metadata:
  name: zk
  namespace: dev
spec:
  selector:
    matchLabels:
      app: zk
  serviceName: zk-hs
  replicas: 3
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: OrderedReady
  template:
    metadata:
      labels:
        app: zk
    spec:
      affinity:				
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: "app"
                    operator: In
                    values:
                    - zk
              topologyKey: "kubernetes.io/hostname"
      containers:
      - name: kubernetes-zookeeper
        imagePullPolicy: IfNotPresent
        image: "registry.cn-beijing.aliyuncs.com/zhangxlei/kubernetes-zookeeper:1.0-3.4.10"
        resources:
          requests:
            memory: "1Gi"
            cpu: "2"
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: server
        - containerPort: 3888
          name: leader-election
        command:
        - sh
        - -c
        - "start-zookeeper \
          --servers=3 \
          --data_dir=/var/lib/zookeeper/data \
          --data_log_dir=/var/lib/zookeeper/data/log \
          --conf_dir=/opt/zookeeper/conf \
          --client_port=2181 \
          --election_port=3888 \
          --server_port=2888 \
          --tick_time=2000 \
          --init_limit=10 \
          --sync_limit=5 \
          --heap=512M \
          --max_client_cnxns=60 \
          --snap_retain_count=3 \
          --purge_interval=12 \
          --max_session_timeout=40000 \
          --min_session_timeout=4000 \
          --log_level=INFO"
        readinessProbe:
          exec:
            command:
            - sh
            - -c
            - "zookeeper-ready 2181"
          initialDelaySeconds: 10
          timeoutSeconds: 5
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - "zookeeper-ready 2181"
          initialDelaySeconds: 10
          timeoutSeconds: 5
        volumeMounts:
        - name: datadir
          mountPath: /var/lib/zookeeper
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
  volumeClaimTemplates:
  - metadata:
      name: datadir
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1G
      storageClassName: "nfs-storage"
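
The volumeClaimTemplates reference the StorageClass nfs-storage, which is assumed to have been created by the NFS provisioner from the reference link at the top. Before applying the manifest it is worth confirming that this StorageClass actually exists:

# The StorageClass referenced by the PVC template must already exist
kubectl get storageclass nfs-storage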

3. Install zk and check its status

[root@master-01 zk+kafka]# kubectl apply -f zookeeper.yaml

[root@master-01 zk+kafka]# kubectl  get pods -l app=zk -n dev -o wide 
NAME   READY   STATUS    RESTARTS   AGE     IP            NODE         NOMINATED NODE   READINESS GATES
zk-0   1/1     Running   0          11m     172.20.3.36   10.2.1.195   <none>           <none>
zk-1   1/1     Running   0          10m     172.20.4.72   10.2.1.194   <none>           <none>
zk-2   1/1     Running   0          9m23s   172.20.5.23   10.2.1.193   <none>           <none>

[root@master-01 ~]# kubectl get pv,pvc -n dev -o wide |grep zk
persistentvolume/pvc-042d769b-3754-42fe-b537-b5c0546e61a4   1G         RWO            Delete           Bound    dev/datadir-zk-0             nfs-storage             150m   Filesystem
persistentvolume/pvc-c8c036e7-e0a2-4858-8268-4cd67de25439   1G         RWO            Delete           Bound    dev/datadir-zk-2             nfs-storage             149m   Filesystem
persistentvolume/pvc-e2225e7e-d923-4ddd-9c24-3f3940062471   1G         RWO            Delete           Bound    dev/datadir-zk-1             nfs-storage             149m   Filesystem
persistentvolumeclaim/datadir-zk-0             Bound    pvc-042d769b-3754-42fe-b537-b5c0546e61a4   1G         RWO            nfs-storage    150m   Filesystem
persistentvolumeclaim/datadir-zk-1             Bound    pvc-e2225e7e-d923-4ddd-9c24-3f3940062471   1G         RWO            nfs-storage    149m   Filesystem
persistentvolumeclaim/datadir-zk-2             Bound    pvc-c8c036e7-e0a2-4858-8268-4cd67de25439   1G         RWO            nfs-storage    149m   Filesystem


# First, check the full hostnames of the three zookeeper pods
[root@master-01 zk+kafka]# for i in 0 1 2;do kubectl exec zk-$i -n dev  -- hostname -f;done
zk-0.zk-hs.dev.svc.cluster.local.
zk-1.zk-hs.dev.svc.cluster.local.
zk-2.zk-hs.dev.svc.cluster.local.
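
These FQDNs (served by the headless service zk-hs) are what Kafka will later use in zookeeper.connect. Optionally, you can confirm that they resolve inside the cluster; a minimal sketch using a throwaway pod (busybox:1.28 is just a commonly used example image for DNS checks):

# Resolve one of the pod FQDNs from a temporary pod in the same namespace
kubectl run dns-test -it --rm --restart=Never --image=busybox:1.28 -n dev -- nslookup zk-0.zk-hs.dev.svc.cluster.local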

# Check the role of each of the three zookeeper nodes
[root@master-01 zk+kafka]# for i in 0 1 2;do kubectl exec zk-$i -n dev  -- zkServer.sh status;done
ZooKeeper JMX enabled by default
Using config: /usr/bin/../etc/zookeeper/zoo.cfg
Mode: follower
ZooKeeper JMX enabled by default
Using config: /usr/bin/../etc/zookeeper/zoo.cfg
Mode: leader
ZooKeeper JMX enabled by default
Using config: /usr/bin/../etc/zookeeper/zoo.cfg
Mode: follower


# Check each node's myid
[root@master-01 zk+kafka]# for i in 0 1 2;do echo -n "zk-$i " ;kubectl exec zk-$i -n dev  -- cat /var/lib/zookeeper/data/myid;done
zk-0 1
zk-1 2
zk-2 3

# Check the configuration file of the zookeeper leader
[root@master-01 zk+kafka]#  kubectl exec -it -n dev  zk-1 -- cat /opt/zookeeper/conf/zoo.cfg
#This file was autogenerated DO NOT EDIT
clientPort=2181
dataDir=/var/lib/zookeeper/data
dataLogDir=/var/lib/zookeeper/data/log
tickTime=2000
initLimit=10
syncLimit=5
maxClientCnxns=60
minSessionTimeout=4000
maxSessionTimeout=40000
autopurge.snapRetainCount=3
autopurge.purgeInteval=12
server.1=zk-0.zk-hs.dev.svc.cluster.local.:2888:3888
server.2=zk-1.zk-hs.dev.svc.cluster.local.:2888:3888
server.3=zk-2.zk-hs.dev.svc.cluster.local.:2888:3888

4. Verify the availability of the ZooKeeper cluster

# Enter the container
#  kubectl exec -it -n dev zk-1 -- /bin/bash

# Log in to zk with the CLI
$ zkCli.sh

# Create a znode and write some data to it
[zk: localhost:2181(CONNECTED) 0] create /zk-test hdfdf

# Read the znode back
[zk: localhost:2181(CONNECTED) 1] get /zk-test
hdfdf
cZxid = 0x100000002
ctime = Wed Jun 14 06:14:19 UTC 2023
mZxid = 0x100000002
mtime = Wed Jun 14 06:14:19 UTC 2023
pZxid = 0x100000002
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 5
numChildren = 0


# Log in to another zk node; if the znode created above is visible there, the zk cluster is working correctly
#  kubectl exec -it -n dev zk-2 -- /bin/bash

[zk: localhost:2181(CONNECTED) 0] get /zk-test
hdfdf
cZxid = 0x100000002
ctime = Wed Jun 14 06:14:19 UTC 2023
mZxid = 0x100000002
mtime = Wed Jun 14 06:14:19 UTC 2023
pZxid = 0x100000002
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 5
numChildren = 0
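
The /zk-test znode only exists for this check, so you can optionally delete it again. A small sketch, relying on zkCli.sh accepting a single command as an argument:

# Remove the test znode again (zkCli.sh runs the given command and exits)
kubectl exec -it zk-1 -n dev -- zkCli.sh delete /zk-test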

5. Build a Kafka image of the matching version

# Kafka requires a JDK
docker pull ascdc/jdk8

docker tag ascdc/jdk8:latest registry.cn-beijing.aliyuncs.com/zhangxlei/jdk8:latest

# Download the Kafka release from the official site
wget https://downloads.apache.org/kafka/3.5.0/kafka_2.13-3.5.0.tgz

# Create the Dockerfile
echo 'FROM registry.cn-beijing.aliyuncs.com/zhangxlei/jdk8:latest
COPY kafka_2.13-3.5.0.tgz /opt/kafka_2.13-3.5.0.tgz
WORKDIR /opt/
RUN tar -zxvf kafka_2.13-3.5.0.tgz && rm -rf kafka_2.13-3.5.0.tgz && mv kafka_2.13-3.5.0 kafka
EXPOSE 9092' >> Dockerfile

# Build the kafka image
docker build --no-cache -t registry.cn-beijing.aliyuncs.com/zhangxlei/kafka_2.13-3.5.0 .
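
As with the ZooKeeper image, the freshly built Kafka image must be reachable by every node; assuming the same registry and an existing docker login, push it as well:

# Push the Kafka image (the build above tags it as :latest by default)
docker push registry.cn-beijing.aliyuncs.com/zhangxlei/kafka_2.13-3.5.0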

6. Write the kafka.yaml file

---
apiVersion: v1
kind: Service
metadata:
  name: kafka-hs
  namespace: dev
  labels:
    app: kafka
spec:
  ports:
  - port: 9092
    name: server
  clusterIP: None
  selector:
    app: kafka
--- 
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: kafka-pdb
  namespace: dev
spec:
  selector:
    matchLabels:
      app: kafka
  minAvailable: 1
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kafka
  namespace: dev
spec:
  serviceName: kafka-hs
  replicas: 3
  selector:
    matchLabels:
      app: kafka
  template:
    metadata:
      labels:
        app: kafka
    spec:
      affinity:
        podAntiAffinity:		
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: "app"
                    operator: In
                    values:
                    - kafka
              topologyKey: "kubernetes.io/hostname"
        podAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
             - weight: 1
               podAffinityTerm:
                 labelSelector:
                    matchExpressions:
                      - key: "app"
                        operator: In
                        values:
                        - zk
                 topologyKey: "kubernetes.io/hostname"
      terminationGracePeriodSeconds: 300
      containers:
      - name: kafka
        imagePullPolicy: IfNotPresent
        image: registry.cn-beijing.aliyuncs.com/zhangxlei/kafka_2.13-3.5.0
        resources:
          requests:
            memory: "1Gi"			
            cpu: 2				
        ports:
        - containerPort: 9092
          name: server
        command:
        - sh
        - -c
        - "exec /opt/kafka/bin/kafka-server-start.sh /opt/kafka/config/server.properties --override broker.id=${HOSTNAME##*-} \
          --override listeners=PLAINTEXT://:9092 \
          --override zookeeper.connect=zk-0.zk-hs.dev.svc.cluster.local:2181,zk-1.zk-hs.dev.svc.cluster.local:2181,zk-2.zk-hs.dev.svc.cluster.local:2181 \
          --override log.dirs=/var/lib/kafka/data \
          --override auto.create.topics.enable=true \
          --override auto.leader.rebalance.enable=true \
          --override background.threads=10 \
          --override compression.type=producer \
          --override delete.topic.enable=true \
          --override leader.imbalance.check.interval.seconds=300 \
          --override leader.imbalance.per.broker.percentage=10 \
          --override log.flush.interval.messages=9223372036854775807 \
          --override log.flush.offset.checkpoint.interval.ms=60000 \
          --override log.flush.scheduler.interval.ms=9223372036854775807 \
          --override log.retention.bytes=-1 \
          --override log.retention.hours=168 \
          --override log.roll.hours=168 \
          --override log.roll.jitter.hours=0 \
          --override log.segment.bytes=1073741824 \
          --override log.segment.delete.delay.ms=60000 \
          --override message.max.bytes=1000012 \
          --override min.insync.replicas=1 \
          --override num.io.threads=8 \
          --override num.network.threads=3 \
          --override num.recovery.threads.per.data.dir=1 \
          --override num.replica.fetchers=1 \
          --override offset.metadata.max.bytes=4096 \
          --override offsets.commit.required.acks=-1 \
          --override offsets.commit.timeout.ms=5000 \
          --override offsets.load.buffer.size=5242880 \
          --override offsets.retention.check.interval.ms=600000 \
          --override offsets.retention.minutes=1440 \
          --override offsets.topic.compression.codec=0 \
          --override offsets.topic.num.partitions=50 \
          --override offsets.topic.replication.factor=3 \
          --override offsets.topic.segment.bytes=104857600 \
          --override queued.max.requests=500 \
          --override quota.consumer.default=9223372036854775807 \
          --override quota.producer.default=9223372036854775807 \
          --override replica.fetch.min.bytes=1 \
          --override replica.fetch.wait.max.ms=500 \
          --override replica.high.watermark.checkpoint.interval.ms=5000 \
          --override replica.lag.time.max.ms=10000 \
          --override replica.socket.receive.buffer.bytes=65536 \
          --override replica.socket.timeout.ms=30000 \
          --override request.timeout.ms=30000 \
          --override socket.receive.buffer.bytes=102400 \
          --override socket.request.max.bytes=104857600 \
          --override socket.send.buffer.bytes=102400 \
          --override unclean.leader.election.enable=true \
          --override zookeeper.session.timeout.ms=6000 \
          --override zookeeper.set.acl=false \
          --override broker.id.generation.enable=true \
          --override connections.max.idle.ms=600000 \
          --override controlled.shutdown.enable=true \
          --override controlled.shutdown.max.retries=3 \
          --override controlled.shutdown.retry.backoff.ms=5000 \
          --override controller.socket.timeout.ms=30000 \
          --override default.replication.factor=1 \
          --override fetch.purgatory.purge.interval.requests=1000 \
          --override group.max.session.timeout.ms=300000 \
          --override group.min.session.timeout.ms=6000 \
          --override log.cleaner.backoff.ms=15000 \
          --override log.cleaner.dedupe.buffer.size=134217728 \
          --override log.cleaner.delete.retention.ms=86400000 \
          --override log.cleaner.enable=true \
          --override log.cleaner.io.buffer.load.factor=0.9 \
          --override log.cleaner.io.buffer.size=524288 \
          --override log.cleaner.io.max.bytes.per.second=1.7976931348623157E308 \
          --override log.cleaner.min.cleanable.ratio=0.5 \
          --override log.cleaner.min.compaction.lag.ms=0 \
          --override log.cleaner.threads=1 \
          --override log.cleanup.policy=delete \
          --override log.index.interval.bytes=4096 \
          --override log.index.size.max.bytes=10485760 \
          --override log.message.timestamp.difference.max.ms=9223372036854775807 \
          --override log.message.timestamp.type=CreateTime \
          --override log.preallocate=false \
          --override log.retention.check.interval.ms=300000 \
          --override max.connections.per.ip=2147483647 \
          --override num.partitions=1 \
          --override producer.purgatory.purge.interval.requests=1000 \
          --override replica.fetch.backoff.ms=1000 \
          --override replica.fetch.max.bytes=1048576 \
          --override replica.fetch.response.max.bytes=10485760 \
          --override reserved.broker.max.id=1000"
        env:
        - name: KAFKA_HEAP_OPTS
          value: "-Xmx500M -Xms500M"
        - name: KAFKA_OPTS
          value: "-Dlogging.level=INFO"
        volumeMounts:
        - name: datadir
          mountPath: /var/lib/kafka/data
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
  volumeClaimTemplates:
  - metadata:
      name: datadir
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 500M
      storageClassName: "nfs-storage"
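
Before creating anything, you can optionally have the API server validate the manifest without persisting it, which catches schema problems (such as the PodDisruptionBudget apiVersion issue in section 9) early:

# Server-side dry run: validate kafka.yaml without creating resources
kubectl apply --dry-run=server -f kafka.yaml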

7. Create Kafka and check its status

kubectl apply -f kafka.yaml


[root@master-01 zk+kafka]# kubectl get pods  -n dev -o wide |grep kafka
kafka-0                                  1/1     Running   0          6m48s   172.20.3.38   10.2.1.195   <none>           <none>
kafka-1                                  1/1     Running   0          6m46s   172.20.4.74   10.2.1.194   <none>           <none>
kafka-2                                  1/1     Running   0          6m45s   172.20.5.25   10.2.1.193   <none>           <none>


[root@master-01 zk+kafka]# kubectl get pv,pvc -n dev |grep kafka
persistentvolume/pvc-b2a9dc2e-3283-4e17-97c6-e3fcda75c643   500M       RWO            Delete           Bound    dev/datadir-kafka-0          nfs-storage             20m
persistentvolume/pvc-e1769233-45da-4409-9bfd-78dc796ba509   500M       RWO            Delete           Bound    dev/datadir-kafka-2          nfs-storage             11m
persistentvolume/pvc-f5bd1100-8c79-4fc5-8250-4003cadbfcaa   500M       RWO            Delete           Bound    dev/datadir-kafka-1          nfs-storage             13m
persistentvolumeclaim/datadir-kafka-0          Bound    pvc-b2a9dc2e-3283-4e17-97c6-e3fcda75c643   500M       RWO            nfs-storage    20m
persistentvolumeclaim/datadir-kafka-1          Bound    pvc-f5bd1100-8c79-4fc5-8250-4003cadbfcaa   500M       RWO            nfs-storage    13m
persistentvolumeclaim/datadir-kafka-2          Bound    pvc-e1769233-45da-4409-9bfd-78dc796ba509   500M       RWO            nfs-storage    11m
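
If one of the pods does not reach Running, the broker log is usually the quickest way to find out why; a healthy broker ends its startup with a line similar to "[KafkaServer id=0] started". For example:

# Tail the startup log of the first broker
kubectl logs kafka-0 -n dev --tail=50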


8. Verify the connectivity between zk and Kafka

# Check whether the Kafka brokers have registered themselves in zookeeper
[root@master-01 zk+kafka]# kubectl exec -it zk-0 -n dev -- /bin/bash

zookeeper@zk-0:/$ zkCli.sh
Connecting to localhost:2181
[zk: localhost:2181(CONNECTED) 0] ls /
[cluster, controller, brokers, zookeeper, admin, isr_change_notification, log_dir_event_notification, zk-test, controller_epoch, feature, consumers, latest_producer_id_block, config]
[zk: localhost:2181(CONNECTED) 1] ls /brokers
[ids, topics, seqid]
[zk: localhost:2181(CONNECTED) 2] ls /brokers/ids
[0, 1, 2]
[zk: localhost:2181(CONNECTED) 3] 
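
Each entry under /brokers/ids is the registration znode of one broker and contains a JSON blob with its host, port and advertised endpoints. You can inspect one of them from the same zkCli session, for example:

# Still inside zkCli on zk-0: show the registration data of broker 0
get /brokers/ids/0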


# Create a topic in Kafka
[root@master-02 data]# kubectl get pods -n dev |grep kafka
kafka-0                                  1/1     Running   0          19m
kafka-1                                  1/1     Running   0          19m
kafka-2                                  1/1     Running   0          19m
[root@master-02 data]# kubectl exec -it kafka-0 -n dev -- /bin/bash
I have no name!@kafka-0:/opt$ cd /opt/kafka/bin
I have no name!@kafka-0:/opt/kafka/bin$ ./kafka-topics.sh --bootstrap-server localhost:9092 --topic first --create --partitions 1 --replication-factor 3 
Created topic first.
I have no name!@kafka-0:/opt/kafka/bin$ ./kafka-topics.sh --bootstrap-server localhost:9092 --topic first --describe
Topic: first	TopicId: 64nWxI4AQTmwI-fn4mbMig	PartitionCount: 1	ReplicationFactor: 3	Configs: compression.type=producer,min.insync.replicas=1,cleanup.policy=delete,segment.bytes=1073741824,flush.messages=9223372036854775807,file.delete.delay.ms=60000,max.message.bytes=1000012,min.compaction.lag.ms=0,message.timestamp.type=CreateTime,preallocate=false,min.cleanable.dirty.ratio=0.5,index.interval.bytes=4096,unclean.leader.election.enable=true,retention.bytes=-1,delete.retention.ms=86400000,message.timestamp.difference.max.ms=9223372036854775807,segment.index.bytes=10485760
	Topic: first	Partition: 0	Leader: 2	Replicas: 2,1,0	Isr: 2,1,0
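
To verify end-to-end message flow as well, you can optionally run a quick produce/consume round trip against the new topic from inside the pod; both console tools ship with Kafka in /opt/kafka/bin (stop each with Ctrl-C):

# Produce a few test messages to the topic "first" (type lines, then Ctrl-C)
./kafka-console-producer.sh --bootstrap-server localhost:9092 --topic first

# In a second shell into the pod: consume the topic from the beginning
./kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic first --from-beginning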


# On the NFS share, check whether the backend directory contains data
# ll dev-datadir-kafka-0-pvc-b2a9dc2e-3283-4e17-97c6-e3fcda75c643/
total 16
-rw-r--r-- 1 ops root   0 Jun 14 16:59 cleaner-offset-checkpoint
drwxr-xr-x 2 ops root 167 Jun 14 17:20 first-0
-rw-r--r-- 1 ops root   4 Jun 14 17:21 log-start-offset-checkpoint
-rw-r--r-- 1 ops root  88 Jun 14 16:59 meta.properties
-rw-r--r-- 1 ops root  14 Jun 14 17:21 recovery-point-offset-checkpoint
-rw-r--r-- 1 ops root  14 Jun 14 17:21 replication-offset-checkpoint

9. Errors and solutions

1. Error: unable to recognize "zookeeper.yaml": no matches for kind "PodDisruptionBudget" in version "policy/v1"

Solution: the error means the cluster does not serve policy/v1 for PodDisruptionBudget yet; change the apiVersion to policy/v1beta1:

apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
  namespace: dev
spec:
  selector:
    matchLabels:
      app: zk
  maxUnavailable: 1
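
Which apiVersion is correct depends on the cluster version: PodDisruptionBudget graduated to policy/v1 in Kubernetes 1.21, and policy/v1beta1 was removed in 1.25. You can check what your cluster actually serves before editing the manifest:

# List the policy API versions served by this cluster
kubectl api-versions | grep policy

# The first lines show the kind and the apiVersion kubectl resolves for PDBs
kubectl explain poddisruptionbudget | head -n 3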

Original post: blog.csdn.net/zhangxueleishamo/article/details/131204994