Deploying ZooKeeper on Kubernetes

This article uses a public image: mirrorgooglecontainers/kubernetes-zookeeper:1.0-3.4.10

Prepare shared storage: NFS, GlusterFS, SeaweedFS, or similar, and mount it on the node(s).

This article uses the SeaweedFS distributed file system.

Mount it on the node:

./weed mount -filer=192.168.11.103:8801 -dir=/data/ -filer.path=/ &
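Before creating the PVs it is worth confirming that the FUSE mount is actually live on the node. A quick sanity check (a sketch; the /data path matches the mount command above):

```shell
# Confirm the weed mount is present and writable on the node
mount | grep ' /data '    # should show a fuse filesystem
df -h /data
echo ok > /data/.mount-test && cat /data/.mount-test && rm /data/.mount-test
```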

Prepare the Kubernetes cluster environment:

[root@k8s-master sts]# kubectl get nodes -o wide
NAME         STATUS   ROLES    AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION          CONTAINER-RUNTIME
k8s-master   Ready    master   97d   v1.15.0   192.168.10.171   <none>        CentOS Linux 7 (Core)   3.10.0-862.el7.x86_64   docker://19.3.1
k8s-node1    Ready    node     97d   v1.15.0   192.168.11.63    <none>        CentOS Linux 7 (Core)   3.10.0-862.el7.x86_64   docker://19.3.1
[root@k8s-master sts]# 

Create the PVs

Create the PVs from local directories; make sure the shared storage is already mounted on the node (a plain local directory also works).

[root@k8s-master sts]# cat zk-pv.yaml 
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-zk1
  namespace: bigdata
  annotations:
    volume.beta.kubernetes.io/storage-class: "anything"
  labels:
    type: local
spec:
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/zookeeper1"
  persistentVolumeReclaimPolicy: Recycle
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-zk2
  namespace: bigdata
  annotations:
    volume.beta.kubernetes.io/storage-class: "anything"
  labels:
    type: local
spec:
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/zookeeper2"
  persistentVolumeReclaimPolicy: Recycle
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-zk3
  namespace: bigdata
  annotations:
    volume.beta.kubernetes.io/storage-class: "anything"
  labels:
    type: local
spec:
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/zookeeper3"
  persistentVolumeReclaimPolicy: Recycle
[root@k8s-master sts]# 
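The manifest above is applied with kubectl. Note that PersistentVolumes are cluster-scoped, so the `namespace: bigdata` field in their metadata is effectively ignored:

```shell
kubectl apply -f zk-pv.yaml
kubectl get pv
```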

The resulting PVs:

[root@k8s-master sts]# kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv-zk1   3Gi        RWO            Recycle          Available           anything                5s
pv-zk2   3Gi        RWO            Recycle          Available           anything                5s
pv-zk3   3Gi        RWO            Recycle          Available           anything                5s

Create the ZooKeeper StatefulSet:

[root@k8s-master sts]# cat zk-sts.yaml 
---
apiVersion: v1
kind: Service
metadata:
  name: zk-hs
  namespace: bigdata
  labels:
    app: zk
spec:
  ports:
  - port: 2888
    name: server
  - port: 3888
    name: leader-election
  clusterIP: None
  selector:
    app: zk
---
apiVersion: v1
kind: Service
metadata:
  name: zk-cs
  namespace: bigdata
  labels:
    app: zk
spec:
  ports:
  - port: 2181
    name: client
  selector:
    app: zk
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
  namespace: bigdata
spec:
  selector:
    matchLabels:
      app: zk
  maxUnavailable: 1
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: zk
  namespace: bigdata
spec:
  serviceName: zk-hs
  replicas: 3
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: Parallel
  template:
    metadata:
      labels:
        app: zk
    spec:
      containers:
      - name: kubernetes-zookeeper
        imagePullPolicy: IfNotPresent
        image: "mirrorgooglecontainers/kubernetes-zookeeper:1.0-3.4.10"
        resources:
          requests:
            memory: "128Mi"
            cpu: "0.1"
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: server
        - containerPort: 3888
          name: leader-election
        command:
        - sh
        - -c
        - "start-zookeeper \
          --servers=3 \
          --data_dir=/var/lib/zookeeper/data \
          --data_log_dir=/var/lib/zookeeper/data/log \
          --conf_dir=/opt/zookeeper/conf \
          --client_port=2181 \
          --election_port=3888 \
          --server_port=2888 \
          --tick_time=2000 \
          --init_limit=10 \
          --sync_limit=5 \
          --heap=512M \
          --max_client_cnxns=60 \
          --snap_retain_count=3 \
          --purge_interval=12 \
          --max_session_timeout=40000 \
          --min_session_timeout=4000 \
          --log_level=INFO"
        readinessProbe:
          exec:
            command:
            - sh
            - -c
            - "zookeeper-ready 2181"
          initialDelaySeconds: 10
          timeoutSeconds: 5
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - "zookeeper-ready 2181"
          initialDelaySeconds: 10
          timeoutSeconds: 5
        volumeMounts:
        - name: datadir
          mountPath: /var/lib/zookeeper
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
  volumeClaimTemplates:
  - metadata:
      name: datadir
      annotations:
        volume.beta.kubernetes.io/storage-class: "anything"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 3Gi
[root@k8s-master sts]#
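Apply the StatefulSet manifest and watch the pods start (`podManagementPolicy: Parallel` launches all three replicas at once rather than one by one):

```shell
kubectl apply -f zk-sts.yaml
kubectl get pods -n bigdata -w
```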

Source template: https://github.com/kow3ns/kubernetes-zookeeper/blob/master/manifests/zookeeper_mini.yaml

Note: it has been modified from the original.

Because the image runs as the zookeeper user by default, the pods may fail with a permission error when creating the ZooKeeper data directory.

Fixing the permissions on the mounted directories resolves this. Quick and dirty: chmod 777 ${dirname}
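Assuming the three hostPath directories from zk-pv.yaml, the fix runs on the node. Chowning to the container's UID/GID (1000 in this image, per the securityContext above) is a slightly cleaner alternative to chmod 777:

```shell
# Run on the node that hosts the hostPath directories
mkdir -p /data/zookeeper1 /data/zookeeper2 /data/zookeeper3
chown -R 1000:1000 /data/zookeeper1 /data/zookeeper2 /data/zookeeper3
# or, quick and dirty:
# chmod -R 777 /data/zookeeper1 /data/zookeeper2 /data/zookeeper3
```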

After ZooKeeper starts successfully, the PVCs are bound to the PVs:

[root@k8s-master sts]# kubectl get pods -n bigdata
NAME   READY   STATUS    RESTARTS   AGE
zk-0   1/1     Running   0          18m
zk-1   1/1     Running   3          18m
zk-2   1/1     Running   2          18m
[root@k8s-master sts]# kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                  STORAGECLASS   REASON   AGE
pv-zk1   3Gi        RWO            Recycle          Bound    bigdata/datadir-zk-2   anything                23m
pv-zk2   3Gi        RWO            Recycle          Bound    bigdata/datadir-zk-0   anything                23m
pv-zk3   3Gi        RWO            Recycle          Bound    bigdata/datadir-zk-1   anything                23m
[root@k8s-master sts]# kubectl get pvc -n bigdata
NAME           STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
datadir-zk-0   Bound    pv-zk2   3Gi        RWO            anything       23m
datadir-zk-1   Bound    pv-zk3   3Gi        RWO            anything       23m
datadir-zk-2   Bound    pv-zk1   3Gi        RWO            anything       23m
[root@k8s-master sts]# 

On the node, the data directory structure looks like this:

[root@k8s-node1 data]# tree -L 3 /data/
/data/
├── test
│   ├── a.txt
│   └── aaa.jpg
├── zookeeper1
│   └── data
│       ├── log
│       ├── myid
│       └── version-2
├── zookeeper2
│   └── data
│       ├── log
│       ├── myid
│       └── version-2
└── zookeeper3
    └── data
        ├── log
        ├── myid
        └── version-2

13 directories, 5 files
[root@k8s-node1 data]# 

Check the hostnames:

[root@k8s-master sts]# for i in 0 1 2; do kubectl exec zk-$i -n bigdata -- hostname; done
zk-0
zk-1
zk-2

Check each server's myid:

[root@k8s-master sts]# for i in 0 1 2; do echo "myid zk-$i";kubectl exec zk-$i -n bigdata -- cat /var/lib/zookeeper/data/myid; done
myid zk-0
1
myid zk-1
2
myid zk-2
3

Check the fully qualified domain names:

[root@k8s-master sts]# for i in 0 1 2; do kubectl exec zk-$i -n bigdata -- hostname -f; done
zk-0.zk-hs.bigdata.svc.cluster.local
zk-1.zk-hs.bigdata.svc.cluster.local
zk-2.zk-hs.bigdata.svc.cluster.local

Check the generated configuration:

[root@k8s-master sts]# kubectl exec zk-0 -n bigdata -- cat /opt/zookeeper/conf/zoo.cfg
#This file was autogenerated DO NOT EDIT
clientPort=2181
dataDir=/var/lib/zookeeper/data
dataLogDir=/var/lib/zookeeper/data/log
tickTime=2000
initLimit=10
syncLimit=5
maxClientCnxns=60
minSessionTimeout=4000
maxSessionTimeout=40000
autopurge.snapRetainCount=3
autopurge.purgeInteval=12
server.1=zk-0.zk-hs.bigdata.svc.cluster.local:2888:3888
server.2=zk-1.zk-hs.bigdata.svc.cluster.local:2888:3888
server.3=zk-2.zk-hs.bigdata.svc.cluster.local:2888:3888
[root@k8s-master sts]# 
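To see which server is currently the leader, zkServer.sh status can be run in each pod (assuming the script is on the image's PATH, as zkCli.sh is):

```shell
for i in 0 1 2; do
  echo "zk-$i:"
  kubectl exec zk-$i -n bigdata -- zkServer.sh status
done
```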

Test the ZooKeeper cluster end to end:

Write data on zk-1 and read it back:

[root@k8s-master sts]# kubectl exec -ti zk-1 -n bigdata -- bash 
zookeeper@zk-1:/$ zkCli.sh 
Connecting to localhost:2181
[zk: localhost:2181(CONNECTED) 0] create /hello world
Created /hello
[zk: localhost:2181(CONNECTED) 1] get /hello
world
cZxid = 0x100000004
ctime = Wed Nov 13 02:41:56 UTC 2019
mZxid = 0x100000004
mtime = Wed Nov 13 02:41:56 UTC 2019
pZxid = 0x100000004
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 5
numChildren = 0
[zk: localhost:2181(CONNECTED) 2] 

Check on the other nodes that the data has replicated.

For example, read the data on zk-0:

[root@k8s-master sts]# kubectl exec -ti zk-0 -n bigdata -- bash 
zookeeper@zk-0:/$ zkCli.sh 
Connecting to localhost:2181
[zk: localhost:2181(CONNECTED) 0] ls /
[hello, zookeeper]
[zk: localhost:2181(CONNECTED) 1] ls /hello
[]
[zk: localhost:2181(CONNECTED) 2] get /hello
world
cZxid = 0x100000004
ctime = Wed Nov 13 02:41:56 UTC 2019
mZxid = 0x100000004
mtime = Wed Nov 13 02:41:56 UTC 2019
pZxid = 0x100000004
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 5
numChildren = 0
[zk: localhost:2181(CONNECTED) 3] 

Specific usage:

Connect your applications to the ZooKeeper cluster.
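Inside the cluster, applications can reach the ensemble through the zk-cs client Service defined earlier, or enumerate the per-pod DNS names for client libraries that want the full member list (a sketch using the service names from the manifests above):

```shell
# Single entry point via the client service (kube-proxy balances across members)
zkCli.sh -server zk-cs.bigdata.svc.cluster.local:2181

# Full ensemble connection string for ZooKeeper client libraries:
# zk-0.zk-hs.bigdata.svc.cluster.local:2181,zk-1.zk-hs.bigdata.svc.cluster.local:2181,zk-2.zk-hs.bigdata.svc.cluster.local:2181
```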

----- To be continued -----

Reposted from www.cnblogs.com/xuliang666/p/11847270.html