《Kubernetes进阶实战》 (Kubernetes Advanced Practice), Chapter 9: The StatefulSet Controller

Copyright notice: In the spirit of open source, this blog post may be freely reposted, but please credit the source! https://blog.csdn.net/zisefeizhu/article/details/88359049

Stateful and Stateless Applications

Applications fall into two categories: stateful and stateless.

Pods of stateless applications can be added, removed, or rebuilt on demand without seriously affecting the service they provide, beyond its concurrency capacity.

Other applications are themselves distributed clusters: their instances are related to one another, sometimes with ordering and role dependencies, and each instance has its own identity and cannot simply be replaced by another. The StatefulSet controller exists to manage Pods for this kind of application.

The classification criterion is whether an application needs to record information from one or more previous communications to use in the next one: applications that must record such information are called stateful, and those that need not are called stateless.

State is a temporal property of a process.

Stateful applications generally need to record information about request connections, i.e., "state"; some must even persist data generated by those requests. This is especially true of storage services, which require persistent volumes when running on Kubernetes.

StatefulSet Characteristics

StatefulSet is one implementation of a Pod controller, used to deploy and scale Pods for stateful applications while guaranteeing their ordering and the uniqueness of each Pod. A StatefulSet maintains a unique, sticky identifier for each Pod and, when necessary, creates dedicated storage volumes for them. StatefulSet is mainly suited to applications that depend on the following:

- A stable and unique network identifier
- Stable and persistent storage
- Ordered, graceful deployment and scaling
- Ordered, graceful deletion and termination
- Ordered, automated rolling updates

A typical, fully functional StatefulSet usually consists of three components: a Headless Service, the StatefulSet itself, and a volumeClaimTemplate. The Headless Service generates resolvable DNS records for the Pods' identifiers, the StatefulSet manages the Pods, and the volumeClaimTemplate provides each Pod with dedicated, fixed storage via statically or dynamically provisioned PVs.

Creating a StatefulSet Application

Several resources must be prepared before creating a StatefulSet, and the creation order matters:
1. Volume
2. PersistentVolume
3. PersistentVolumeClaim
4. Service
5. StatefulSet
Volumes can be of many types, such as NFS or GlusterFS.

(1) Inspect the StatefulSet definition

[root@master ~]# kubectl explain statefulset
KIND:     StatefulSet
VERSION:  apps/v1

DESCRIPTION:
     StatefulSet represents a set of pods with consistent identities. Identities
     are defined as: - Network: A single stable DNS and hostname. - Storage: As
     many VolumeClaims as requested. The StatefulSet guarantees that a given
     network identity will always map to the same storage identity.

FIELDS:
   apiVersion   <string>
   kind <string>
   metadata <Object>
   spec <Object>
   status   <Object>
[root@k8s-master ~]# kubectl explain statefulset.spec
KIND:     StatefulSet
VERSION:  apps/v1

RESOURCE: spec <Object>

DESCRIPTION:
     Spec defines the desired identities of pods in this set.

     A StatefulSetSpec is the specification of a StatefulSet.

FIELDS:
   podManagementPolicy  <string>  # Pod management policy
   replicas <integer>    # number of replicas
   revisionHistoryLimit <integer>   # revision history limit
   selector <Object> -required-    # label selector, required
   serviceName  <string> -required-  # name of the governing Service, required
   template <Object> -required-    # Pod template, required
   updateStrategy   <Object>       # update strategy
   volumeClaimTemplates <[]Object>   # volume claim templates, a list of objects
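Two of these fields deserve a closer look before moving on. The following is a minimal, illustrative sketch (not part of the demo manifest used later) of how podManagementPolicy and updateStrategy might appear in a StatefulSet spec; the values shown are assumptions for illustration:

```yaml
# Illustrative StatefulSet spec fragment.
spec:
  # OrderedReady (the default): create and delete Pods one at a time,
  # in ordinal order. Parallel: launch or terminate all Pods at once.
  podManagementPolicy: OrderedReady
  updateStrategy:
    # RollingUpdate (the default) replaces Pods one by one, from the highest
    # ordinal down; OnDelete only replaces a Pod after it is deleted manually.
    type: RollingUpdate
```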

(2) Configure NFS

[root@nfs ~]# mkdir /data/volumes/v{1,2,3,4,5}
[root@stor01 volumes]# vim /etc/exports
/data/volumes/v1 20.0.0.0/24(rw,no_root_squash)
/data/volumes/v2 20.0.0.0/24(rw,no_root_squash)
/data/volumes/v3 20.0.0.0/24(rw,no_root_squash)
/data/volumes/v4 20.0.0.0/24(rw,no_root_squash)
/data/volumes/v5 20.0.0.0/24(rw,no_root_squash)
[root@stor01 volumes]# exportfs -arv
exporting 20.0.0.0/24:/data/volumes/v5
exporting 20.0.0.0/24:/data/volumes/v4
exporting 20.0.0.0/24:/data/volumes/v3
exporting 20.0.0.0/24:/data/volumes/v2
exporting 20.0.0.0/24:/data/volumes/v1
[root@stor01 volumes]# showmount -e
Export list for nfs:
/data/volumes/v5 20.0.0.0/24
/data/volumes/v4 20.0.0.0/24
/data/volumes/v3 20.0.0.0/24
/data/volumes/v2 20.0.0.0/24
/data/volumes/v1 20.0.0.0/24

(3) Define the PVs

[root@master ~]# mkdir volumes
[root@master ~]# cd volumes
[root@master volumes]# vim pv-demo.yaml
[root@master volumes]# cat pv-demo.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
  labels:
    name: pv001
spec:
  nfs:
    path: /data/volumes/v1
    server: nfs
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 1Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv002
  labels:
    name: pv002
spec:
  nfs:
    path: /data/volumes/v2
    server: nfs
  accessModes: ["ReadWriteOnce"]
  capacity:
    storage: 2Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv003
  labels:
    name: pv003
spec:
  nfs:
    path: /data/volumes/v3
    server: nfs
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 2Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv004
  labels:
    name: pv004
spec:
  nfs:
    path: /data/volumes/v4
    server: nfs
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 2Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv005
  labels:
    name: pv005
spec:
  nfs:
    path: /data/volumes/v5
    server: nfs
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 2Gi
[root@master volumes]# kubectl apply -f pv-demo.yaml
persistentvolume/pv001 created
persistentvolume/pv002 created
persistentvolume/pv003 created
persistentvolume/pv004 created
persistentvolume/pv005 created
[root@master volumes]# kubectl get pv
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv001   1Gi        RWO,RWX        Retain           Available                                   5s
pv002   2Gi        RWO            Retain           Available                                   5s
pv003   2Gi        RWO,RWX        Retain           Available                                   5s
pv004   2Gi        RWO,RWX        Retain           Available                                   5s
pv005   2Gi        RWO,RWX        Retain           Available                                   5s

(4) Define the StatefulSet

[root@master ~]# mkdir stateful
[root@master ~]# cd stateful
[root@master stateful]# vim stateful-demo.yaml
[root@master stateful]# cat stateful-demo.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-svc
  labels:
    app: myapp-svc
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: myapp-pod
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myapp
spec:
  serviceName: myapp-svc
  replicas: 3
  selector:
    matchLabels:
      app: myapp-pod
  template:
    metadata:
      labels:
        app: myapp-pod
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: myappdata
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: myappdata
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 2Gi
[root@master stateful]# kubectl apply -f stateful-demo.yaml
service/myapp-svc created
statefulset.apps/myapp configured
[root@master stateful]# kubectl get pods -l app=myapp-pod -w
NAME      READY   STATUS    RESTARTS   AGE
myapp-0   1/1     Running   0          40m
myapp-1   1/1     Running   0          3m19s
myapp-2   1/1     Running   0          3m17s

# Check the readiness of the StatefulSet's Pods
^C[root@master stateful]# kubectl get statefulsets myapp
NAME    READY   AGE
myapp   3/3     63m

# Pods created by the StatefulSet controller have fixed and unique identifiers,
# generated from a unique ordinal index and the name of the owning StatefulSet
# object, in the format "<statefulset name>-<ordinal index>", as shown below:
[root@master stateful]# kubectl get pods -l app=myapp-pod
NAME      READY   STATUS    RESTARTS   AGE
myapp-0   1/1     Running   0          44m
myapp-1   1/1     Running   0          7m4s
myapp-2   1/1     Running   0          7m2s

# A Pod's hostname is the same as its resource name, so it also follows the indexed name format:
[root@master stateful]# for i in 0 1 2;do kubectl exec myapp-$i -- sh -c 'hostname';done
myapp-0
myapp-1
myapp-2

# These name identifiers are published as DNS records through the Headless
# Service associated with the StatefulSet. The Service's domain name has the
# format $(service_name).$(namespace).svc.cluster.local, where "cluster.local"
# is the cluster's default domain. Once a Pod is created, its own DNS record
# has the format $(pod_name).$(service_name).$(namespace).svc.cluster.local;
# for example, the Pods created above get the records
# myapp-0.myapp-svc.default.svc.cluster.local and
# myapp-1.myapp-svc.default.svc.cluster.local.
[root@master stateful]# kubectl run -it --image busybox dns-client --restart=Never --rm /bin/sh
If you don't see a command prompt, try pressing enter.
/ # nslookup myapp-0.myapp-svc
Server:		10.96.0.10
Address:	10.96.0.10:53

Test deletion

[root@master stateful]# kubectl delete pods/myapp-0
pod "myapp-0" deleted
[root@master stateful]# kubectl get pods
NAME      READY   STATUS    RESTARTS   AGE
myapp-0   1/1     Running   0          2s
myapp-1   1/1     Running   0          19m
myapp-2   1/1     Running   0          19m

Note: When a client issues requests to the member Pods of a StatefulSet, it should target the Headless Service's record (myapp-svc.default.svc.cluster.local); the SRV records it points to cover only the Pods currently in the Ready state. Of course, if liveness and readiness probes are defined in the Pod template, then, since the name identifiers are fixed and unchanging, clients can also send requests directly to a specific Pod's record (myapp-0.myapp-svc, myapp-1.myapp-svc).
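Because member selection depends on readiness, the Pod template usually benefits from an explicit probe. The following is a hedged sketch of a readinessProbe for the myapp container; the path, port, and timings are illustrative assumptions and should be adjusted to the actual image:

```yaml
# Illustrative probe fragment for the myapp container in the Pod template.
readinessProbe:
  httpGet:
    path: /           # assumed health endpoint; adjust to the image
    port: 80
  initialDelaySeconds: 5
  periodSeconds: 10
```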

Dedicated Storage Volumes per Pod

[root@master stateful]# kubectl get pvc -l app=myapp-pod -o custom-columns=NAME:metadata.name,VOLUME:spec.volumeName,STATUS:status.phase
NAME                VOLUME   STATUS
myappdata-myapp-0   pv002    Bound
myappdata-myapp-1   pv003    Bound
myappdata-myapp-2   pv004    Bound
# Use kubectl exec to generate a test page in this directory for each Pod, for testing volume persistence:
[root@master stateful]# for i in 0 1 2 ; do kubectl exec myapp-$i -- sh -c 'echo $(date),Hostname: $(hostname) > /usr/share/nginx/html/index.html'; done

[root@master ~]# kubectl exec -it myapp-0 -- /bin/sh
/ # cat /usr/share/nginx/html/index.html 
Tue Mar 19 09:12:41 UTC 2019,Hostname: myapp-0
/ # exit
[root@master ~]# kubectl delete pods/myapp-0
pod "myapp-0" deleted
[root@master ~]# kubectl exec -it myapp-0 -- /bin/sh
/ # cat /usr/share/nginx/html/index.html 
Tue Mar 19 09:12:41 UTC 2019,Hostname: myapp-0

Scaling a StatefulSet

# Scale out
[root@master ~]# kubectl scale statefulset myapp --replicas=6
statefulset.apps/myapp scaled
[root@master ~]# kubectl get pods -l app=myapp-pod -w
NAME      READY   STATUS    RESTARTS   AGE
myapp-0   1/1     Running   0          111s
myapp-1   1/1     Running   0          36m
myapp-2   1/1     Running   0          36m
myapp-3   1/1     Running   0          11s
myapp-4   0/1     Pending   0          8s

# Scale in
[root@master ~]# kubectl patch statefulset myapp -p '{"spec":{"replicas":3}}'
statefulset.apps/myapp patched
[root@master ~]# kubectl get pods -l app=myapp-pod 
NAME      READY   STATUS        RESTARTS   AGE
myapp-0   1/1     Running       0          3m34s
myapp-1   1/1     Running       0          38m
myapp-2   1/1     Running       0          38m
myapp-3   0/1     Terminating   0          114s

Note: When Pods are terminated, their storage volumes are not deleted, so if the StatefulSet is scaled back up after a scale-down, the previous data is still available and the Pod names remain unchanged.
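On newer Kubernetes releases (v1.23+ with the StatefulSetAutoDeletePVC feature gate, enabled by default from v1.27), this retention behavior can be tuned in the manifest itself. A sketch, assuming such a cluster version:

```yaml
# Illustrative fragment: control what happens to PVCs created from
# volumeClaimTemplates (requires StatefulSetAutoDeletePVC support).
spec:
  persistentVolumeClaimRetentionPolicy:
    whenDeleted: Retain   # keep PVCs when the StatefulSet is deleted
    whenScaled: Delete    # delete the PVC of a Pod removed by scale-down
```

On older clusters, including the one used in this walkthrough, PVCs are always retained and must be deleted manually if no longer needed.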

Rolling Updates

[root@master ~]# kubectl set image statefulset myapp myapp=ikubernetes/myapp:v2
statefulset.apps/myapp image updated
[root@master ~]# kubectl get pods -l app=myapp-pod -w
NAME      READY   STATUS              RESTARTS   AGE
myapp-0   1/1     Running             0          7m53s
myapp-1   0/1     ContainerCreating   0          8s
myapp-2   1/1     Running             0          80s
[root@master ~]# kubectl describe pods/myapp-0
    Image:          ikubernetes/myapp:v2
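Rolling updates can also be staged with the updateStrategy's partition field: only Pods whose ordinal is greater than or equal to the partition value are updated, which enables canary-style rollouts. A sketch for the myapp StatefulSet (the partition value is illustrative):

```yaml
# Illustrative fragment: with partition: 2, only myapp-2 receives the new
# image; lowering partition to 0 later rolls the update out to all Pods.
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2
```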
