k8s 1.22.3: dynamic PV provisioning with StorageClass + NFS persistent volumes

1. Environmental preparation

CentOS Linux release 7.7.1908 (Core) 3.10.0-1062.el7.x86_64

kubeadm-1.22.3-0.x86_64

kubelet-1.22.3-0.x86_64

kubectl-1.22.3-0.x86_64

kubernetes-cni-0.8.7-0.x86_64

Hostname IP VIP
k8s-master01 192.168.30.106 192.168.30.115
k8s-master02 192.168.30.107
k8s-master03 192.168.30.108
k8s-node01 192.168.30.109
k8s-node02 192.168.30.110
k8s-nfs 192.168.30.114

2. What is StorageClass

A StatefulSet is designed for stateful services (the corresponding Deployments and ReplicaSets are designed for stateless services). Its application scenarios include:

Stable persistent storage: after a Pod is rescheduled it can still access the same persistent data, implemented with PVCs.

Stable network identity: after a Pod is rescheduled, its PodName and HostName stay the same, implemented with a Headless Service (a Service without a Cluster IP).

Ordered deployment and ordered scaling: Pods are ordered and are created or scaled in the defined order (from 0 to N-1; before the next Pod runs, all previous Pods must be Running and Ready), implemented with init containers.

Ordered scale-down and ordered deletion (from N-1 to 0).

As the application scenarios above show, a StatefulSet is made up of the following parts:

A Headless Service, used to define the network identity (DNS domain).

volumeClaimTemplates, used to create PersistentVolumeClaims (and through them PersistentVolumes).

The StatefulSet definition for the specific application.

The DNS name of each Pod in a StatefulSet has the format statefulSetName-{0..N-1}.serviceName.namespace.svc.cluster.local, where:

serviceName is the name of the Headless Service

0..N-1 is the ordinal of the Pod, from 0 to N-1

statefulSetName is the name of the StatefulSet

namespace is the namespace the Service is in; the Headless Service and the StatefulSet must be in the same namespace

.cluster.local is the Cluster Domain
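
For example, the StatefulSet created later in this article (named web, with the Headless Service nginx-headless in the default namespace) produces Pod DNS names such as web-0.nginx-headless.default.svc.cluster.local.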

Using StatefulSets

StatefulSets are suitable for applications that have one or more of the following requirements:

Stable, unique network identifiers.

Stable, persistent storage.

Ordered, graceful deployment and scaling.

Ordered, graceful deletion and termination.

Ordered, automated rolling updates.

In the above, stable is synonymous with persistence across Pod (re)scheduling. If an application does not require stable identifiers or ordered deployment, deletion, and scaling, it should be deployed with a controller that provides a set of stateless replicas; a Deployment or ReplicaSet may be better suited to such stateless needs.

3. Why do you need StorageClass

In a large Kubernetes cluster there may be thousands of PVCs, which means the operations team has to create all of the corresponding PVs by hand. As projects evolve, new PVCs keep being submitted, so operators have to keep adding new PVs that satisfy them, otherwise new Pods will fail to start because their PVCs cannot be bound to a PV. In addition, requesting a certain amount of storage through a PVC is often not enough to express an application's requirements for the storage device: different applications may need different storage characteristics, such as read/write speed or concurrency. To solve this, Kubernetes introduced a new resource object, StorageClass. With StorageClass definitions, administrators can classify storage into types of resources, such as fast storage and slow storage; users can tell from the StorageClass description what characteristics each class has and request storage that fits their application.

4. StorageClass deployment process

To use a StorageClass we have to install the corresponding automatic provisioner. Since we use NFS as the storage backend here, we need an nfs-client automatic provisioner, also called a Provisioner. This program uses the NFS server we have already configured to create persistent volumes automatically, that is, it creates PVs for us on demand.

To build StorageClass+NFS, there are roughly the following steps:

1. Create an available NFS Server

2. Create a Service Account. This is used to control the permissions of the NFS provisioner running in the k8s cluster

3. Create the StorageClass, which responds to PVCs, calls the NFS provisioner to do the provisioning work, and gets the PV and PVC bound together

4. Create the NFS provisioner. It does two things: it creates mount points (volumes) under the NFS shared directory, and it creates PVs and associates them with those NFS mount points

5. Verify the deployment by checking the StorageClass and creating a test PVC and Pod

1. Create an NFS server

The NFS server setup itself is not explained here; you can refer to the article I referenced at the beginning, and a minimal sketch follows the export listing below.

IP: 192.168.30.114
 
exportfs
/data/volumes   192.168.30.0/24
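
For reference, a minimal sketch of the NFS server setup on CentOS 7 (package names and export options are assumptions; adjust them to your environment):

yum install -y nfs-utils rpcbind
mkdir -p /data/volumes
echo "/data/volumes 192.168.30.0/24(rw,sync,no_root_squash)" >> /etc/exports
systemctl enable --now rpcbind nfs-server
exportfs -arv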

2. Configure account and related permissions

vim nfs-rbac.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default        #set the namespace according to your environment; the same applies below
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
    # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

kubectl apply -f nfs-rbac.yaml
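
Optionally, confirm the RBAC objects were created (a quick sketch; the object names are the ones defined above):

kubectl get serviceaccount nfs-client-provisioner
kubectl get clusterrole nfs-client-provisioner-runner
kubectl get clusterrolebinding run-nfs-client-provisioner
kubectl get role leader-locking-nfs-client-provisioner
kubectl get rolebinding leader-locking-nfs-client-provisioner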

3. Create a StorageClass for the NFS resources

vim nfs-storageClass.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: test-nfs-storage #this name must match the PROVISIONER_NAME environment variable in the provisioner deployment
parameters:
#  archiveOnDelete: "false"
  archiveOnDelete: "true"
reclaimPolicy: Retain

kubectl apply -f nfs-storageClass.yaml

4. Create NFS provisioner

vim nfs-provisioner.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default  #keep consistent with the namespace in the RBAC file
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          #image: quay.io/external_storage/nfs-client-provisioner:latest
          #Note: on k8s 1.20+ the image above does not work well; after a lot of trial and error, an issue on the official GitHub recommended the image below, which solved it. I downloaded it and pushed it to my own registry
          #easzlab/nfs-subdir-external-provisioner:v4.0.1 
          image: registry-op.test.cn/nfs-subdir-external-provisioner:v4.0.1
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: test-nfs-storage  #provisioner name; must match the provisioner name in nfs-storageClass.yaml
            - name: NFS_SERVER
              value: 192.168.30.114   #NFS server IP address
            - name: NFS_PATH
              value: "/data/volumes"    #NFS export path
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.30.114  #NFS server IP address
            path: "/data/volumes"     #NFS export path
      imagePullSecrets:
      - name: registry-op.test.cn

kubectl apply -f nfs-provisioner.yaml
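
Before continuing, it is worth checking that the provisioner Pod is running and that its log shows no NFS errors (a sketch):

kubectl get pods -l app=nfs-client-provisioner
kubectl logs -l app=nfs-client-provisioner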

5. Check the status

# kubectl get sc
NAME                  PROVISIONER        RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
managed-nfs-storage   test-nfs-storage   Retain          Immediate           false                  24h

6. Create a test pod to check if the deployment is successful

1. Create PVC

vim test-claim.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  annotations:
    #must match metadata.name in nfs-storageClass.yaml
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage" 
spec:
  storageClassName: "managed-nfs-storage"
  accessModes:
    - ReadWriteMany
    #- ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

kubectl apply -f test-claim.yaml

2. Check the PVC status

Make sure the status is Bound. If it stays Pending, something is wrong and you need to dig into the cause, for example with the checks sketched after the output below.

# kubectl get pvc
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
test-claim   Bound    pvc-6324e17a-0a33-4a64-b0bb-e187f51a8f30   10Gi       RWX            managed-nfs-storage   3d19h
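
If the PVC stays Pending, a sketch of the usual checks (names as used in this article):

kubectl describe pvc test-claim              #the Events section usually explains why binding failed
kubectl logs -l app=nfs-client-provisioner   #NFS mount or permission errors from the provisioner show up here
kubectl get sc managed-nfs-storage -o yaml   #confirm provisioner: matches the PROVISIONER_NAME env variable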

3. Create a test pod

vim test-pod.yaml

kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: busybox:1.24
    command:
      - "/bin/sh"
    args:
      - "-c"
      - "touch /mnt/SUCCESS && exit 0 || exit 1"   #创建一个SUCCESS文件后退出
    volumeMounts:
      - name: nfs-pvc
        mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim  #must match the PVC name

kubectl apply -f test-pod.yaml
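
Since the test Pod only touches a file and exits, it should end up in the Completed state rather than Running (a sketch of the check):

kubectl get pod test-pod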

4. View the PV status

# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS          REASON   AGE
pvc-6324e17a-0a33-4a64-b0bb-e187f51a8f30   10Gi       RWX            Delete           Bound    default/test-claim   managed-nfs-storage            3d19h

5. Inspection results

Log in to 192.168.30.114 and check whether the file just created exists in the NFS directory

# ll /data/volumes/default-test-claim-pvc-6324e17a-0a33-4a64-b0bb-e187f51a8f30/  #directories are named following the ${namespace}-${pvcName}-${pvName} pattern
total 0
-rw-r--r-- 1 root root 0 Dec  3 16:16 SUCCESS  #seeing this file proves the setup works

7. Create a StatefulSet service

1. Create a headless service

vim nginx-statefulset.yaml

apiVersion: v1
kind: Service
metadata:
  name: nginx-headless
spec:
  clusterIP: None   #None means this is a headless Service
  selector:
    app: nginx
  ports:
  - name: web
    port: 80
    protocol: TCP
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  podManagementPolicy: OrderedReady  #Pods are created 0 -> N-1 and deleted N-1 -> 0
  replicas: 3  #three replicas
  revisionHistoryLimit: 10
  serviceName: nginx-headless
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:  #name is omitted here; it is generated automatically
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: registry-op.test.cn/nginx:1.14.9
        ports:
        - containerPort: 80
        volumeMounts:
        - name: web #must match the volumeClaimTemplate name below
          mountPath: /var/www/html
      imagePullSecrets:
      - name: registry-op.test.cn
  volumeClaimTemplates:
  - metadata:
      name: web
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: managed-nfs-storage  #StorageClass name, pointing to the one created earlier
      volumeMode: Filesystem
      resources:
        requests:
          storage: 512M

kubectl apply -f nginx-statefulset.yaml

2. Inspection results

# kubectl get pods -l app=nginx
NAME    READY   STATUS    RESTARTS   AGE
web-0   1/1     Running   0          22h
web-1   1/1     Running   0          22h
web-2   1/1     Running   0          22h

----

# kubectl get pvc
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
 
web-web-0    Bound    pvc-1fa25092-9516-41aa-ac9d-0eabdabda849   512M       RWO            managed-nfs-storage   23h
web-web-1    Bound    pvc-90cf9923-e5d8-4195-bffb-b9e5f14c11ae   512M       RWO            managed-nfs-storage   23h
web-web-2    Bound    pvc-bd4eccfd-8b65-4135-83fe-d57f8a9d9a74   512M       RWO            managed-nfs-storage   23h

--

# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS          REASON   AGE
 
pvc-1fa25092-9516-41aa-ac9d-0eabdabda849   512M       RWO            Retain           Bound    default/web-web-0    managed-nfs-storage            23h
pvc-90cf9923-e5d8-4195-bffb-b9e5f14c11ae   512M       RWO            Retain           Bound    default/web-web-1    managed-nfs-storage            23h
pvc-bd4eccfd-8b65-4135-83fe-d57f8a9d9a74   512M       RWO            Retain           Bound    default/web-web-2    managed-nfs-storage            23h

--

#View on NFS Server

# ll /data/volumes/
total 20
drwxrwxrwx 2 root root 4096 Dec  6 15:01 default-web-web-0-pvc-1fa25092-9516-41aa-ac9d-0eabdabda849
drwxrwxrwx 2 root root 4096 Dec  6 15:01 default-web-web-1-pvc-90cf9923-e5d8-4195-bffb-b9e5f14c11ae
drwxrwxrwx 2 root root 4096 Dec  6 15:02 default-web-web-2-pvc-bd4eccfd-8b65-4135-83fe-d57f8a9d9a74
 

3. Test

#Write an index.html file into each of the three directories, for example:
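
#A sketch of those writes, run on the NFS server; the directory names come from the listing above and the contents match the curl responses below

echo "web-0" > /data/volumes/default-web-web-0-pvc-1fa25092-9516-41aa-ac9d-0eabdabda849/index.html
echo "web-1" > /data/volumes/default-web-web-1-pvc-90cf9923-e5d8-4195-bffb-b9e5f14c11ae/index.html
echo "web-2" > /data/volumes/default-web-web-2-pvc-bd4eccfd-8b65-4135-83fe-d57f8a9d9a74/index.html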

#Start a curl pod to test the StatefulSet service we just created

#create curl

kubectl run curl --image=radial/busyboxplus:curl -n default -i --tty

#Enter the curl container

kubectl exec -it curl -n default -- /bin/sh

---

[ root@curl:/ ]$ nslookup nginx-headless
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
 
Name:      nginx-headless
Address 1: 10.244.3.33 web-0.nginx-headless.default.svc.cluster.local
Address 2: 10.244.3.35 web-2.nginx-headless.default.svc.cluster.local
Address 3: 10.244.3.34 web-1.nginx-headless.default.svc.cluster.local

[ root@curl:/ ]$ curl -v http://10.244.3.33/index.html
> GET /index.html HTTP/1.1
> User-Agent: curl/7.35.0
> Host: 10.244.3.33
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: nginx
< Date: Tue, 07 Dec 2021 06:34:08 GMT
< Content-Type: text/html; charset=utf-8
< Content-Length: 6
< Last-Modified: Mon, 06 Dec 2021 07:01:48 GMT
< Connection: keep-alive
< ETag: "61adb55c-6"
< Accept-Ranges: bytes
<
web-0  ##note: this is the content of the file I wrote earlier
 
[ root@curl:/ ]$ curl -v http://10.244.3.34/index.html
> GET /index.html HTTP/1.1
> User-Agent: curl/7.35.0
> Host: 10.244.3.34
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: nginx
< Date: Tue, 07 Dec 2021 06:35:13 GMT
< Content-Type: text/html; charset=utf-8
< Content-Length: 6
< Last-Modified: Mon, 06 Dec 2021 07:01:59 GMT
< Connection: keep-alive
< ETag: "61adb567-6"
< Accept-Ranges: bytes
<
web-1

[ root@curl:/ ]$ curl -v http://10.244.3.35/index.html
> GET /index.html HTTP/1.1
> User-Agent: curl/7.35.0
> Host: 10.244.3.35
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: nginx
< Date: Tue, 07 Dec 2021 06:35:29 GMT
< Content-Type: text/html; charset=utf-8
< Content-Length: 6
< Last-Modified: Mon, 06 Dec 2021 07:02:07 GMT
< Connection: keep-alive
< ETag: "61adb56f-6"
< Accept-Ranges: bytes
<
web-2
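
Each Pod can also be reached by its stable DNS name instead of its IP (a sketch; the names follow the nslookup output above):

curl http://web-0.nginx-headless.default.svc.cluster.local/index.html
curl http://web-1.nginx-headless/index.html   #the short form resolves from Pods in the same namespace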


#Afterwards you can test deleting a Pod, letting it be recreated, and scaling the StatefulSet up and down while watching how the PVCs and PVs change. You will see that the Pod's IP changes but its name stays the same, it can still be accessed normally, and the original content is still there. For example:
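
A sketch of such a test (watch the PVC/PV lists while running it):

kubectl delete pod web-0                      #the Pod is recreated with the same name and re-attaches the same PVC
kubectl scale statefulset web --replicas=5    #web-3 and web-4 are created in order, each with its own PVC/PV
kubectl scale statefulset web --replicas=3    #web-4, then web-3, are removed; their PVCs and PVs are kept
kubectl get pvc,pv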

8. The impact of the StorageClass reclaim policy on data

1. The first configuration

   archiveOnDelete: "false"  
   reclaimPolicy: Delete   #not set here by default; the default value is Delete

#Test Results

1. After a Pod is deleted and recreated, the data still exists; the old Pod name and its data are kept and reused by the new Pod
2. After the StorageClass is deleted and recreated, the data still exists; the old Pod name and its data are kept and reused by the new Pod
3. After the PVC is deleted, the PV is deleted and the corresponding data on the NFS server is deleted

2. The second configuration

   archiveOnDelete: "false"  
   reclaimPolicy: Retain  

#Test Results

1. After a Pod is deleted and recreated, the data still exists; the old Pod name and its data are kept and reused by the new Pod
2. After the StorageClass is deleted and recreated, the data still exists; the old Pod name and its data are kept and reused by the new Pod
3. After the PVC is deleted, the PV is not deleted, its status changes from Bound to Released, and the corresponding data on the NFS server is kept
4. After the StorageClass is recreated, a new PVC binds to a new PV; the old data can be copied into the new PV

3. The third configuration

   archiveOnDelete: "ture"  
   reclaimPolicy: Retain  

#Test Results

1. After a Pod is deleted and recreated, the data still exists; the old Pod name and its data are kept and reused by the new Pod
2. After the StorageClass is deleted and recreated, the data still exists; the old Pod name and its data are kept and reused by the new Pod
3. After the PVC is deleted, the PV is not deleted, its status changes from Bound to Released, and the corresponding data on the NFS server is kept
4. After the StorageClass is recreated, a new PVC binds to a new PV; the old data can be copied into the new PV

4. The fourth configuration

  archiveOnDelete: "ture"  
  reclaimPolicy: Delete 

#Test Results

1. After a Pod is deleted and recreated, the data still exists; the old Pod name and its data are kept and reused by the new Pod
2. After the StorageClass is deleted and recreated, the data still exists; the old Pod name and its data are kept and reused by the new Pod
3. After the PVC is deleted, the PV is not deleted, its status changes from Bound to Released, and the corresponding data on the NFS server is kept
4. After the StorageClass is recreated, a new PVC binds to a new PV; the old data can be copied into the new PV

Summary: Except for the first configuration, the other three configurations still retain the data after the PV/PVC is deleted

9. Frequently Asked Questions

1. How to set the default StorageClass

There are two methods: one is to use kubectl patch, the other is to set it directly in the YAML file.

#kubectl patch

#To set a StorageClass as the default, the value is "true"
# kubectl patch storageclass managed-nfs-storage -p  '{ "metadata" : { "annotations" :{"storageclass.kubernetes.io/is-default-class": "true"}}}'
 
# kubectl get sc  #a "(default)" marker now appears after the name
NAME                            PROVISIONER        RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
managed-nfs-storage (default)   test-nfs-storage   Retain          Immediate           false                  28h
 
#To remove the default, set the value to "false"
 
# kubectl patch storageclass managed-nfs-storage -p  '{ "metadata" : { "annotations" :{"storageclass.kubernetes.io/is-default-class": "false"}}}'
 
# kubectl get sc
NAME                  PROVISIONER        RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
managed-nfs-storage   test-nfs-storage   Retain          Immediate           false                  28h

#yaml file

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
  annotations:
    "storageclass.kubernetes.io/is-default-class": "true"   #添加此注释,这个变为default storageclass
provisioner: test-nfs-storage #这里的名称要和provisioner配置文件中的环境变量PROVISIONER_NAME保持一致
parameters:
#  archiveOnDelete: "false"
  archiveOnDelete: "true"
reclaimPolicy: Retain

2. How to use the default StorageClass

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-www
#  annotations:
#    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"  ##no longer needs to be specified
spec:
#  storageClassName: "managed-nfs-storage"  ##no longer needs to be specified
  accessModes:
    - ReadWriteMany
    #- ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
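
When storageClassName is omitted, the DefaultStorageClass admission controller fills it in on the PVC, so you can confirm which class was used with (a sketch):

kubectl get pvc test-www -o jsonpath='{.spec.storageClassName}{"\n"}'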


Reference link:

StorageClass + NFS persistent volumes on k8s 1.22.3 (Zhihu): https://zhuanlan.zhihu.com/p/447663656

Origin: blog.csdn.net/a772304419/article/details/126659402