PV and PVC in Kubernetes

Table of contents

1. Introduction to PV and PVC

2. Relationship between PV and PVC

3. Create a static PV

   1. Configure NFS storage

   2. Define PV

   3. Define PVC

   4. Test access

4. Build StorageClass + nfs-client-provisioner for dynamic NFS PV creation

   1. Configure the NFS service

   2. Create the Service Account

   3. Use a Deployment to create the NFS Provisioner

      3.1 Workaround for the selfLink issue on Kubernetes 1.20+

      3.2 Create the NFS Provisioner

   4. Create StorageClass

   5. Create PVC and Pod tests


1. Introduction to PV and PVC

PV is short for Persistent Volume, a persistent storage volume. Kubernetes creates durable storage resource objects by logically carving up the space of a designated storage device.

PVC is short for Persistent Volume Claim, a request for persistent storage. A PVC requests and binds to a PV resource object, and it is also a storage volume type that a Pod can mount and use.

A PV is a resource in the cluster; a PVC is a request for that resource, and also serves as a check that the resource satisfies the claim (access mode, capacity, and so on).
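
On a running cluster you can see both sides of this relationship directly; the CLAIM column of a PV shows the PVC it is bound to:

#PVs are cluster-scoped, PVCs are namespaced
kubectl get pv
kubectl get pvc -A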

2. Relationship between PV and PVC

The interaction between a PV and a PVC follows this life cycle:

Provisioning ---> Binding ---> Using ---> Releasing ---> Reclaiming

  • Provisioning: creating the PV, either directly (the static way) or dynamically through a StorageClass
  • Binding: assigning the PV to a PVC
  • Using: the Pod uses the volume through the PVC; the StorageProtection admission controller (PVCProtection in 1.9 and earlier) can prevent an in-use PVC from being deleted
  • Releasing: the Pod releases the volume and the PVC is deleted
  • Reclaiming: the PV is reclaimed; it can be kept for the next use or deleted directly from the underlying storage

Across these five stages, a PV can be in one of four states:

  • Available: the PV is available and has not been bound by any PVC.
  • Bound: the PV has been bound to a PVC.
  • Released: the PVC has been deleted, but the resource has not yet been reclaimed by the cluster.
  • Failed: automatic reclamation of the PV failed.

The life of a PV from creation to destruction:

  1. After a PV is created, its status becomes Available, waiting to be bound by a PVC.
  2. Once bound by a PVC, the PV's status changes to Bound, and it can be used by Pods that reference the corresponding PVC.
  3. When the bound PVC is deleted, the PV is released and its status changes to Released.
  4. A Released PV is reclaimed according to its configured reclaim policy. There are three reclaim policies: Retain, Delete, and Recycle.
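
A quick way to watch a PV move through these states (pv01 below is just the name used in the later example):

#print only the phase of a single PV
kubectl get pv pv01 -o jsonpath='{.status.phase}'
#or watch the STATUS column change as PVCs bind and release
kubectl get pv -w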

PV reclaim policies:

  • Retain: when the user deletes the bound PVC, the PV is marked as Released (the PVC is unbound from the PV, but the reclaim policy has not yet been executed) and the previous data is still kept on the PV; however, the PV cannot be claimed again. You need to handle the data manually and then delete the PV.
  • Delete: deletes the backend storage resource behind the PV. For dynamically provisioned PVs the default reclaim policy is Delete, meaning that when the user deletes the corresponding PVC, the dynamically provisioned volume is deleted automatically. (Only supported by AWS EBS, GCE PD, Azure Disk, and Cinder.)
  • Recycle: when the user deletes the PVC, the data on the volume is deleted, but the volume itself is not. (Deprecated; only supported by NFS and HostPath.)
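
The reclaim policy of an existing PV can also be changed after the fact; for example, the following standard kubectl patch (pv01 is a placeholder name) switches a volume to Retain so its data survives PVC deletion:

kubectl patch pv pv01 -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'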

3. Create a static PV

Steps to create and use a static PV:

  1. Prepare the storage device and shared directories.
  2. Manually create the PV resource, configuring the storage volume type, access modes (RWO, RWX, ROX, RWOP), storage capacity, reclaim policy (Retain, Recycle, Delete), and so on.
  3. Create a PVC resource, configuring the requested access mode (a hard requirement: it must be one of the modes the PV supports) and storage size (by default the closest PV whose capacity is greater than or equal to the requested size is chosen) to bind a PV.
  4. Create Pod and Pod-controller resources that mount the PVC storage volume: set the volume type to persistentVolumeClaim and define the volume mount path in the container configuration.

1. Configure NFS storage

mkdir -p /data/v{1..5}

vim /etc/exports
/data/v1 192.168.88.0/24(rw,no_root_squash,sync)
/data/v2 192.168.88.0/24(rw,no_root_squash,sync)
/data/v3 192.168.88.0/24(rw,no_root_squash,sync)
/data/v4 192.168.88.0/24(rw,no_root_squash,sync)
/data/v5 192.168.88.0/24(rw,no_root_squash,sync)

exportfs -arv

showmount -e
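
Before defining any PVs, it is worth confirming that a client machine can actually reach the share; a minimal check, assuming the NFS client utilities are installed on the node:

showmount -e 192.168.88.60
mount -t nfs 192.168.88.60:/data/v1 /mnt    #test mount, then unmount
umount /mnt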

2. Define PV

vim pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:        #PV is a cluster-level resource (usable across namespaces), so no namespace is set in its metadata
  name: pv01
  labels:
    name: pv01
spec:
  nfs:                        #storage type
    path: /data/v1            #path of the exported volume
    server: 192.168.88.60     #NFS server name or address
  accessModes:                #access modes
  - ReadWriteOnce
  - ReadWriteMany
  capacity:                   #storage capability, typically used to set the volume size
    storage: 1Gi              #requested size
  storageClassName: slow      #custom storage class name, used to bind PVCs and PVs of the same class
  persistentVolumeReclaimPolicy: Retain  #reclaim policy (Retain/Delete/Recycle)
--- 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv02
  labels:
    name: pv02
spec:
  nfs:
    path: /data/v2
    server: 192.168.88.60
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 2Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv03
  labels:
    name: pv03
spec:
  nfs:
    path: /data/v3
    server: 192.168.88.60
  accessModes:
  - ReadWriteOnce
  - ReadWriteMany
  capacity:
    storage: 2Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv04
  labels:
    name: pv04
spec:
  nfs:
    path: /data/v4
    server: 192.168.88.60
  accessModes:
  - ReadWriteOnce
  - ReadWriteMany
  capacity:
    storage: 4Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv05
  labels:
    name: pv05
spec:
  nfs:
    path: /data/v5
    server: 192.168.88.60
  accessModes:
  - ReadWriteOnce
  - ReadWriteMany
  capacity:
    storage: 5Gi

kubectl apply -f pv.yaml
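
If the manifest applied cleanly, all five PVs should be listed in the Available state:

kubectl get pv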

3. Define PVC

The PVC's access mode is defined as ReadWriteMany (multi-node read-write); this mode must be among the access modes defined on a PV. The PVC requests 2Gi, so it automatically matches the ReadWriteMany PV whose capacity is 2Gi. When the match succeeds, the PVC's status becomes Bound.

vim pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim   #define the resource type as PVC
metadata:
  name: mypvc-a
  namespace: default
spec:
  accessModes:                #access modes requested by the PVC
  - ReadWriteMany
  resources:
    requests:
      storage: 2Gi            #requested PV size
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-vol-pvc
  namespace: default
spec:
  containers:
  - name: myapp
    image: nginx:1.14
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
  volumes:
  - name: html
    persistentVolumeClaim:    #details of the PVC to mount
      claimName: mypvc-a      #name of the PVC to mount

kubectl apply -f pvc.yaml
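
A quick check should show the claim bound to pv03, the smallest PV that satisfies both the 2Gi request and ReadWriteMany:

kubectl get pvc
kubectl get pv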

4. Test access

Create an index.html on the storage server and write some data into it; accessing the Pod should then return the corresponding page.
cd /data/v3/
echo "welcome to use pv3" > index.html

kubectl get pods -o wide

curl 10.244.1.37
welcome to use pv3

4. Build StorageClass + nfs-client-provisioner for dynamic NFS PV creation

The dynamic PV provisioning built into Kubernetes does not cover NFS, so an external storage volume plugin is needed to provision the PVs. For details, see: https://kubernetes.io/zh/docs/concepts/storage/storage-classes/

The volume plugin is called a Provisioner (storage provisioner); for NFS it is nfs-client. This external volume plugin automatically creates PVs backed by the configured NFS server.
Provisioner: specifies the type of volume plugin, covering both built-in plugins (such as kubernetes.io/aws-ebs) and external plugins (such as ceph.com/cephfs provided by external-storage).

Steps to create and use a dynamic PV:

  1. Prepare the storage device and shared directory.
  2. For an external storage volume plugin, create a serviceaccount (the account the Pod runs as) and set up RBAC authorization (create a role granting the relevant resource permissions, then bind the account to the role), so that the serviceaccount can operate on resources such as PV, PVC, and StorageClass.
  3. Create the Pod for the external provisioner plugin, run it as the serviceaccount, and set the relevant environment variables.
  4. Create a StorageClass (SC) resource that references the storage volume plugin name (PROVISIONER_NAME).
  5. Create a PVC resource, setting the StorageClass name, access mode, and storage size in its spec. Creating the PVC automatically creates the matching PV resource.
  6. Create a Pod resource that mounts the PVC storage volume: set the volume type to persistentVolumeClaim and define the volume mount path in the container configuration.

1. Configure the NFS service

mkdir -p /data/volumes
chmod 777 /data/volumes

vim /etc/exports
/data/volumes 192.168.88.0/24(rw,no_root_squash,sync)

exportfs -arv

2. Create the Service Account

Service Account: manages the permissions the NFS Provisioner needs to run in the k8s cluster; the RBAC rules below grant nfs-client access to PVs, PVCs, StorageClasses, and related resources.

vim nfs-client-rbac.yaml
#Create the Service Account used to manage the permissions the NFS Provisioner needs to run in the k8s cluster
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
---
#Create the cluster role
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-client-provisioner-clusterrole
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumesclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
 verbs: ["list", "watch", "create", "update", "patch"]
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["create", "delete", "get", "list", "watch", "patch", "update"]
---
#Bind the cluster role to the Service Account
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: nfs-client-provisioner-clusterrolebinding
subjects:
- kind: ServiceAccount
  name: nfs-client-provisioner
  namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-clusterrole
  apiGroup: rbac.authorization.k8s.io

  verbs: ["list", "watch", "create", "update", "patch"]
- apiGroups: [""]
  resources: ["endpoints"]

kubectl apply -f nfs-client-rbac.yaml
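
A quick sanity check that all three RBAC objects exist (names as defined in the manifest above):

kubectl get serviceaccount nfs-client-provisioner
kubectl get clusterrole nfs-client-provisioner-clusterrole
kubectl get clusterrolebinding nfs-client-provisioner-clusterrolebinding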

3. Use a Deployment to create the NFS Provisioner

The NFS Provisioner (nfs-client) does two things: it creates mount points (volumes) under the NFS shared directory, and it associates each PV with such an NFS mount point.

3.1 Workaround for the selfLink issue on Kubernetes 1.20+

Kubernetes 1.20 disables the selfLink field by default, so on 1.20+ the nfs-client-provisioner reports an error when dynamically creating PVs. The workaround is to re-enable selfLink via a feature gate, as shown below. (Note: the RemoveSelfLink feature gate can no longer be turned off from Kubernetes 1.24 on; on those versions use a maintained provisioner image such as nfs-subdir-external-provisioner instead.)

vim /etc/kubernetes/manifests/kube-apiserver.yaml
spec:
  containers:
  - command:
    - kube-apiserver
    - --feature-gates=RemoveSelfLink=false   #add this line
    - --advertise-address=192.168.88.70

kubectl apply -f /etc/kubernetes/manifests/kube-apiserver.yaml
kubectl delete pods kube-apiserver -n kube-system    #remove the spurious Pod the apply creates; the kubelet restarts the real static Pod on its own
kubectl get pods -n kube-system | grep apiserver

3.2 Create the NFS Provisioner

vim nfs-client-provisioner.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
      - name: nfs-client-provisioner
        image: quay.io/external_storage/nfs-client-provisioner:latest
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: nfs-storage       #provisioner name; it must match the provisioner name in the StorageClass resource
        - name: NFS_SERVER
          value: 192.168.88.60     #NFS server to bind
        - name: NFS_PATH
          value: /data/volumes     #directory on the NFS server to bind
      volumes:                     #declare the nfs volume
      - name: nfs-client-root
        nfs:
          server: 192.168.88.60
          path: /data/volumes
    
kubectl apply -f nfs-client-provisioner.yaml     
kubectl get pod                              
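
If the Pod does not reach Running, the provisioner's log is the first place to look (NFS mount failures and missing RBAC permissions both show up there):

kubectl logs deployment/nfs-client-provisioner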

4. Create StorageClass

StorageClass: responds to PVC requests by invoking the NFS provisioner to do the provisioning work automatically, and associates the resulting PV with the PVC.

vim nfs-client-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client-storageclass
provisioner: nfs-storage      #this name must match the PROVISIONER_NAME environment variable in the provisioner's configuration
parameters:
  archiveOnDelete: "false"     #"false" means the data directory is not archived when the PVC is deleted, i.e. the data is removed; "true" automatically archives the data directory, with archive names starting with "archived-"


kubectl apply -f nfs-client-storageclass.yaml
kubectl get storageclass
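
Optionally, you can mark this class as the cluster default, so PVCs that omit storageClassName also go through it (this uses the standard is-default-class annotation):

kubectl patch storageclass nfs-client-storageclass -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'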

5. Create PVC and Pod tests

vim test-pvc-pod.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-nfs-pvc
  #annotations: volume.beta.kubernetes.io/storage-class: "nfs-client-storageclass"     #an alternative way to reference the StorageClass, via an annotation
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-client-storageclass    #reference the StorageClass object
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: test-storageclass-pod
spec:
  containers:
  - name: busybox
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command:
    - "/bin/sh"
    - "-c"
    args:
    - "sleep 3600"
    volumeMounts:
    - name: nfs-pvc
      mountPath: /mnt
  restartPolicy: Never
  volumes:
  - name: nfs-pvc
    persistentVolumeClaim:
      claimName: test-nfs-pvc      #must match the PVC name

#the PVC automatically obtains space through the StorageClass
kubectl get pvc
#check whether the corresponding directory was created on the NFS server; automatically created PVs appear there as directories named ${namespace}-${pvcName}-${pvName}
ls /data/volumes

#exec into the Pod, write a file under the mount directory /mnt, then check whether the file exists on the NFS server
kubectl exec -it test-storageclass-pod -- sh
/ # cd /mnt/
/mnt # echo 'this is test file' > test.txt

#the file exists on the NFS server, so the verification succeeded
cat /data/volumes/default-test-nfs-pvc-pvc-bff2245e-990d-4119-a846-06f898f95efb
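
To clean up, delete the test resources. Because the StorageClass was created with archiveOnDelete: "false" and dynamically provisioned PVs default to the Delete reclaim policy, removing the PVC also removes the PV and its backing directory on the NFS server:

kubectl delete -f test-pvc-pod.yaml
kubectl get pv           #the dynamically created PV is gone
ls /data/volumes         #the backing directory has been removed as well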

 
