Persistent storage PV and PVC

1. PV and PVC

PV:

A PersistentVolume (PV) is a piece of storage in the cluster that an administrator can provision in advance.
Common back ends such as NFS and Ceph can be configured. Compared with plain volumes, a PV provides more functionality, such as lifecycle management and size limits.
There are two ways to provision PV volumes: static or dynamic.

Static:
The cluster administrator creates a number of PVs in advance; the definition of each PV can reflect the characteristics of the underlying storage resource.

Dynamic:
The cluster administrator does not need to create PVs in advance. Instead, a StorageClass describes the back-end storage resource and marks the type and characteristics of the storage. The user requests a storage type by creating a PVC, and the system automatically creates a PV and binds it to the PVC. If the PVC declares an empty class name (""), the PVC does not use dynamic provisioning.
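
For reference, a minimal StorageClass for dynamic provisioning might look like the sketch below; the provisioner value depends on which CSI driver or external provisioner you actually deploy, and example.com/nfs is only a placeholder:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-dynamic          # placeholder name
provisioner: example.com/nfs # placeholder; set to your actual provisioner
reclaimPolicy: Delete
volumeBindingMode: Immediate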

PVC:

A PersistentVolumeClaim (PVC) expresses a user's request for storage. Just as a Pod consumes Node resources, a PVC consumes PV resources. A PVC can request a specific storage size and access mode.

The PV is created by the administrator to connect to the back-end storage; users (or administrators) then consume PV resources by creating PVCs.

2. PV resource recovery

When users no longer need their storage volumes, they can delete the PVC objects via the API, allowing the resources to be reclaimed for reuse. The reclaim policy of a PV tells the cluster what to do with the volume after it is released from its claim. Currently, volumes can be Retained, Recycled, or Deleted.

Retain: Retain resources

The Retain policy allows administrators to reclaim the resource manually. After the PVC is deleted, the PV still exists and the corresponding volume is considered "released"; the administrator can then reclaim it manually.

Delete: delete data

Requires volume plugin support. If supported, deleting the PVC also automatically deletes the PV and its associated back-end storage resource; dynamically provisioned volumes default to Delete.

Recycle: Recycle (deprecated)

The Recycle policy has been deprecated in favor of dynamic provisioning. If supported by the underlying volume plugin, Recycle performs a basic scrub (rm -rf /thevolume/*) on the volume before making it available for a new claim.
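
The reclaim policy of an existing PV can also be changed after creation with a standard kubectl patch; for example, switching the pv-nfs volume created later in this article to Retain:

kubectl patch pv pv-nfs -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'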

3. Access Mode

ReadWriteOnce:

The volume can be mounted read-write by a single node. ReadWriteOnce still allows multiple Pods running on that same node to access the volume. Abbreviated RWO.

ReadOnlyMany:

The volume can be mounted read-only by many nodes. Abbreviated ROX.

ReadWriteMany:

The volume can be mounted read-write by many nodes. Abbreviated RWX.

ReadWriteOncePod:

The volume can be mounted read-write by a single Pod. Use ReadWriteOncePod if you need to guarantee that only one Pod in the entire cluster can read or write the PVC. This mode is only supported for CSI volumes and requires Kubernetes 1.22+. Abbreviated RWOP.

Note: not every storage back end supports every access mode; check the official documentation for what your storage supports.
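
As a sketch, requesting this mode looks the same as any other access mode in a PVC spec (the claim name here is only an example, and your CSI driver and cluster version must support it):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: single-pod-claim   # example name
spec:
  accessModes:
    - ReadWriteOncePod
  resources:
    requests:
      storage: 1Gi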

4. Storage classification

File storage (Filesystem):

Some data needs to be shared by multiple nodes, such as user avatars and user-uploaded files. Implementations include NFS, NAS, FTP, etc.; NFS and FTP are not recommended.

Block storage (Block):

Some data can only be used by one node at a time, or a raw disk is mounted as a whole, for example for databases or Redis. Implementations include Ceph, GlusterFS, public cloud disks, etc.

Object storage:

A storage method accessed directly from program code, common for stateless cloud-native applications. Implementations are generally storage that conforms to the S3 standard, such as AWS S3 or MinIO.
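
To illustrate, object storage is accessed over HTTP from application code or a client tool rather than mounted by the kubelet. A hedged sketch with the MinIO client (assuming mc is installed and a MinIO service is reachable at the placeholder address 192.168.10.6:9000 with placeholder credentials):

mc alias set myminio http://192.168.10.6:9000 ACCESS_KEY SECRET_KEY
mc mb myminio/test-bucket
mc cp 111.txt myminio/test-bucket/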

5. Create PV volume

Two examples are listed below; for other types, refer to the official documentation, as my resources are limited.

5.1. Example: Create NAS/NFS type PV

NFS is not recommended for production; NAS can be used instead.

Prepare an NFS server

Step 1: Prepare an NFS server. The command below installs more packages than strictly necessary; installing just the NFS server package is also fine.

[root@localhost ~]# yum -y install nfs* rpcbind

Step 2: Install the NFS client on all nodes, otherwise NFS volumes cannot be mounted. Every node that may need to mount NFS must have it installed.

[root@k8s-master01 ~]# yum -y install nfs-utils

Step 3: Create a shared directory on the server

[root@localhost ~]# mkdir -p /data/k8s

Step 4: Configure the shared directory on the server. In /etc/exports, rw allows read-write access, sync forces synchronous writes to disk, no_subtree_check disables subtree checking, and no_root_squash lets root on the client act as root on the share.

[root@localhost ~]# cat /etc/exports
/data/k8s/ *(rw,sync,no_subtree_check,no_root_squash)
[root@localhost ~]# exportfs -r
[root@localhost ~]# systemctl restart nfs rpcbind
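
Before mounting from a client, you can verify that the export is visible (showmount ships with nfs-utils):

showmount -e 192.168.10.6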

Step 5: Mount the shared directory from another machine to test that it works.

[root@k8s-master01 hgfs]# mount -t nfs 192.168.10.6:/data/k8s /mnt
[root@k8s-master01 mnt]# mkdir hah
[root@k8s-master01 mnt]# ls
111.txt  hah
# check on the NFS server side
[root@localhost k8s]# ls
111.txt  hah
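
After verifying, unmount the test mount (run this from outside /mnt, or umount will report that the target is busy):

umount /mnt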

Create a PV

Step 1: Write a PV yaml file

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs
spec:
  capacity:
    storage: 2Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfs-slow
  nfs:
    path: /data/k8s
    server: 192.168.10.6

Explanation of the yaml file above:

  • The usual apiVersion/kind/metadata header needs no further explanation
  • capacity: how much capacity this PV provides; the back-end storage must actually support it
  • volumeMode: the mode of the volume, described in section 4
  • accessModes: the access mode of the PV, described in section 3
  • storageClassName: the class of the PV; a PVC binds to a PV whose class name matches
  • persistentVolumeReclaimPolicy: the reclaim policy, described in section 2
  • nfs: the NFS configuration, containing the shared directory and the server IP

Step 2: Execute the yaml file to create the PV. Note that whether the PV is created successfully has nothing to do with whether the back-end storage is reachable.

[root@k8s-master01 ~]# kubectl create -f pv-nfs.yaml 
persistentvolume/pv-nfs created
[root@k8s-master01 ~]# kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv-nfs   2Gi        RWO            Recycle          Available           nfs-slow                54s

Step 3: The possible PV states (STATUS):

  • Available: idle, not yet bound to any PVC
  • Bound: already bound to a PVC
  • Released: the PVC has been deleted, but the resource has not yet been reclaimed for reuse
  • Failed: automatic reclamation failed
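
To check the current status and recent events of a PV, kubectl describe is the quickest way (using the PV created above):

kubectl describe pv pv-nfs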

5.2. Example: Create a PV of type hostPath

When you do not have reliable shared storage but the data must not be lost, you can mount a path on the host; the data survives Pod restarts. Note that hostPath data lives on a single node, so the Pod must be scheduled onto that node to see the same data.

Step 1: Create a pv-host.yaml file

apiVersion: v1
kind: PersistentVolume
metadata:
  name: host-pv-volume
  labels:
    type: local
spec:
  storageClassName: hostpath
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"  # 宿主机路径

Step 2: Execute the yaml file to create PV

[root@k8s-master01 ~]# kubectl create -f pv-host.yaml 
persistentvolume/host-pv-volume created
[root@k8s-master01 ~]# kubectl get pv
NAME             CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
host-pv-volume   1Gi        RWO            Retain           Available           hostpath                9s
pv-nfs           2Gi        RWO            Recycle          Available           nfs-slow                30m

6. PVC

6.1. The relationship between Pod, PVC and PV

Let's walk through how a PV and a PVC bind, and how a Pod mounts the PVC:

  • First create a PV named pv-nfs with storageClassName nfs-slow (this class name is not unique; other PVs can also use it).
  • Then create a PVC named test-pv-claim with storageClassName nfs-slow. The PVC-to-PV binding is judged by this class name, so per the previous point, the PVC can bind to any PV whose class is nfs-slow. Note that the size requested by the PVC must be less than or equal to the PV's capacity.
  • Finally, the Pod declares a volume named test-pv-storage whose persistentVolumeClaim.claimName is the PVC's name, test-pv-claim; the container in the Pod then mounts that volume by the volume's name, test-pv-storage.

6.2. Create a PVC to mount to the Pod

Note: the PVC must be in the same namespace as the Pod, while the PV is cluster-scoped and has no namespace. Everything here is in the default namespace, so no namespace is specified; pay attention to this.
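
You can confirm this from the API itself: in the output of the command below, the NAMESPACED column is true for persistentvolumeclaims and false for persistentvolumes.

kubectl api-resources | grep -i persistentvolume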

Step 1: Write a yaml file for PVC

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 2Gi  # must be less than or equal to the PV's capacity
  storageClassName: nfs-slow

Step 2: Execute the yaml file to create the PVC. From the status below, binding succeeded: the PVC is bound to pv-nfs with storage class nfs-slow, exactly as specified above.

[root@k8s-master01 ~]# kubectl create -f pvc-nfs.yaml 
persistentvolumeclaim/test-pv-claim created
[root@k8s-master01 ~]# kubectl get pvc
NAME            STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-pv-claim   Bound    pv-nfs   2Gi        RWO            nfs-slow       20s
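
If a PVC stays Pending instead of Bound, kubectl describe usually explains why (for example, no PV with a matching storageClassName, access mode, or sufficient capacity):

kubectl describe pvc test-pv-claim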

Step 3: Create a Pod to mount the PVC

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: dp-nginx
  name: dp-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dp-nginx
  strategy: {}
  template:
    metadata:
      labels:
        app: dp-nginx
    spec:
      volumes:
      - name: test-pv-storage
        persistentVolumeClaim:
          claimName: test-pv-claim
      containers:
      - image: nginx
        name: nginx
        volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: test-pv-storage

Step 4: Apply the yaml (kubectl replace is used here because the dp-nginx Deployment already existed; use kubectl create or kubectl apply for a new Deployment), then enter the Pod to check the mount.

[root@k8s-master01 ~]# kubectl replace -f dp-nginx.yaml 
deployment.apps/dp-nginx replaced
[root@k8s-master01 ~]# kubectl get pod
NAME                       READY   STATUS    RESTARTS   AGE
dp-nginx-fcd88d6f8-prxcd   1/1     Running   0          22s
[root@k8s-master01 ~]# kubectl exec -ti dp-nginx-fcd88d6f8-prxcd -- bash
root@dp-nginx-fcd88d6f8-prxcd:/# df -Th
Filesystem              Type     Size  Used Avail Use% Mounted on
overlay                 overlay   17G  5.2G   12G  31% /
tmpfs                   tmpfs     64M     0   64M   0% /dev
tmpfs                   tmpfs    2.0G     0  2.0G   0% /sys/fs/cgroup
shm                     tmpfs     64M     0   64M   0% /dev/shm
/dev/mapper/centos-root xfs       17G  5.2G   12G  31% /etc/hosts
192.168.10.6:/data/k8s  nfs4      17G  2.4G   15G  14% /usr/share/nginx/html
tmpfs                   tmpfs    3.8G   12K  3.8G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs                   tmpfs    2.0G     0  2.0G   0% /proc/acpi
tmpfs                   tmpfs    2.0G     0  2.0G   0% /proc/scsi
tmpfs                   tmpfs    2.0G     0  2.0G   0% /sys/firmware
# the shared directory on 192.168.10.6 is mounted at /usr/share/nginx/html

Step 5: Test: check that the files in the NFS server's shared directory also appear inside the Pod's container.

# contents of the shared directory on the NFS server
[root@localhost k8s]# ls
111.txt  hah
# check inside the Pod's nginx container
root@dp-nginx-fcd88d6f8-prxcd:/# cd /usr/share/nginx/html/
root@dp-nginx-fcd88d6f8-prxcd:/usr/share/nginx/html# ls
111.txt  hah
# the contents match, so the mount works
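
The reverse direction also works: a file written inside the container lands on the NFS server. A quick check, reusing the Pod name from above (afterwards, test.html should appear in /data/k8s on the server):

kubectl exec dp-nginx-fcd88d6f8-prxcd -- bash -c 'echo hello > /usr/share/nginx/html/test.html'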

Origin blog.csdn.net/qq_42527269/article/details/123348981