Kubernetes: (15) The "Love-Hate Relationship" of PV and PVC

Table of contents

One: PV and PVC

Two: PV and PVC life cycle

2.1 Provisioning

2.2 Binding

2.3 Using

2.4 Releasing

2.5 Reclaiming

2.6 Recycling

Three: Access modes

3.1 PV access modes (accessModes)

3.2 PV reclaim policy (persistentVolumeReclaimPolicy)

3.3 PV status

Four: Experimental verification

4.1 Install NFS

4.2 Create pv.yaml file

4.3 PVC 

4.4 Experiment

Five: About StatefulSet

5.1 StatefulSet start and stop order

5.2 StatefulSet usage scenarios

One: PV and PVC

Storage comes in many forms and types, and the parameters of each kind of storage often require specialist knowledge. To make storage easy to use and manage inside a cluster, Kubernetes introduces the concepts of PV and PVC, so that cluster administrators can focus on Kubernetes itself without worrying about the details of the back-end storage devices.

PV:  roughly equivalent to a disk partition

PVC: roughly equivalent to a request for disk space

A PersistentVolumeClaim (PVC) is a user's request for storage.

How a PVC is used: define a volume of type PVC in a Pod and specify its size directly in the definition. The PVC must be bound to a matching PV; the PVC requests a PV according to its definition, while the PV itself is carved out of the underlying storage. PV and PVC are storage resources abstracted by Kubernetes.
  • PV (Persistent Volume) is an abstraction of the underlying shared storage
  • PVC (Persistent Volume Claim) is a declaration of a request for persistent storage; in effect, it is a resource request that the user submits to the Kubernetes system

The underlying storage can be of many types, including NFS, Ceph, and CIFS, and Kubernetes abstracts these as PVs. A PV (Persistent Volume) is a piece of storage configured in the cluster. A PVC (Persistent Volume Claim) is a user's request for that storage. We usually define a storage volume in a Pod and specify its properties, such as size and read/write access. A PVC, however, is not real storage space: a binding must be established between the Pod's PVC and a PV before the Pod can use the actual storage.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv2
spec:
  nfs: # storage type; corresponds to the real underlying storage
  capacity:  # storage capacity; currently only the storage size can be set
    storage: 2Gi
  accessModes:  # access modes
  storageClassName: # storage class
  persistentVolumeReclaimPolicy: # reclaim policy
 

With PV and PVC in place, the work can be divided up further:

Storage: maintained by storage engineers
PV:  maintained by the Kubernetes administrator
PVC: maintained by Kubernetes users


Two: PV and PVC life cycle

PV and PVC both follow the life cycle below:

2.1 Provisioning

  • Provisioning is the configuration phase. Generally speaking, a PV can be provided in two ways: statically or dynamically.
  • Static provisioning means the Kubernetes administrator creates a number of PVs in advance; their storage size and other attributes are fixed and they are already associated with real storage devices. PVCs in Pods can then request these PVs as needed.
  • Dynamic provisioning relies on a StorageClass. Kubernetes tries to create a PV for the PVC on demand. The advantage is that it avoids two problems: a PVC being bound to a PV far larger than it needs, or the cluster being full of small PVs while a PVC with a large request cannot be satisfied (see the sketch below).
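
A minimal sketch of dynamic provisioning: the StorageClass below assumes a provisioner is already deployed in the cluster; the provisioner name example.com/nfs and the class name nfs-dynamic are illustrative only:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-dynamic
provisioner: example.com/nfs       # hypothetical provisioner; replace with one deployed in your cluster
reclaimPolicy: Delete
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-dynamic
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: nfs-dynamic    # requesting this class makes the provisioner create a matching PV
  resources:
    requests:
      storage: 5Gi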

2.2 Binding

Binding is the process of matching a PVC to a PV. Once a user creates a PVC with a specific storage request (or one is created for dynamic provisioning), the control plane binds it to a suitable PV.
If no PV satisfies the PVC's request, the PVC stays Pending rather than being bound, and a Pod that references it will not start. A quick way to check this is shown below.
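
A quick way to check the binding from the command line (using the pvc1/dev names from the experiment later in this post):

kubectl get pvc pvc1 -n dev          # STATUS stays Pending until a suitable PV is bound
kubectl describe pvc pvc1 -n dev     # the Events section explains why binding has not happened yet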

2.3 Using

After the PVC is bound to a PV, the Pod uses the storage space.

2.4 Releasing

When the user deletes the PVC, the PV it was bound to is released and enters the Released state. The PV still contains the previous claim's data at this point, so it must be processed according to its reclaim policy before another claim can use it.

2.5 Reclaiming

Reclaiming is the process of handling a released PV according to its reclaim policy (Retain, Recycle, or Delete).

2.6 Recycling

Depending on the configuration, the PV may be scrubbed so that all data on the storage is deleted and the storage resource can be used again.


Three: Access modes

3.1 PV access modes (accessModes)

Mode                   Explanation
ReadWriteOnce (RWO)    Read-write, but can be mounted by only a single node.
ReadOnlyMany  (ROX)    Read-only, can be mounted by multiple nodes.
ReadWriteMany (RWX)    Read-write, can be mounted by multiple nodes at the same time.

Not every type of storage supports all three modes; the shared (RWX) mode in particular is supported by relatively few back ends, NFS being the most common. When a PVC is bound to a PV, the binding is usually decided by two conditions: the storage size and the access mode (a quick way to inspect both is shown below).
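
Since capacity and access mode are the two matching criteria, listing exactly those fields for every PV is a convenient check, for example with kubectl's custom-columns output:

kubectl get pv -o custom-columns=NAME:.metadata.name,CAPACITY:.spec.capacity.storage,MODES:.spec.accessModes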

3.2 PV reclaim policy (persistentVolumeReclaimPolicy)

Strategy   Explanation
Retain     Keep the volume and its data; manual cleanup is required before reuse
Recycle    Delete the data (equivalent to rm -rf /thevolume/*); only supported by NFS and HostPath
Delete     Delete the underlying storage resource, e.g. an AWS EBS volume; only supported by AWS EBS, GCE PD, Azure Disk and Cinder
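
The reclaim policy of an existing PV can also be changed in place, for example switching pv1 to Retain to avoid accidental data loss:

kubectl patch pv pv1 -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'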

3.3 PV status

State       Explanation
Available   Free and not yet bound to any PVC
Bound       Bound to a PVC
Released    The PVC has been deleted, but the reclaim policy has not yet been applied
Failed      Automatic reclamation failed

Four: Experimental verification

4.1 Install NFS
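
The steps below assume an NFS server is already installed on the host. On CentOS 7 the installation typically looks roughly like this (package and unit names may differ on other distributions; nfs-utils is also needed on every worker node so that kubelet can mount the exports):

yum install -y nfs-utils rpcbind
systemctl enable --now rpcbind nfs-server
exportfs -arv        # re-read /etc/exports without restarting the service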

# 1. Create the directories
[root@k8s ~]# mkdir /root/data/{pv1,pv2,pv3} -pv
 
# 2. Export the directories
[root@k8s ~]# vim /etc/exports
/root/data/pv1  192.168.137.0/24(rw,sync,no_root_squash)
/root/data/pv2  192.168.137.0/24(rw,sync,no_root_squash)
/root/data/pv3  192.168.137.0/24(rw,sync,no_root_squash)
 
# 3. Restart the NFS service
[root@k8s ~]#  systemctl restart nfs
 

4.2 Create pv.yaml file

apiVersion: v1
kind: PersistentVolume
metadata:
  name:  pv1
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain  # reclaim policy
  storageClassName: nfs    # storage class name
  nfs:                       # NFS storage
    path: /root/data/pv1      # NFS export path
    server: 192.168.137.20    # NFS server address
 
 
[root@k8s pv]# kubectl apply -f pv.yaml 
persistentvolume/pv1 created
[root@k8s pv]# kubectl get pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv1    10Gi       RWX            Retain           Available           nfs                     4s
 

4.3 PVC 

A PVC is a request for storage resources; it declares the required storage size, access modes, and storage class. A template manifest looks like this:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc
  namespace: dev
spec:
  accessModes: # access modes
  selector: # select PVs by label
  storageClassName: # storage class
  resources: # requested storage
    requests:
      storage: 5Gi

Key configuration parameters of a PVC:

  • Access modes (accessModes): describe the application's access rights to the storage resource
  • Selector (selector): a label selector used to filter the PVs that already exist in the system (see the sketch below)
  • Storage class (storageClassName): a PVC can specify the back-end storage class it requires; only PVs of that class can be selected for it
  • Resource request (resources): describes the amount of storage requested
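
A minimal sketch of label-based selection; the names and the release: stable label below are illustrative, while the NFS path and server reuse the values from this post:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-labeled
  labels:
    release: stable              # label that the claim below selects on
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: /root/data/pv1
    server: 192.168.137.20
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-labeled
  namespace: dev
spec:
  accessModes:
  - ReadWriteMany
  selector:
    matchLabels:
      release: stable            # only PVs carrying this label can be bound
  storageClassName: nfs
  resources:
    requests:
      storage: 5Gi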

4.4 Experiment

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc1
  namespace: dev
spec:
  accessModes: 
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc2
  namespace: dev
spec:
  accessModes: 
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc3
  namespace: dev
spec:
  accessModes: 
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi # if the PVC requests more than any available PV provides, it cannot be bound
 
# 1. Create the PVCs
[root@k8s ~]# kubectl create -f pvc.yaml
persistentvolumeclaim/pvc1 created
persistentvolumeclaim/pvc2 created
persistentvolumeclaim/pvc3 created
 
# 2. Check the PVCs
[root@k8s ~]# kubectl get pvc  -n dev -o wide
NAME   STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE   VOLUMEMODE
pvc1   Bound    pv1      1Gi        RWX                           15s   Filesystem
pvc2   Bound    pv2      2Gi        RWX                           15s   Filesystem
pvc3   Bound    pv3      3Gi        RWX                           15s   Filesystem
 
# 3. Check the PVs
[root@k8s k8s]# kubectl get pv -n dev
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM      STORAGECLASS   REASON   AGE
pv1    1Gi        RWX            Retain           Bound    dev/pvc1                           4m25s
pv2    2Gi        RWX            Retain           Bound    dev/pvc2                           4m25s
pv3    3Gi        RWX            Retain           Bound    dev/pvc3  

Next, create pv2 and pv3, plus a headless Service and a StatefulSet that use the PVs (pods.yaml):

apiVersion: v1
kind: PersistentVolume
metadata:
  name:  pv2
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: /root/data/pv2           # path and server both point to the NFS machine
    server: 192.168.137.20
 
---
 
apiVersion: v1
kind: PersistentVolume
metadata:
  name:  pv3
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: /root/data/pv3
    server: 192.168.137.20
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None    # headless Service
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet   # Pods are created one by one, in order
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx"
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: wangyanglinux/myapp:v1
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteMany" ]
      storageClassName: "nfs"
      resources:
        requests:
          storage: 1Gi
 
Apply the manifest, then watch the PVCs and Pods:
 
[root@k8s pv]# kubectl get pvc
NAME        STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
www-web-0   Bound    pv3      1Gi        RWX            nfs            10s
www-web-1   Bound    pv2      2Gi        RWX            nfs            4s
[root@k8s pv]# kubectl get pods
NAME    READY   STATUS    RESTARTS   AGE
web-0   1/1     Running   0          17s
web-1   1/1     Running   0          11s
web-2   0/1     Pending   0          5s
[root@k8s pv]# kubectl get pods
NAME    READY   STATUS    RESTARTS   AGE
web-0   1/1     Running   0          28s
web-1   1/1     Running   0          22s
web-2   0/1     Pending   0          16s
[root@k8spv]# kubectl get pods
NAME    READY   STATUS    RESTARTS   AGE
web-0   1/1     Running   0          29s
web-1   1/1     Running   0          23s

web-2 stays Pending because no Available PV is left for its claim (www-web-2).

Verify the effect

# check the storage behind pv2
 
[root@k8s pv]# kubectl get pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS   REASON   AGE
pv2    2Gi        RWX            Retain           Bound    default/www-web-1   nfs                     93s
pv3    1Gi        RWX            Retain           Bound    default/www-web-0   nfs                     93s
[root@k8s pv]# kubectl describe pv pv2
Name:            pv2
Labels:          <none>
Annotations:     pv.kubernetes.io/bound-by-controller: yes
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    nfs
Status:          Bound
Claim:           default/www-web-1
Reclaim Policy:  Retain
Access Modes:    RWX
VolumeMode:      Filesystem
Capacity:        2Gi
Node Affinity:   <none>
Message:         
Source:
    Type:      NFS (an NFS mount that lasts the lifetime of a pod)
    Server:    192.168.137.20
    Path:      /root/data/pv2
    ReadOnly:  false
Events:        <none>
# go into the exported directory and create an index.html file
[root@k8s pv]# cd /root/data/pv2
[root@k8s pv2]# ls
[root@k8s pv2]# vim index.html
[root@k8s pv2]# cat index.html 
aaaaaa
[root@k8s pv2]# chmod 777 index.html 
[root@k8s pv2]# kubectl get pv -o wide
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS   REASON   AGE     VOLUMEMODE
pv2    2Gi        RWX            Retain           Bound    default/www-web-1   nfs                     4m20s   Filesystem
pv3    1Gi        RWX            Retain           Bound    default/www-web-0   nfs                     4m20s   Filesystem
 
 
[root@k8s pv2]# kubectl get pods -o wide
NAME    READY   STATUS    RESTARTS   AGE     IP             NODE          NOMINATED NODE   READINESS GATES
web-0   1/1     Running   0          4m14s   10.150.2.120   k8s-node-02   <none>           <none>
web-1   1/1     Running   0          4m8s    10.150.1.96    k8s-node-01   <none>           <none>
web-2   0/1     Pending   0          4m2s    <none>         <none>        <none>           <none>
 
[root@k8s pv2]# curl 10.150.1.96
aaaaaa
 
A StatefulSet Pod keeps the same name: when the Pod is deleted and recreated, its name does not change, but its IP address does, as web-1 shows below.
[root@k8s pv2]# kubectl delete pods web-1
pod "web-1" deleted
[root@k8s pv2]# kubectl get pods
NAME    READY   STATUS    RESTARTS   AGE
web-0   1/1     Running   0          9m26s
web-1   1/1     Running   0          58s
web-2   0/1     Pending   0          9m14s
[root@k8s pv2]# kubectl get pods -o wide
NAME    READY   STATUS    RESTARTS   AGE     IP             NODE          NOMINATED NODE   READINESS GATES
web-0   1/1     Running   0          9m35s   10.150.2.120   k8s-node-02   <none>           <none>
web-1   1/1     Running   0          67s     10.150.1.97    k8s-node-01   <none>           <none>
web-2   0/1     Pending   0          9m23s   <none>         <none>        <none>           <none>
 
# the new IP address serves the same content
[root@k8s-master-01 pv2]# curl 10.150.1.97
aaaaaa
[root@k8s-master-01 pv2]# 
 
The other PVs behave in the same way.
 

Five: About StatefulSet

  • The Pod name (network identity) follows the pattern $(statefulset name)-$(ordinal), e.g. web-0, web-1, web-2 in the example above
  • StatefulSet creates a DNS domain name for each Pod replica in the form $(podname).$(headless service name). Services therefore communicate with each other via Pod domain names rather than Pod IPs: when a Pod's node fails, the Pod is rescheduled onto another node and its IP changes, but its domain name does not
  • StatefulSet uses the headless Service to control the Pods' domain; the FQDN of that domain is $(servicename).$(namespace).svc.cluster.local, where "cluster.local" is the cluster domain
  • According to volumeClaimTemplates, a PVC is created for each Pod, named $(volumeClaimTemplates.name)-$(pod name). In the example above, volumeClaimTemplates.name=www and the Pod names are web-[0-2], so the PVCs created are www-web-0, www-web-1 and www-web-2
  • Deleting a Pod does not delete its PVC; manually deleting the PVC automatically releases the PV

From a test Pod (test-pd, created separately and not shown here), the per-Pod DNS name can be resolved:

[root@k8s pv2]# kubectl get pods
NAME      READY   STATUS    RESTARTS   AGE
test-pd   1/1     Running   0          14s
web-0     1/1     Running   0          16m
web-1     1/1     Running   0          8m16s
web-2     0/1     Pending   0          16m
[root@k8s pv2]# kubectl exec -it test-pd -- sh
/ # ping web-0.nginx
ping: bad address 'web-0.nginx'
/ # ping web-0.nginx
PING web-0.nginx (10.150.2.120): 56 data bytes
64 bytes from 10.150.2.120: seq=0 ttl=64 time=4.891 ms
64 bytes from 10.150.2.120: seq=1 ttl=64 time=0.209 ms
64 bytes from 10.150.2.120: seq=2 ttl=64 time=0.196 ms
64 bytes from 10.150.2.120: seq=3 ttl=64 time=0.131 ms
64 bytes from 10.150.2.120: seq=4 ttl=64 time=0.128 ms

5.1 StatefulSet start and stop order

Ordered deployment: when a StatefulSet with multiple replicas is deployed, the Pods are created sequentially (from 0 to N-1), and each Pod is created only after all Pods before it are Running and Ready.
Ordered deletion: when Pods are deleted, they are terminated in reverse order, from N-1 down to 0.
Ordered scaling: when scaling up, as with deployment, all Pods before the new one must already be Running and Ready (a small scaling sketch follows).
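
As a small illustration, scaling the web StatefulSet from the earlier example shows the ordered behaviour (the replica counts here are arbitrary):

kubectl scale statefulset web --replicas=5
kubectl get pods -w -l app=nginx               # web-3 appears only after web-2 is Running and Ready
kubectl scale statefulset web --replicas=2     # Pods are removed from the highest ordinal downwards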

5.2 StatefulSet usage scenarios

  • Stable persistent storage: a Pod can still access the same persistent data after being rescheduled, based on PVCs. Stable network identity: a Pod's name and hostname stay the same after it is rescheduled.
  • Ordered deployment and ordered scale-up, implemented with init containers.
  • Ordered scale-down.

For example, the headless Service's DNS record can be queried directly against one of the CoreDNS Pods:

[root@k8s pv2]# kubectl get pods -o wide -n kube-system 
NAME                                    READY   STATUS    RESTARTS   AGE   IP              NODE            NOMINATED NODE   READINESS GATES
coredns-f68b4c98f-nkqlm                 1/1     Running   2          22d   10.150.0.7      k8s-master-01   <none>           <none>
coredns-f68b4c98f-wzrrq                 1/1     Running   2          22d   10.150.0.6      k8s-master-01   <none>           <none>
etcd-k8s-master-01                      1/1     Running   3          22d   192.168.223.30   k8s-master-01   <none>           <none>
kube-apiserver-k8s-master-01            1/1     Running   3          22d   192.168.223.30   k8s-master-01   <none>           <none>
kube-controller-manager-k8s-master-01   1/1     Running   4          22d   192.168.223.30   k8s-master-01   <none>           <none>
kube-flannel-ds-8zj9t                   1/1     Running   1          11d   192.168.223.30   k8s-node-01     <none>           <none>
kube-flannel-ds-jmq5p                   1/1     Running   0          11d   192.168.223.30   k8s-node-02     <none>           <none>
kube-flannel-ds-vjt8b                   1/1     Running   4          11d   192.168.223.30   k8s-master-01   <none>           <none>
kube-proxy-kl2qj                        1/1     Running   2          22d   192.168.223.30   k8s-master-01   <none>           <none>
kube-proxy-rrlg4                        1/1     Running   1          22d   192.168.223.9  k8s-node-01     <none>           <none>
kube-proxy-tc2nd                        1/1     Running   0          22d   192.168.223.10   k8s-node-02     <none>           <none>
kube-scheduler-k8s-master-01            1/1     Running   4          22d   192.168.223.30  k8s-master-01   <none>           <none>
[root@k8s-master-01 pv2]# dig  -t A nginx.default.svc.cluster.local. @10.150.0.7
 
; <<>> DiG 9.11.4-P2-RedHat-9.11.4-26.P2.el7_9.8 <<>> -t A nginx.default.svc.cluster.local. @10.150.0.7
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 26852
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available
 
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;nginx.default.svc.cluster.local. IN	A
 
;; ANSWER SECTION:
nginx.default.svc.cluster.local. 30 IN	A	10.111.55.241
 
;; Query time: 7 msec
;; SERVER: 10.150.0.7#53(10.150.0.7)
;; WHEN: 一 08月 06 00:00:38 CST 2021
;; MSG SIZE  rcvd: 107
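
The per-Pod DNS record described in section five can be resolved the same way; for example, assuming CoreDNS is still reachable at 10.150.0.7:

dig -t A web-0.nginx.default.svc.cluster.local. @10.150.0.7 +short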

Delete the corresponding pod, svc, statefulset, pv, pvc 

[root@k8s pv]# kubectl get pods
NAME      READY   STATUS    RESTARTS   AGE
test-pd   1/1     Running   0          18m
web-0     1/1     Running   0          34m
web-1     1/1     Running   0          26m
web-2     0/1     Pending   0          34m
 
[root@k8s pv]# kubectl delete -f pod.yaml 
service "nginx" deleted
statefulset.apps "web" deleted
 
[root@k8s pv]# kubectl get pods
NAME      READY   STATUS        RESTARTS   AGE
test-pd   1/1     Running       0          18m
web-0     0/1     Terminating   0          35m
web-1     0/1     Terminating   0          26m
 
[root@k8s pv]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   22d
 
[root@k8s pv]# kubectl delete statefulsets.apps  --all
No resources found
[root@k8s pv]# kubectl get pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS   REASON   AGE
pv2    2Gi        RWX            Retain           Bound    default/www-web-1   nfs                     35m
pv3    1Gi        RWX            Retain           Bound    default/www-web-0   nfs                     35m
 
[root@k8s pv]# kubectl get pvc
NAME        STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
www-web-0   Bound     pv3      1Gi        RWX            nfs            35m
www-web-1   Bound     pv2      2Gi        RWX            nfs            35m
www-web-2   Pending                                      nfs            35m
 
 
 
 
[root@k8s pv]# kubectl delete pvc --all
persistentvolumeclaim "www-web-0" deleted
persistentvolumeclaim "www-web-1" deleted
persistentvolumeclaim "www-web-2" deleted
 
# the PVs now show the Released status
[root@k8s pv]# kubectl get pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM               STORAGECLASS   REASON   AGE
pv2    2Gi        RWX            Retain           Released   default/www-web-1   nfs                     36m
pv3    1Gi        RWX            Retain           Released   default/www-web-0   nfs                     36m
 
# pv2 keeps showing Released because its claimRef is still set; edit pv2's YAML with kubectl edit and delete the claimRef section
 
[root@k8s pv]# kubectl edit pv pv2 -o yaml
 
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"PersistentVolume","metadata":{"annotations":{},"name":"pv2"},"spec":{"accessModes":["ReadWriteMany"],"capacity":{"storage":"2Gi"},"nfs":{"path":"/root/data/pv2","server":"192.168.15.31"},"persistentVolumeReclaimPolicy":"Retain","storageClassName":"nfs"}}
    pv.kubernetes.io/bound-by-controller: "yes"
  creationTimestamp: "2021-12-26T15:34:19Z"
  finalizers:
  - kubernetes.io/pv-protection
  name: pv2
  resourceVersion: "501755"
  uid: 7b9f8b31-f111-4064-9ec7-d06e55f6bebd
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 2Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: www-web-1
    namespace: default
    resourceVersion: "498363"
    uid: 7d47eaf8-8bed-40fc-b790-18e93a8a0398
  nfs:
    path: /root/data/pv2
"/tmp/kubectl-edit-euy6w.yaml" 37L, 1260C
# the status is now back to Available
[root@k8s pv]# kubectl get pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM               STORAGECLASS   REASON   AGE
pv2    2Gi        RWX            Retain           Available                       nfs                     44m
pv3    1Gi        RWX            Retain           Released    default/www-web-0   nfs                     44m
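
The same result can be achieved without opening an editor: a JSON patch that removes the stale claimRef from the PV, for example:

kubectl patch pv pv2 --type=json -p '[{"op": "remove", "path": "/spec/claimRef"}]'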

Source: blog.csdn.net/ver_mouth__/article/details/126213331