Official reference:
https://kubernetes.io/docs/concepts/storage/persistent-volumes/

1. Overview
Managing storage is a distinct problem from managing compute instances. The PersistentVolume subsystem provides an API for users and administrators that abstracts the details of how storage is provided from how it is consumed. To this end, two new API resources are introduced: PersistentVolume and PersistentVolumeClaim.
1) A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator. It is a resource in the cluster, just as a node is a cluster resource. PVs are volume plugins like Volumes, but they have a lifecycle independent of any individual Pod that uses the PV. This API object captures the details of the storage implementation, be that NFS, iSCSI, or a cloud-provider-specific storage system.
2) A PersistentVolumeClaim (PVC) is a request for storage by a user. Pods consume node resources; PVCs consume PV resources. Pods can request specific levels of resources (CPU and memory); claims can request a specific size and access mode (for example, mounted read/write by one node or read-only by many nodes).
Although PersistentVolumeClaims allow a user to consume abstract storage resources, users often need PersistentVolumes with varying properties (such as performance) for different problems. Cluster administrators need to be able to offer a variety of PersistentVolumes that differ in more ways than just size and access modes, without exposing users to the details of how those volumes are implemented. For these needs, there is the StorageClass resource.
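For illustration, a minimal sketch of a StorageClass matching the manually provisioned NFS volumes used later in this article might look like this (the class name `nfs` mirrors the `storageClassName` used below; `kubernetes.io/no-provisioner` indicates that no dynamic provisioning takes place):

```yaml
# Hypothetical StorageClass named "nfs"; PVs and PVCs that set
# storageClassName: nfs are matched through this class.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs
provisioner: kubernetes.io/no-provisioner   # no dynamic provisioner; PVs are created by hand
reclaimPolicy: Retain
```

Because no provisioner is configured, claims referencing this class only bind to PVs the administrator has pre-created.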
2. Types of persistent volumes
PersistentVolume types are implemented as plugins. Kubernetes currently supports the following plugins:
GCEPersistentDisk, AWSElasticBlockStore, AzureFile, AzureDisk, CSI, FC (Fibre Channel), FlexVolume, Flocker, NFS, iSCSI, RBD (Ceph Block Device), CephFS, Cinder, GlusterFS, VsphereVolume, Quobyte Volumes, HostPath, Portworx Volumes, ScaleIO Volumes, StorageOS
3. Access modes
A PersistentVolume can be mounted on a host in any way supported by the resource provider. Providers have different capabilities, and each PV's access modes are set to the specific modes supported by that particular volume. For example, NFS can support multiple read/write clients, but a specific NFS PV might be exported on the server as read-only. Each PV gets its own set of access modes describing that specific PV's capabilities.
The access modes are:
RWO - ReadWriteOnce: the volume can be mounted read-write by a single node
ROX - ReadOnlyMany: the volume can be mounted read-only by many nodes
RWX - ReadWriteMany: the volume can be mounted read-write by many nodes
4. Lifecycle
PVs are resources in the cluster. PVCs are requests for those resources and also act as claim checks to the resources. There are two ways PVs may be provisioned: statically or dynamically.
1) Static
A cluster administrator creates a number of PVs. They carry the details of the real storage that is available for use by cluster users, and they exist in the Kubernetes API, ready for consumption.
2) Dynamic
When none of the administrator's static PVs matches a user's PersistentVolumeClaim, the cluster may try to dynamically provision a volume for the PVC. This provisioning is based on StorageClasses: the PVC must request a storage class, and the administrator must have created and configured that class for dynamic provisioning to occur. Claims that request the class "" effectively disable dynamic provisioning for themselves. To enable dynamic provisioning based on storage classes, the cluster administrator needs to enable the DefaultStorageClass admission controller on the API server, for example by making sure DefaultStorageClass appears in the comma-separated, ordered list of values of the API server's --enable-admission-plugins flag. For more information on API server command-line flags, see the kube-apiserver documentation.
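As a sketch, a claim opts into dynamic provisioning simply by naming a storage class that an administrator has configured with a provisioner; the class name `fast` and the claim name below are hypothetical examples:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dynamic-claim            # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: fast         # hypothetical class backed by a dynamic provisioner
  # storageClassName: ""         # would instead disable dynamic provisioning for this claim
```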
Binding
When a user creates a PersistentVolumeClaim (or, in the case of dynamic provisioning, one has already been created) requesting a specific amount of storage and specific access modes, a control loop in the master watches for new PVCs, finds a matching PV if possible, and binds them together. If a PV was dynamically provisioned for a new PVC, the loop will always bind that PV to that PVC. Otherwise, the user will always get at least what they asked for, but the volume may be in excess of what was requested. Once bound, PersistentVolumeClaim binds are exclusive, regardless of how they were bound. A PVC-to-PV binding is a one-to-one mapping, using a ClaimRef, which is a bidirectional binding between the PersistentVolume and the PersistentVolumeClaim.
If no matching volume exists, the claim will remain unbound indefinitely, and will be bound as soon as a matching volume becomes available. For example, a cluster provisioned with many 50Gi PVs would not match a PVC requesting 100Gi; the PVC can be bound once a 100Gi PV is added to the cluster.
Storage object in use protection
The purpose of this feature is to ensure that PVCs in active use by a Pod, and the PVs bound to them, are not removed from the system, as this could result in data loss. If a user deletes a PVC that is in active use by a Pod, the PVC is not removed immediately; its removal is postponed until the PVC is no longer used by any Pod. Likewise, if an administrator deletes a PV that is bound to a PVC, the PV is not removed immediately; its removal is postponed until the PV is no longer bound to a PVC.
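This protection is implemented with finalizers: a PVC in active use carries the `kubernetes.io/pvc-protection` finalizer (bound PVs carry `kubernetes.io/pv-protection`), and deletion only completes once the controller removes it. A fragment of what such a PVC manifest looks like while protected:

```yaml
# Fragment of a protected PVC; deletion stays pending until the
# controller removes the finalizer once no Pod uses the claim.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc1
  finalizers:
  - kubernetes.io/pvc-protection
```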
Reclaim policies
When a user is done with a volume, they can delete the PVC object from the API, which allows the resource to be reclaimed. The reclaim policies are:
Retain: allows manual reclamation of the resource. When the PersistentVolumeClaim is deleted, the PersistentVolume still exists and the volume is considered "released".
Delete: for volume plugins that support it, deletion removes both the PersistentVolume object from Kubernetes and the associated storage asset in the external infrastructure, such as an AWS EBS, GCE PD, Azure Disk, or Cinder volume.
Recycle: performs a basic scrub (rm -rf /thevolume/*) on the volume. Warning: the Recycle reclaim policy is deprecated; the recommended approach is dynamic provisioning instead.
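The policy is set per volume via `spec.persistentVolumeReclaimPolicy`. As a sketch, the NFS PV from the hands-on section below could use the safer Retain policy instead (the PV name here is hypothetical):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mypv-retain                       # hypothetical PV name
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain   # PV survives PVC deletion as "Released"
  storageClassName: nfs
  nfs:
    path: /sharedir
    server: 192.168.23.100
```

With Retain, deleting the bound PVC leaves the PV in the Released state; an administrator must clean up the data and delete or re-create the PV to make the storage available again.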
Phases
A volume will be in one of the following phases:
Available: a free resource not yet bound to any claim
Bound: the volume is bound to a claim
Released: the claim has been deleted, but the resource has not yet been reclaimed by the cluster
Failed: the volume has failed its automatic reclamation
5. Hands-on example
[root@k8smaster pp]# more pv.yaml  # create the PV
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mypv1
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfs
  nfs:
    path: /sharedir
    server: 192.168.23.100
[root@k8smaster pp]# kubectl create -f pv.yaml
persistentvolume/mypv1 created
[root@k8smaster pp]# kubectl get pv  # STATUS is Available
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
mypv1 1Gi RWO Recycle Available nfs 36s
[root@k8smaster pp]#
capacity specifies the PV's capacity, 1Gi.
accessModes is set to ReadWriteOnce; the supported access modes are:
ReadWriteOnce - the PV can be mounted read-write by a single node.
ReadOnlyMany - the PV can be mounted read-only by many nodes.
ReadWriteMany - the PV can be mounted read-write by many nodes.
persistentVolumeReclaimPolicy specifies the PV's reclaim policy, here Recycle; the supported policies are:
Retain - requires manual reclamation.
Recycle - clears the data in the PV, equivalent to running rm -rf /thevolume/*.
Delete - deletes the corresponding storage resource on the storage provider, such as an AWS EBS, GCE PD, Azure Disk, or OpenStack Cinder Volume.
storageClassName specifies that the class of this PV is nfs. This effectively labels the PV with a class; a PVC can request a PV of a particular class by specifying the same class name.
nfs specifies the corresponding directory on the NFS server.
[root@k8smaster pp]# more pvc.yaml  # only the capacity, access mode, and class of the desired PV need to be specified
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs
[root@k8smaster pp]# kubectl create -f pvc.yaml
persistentvolumeclaim/mypvc1 created
[root@k8smaster pp]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
mypvc1 Bound mypv1 1Gi RWO nfs 9s
[root@k8smaster pp]# kubectl get pv  # now Bound to mypv1; the claim succeeded
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
mypv1 1Gi RWO Recycle Bound default/mypvc1 nfs 12m
[root@k8smaster pp]#
[root@k8smaster pp]# more pv-pod.yaml  # the Pod consumes the Volume in the usual format; volumes references claim mypvc1 via persistentVolumeClaim
apiVersion: v1
kind: Pod
metadata:
  name: pv-pod
spec:
  containers:
  - name: pv-pod-ctn
    image: 192.168.23.100:5000/tomcat:v2
    volumeMounts:
    - name: pv-volume
      mountPath: /tmp/config
  volumes:
  - name: pv-volume
    persistentVolumeClaim:
      claimName: mypvc1
  restartPolicy: Never
[root@k8smaster pp]# kubectl create -f pv-pod.yaml
pod/pv-pod created
[root@k8smaster pp]# kubectl get pod
NAME READY STATUS RESTARTS AGE
pv-pod 1/1 Running 0 13s
[root@k8smaster pp]# kubectl exec -it pv-pod /bin/bash
root@pv-pod:/usr/local/tomcat# cd /tmp/config/
root@pv-pod:/tmp/config# ls -lrt
total 8
-rw-r--r-- 1 nobody nogroup 7 Feb 19 14:19 node01.log
-rw-r--r-- 1 nobody nogroup 7 Feb 19 14:19 node02.log
root@pv-pod:/tmp/config# echo "pv-pod" >pv-pod.log  # create a file inside the pod
root@pv-pod:/tmp/config# ls -lrt
total 12
-rw-r--r-- 1 nobody nogroup 7 Feb 19 14:19 node01.log
-rw-r--r-- 1 nobody nogroup 7 Feb 19 14:19 node02.log
-rw-r--r-- 1 nobody nogroup 7 Feb 19 2020 pv-pod.log
[root@k8snode01 sharedir]# pwd  # view the generated files on the NFS server
/sharedir
[root@k8snode01 sharedir]# ls -lrt
total 12
-rw-r--r--. 1 nfsnobody nfsnobody 7 Feb 19 22:19 node01.log
-rw-r--r--. 1 nfsnobody nfsnobody 7 Feb 19 22:19 node02.log
-rw-r--r--. 1 nfsnobody nfsnobody 7 Feb 20 2020 pv-pod.log
[root@k8snode01 sharedir]# more pv-pod.log
pv-pod
[root@k8snode01 sharedir]#
[root@k8smaster pp]# kubectl delete pod pv-pod  # delete the pod
pod "pv-pod" deleted
[root@k8smaster pp]# kubectl delete pvc mypvc1  # delete the PVC to reclaim the resources
[root@k8smaster pp]# kubectl get pod  # once PVC mypvc1 is deleted, Kubernetes launches a new Pod whose job is to wipe the data on PV mypv1; it requires the busybox:1.27 image
NAME READY STATUS RESTARTS AGE
recycler-for-mypv1 0/1 ImagePullBackOff 0 29s
[root@k8smaster pp]#
Link: https://pan.baidu.com/s/13d-i8FWF0miLx2XYNchffw  # busybox:1.27 image download
Extraction code: zekc
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Failed 4m39s kubelet, k8snode02 Failed to pull image "busybox:1.27": rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Warning Failed 4m13s kubelet, k8snode02 Failed to pull image "busybox:1.27": rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Normal Scheduled 3m32s default-scheduler Successfully assigned default/recycler-for-mypv1 to k8snode02
Warning Failed 3m30s kubelet, k8snode02 Failed to pull image "busybox:1.27": rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: TLS handshake timeout
Normal Pulling 2m46s (x4 over 4m55s) kubelet, k8snode02 Pulling image "busybox:1.27"
Warning Failed 2m7s (x4 over 4m39s) kubelet, k8snode02 Error: ErrImagePull
Warning Failed 2m7s kubelet, k8snode02 Failed to pull image "busybox:1.27": rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/library/busybox/manifests/1.27: net/http: TLS handshake timeout
Normal BackOff 103s (x6 over 4m39s) kubelet, k8snode02 Back-off pulling image "busybox:1.27"
Warning Failed 89s (x7 over 4m39s) kubelet, k8snode02 Error: ImagePullBackOff
[root@k8smaster pp]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
mypv1 1Gi RWO Recycle Available nfs 3h10m
[root@k8snode01 sharedir]# ls -lrt  # after the PVC is deleted, the files on disk have been wiped by the recycler
total 0
[root@k8snode01 sharedir]#