PV (Persistent Volume)
Differences from an ordinary Volume:
- Network storage only (not local/host storage)
- Lifecycle is independent of any Pod
PV supports mainstream storage backends:
- GCEPersistentDisk
- AWSElasticBlockStore
- AzureFile
- AzureDisk
- FC (Fibre Channel)
- Flocker
- NFS
- iSCSI
- RBD (Ceph Block Device)
- CephFS
- Cinder (OpenStack block storage)
- Glusterfs
- VsphereVolume
- Quobyte Volumes
- HostPath (single-node testing only; does not work in a multi-node cluster)
- VMware Photon
Creating a PV
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: /data/k8s/
    server: 172.16.1.131
accessModes
- ReadWriteOnce - the volume can be mounted read-write by a single node
- ReadOnlyMany - the volume can be mounted read-only by many nodes
- ReadWriteMany - the volume can be mounted read-write by many nodes
Reclaim
Currently supported reclaim policies:
- Retain - keep the data for manual reclamation by an administrator
- Recycle - basic scrub ("rm -rf /thevolume/*"), then the PV becomes available again
- Delete - delete the PV and its backing storage asset
Phase
- Available - the PV is free and not yet bound to a claim
- Bound - the PV is bound to a PVC
- Released - the bound PVC has been deleted, but the PV has not yet been reclaimed
- Failed - automatic reclamation failed
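The reclaim policy decides which phase a PV moves to once its bound PVC is deleted. A minimal sketch of that transition in plain Python (illustrative only, not actual Kubernetes controller code; the dict shape is an assumption for this sketch):

```python
# Toy model of a PV's phase transition after its PVC is deleted.
# Illustrative sketch only; not real controller logic.

def reclaim(pv):
    """Apply the PV's reclaim policy once its claim is gone."""
    policy = pv["persistentVolumeReclaimPolicy"]
    if policy == "Retain":
        # Data is kept; an admin must clean up and re-create the PV.
        pv["phase"] = "Released"
    elif policy == "Recycle":
        # Basic scrub ("rm -rf /thevolume/*"), then reusable.
        pv["data"] = []
        pv["phase"] = "Available"
    elif policy == "Delete":
        # Both the PV object and the backing storage go away.
        pv["phase"] = "Deleted"
    return pv

pv = {"persistentVolumeReclaimPolicy": "Recycle",
      "data": ["old-file"], "phase": "Bound"}
print(reclaim(pv)["phase"])  # Available
```

Note how only Recycle returns the PV directly to Available; Retain parks it in Released until someone intervenes.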
PVC (Persistent Volume Claim)
Creating a PVC
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 8Gi
Phase
- Pending - waiting for a matching available PV
- Bound - the PVC is bound to a PV
- Lost - the PVC has lost its bound PV
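Binding is essentially a matching step: the control plane looks for an Available PV whose capacity and access modes satisfy the claim, and the PVC stays Pending until one exists. A simplified sketch of that matching (illustrative only; the real binder also considers storage classes, selectors, and best-fit sizing, and the helper names here are made up for this sketch):

```python
# Simplified PVC-to-PV matching: capacity and access mode must
# satisfy the claim. Illustrative sketch, not real binder code.

def parse_gi(size):
    """'8Gi' -> 8 (integer gibibytes; sketch-only parsing)."""
    return int(size.rstrip("Gi"))

def bind(claim, volumes):
    """Return (pv_name, phase) for the claim, mutating the chosen PV."""
    for pv in volumes:
        if (pv["phase"] == "Available"
                and parse_gi(pv["capacity"]) >= parse_gi(claim["request"])
                and claim["accessMode"] in pv["accessModes"]):
            pv["phase"] = "Bound"
            return pv["name"], "Bound"
    return None, "Pending"  # no suitable PV yet

pvs = [{"name": "pv001", "capacity": "10Gi",
        "accessModes": ["ReadWriteMany"], "phase": "Available"}]
claim = {"request": "8Gi", "accessMode": "ReadWriteMany"}
print(bind(claim, pvs))  # ('pv001', 'Bound')
```

Running the same claim again would return Pending, since the only PV is now Bound; that mirrors why a PVC waits when no free PV fits.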
Using the PVC in a Pod
kind: Pod
apiVersion: v1
metadata:
  name: mypod
spec:
  containers:
    - name: myfrontend
      image: dockerfile/nginx
      volumeMounts:
        - mountPath: "/var/www/html"
          name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: myclaim