Chapter 1: Introduction
Docker containers are non-persistent by default: when a container dies, its data is lost. Docker therefore provides the Volume mechanism for persistent data storage. Similarly, Kubernetes provides a more powerful and richer Volume mechanism to address data persistence and data sharing between containers.
Unlike Docker, a Kubernetes Volume's lifecycle is bound to the Pod: when a container crashes and kubelet restarts it, the Volume's data is still there; only when the Pod is deleted is the Volume cleaned up.
Whether the data is ultimately lost depends on the Volume type: emptyDir data is lost with the Pod, while PV-backed data is not.
PersistentVolume (PV) and PersistentVolumeClaim (PVC) are two API resources that Kubernetes provides to abstract away storage details. Administrators focus on providing storage capacity through PVs without caring how users consume it; users simply mount a PVC into a container without caring how the underlying storage volume is implemented.
The relationship between PV and PVC is similar to that between node and Pod: the latter consumes the former's resources. A PVC requests a storage resource of a specified size and access mode.
Chapter 2: PV and PVC basics
Lifecycle
PV and PVC follow this lifecycle:
1. Provisioning. The administrator creates a number of PVs in the cluster for users to consume.
2. Binding. A user creates a PVC specifying the desired amount of storage and access mode. Until a matching PV is found, the PVC remains unbound.
3. Using. The user consumes the PVC in a Pod, just like any other volume.
4. Releasing. When the user deletes the PVC to give back the storage resource, the PV transitions to the "Released" state. Because the previous data is retained, it must be handled according to the reclaim policy before the PV can be used by another PVC.
5. Reclaiming. A PV supports three reclaim policies: Retain, Recycle, and Delete.
The Retain policy keeps the data for manual processing.
The Delete policy deletes the PV together with the associated external storage resource; this requires plug-in support.
The Recycle policy performs a cleanup operation so the PV can be reused by a new PVC; this also requires plug-in support.
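As a sketch of how the reclaim policy is set, a PV carries it in the `persistentVolumeReclaimPolicy` field (the PV name, export path and server address below are placeholders, for illustration only):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-retain-demo              # hypothetical name, for illustration
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain   # keep the data for manual processing after release
  nfs:
    path: /data/volumes/demo        # assumed NFS export path
    server: 172.16.17.10            # assumed NFS server address
```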
PV attributes
A PV has the following attributes:
Capacity. Currently only storage size is supported; IOPS and throughput may be supported in the future.
Access modes. ReadWriteOnce: read-write by a single node. ReadOnlyMany: read-only by many nodes. ReadWriteMany: read-write by many nodes. A volume can only be mounted with one mode at a time.
Reclaim policy. Currently NFS and HostPath support Recycle; AWS EBS, GCE PD and Cinder support Delete.
Phase. One of Available (not yet bound to a PVC), Bound (bound to a PVC), Released (the PVC has been deleted but the resource has not been reclaimed), or Failed (automatic reclamation failed).
PVC attributes
Access modes. Same semantics as for a PV; the mode the claim requests for the resource.
Resources. The amount of storage requested.
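A minimal PVC carrying both attributes might look like this (the name and size are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc        # hypothetical name
spec:
  accessModes:
  - ReadWriteOnce       # requested access mode, same semantics as on the PV
  resources:
    requests:
      storage: 2Gi      # amount of storage requested
```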
Volume types
emptyDir
hostPath
gcePersistentDisk
awsElasticBlockStore
nfs
iscsi
flocker
GlusterFS
rbd
cephfs
gitRepo
secret
persistentVolumeClaim
downwardAPI
azureFileVolume
... (remaining types omitted)
Commonly used Volume types
emptyDir
If a Pod defines an emptyDir Volume, the emptyDir is created when the Pod is assigned to a Node. It exists as long as the Pod runs on that Node (a container crash does not cause emptyDir data loss), but once the Pod is removed from the Node (the Pod is deleted, or the Pod migrates), the emptyDir is deleted and its data is permanently lost.
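An emptyDir sketch: the volume can optionally be backed by memory (tmpfs) via the `medium` field, and capped with `sizeLimit` (the volume name is illustrative; `sizeLimit` enforcement depends on the Kubernetes version):

```yaml
volumes:
- name: cache           # hypothetical volume name
  emptyDir:
    medium: Memory      # back the volume with tmpfs instead of node disk
    sizeLimit: 128Mi    # cap the space the volume may use
```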
hostPath
hostPath mounts a file or directory from the Node's filesystem into the Pod. Use hostPath when a Pod needs to access files on its Node.
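hostPath also accepts a `type` field that controls how the path is validated; a sketch (the path and volume name are illustrative):

```yaml
volumes:
- name: host-logs       # hypothetical volume name
  hostPath:
    path: /var/log      # path on the Node
    type: Directory     # the path must already exist as a directory
```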
NFS
NFS is an acronym for Network File System. With simple configuration, Kubernetes can mount an NFS share into a Pod; data on NFS is stored permanently, and NFS supports simultaneous writes.
gcePersistentDisk
gcePersistentDisk mounts a GCE persistent disk into the container; this requires the Kubernetes nodes to be GCE VMs.
awsElasticBlockStore
awsElasticBlockStore mounts an AWS EBS disk into the container; this requires Kubernetes to run on AWS EC2.
gitRepo
gitRepo clones a git repository into the specified path in the container.
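A gitRepo volume sketch (the repository URL is a placeholder; note that gitRepo has been deprecated in newer Kubernetes releases in favor of cloning into an emptyDir with an initContainer):

```yaml
volumes:
- name: repo
  gitRepo:
    repository: "https://example.com/example/repo.git"  # placeholder URL
    revision: "master"    # branch or commit to check out
```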
Projected Volume
A projected Volume maps several Volume sources into the same directory; secret, downwardAPI and configMap sources are supported.
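A projected volume sketch mapping a Secret and a downwardAPI item into one directory (the Secret name is assumed to exist in the namespace):

```yaml
volumes:
- name: all-in-one
  projected:
    sources:
    - secret:
        name: my-secret       # assumed existing Secret
    - downwardAPI:
        items:
        - path: "labels"      # file name inside the mount
          fieldRef:
            fieldPath: metadata.labels
```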
Chapter 3: Simple examples
1. emptyDir (node-level storage; same lifecycle as the Pod)
# cat emptydir.yaml    # The Pod has two containers mounting the same emptyDir: nginx serves web content, while busybox continuously appends data to index.html in the mounted directory
apiVersion: v1
kind: Pod
metadata:
  name: emptydir
  namespace: default
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    ports:
    - name: myapp
      containerPort: 80
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
  - name: busybox
    image: busybox:latest
    volumeMounts:       # mount one of the Pod's volumes into the container
    - name: html        # must match a name under volumes
      mountPath: /data  # path of the volume inside the container
    command:
    - "/bin/sh"
    - "-c"
    - "while true; do echo $(date) >> /data/index.html; sleep 2; done"
  volumes:              # volumes available to the Pod; there can be several
  - name: html
    emptyDir: {}        # volume type; by default no space limit
Check that the Pod is running and access it:
# kubectl apply -f emptydir.yaml
pod/emptydir created
# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
emptydir 2/2 Running 0 76s 10.244.3.34 huoban-k8s-node01
# while true; do curl 10.244.3.34; sleep 1; done
Fri Sep 20 03:34:38 UTC 2019
Fri Sep 20 03:34:40 UTC 2019
Fri Sep 20 03:34:42 UTC 2019
......
2. hostPath (node-level storage; same lifecycle as the Node)
#cat host-path.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-hostpath
  namespace: default
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
  volumes:
  - name: html
    hostPath:
      path: "/data/pod/volumel"  # whether the path must already exist depends on type
      type: DirectoryOrCreate    # create the mount directory if it does not exist
# kubectl apply -f host-path.yaml
pod/pod-hostpath created
# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
pod-hostpath 1/1 Running 0 58s 10.244.5.11 huoban-k8s-node03
Check the directory on the node and create an index.html file for the test:
# ssh node03 "ls -l /data/pod/volumel"
.
# ssh node03 "touch /data/pod/volumel/index.html"
# ssh node03 "ls -al /data/pod/volumel"
total 8
drwxr-xr-x 2 root root 4096 Sep 20 15:00 .
drwxr-xr-x 3 root root 4096 Sep 20 14:56 ..
-rw-r--r-- 1 root root 0 Sep 20 15:00 index.html
# echo "node03" > /data/pod/volumel/index.html    # run on node03 itself
# cat /data/pod/volumel/index.html
node03
# curl 10.244.5.11
node03
# Delete the Pod; the data remains:
# kubectl delete -f host-path.yaml
pod "pod-hostpath" deleted
# ssh node03 "ls -al /data/pod/volumel"
total 12
drwxr-xr-x 2 root root 4096 Sep 20 15:00 .
drwxr-xr-x 3 root root 4096 Sep 20 14:56 ..
-rw-r--r-- 1 root root 7 Sep 20 15:04 index.html
3. NFS (persistent storage; same lifecycle as the NFS server, so data survives Pod deletion)
1. On the master:
# cat pod-nfs-vol.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-nfs
  namespace: default
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
  volumes:
  - name: html
    nfs:
      path: "/data/volumes/v1"  # must exist on the NFS server and be mountable by the cluster nodes; the nodes need nfs-utils installed to perform NFS mounts
      server: 172.16.17.10      # this server must run the NFS service and export the path above
2. Test mounting on one of the nodes:
# mount -t nfs 172.16.17.10:/data/volumes/v1 /mnt    # test the mount on any node to confirm it succeeds; requires the nfs-utils package
# df -h | grep mnt    # check the mount status
172.16.17.10:/data/volumes/v1 77G 3.5G 74G 5% /mnt
# umount /mnt    # unmount once confirmed working
3. Run it:
# kubectl apply -f pod-nfs-vol.yaml    # create the Pod
pod "pod-nfs" created
# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE
pod-nfs 1/1 Running 0 17s 10.244.1.154 huoban-k8s-node01
4. Create a test file on the NFS server
# cd /data/volumes/v1/    # the exported directory on the NFS server
# echo "<h1>NFS Server volume v1</h1>" > index.html
5. Access the Pod and verify the data survives:
# curl 10.244.1.154
<h1>NFS Server volume v1</h1>
# kubectl delete pod pod-nfs    # delete the Pod just created
pod "pod-nfs" deleted
# kubectl apply -f pod-nfs-vol.yaml    # recreate it
pod "pod-nfs" created
# kubectl get pod -o wide    # check which node the new Pod is on and its IP address
NAME READY STATUS RESTARTS AGE IP NODE
pod-nfs 1/1 Running 0 17s 10.244.2.192 huoban-k8s-node02
# curl 10.244.2.192    # access again: the file still exists; it is not destroyed when the Pod terminates
<h1>NFS Server volume v1</h1>
4. Create PVs and a PVC (backed by NFS)
Create multiple export directories on the NFS server and share them:
# cat /etc/exports
/data/volumes/v1 172.16.0.0/16(rw,no_root_squash)
/data/volumes/v2 172.16.0.0/16(rw,no_root_squash)
/data/volumes/v3 172.16.0.0/16(rw,no_root_squash)
/data/volumes/v4 172.16.0.0/16(rw,no_root_squash)
/data/volumes/v5 172.16.0.0/16(rw,no_root_squash)
# ll /data/volumes/
total 0
drwxr-xr-x 2 root root 24 2019-09-20 16:28 v1
drwxr-xr-x 2 root root 24 2019-09-20 16:28 v2
drwxr-xr-x 2 root root 24 2019-09-20 16:28 v3
drwxr-xr-x 2 root root 24 2019-09-20 16:28 v4
drwxr-xr-x 2 root root 24 2019-09-20 16:28 v5
# exportfs
/data/volumes/v1 172.16.0.0/16
/data/volumes/v2 172.16.0.0/16
/data/volumes/v3 172.16.0.0/16
/data/volumes/v4 172.16.0.0/16
/data/volumes/v5 172.16.0.0/16
# showmount -e
Export list for huoban-k8s-nfs:
/data/volumes/v5 172.16.0.0/16
/data/volumes/v4 172.16.0.0/16
/data/volumes/v3 172.16.0.0/16
/data/volumes/v2 172.16.0.0/16
/data/volumes/v1 172.16.0.0/16
Create the NFS server's shared directories as PVs (nfs-vol.yaml):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-vol-01  # a namespace must not be set, because PVs are cluster-scoped
spec:
  capacity:         # size of the PV
    storage: 5Gi
  accessModes:      # access modes; see https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes
  - ReadWriteOnce   # the supported modes depend on the underlying shared-storage type; see the link above
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: /data/volumes/v1
    server: 172.16.17.10
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-vol-02
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: /data/volumes/v2
    server: 172.16.17.10
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-vol-03
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: /data/volumes/v3
    server: 172.16.17.10
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-vol-04
spec:
  capacity:
    storage: 15Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: /data/volumes/v4
    server: 172.16.17.10
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-vol-05
spec:
  capacity:
    storage: 20Gi
  accessModes:
  - ReadWriteOnce
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: /data/volumes/v5
    server: 172.16.17.10
# kubectl apply -f nfs-vol.yaml
persistentvolume "nfs-vol-01" created
persistentvolume "nfs-vol-02" created
persistentvolume "nfs-vol-03" created
persistentvolume "nfs-vol-04" created
persistentvolume "nfs-vol-05" created
# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
nfs-vol-01 5Gi RWO,RWX Recycle Available 7s
nfs-vol-02 5Gi RWO Recycle Available 7s
nfs-vol-03 10Gi RWO,RWX Recycle Available 7s
nfs-vol-04 15Gi RWO Recycle Available 7s
nfs-vol-05 20Gi RWO,RWX Recycle Available 7s
# Create a PVC and a Pod that uses it (pod-pvc-vol.yaml):
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
  namespace: default
spec:
  accessModes: ["ReadWriteOnce"]  # a PVC's access modes must be a subset of the PV's
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-pvc
  namespace: default
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
  volumes:
  - name: html
    persistentVolumeClaim:
      claimName: my-pvc
# kubectl apply -f pod-pvc-vol.yaml
persistentvolumeclaim "my-pvc" created
pod "pod-pvc" created
# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
my-pvc Bound nfs-vol-02 5Gi RWO 1m
# kubectl get pv    # note the change in PV status: nfs-vol-02 has been claimed and bound by my-pvc in the default namespace
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON A
nfs-vol-01 5Gi RWO,RWX Recycle Available 9
nfs-vol-02 5Gi RWO Recycle Bound default/my-pvc 9
nfs-vol-03 10Gi RWO,RWX Recycle Available 9
nfs-vol-04 15Gi RWO Recycle Available 9
nfs-vol-05 20Gi RWO,RWX Recycle Available
# Inspect the Pod's creation details:
# kubectl describe pod pod-pvc
......
Volumes:
html:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: my-pvc
ReadOnly: false
default-token-tcwjz:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-tcwjz
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 8m (x2 over 8m) default-scheduler pod has unbound PersistentVolumeClaims (repeated 2 times)
Normal Scheduled 8m default-scheduler Successfully assigned pod-pvc to huoban-k8s-node01
Normal SuccessfulMountVolume 8m kubelet, huoban-k8s-node01 MountVolume.SetUp succeeded for volume "default-token-tcwjz"
Normal SuccessfulMountVolume 8m kubelet, huoban-k8s-node01 MountVolume.SetUp succeeded for volume "nfs-vol-02"
Normal Pulled 8m kubelet, huoban-k8s-node01 Container image "ikubernetes/myapp:v1" already present on machine
Normal Created 7m kubelet, huoban-k8s-node01 Created container
Normal Started 7m kubelet, huoban-k8s-node01 Started container
# Note: a PV in the Bound state cannot be deleted directly; to delete a bound PV, first delete the PVC that binds it.