1 Know PV/PVC/StorageClass
Managing storage is a distinct problem from managing compute. The PersistentVolume subsystem provides an API for users and administrators that abstracts the details of how storage is provided from how it is consumed. To this end, two API resources were introduced: PersistentVolume and PersistentVolumeClaim.
A PersistentVolume (PV) is a piece of network storage in the cluster that has been provisioned by an administrator. It is a resource in the cluster, just as a node is a cluster resource. PVs are volume plugins like Volumes, but their lifecycle is independent of any individual pod that uses the PV. This API object captures the details of the storage implementation, including NFS, iSCSI, or cloud-provider-specific storage systems.
A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a pod: pods consume node resources, and PVCs consume PV resources. Pods can request specific levels of resources (CPU and memory); claims can request a specific size and access modes (for example, mounted read/write once or read-only many times).
While PersistentVolumeClaims allow users to consume abstract storage resources, users often need PersistentVolumes with different properties (such as performance) for different problems. Cluster administrators need to be able to offer a variety of PersistentVolumes that differ in more ways than just size and access mode, without exposing users to the details of how those volumes are implemented. For these needs, there is the StorageClass resource.
A StorageClass provides a way for administrators to describe the "classes" of storage they offer. Different classes might map to quality-of-service levels, backup policies, or arbitrary policies determined by the cluster administrators. Kubernetes itself is unopinionated about what the classes represent. This concept is sometimes called a "profile" in other storage systems.
There is a one-to-one correspondence between a PVC and the PV it binds to.
1.1 Life cycle
A PV is a resource in the cluster; a PVC is a request for, and a claim check on, those resources. The interaction between PV and PVC follows this lifecycle:
Provisioning -> Binding -> Using -> Releasing -> Recycling
(1) Provisioning
Storage is provisioned through a storage system or cloud platform outside the cluster.
(1-1) Static provisioning: the cluster administrator creates a number of PVs. They carry the details of the real storage available to cluster users, exist in the Kubernetes API, and are available for consumption.
(1-2) Dynamic provisioning: when none of the administrator's static PVs match a user's PersistentVolumeClaim, the cluster may try to dynamically provision a volume for the PVC. This provisioning is based on StorageClasses: the PVC must request a class, and the administrator must have created and configured that class for dynamic provisioning to occur. A claim that requests the class "" effectively disables dynamic provisioning for itself.
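As a minimal sketch of the idea, a StorageClass and a PVC that requests it look like this (the names "fast" and "my-claim" and the aws-ebs provisioner are illustrative, not from this demo):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast                           # illustrative name
provisioner: kubernetes.io/aws-ebs     # any provisioner the cluster supports
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-claim                       # illustrative name
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: fast               # requesting the class triggers dynamic provisioning
  resources:
    requests:
      storage: 8Gi
```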
(2) Binding
The user creates a PVC, specifying the required resources and access modes. Until a matching PV is found, the PVC remains unbound.
(3) Using
The user can use the PVC in a pod just like a volume.
(4) Releasing
The user deletes the PVC to reclaim the storage resource, and the PV enters the "Released" state. Because the previous data is still retained, it must be handled according to the reclaim policy; otherwise the storage resource cannot be used by other PVCs.
(5) Recycling
A PV can have one of three reclaim policies: Retain, Recycle, and Delete.
(5-1) Retain: the retained data is handled manually.
(5-2) Delete: the PV and the externally associated storage resource are deleted; requires plugin support.
(5-3) Recycle: a scrub operation is performed, after which the PV can be used by a new PVC; requires plugin support.
Note: at present, only NFS and HostPath volumes support the Recycle policy. AWS EBS, GCE PD, Azure Disk, and Cinder volumes support the Delete policy.
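The policy is set per PV via the spec.persistentVolumeReclaimPolicy field. A minimal NFS PV sketch (the name, server, and path are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-retain-demo               # illustrative name
spec:
  capacity:
    storage: 5Gi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Retain   # Retain | Recycle | Delete
  nfs:
    server: nfs.example.com          # placeholder
    path: /data/volumes/v1
```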
1.2 PV types
PVs are implemented as plugins. Supported types include NFS, iSCSI, hostPath, local, CephFS, RBD, GlusterFS, and cloud-provider disks such as AWS EBS, GCE PD, and Azure Disk.
1.3 PV volume phases
(1) Available: the resource is free and has not yet been bound to a claim.
(2) Bound: the volume is bound to a claim.
(3) Released: the claim has been deleted, but the resource has not yet been reclaimed by the cluster.
(4) Failed: the volume's automatic reclamation failed.
1.4 The three PV access modes
https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes.
In the Kubernetes system, each persistent storage volume has three access modes: ReadWriteOnce (RWO), ReadOnlyMany (ROX), and ReadWriteMany (RWX).
As currently defined, all three modes operate at the node level. A PersistentVolume that is RWO can only be mounted on a single Kubernetes worker node (hereafter, node) at a time; attempting to mount it on another node produces a Multi-Attach error (of course, with only one schedulable node, even an RWO volume can be used by multiple pods at the same time).
If it is RWX, it can be mounted on multiple nodes at the same time and used by different pods.
2 Demo: Create PV static provision
(1) First create the directories for the storage volumes on the NFS server
cd /data/volumes
mkdir v{1,2,3,4,5}
echo "<h1>static stor 01</h1>" > v1/index.html
echo "<h1>static stor 02</h1>" > v2/index.html
echo "<h1>static stor 03</h1>" > v3/index.html
echo "<h1>static stor 04</h1>" > v4/index.html
echo "<h1>static stor 05</h1>" > v5/index.html
(2) Modify the configuration of nfs
#vim /etc/exports
/data/volumes/v1 *(rw,no_root_squash)
/data/volumes/v2 *(rw,no_root_squash)
/data/volumes/v3 *(rw,no_root_squash)
/data/volumes/v4 *(rw,no_root_squash)
/data/volumes/v5 *(rw,no_root_squash)
Here, rw grants read/write permission (the read-only parameter is ro);
no_root_squash means that if the client user is root, that user keeps root privileges on the shared directory.
(3) Apply the configuration
#exportfs -arv
(4) Check that the shares are exported
#showmount -e
2.1 Create persistent volume pv
Create 5 PVs with different storage sizes and different access modes. pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
  labels:
    name: pv001
spec:
  nfs:
    path: /data/volumes/v1
    server: myuse1
  accessModes: ["ReadWriteMany", "ReadWriteOnce"]
  storageClassName: slow
  capacity:
    storage: 5Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv002
  labels:
    name: pv002
spec:
  nfs:
    path: /data/volumes/v2
    server: myuse2
  accessModes: ["ReadWriteOnce"]
  storageClassName: slow
  capacity:
    storage: 5Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv003
  labels:
    name: pv003
spec:
  nfs:
    path: /data/volumes/v3
    server: myuse3
  accessModes: ["ReadWriteMany", "ReadWriteOnce"]
  storageClassName: slow
  capacity:
    storage: 5Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv004
  labels:
    name: pv004
spec:
  nfs:
    path: /data/volumes/v4
    server: myuse4
  accessModes: ["ReadWriteMany", "ReadWriteOnce"]
  storageClassName: slow
  capacity:
    storage: 10Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv005
  labels:
    name: pv005
spec:
  nfs:
    path: /data/volumes/v5
    server: myuse5
  accessModes: ["ReadWriteMany", "ReadWriteOnce"]
  storageClassName: slow
  capacity:
    storage: 15Gi
#kubectl apply -f pv.yaml
#kubectl get pv
#kubectl delete pv pv001   # delete the specified PV
In the kubectl get pv output: (1) the reclaim policy is Retain; (2) the status is Available, meaning the resource has not yet been claimed.
A PV can have a class, specified by setting the storageClassName attribute to the name of a StorageClass. A PV of a particular class can only be bound to PVCs requesting that class. A PV with no storageClassName has no class and can only be bound to PVCs that request no particular class.
In the past, the annotation volume.beta.kubernetes.io/storage-class was used instead of the storageClassName attribute. The annotation still works, but it will be removed in a future Kubernetes release.
2.2 Create a persistent volume declaration PVC
2.2.1 Specify selector and storageclass
pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
  namespace: default
spec:
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 2Gi
  storageClassName: slow
  selector:
    matchLabels:
      name: pv003
#kubectl get pvc
#kubectl get pv
2.2.2 Specify only storageclass
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
  namespace: default
spec:
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 6Gi
  storageClassName: slow
#kubectl get pv
This claim requests 6Gi of storage, so it will not match pv001, pv002, or pv003 (each only 5Gi).
#kubectl get pvc
2.2.3 Delete a PVC stuck in the Terminating state
#kubectl patch pvc mypvc -p '{"metadata":{"finalizers":null}}'
2.3 Use PVC in pod
2.3.1 Test the tomcat image
#docker pull tomcat
#docker run -id -p 8080:8080 --name=c_tomcat -v /data/volumes/:/usr/local/tomcat/webapps tomcat:latest
#docker exec -it c_tomcat /bin/bash
http://10.23.241.97:8080/v1/index.html
http://10.23.241.97:8080/v2/index.html
#docker stop c_tomcat
2.3.2 File deployment.yaml
apiVersion: v1
kind: Pod
metadata:
  name: vol-pvc
  namespace: default
  labels:
    app: myapp
spec:
  volumes:
  - name: html
    persistentVolumeClaim:
      claimName: mypvc
  containers:
  - name: myapp
    image: tomcat:latest
    volumeMounts:
    - name: html
      mountPath: /usr/local/tomcat/webapps
Deploy the pod:
#kubectl apply -f deployment.yaml
#kubectl exec -it vol-pvc -n default -- /bin/bash
On the NFS server, create a subdirectory under /data/volumes/v4:
#mkdir aa
#cp index.html ./aa
Then enter the pod to check that tomcat is serving the new content.
2.3.3 File service.yaml
kind: Service
apiVersion: v1
metadata:
  name: mysvc
  namespace: default
spec:
  type: NodePort
  ports:
  - port: 8123
    nodePort: 32001
    targetPort: 8080
  selector:
    app: myapp
A Service mainly involves three types of ports:
(1) port: the port the service exposes on its clusterIP; clusterIP:port is the entry point for accessing the service from inside the cluster.
(2) targetPort: also known as the containerPort, the port on the pod. Traffic arriving at port or nodePort finally flows through kube-proxy into the targetPort of a backend pod and enters the container.
(3) nodePort: nodeIP:nodePort is the entry point for accessing the service from outside the cluster.
In general, port and nodePort are both service ports: the former is exposed for access from inside the cluster, the latter for access from outside. Traffic arriving at either passes through the kube-proxy reverse proxy to the targetPort of a specific backend pod, and from there into the container.
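The three ports can be seen side by side in a Service spec; a commented sketch (names and numbers are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: port-demo               # illustrative name
spec:
  type: NodePort
  selector:
    app: myapp
  ports:
  - port: 8123        # clusterIP:8123 -> in-cluster entry point
    nodePort: 32001   # nodeIP:32001  -> external entry point (default range 30000-32767)
    targetPort: 8080  # the containerPort on the pod that finally receives the traffic
```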
Services of different types work in different modes:
(1) ClusterIP: the default mode. Depending on whether a ClusterIP is allocated, it is further divided into ordinary Services and headless Services:
(1-1) Ordinary Service: a fixed virtual IP (the Cluster IP) reachable inside the cluster is assigned to the Service, enabling in-cluster access. This is the most common form.
(1-2) Headless Service: no Cluster IP is allocated, and kube-proxy does no reverse proxying or load balancing. Instead, DNS provides a stable network identity: the headless service name resolves directly to the list of backend pod IPs. Mainly used by StatefulSets.
(2) NodePort: in addition to the Cluster IP, the service port is mapped onto the same port of every node in the cluster, so the service can be reached from outside via nodeIP:nodePort.
View the webpage at http://<nodeIP>:32001.
2.4 Delete PVC
The general deletion order is: first the pod, then the PVC, and finally the PV.
(1) delete pod
#kubectl delete pod volumeop-basic-csgvg-275778418 -n kubeflow
#kubectl delete pod volumeop-basic-csgvg-3408782246 -n kubeflow
(2) Delete pvc
#kubectl delete pvc volumeop-basic-csgvg-my-pvc -n kubeflow
(3) Delete pv
If the PV's reclaim policy is Delete (typical for dynamically provisioned volumes), the PV is deleted automatically when its PVC is deleted; with Retain, it must be cleaned up manually.
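For dynamically provisioned PVs, this behavior is inherited from the StorageClass's reclaimPolicy field. A minimal sketch (the class name is illustrative):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-delete-on-release   # illustrative name
provisioner: fuseim.pri/ifs
reclaimPolicy: Delete           # PVs provisioned from this class are removed with their PVC
```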
3 Dynamic PV provisioning based on a StorageClass
When deploying stateful applications in Kubernetes, persistent data storage is usually required.
A StorageClass enables automatic PV provisioning: create the storage class, and specify the NFS server address and shared mount directory in the resource manifests to achieve persistent storage.
Download project:
for file in class.yaml deployment.yaml rbac.yaml test-claim.yaml; do
    wget https://raw.githubusercontent.com/kubernetes-incubator/external-storage/master/nfs-client/deploy/$file
done
3.1 class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs # or choose another name; must match the deployment's env PROVISIONER_NAME
parameters:
  archiveOnDelete: "false"
3.2 rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
3.3 deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 192.168.0.165
            - name: NFS_PATH
              value: /some/path
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.0.165
            path: /some/path
Create the storage class, RBAC objects, and provisioner deployment:
#kubectl create -f class.yaml
#kubectl create -f rbac.yaml
#kubectl create -f deployment.yaml
3.4 Set a storageclass as the default
The annotation storageclass.kubernetes.io/is-default-class: "true" marks this storageclass as the cluster default.
# kubectl patch storageclass managed-nfs-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
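With a default class set, a PVC that omits storageClassName is provisioned from it. A minimal sketch (the claim name is illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: default-class-claim   # illustrative name
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
  # no storageClassName: the default StorageClass is used
```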
4 Test
Deploy a PVC, or an application that declares storage, and check whether a PV is automatically created and bound to the PVC.
4.1 test-claim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
Apply the claim:
#kubectl create -f test-claim.yaml
4.2 nginx-demo.yaml
Nginx application deployed in StatefulSet
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx"
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: nginx:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "managed-nfs-storage"
      resources:
        requests:
          storage: 1Gi