Docker (20) - Kubernetes storage - volume configuration management - persistent volumes - static and dynamic allocation

1. Introduction

Reference: the official Kubernetes documentation on persistent volumes.

  • A PersistentVolume (PV) is a piece of networked storage in the cluster that has been provisioned by an administrator. Like a node, a PV is a resource in the cluster. It is a volume plugin similar to Volumes, but its lifecycle is independent of any individual Pod that uses it. The PV API object captures the implementation details of NFS, iSCSI, or cloud-provider storage systems.

  • A PersistentVolumeClaim (PVC) is a request for storage made by a user. It is analogous to a Pod: a Pod consumes node resources, while a PVC consumes PV resources. Pods can request specific amounts of resources (such as CPU and memory); PVCs can request a specific size and access mode (for example, mounted once read-write or many times read-only).

  • PVs can be provided in two ways: statically and dynamically.
    Static PV: the cluster administrator creates a number of PVs that carry the details of the real storage available to cluster users. They exist in the Kubernetes API and are ready to be consumed.
    Dynamic PV: when none of the administrator's static PVs match a user's PVC, the cluster may try to dynamically provision a volume for that PVC. This provisioning is based on a StorageClass.

  • The binding of PVC and PV is a one-to-one mapping. If no matching PV is found, the PVC will remain unbound indefinitely.

  • Usage
    A Pod uses a PVC the same way it uses a volume. The cluster inspects the PVC, finds the bound PV, and mounts that PV into the Pod. For PVs that support multiple access modes, the user specifies which mode to use. Once a user has a PVC and that PVC is bound, the bound PV belongs to the user for as long as it is needed. The user schedules Pods and accesses the PV by including the PVC in the Pod's volumes block (see the minimal sketch after this list).

  • Release
    When a user is done with a PV, they can delete the PVC object through the API. Once the PVC is deleted, the corresponding PV is considered "released", but it cannot yet be claimed by another PVC: the previous claimant's data is still on the volume and must be handled according to the reclaim policy.


  • Reclaiming
    The reclaim policy of a PV tells the cluster what to do with the volume after it has been released. Currently, volumes can be Retained, Recycled, or Deleted. Retain allows the resource to be reclaimed manually. For volumes that support it, Delete removes both the PV object from Kubernetes and the corresponding external storage asset (such as an AWS EBS, GCE PD, Azure Disk, or Cinder volume). Dynamically provisioned volumes inherit the reclaim policy of their StorageClass, which defaults to Delete.

  • Access modes
    ReadWriteOnce - the volume can be mounted read-write by a single node
    ReadOnlyMany - the volume can be mounted read-only by many nodes
    ReadWriteMany - the volume can be mounted read-write by many nodes
    On the command line, the access modes are abbreviated as:
    RWO - ReadWriteOnce
    ROX - ReadOnlyMany
    RWX - ReadWriteMany

  • Reclaim policies
    Retain: keep the volume; it must be reclaimed manually
    Recycle: scrub the volume (delete its data) so it can be claimed again
    Delete: delete the associated storage asset, such as an AWS EBS, GCE PD, Azure Disk, or OpenStack Cinder volume
    Currently, only NFS and HostPath support Recycle; AWS EBS, GCE PD, Azure Disk, and OpenStack Cinder volumes support Delete.

  • Status
    Available: a free resource, not yet bound to a PVC
    Bound: bound to a PVC
    Released: the PVC has been deleted, but the PV has not yet been reclaimed by the cluster
    Failed: automatic reclamation of the PV failed
    The command line also shows the name of the PVC each PV is bound to.
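A minimal sketch of the usage described above. The names (demo-pod, my-claim) and the mount path are illustrative only; a complete NFS-backed example follows in section 2:

apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
  - name: web
    image: myapp:v1
    volumeMounts:
    - mountPath: /usr/share/nginx/html     ## where the claimed storage appears in the container
      name: data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-claim                  ## an existing PVC in the same namespace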

2. NFS PV example (static allocation)

2.1. Ensure that the environment is clean

[root@server2 volumes]# kubectl get pod 
NAME     READY   STATUS    RESTARTS   AGE
nfs-pd   1/1     Running   0          12m
[root@server2 volumes]# kubectl delete -f nfs.yaml    ## clean up the environment first
[root@server2 volumes]# kubectl get pv
No resources found
[root@server2 volumes]# kubectl get pvc
No resources found in default namespace.
[root@server2 volumes]# kubectl get pod
No resources found in default namespace.


2.2 Create the required resources


## 1. Install and configure the NFS service (already done earlier):
# yum install -y nfs-utils
# mkdir -m 777 /nfsdata
# vim /etc/exports
#      /nfsdata	*(rw,sync,no_root_squash)
# systemctl enable --now rpcbind
# systemctl enable --now nfs

## 2. Environment on server1 and every node
[root@server1 nfsdata]# mkdir pv1 pv2 pv3   ## create the corresponding directories
[root@server1 nfsdata]# ll
total 0
drwxr-xr-x 2 root root 6 Feb 25 11:21 pv1
drwxr-xr-x 2 root root 6 Feb 25 11:21 pv2
drwxr-xr-x 2 root root 6 Feb 25 11:21 pv3
[root@server1 pv1]# echo www.westos.org > index.html   ## write a test file in each directory
[root@server1 pv2]# echo www.redhat.org > index.html
[root@server1 pv3]# echo www.baidu.com > index.html

[root@server3 ~]# yum install nfs-utils -y   ## nfs-utils must be installed on every node
[root@server4 ~]# yum install nfs-utils -y 
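
Before writing the PV manifests it is worth confirming that every node can actually see the export; a quick check, assuming the NFS server address 172.25.13.1 used throughout this example:

[root@server3 ~]# showmount -e 172.25.13.1    ## list the exports visible from this node
Export list for 172.25.13.1:
/nfsdata *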


2.3 Write pv, pvc, pod files and test

2.3.1 Create pv

[root@server2 volumes]# vim pv1.yaml 
[root@server2 volumes]# cat pv1.yaml    ## PV manifest
apiVersion: v1
kind: PersistentVolume       ## PV object
metadata:
  name: pv1
spec:
  capacity:
    storage: 5Gi           ## capacity; not necessarily all of it will be used
  volumeMode: Filesystem   ## the volume mode is Filesystem
  accessModes:
    - ReadWriteOnce       ## single-node read-write
  persistentVolumeReclaimPolicy: Recycle    ## reclaim policy: Recycle
  storageClassName: nfs
  nfs:
    path: /nfsdata/pv1
    server: 172.25.13.1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv2
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfs
  nfs:
    path: /nfsdata/pv2
    server: 172.25.13.1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv3
spec:
  capacity:
    storage: 20Gi
  volumeMode: Filesystem
  accessModes:
    - ReadOnlyMany
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfs
  nfs:
    path: /nfsdata/pv3
    server: 172.25.13.1
 
[root@server2 volumes]# kubectl apply -f pv1.yaml    ## apply the manifest
persistentvolume/pv1 created
persistentvolume/pv2 created
persistentvolume/pv3 created
[root@server2 volumes]# kubectl get pv   ## list the PVs
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv1    5Gi        RWO            Recycle          Available           nfs                     6s
pv2    10Gi       RWX            Recycle          Available           nfs                     6s
pv3    20Gi       ROX            Recycle          Available           nfs                     6s


2.3.2 Create pvc and pod

[root@server2 volumes]# vim pvc.yaml 
[root@server2 volumes]# cat pvc.yaml    ## PVC and Pod manifests
apiVersion: v1
kind: PersistentVolumeClaim     ## PVC object
metadata:
  name: pvc1
spec:                     ## the fields below are matching rules; if no suitable PV exists, the PVC stays Pending until one appears
  storageClassName: nfs   ## storage class name: nfs
  accessModes:
    - ReadWriteOnce       ## match single-node read-write
  resources:
    requests:
      storage: 5Gi       ## request 5Gi; the bound PV must provide at least this much
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc2
spec:
  storageClassName: nfs
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi

---
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: myapp:v1
    name: nginx
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: nfs-pv
  volumes:
  - name: nfs-pv
    persistentVolumeClaim:
      claimName: pvc1

---
apiVersion: v1
kind: Pod
metadata:
  name: test-pd-2
spec:
  containers:
  - image: myapp:v1
    name: nginx
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: nfs-pv-2
  volumes:
  - name: nfs-pv-2
    persistentVolumeClaim:    ## reference the PVC by name
      claimName: pvc2
      
[root@server2 volumes]# kubectl  apply -f pvc.yaml 
[root@server2 volumes]# kubectl get pvc
NAME   STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc1   Bound    pv1      5Gi        RWO            nfs            9s
pvc2   Bound    pv2      10Gi       RWX            nfs            9s
[root@server2 volumes]# kubectl get pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM          STORAGECLASS   REASON   AGE
pv1    5Gi        RWO            Recycle          Bound       default/pvc1   nfs                     2m59s
pv2    10Gi       RWX            Recycle          Bound       default/pvc2   nfs                     2m59s
pv3    20Gi       ROX            Recycle          Available                  nfs                     2m59s
[root@server2 volumes]# kubectl get pod   


2.3.3 Test

[root@server2 volumes]# kubectl get pod -o wide 
NAME        READY   STATUS    RESTARTS   AGE    IP               NODE      NOMINATED NODE   READINESS GATES
test-pd     1/1     Running   0          112s   10.244.141.208   server3   <none>           <none>
test-pd-2   1/1     Running   0          112s   10.244.22.8      server4   <none>           <none>
[root@server2 volumes]# curl 10.244.141.208    ## curl each Pod IP and check that the matching test file is served
www.westos.org 
[root@server2 volumes]# curl 10.244.22.8   
www.redhat.org
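
A quick way to confirm that the data lives on the PV rather than inside the container is to delete one of the Pods and recreate it from the same manifest; the content should still be served. A sketch of the check (the new Pod IP may differ):

[root@server2 volumes]# kubectl delete pod test-pd        ## the PVC and PV stay Bound
[root@server2 volumes]# kubectl apply -f pvc.yaml         ## recreate test-pd from the same manifest
[root@server2 volumes]# kubectl get pod test-pd -o wide   ## note the (possibly new) Pod IP
[root@server2 volumes]# curl <new-pod-ip>                 ## expected to return www.westos.org again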


2.3.4 Supplementary commands

## Deletion
[root@server2 volumes]# kubectl delete pod <pod-name>
[root@server2 volumes]# kubectl delete pv <pv-name>
[root@server2 volumes]# kubectl delete pvc <pvc-name>
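
The reclaim policy of an existing PV can also be changed in place, for example to switch a volume from Recycle to Retain so that released data is kept for manual inspection. A sketch, using the pv1 created above:

[root@server2 volumes]# kubectl patch pv pv1 -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
[root@server2 volumes]# kubectl get pv pv1    ## the RECLAIM POLICY column now shows Retain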

3. Dynamic allocation

3.1 Introduction

  • StorageClass provides a way to describe classes of storage. Different classes may map to different quality-of-service levels, backup policies, or other arbitrary policies.

  • Each StorageClass contains the provisioner, parameters, and reclaimPolicy fields, which are used when a PersistentVolume belonging to the class needs to be dynamically provisioned.

  • StorageClass attributes (a minimal manifest sketch follows after this list)
    Provisioner: determines which volume plugin is used to provision PVs; this field is required. Either an internal or an external provisioner can be specified. External provisioners live in the kubernetes-incubator/external-storage repository, which includes NFS and Ceph among others.
    Reclaim policy: the reclaimPolicy field sets the reclaim policy of the PersistentVolumes created by this class, either Delete or Retain; if not specified, the default is Delete.
    More attributes: https://kubernetes.io/zh/docs/concepts/storage/storage-classes/

  • The NFS Client Provisioner is an automatic provisioner that uses NFS as its backing storage and automatically creates PVs for matching PVCs. It does not provide NFS storage itself; it requires an existing external NFS server.
    PVs are provisioned on the NFS server as directories named ${namespace}-${pvcName}-${pvName}; when a PV is reclaimed, the directory is renamed to archived-${namespace}-${pvcName}-${pvName}.
    nfs-client-provisioner source code (old): https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client
    The newer repository, kubernetes-sigs/nfs-subdir-external-provisioner, is the version used in the experiment below.
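
For illustration, a minimal StorageClass manifest showing the fields described above. The class name is arbitrary; the provisioner name matches the NFS external provisioner deployed in the example below, and reclaimPolicy can be omitted (it defaults to Delete):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-nfs
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner   ## must match the deployed provisioner's PROVISIONER_NAME
reclaimPolicy: Delete                                      ## Delete or Retain; Delete is the default
parameters:
  archiveOnDelete: "false"                                 ## provisioner-specific parameter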

3.2 Example

## 1. Clean up the environment
[root@server2 volumes]# kubectl delete -f pvc.yaml   ## remove the previous example
[root@server2 volumes]# kubectl delete -f pv1.yaml
[root@server1 ~]# cd /nfsdata/     ## delete the data on the NFS server
[root@server1 nfsdata]# ls
pv1  pv2  pv3
[root@server1 nfsdata]# rm -fr *

## 2. Pull the image and push it to the local registry
[root@server1 nfsdata]# docker search k8s-staging-sig-storage
NAME                                      DESCRIPTION                                     STARS     OFFICIAL   AUTOMATED
yuufnn/nfs-external-provisioner           gcr.io/k8s-staging-sig-storage/nfs-subdir-ex…   0                    
heegor/nfs-subdir-external-provisioner    Image backup for gcr.io/k8s-staging-sig-stor…   0                    
zelaxyz/nfs-subdir-external-provisioner   #Dockerfile FROM gcr.io/k8s-staging-sig-stor…   0                    
yuufnn/nfs-subdir-external-provisioner    gcr.io/k8s-staging-sig-storage/nfs-subdir-ex…   0                    
[root@server1 nfsdata]# docker pull heegor/nfs-subdir-external-provisioner:v4.0.0
[root@server1 nfsdata]# docker tag heegor/nfs-subdir-external-provisioner:v4.0.0 reg.westos.org/library/nfs-subdir-external-provisioner:v4.0.0
[root@server1 nfsdata]# docker push reg.westos.org/library/nfs-subdir-external-provisioner:v4.0.0

## 3. Configuration
[root@server2 volumes]# mkdir nfs-client
[root@server2 volumes]# cd nfs-client/
[root@server2 nfs-client]# pwd
/root/volumes/nfs-client
[root@server2 nfs-client]# vim nfs-client-provisioner.yaml
[root@server2 nfs-client]# cat nfs-client-provisioner.yaml     ## dynamic provisioning manifest
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs-client-provisioner           ## deployed in a dedicated namespace (created below)
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: nfs-client-provisioner
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs-client-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: nfs-client-provisioner
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: nfs-subdir-external-provisioner:v4.0.0
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 172.25.13.1
            - name: NFS_PATH
              value: /nfsdata
      volumes:
        - name: nfs-client-root
          nfs:
            server: 172.25.13.1
            path: /nfsdata
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "true"     ## archive (keep a backup directory of) the data after the PVC is deleted

[root@server2 nfs-client]# vim pvc.yaml   ## test manifest: a PVC and a Pod
[root@server2 nfs-client]# cat pvc.yaml 
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
---
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: myapp:v1
    volumeMounts:
      - name: nfs-pvc
        mountPath: "/usr/share/nginx/html"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim

[root@server2 nfs-client]# kubectl create namespace nfs-client-provisioner  ## create the corresponding namespace for easier management
[root@server2 nfs-client]# kubectl apply -f nfs-client-provisioner.yaml   ## apply the dynamic provisioner
[root@server2 nfs-client]# kubectl get pod -n nfs-client-provisioner   ## check the provisioner Pod
[root@server2 nfs-client]# kubectl get sc      ## StorageClass

[root@server2 nfs-client]# kubectl apply -f pvc.yaml     ## apply the test manifest
[root@server2 nfs-client]# kubectl get pv      ## a PV is provisioned automatically
[root@server2 nfs-client]# kubectl get pvc     ## the PVC should be Bound


## 4. Test
[root@server1 nfsdata]# ls   ## a data directory is created following the naming convention
default-test-claim-pvc-bc952d4e-47a5-4ac4-9d95-5cd2e6132ebf
[root@server1 nfsdata]# cd default-test-claim-pvc-bc952d4e-47a5-4ac4-9d95-5cd2e6132ebf/
[root@server1 default-test-claim-pvc-bc952d4e-47a5-4ac4-9d95-5cd2e6132ebf]# echo www.westos.org > index.html
[root@server1 default-test-claim-pvc-bc952d4e-47a5-4ac4-9d95-5cd2e6132ebf]# 


[root@server2 nfs-client]# kubectl get pod -o wide 
NAME       READY   STATUS    RESTARTS   AGE    IP             NODE      NOMINATED NODE   READINESS GATES
test-pod   1/1     Running   0          5m5s   10.244.22.10   server4   <none>           <none>
[root@server2 nfs-client]# curl 10.244.22.10
www.westos.org
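
Because archiveOnDelete is set to "true" in the StorageClass, deleting the PVC would not discard the data outright: the provisioner renames the backing directory with an archived- prefix. A hypothetical sketch (not run here, since test-claim is reused in section 4; the UID suffix will differ):

# kubectl delete pvc test-claim     ## deleting the claim triggers reclamation of the dynamic PV
# ls /nfsdata                       ## on the NFS server the directory is archived, not removed
archived-default-test-claim-pvc-<uid>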


4. The default StorageClass

  • The default StorageClass is used to dynamically provision storage for PersistentVolumeClaims that do not request a specific storage class (there can be only one default StorageClass).
    If there is no default StorageClass and a PVC does not set storageClassName, the PVC can only be bound to PVs whose storageClassName is "" (empty).

4.1 The case without StorageClass

[root@server2 nfs-client]# vim demo.yaml 
[root@server2 nfs-client]# cat demo.yaml     ## test without specifying a StorageClass
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim-2
spec:
#  storageClassName: managed-nfs-storage
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 5Gi

[root@server2 nfs-client]# kubectl apply -f demo.yaml 
[root@server2 nfs-client]# kubectl get pvc    ## with no class specified and no default class, the PVC stays Pending


4.2 Set the default StorageClass

kubectl patch storageclass <your-class-name> -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'  ## template
[root@server2 nfs-client]# kubectl patch storageclass managed-nfs-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'   ## mark this sc as the default
[root@server2 nfs-client]# kubectl get sc   ## check that the default was set
NAME                            PROVISIONER                                   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
managed-nfs-storage (default)   k8s-sigs.io/nfs-subdir-external-provisioner   Delete          Immediate           false                  58m


## Observe the effect
[root@server2 nfs-client]# kubectl delete -f demo.yaml 
[root@server2 nfs-client]# kubectl apply -f demo.yaml 
[root@server2 nfs-client]# kubectl get pvc
NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
test-claim     Bound    pvc-bc952d4e-47a5-4ac4-9d95-5cd2e6132ebf   2Gi        RWX            managed-nfs-storage   55m
test-claim-2   Bound    pvc-2262d8b4-c660-4301-aad5-2ec59516f14e   5Gi        ROX            managed-nfs-storage   2s
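
To revert, the same annotation can be patched back to "false"; only one StorageClass should be marked as default at a time:

[root@server2 nfs-client]# kubectl patch storageclass managed-nfs-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'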




Origin blog.csdn.net/qwerty1372431588/article/details/114065723