Kubernetes Certification Exam Self-Study Series | Dynamic Volume Provisioning

Source: "CKA/CKAD Test Guide: From Docker to Kubernetes Complete Raiders"

These are reading notes organized while studying the book, shared here for everyone. If there is any copyright infringement, they will be deleted. Thank you for your support!

See also the summary post: Kubernetes Certification Exam Self-study Series | Summary


6.5.1 Workflow of storageClass

When defining a storageClass, a provisioner must be specified. The provisioner determines which backend storage is used when a pv is dynamically created.

For example, a pv's backend storage can be aws-ebs, an LVM logical volume (lv), or hostPath.

When using a storageClass to dynamically create pvs, you should choose a provisioner that matches the backend storage. But provisioners such as lvmplugin.csi.alibabacloud.com and hostpath.csi.k8s.io are not built into kubernetes, so where do they come from?

Provisioners that are not built in are called external provisioners. They are provided by third parties and are implemented as custom CSI drivers (Container Storage Interface drivers).

The overall flow is as follows: when creating the storageClass, the administrator specifies the provisioner through the .provisioner field. After the storageClass is created, the user specifies which storageClass to use through .spec.storageClassName when defining a pvc, as shown in Figure 6-8.

When the pvc is created, the storageClass is notified; through its associated provisioner it reaches the backend storage and then dynamically creates a pv, which is bound to this pvc.
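A minimal sketch of this relationship is shown below (the names example-sc and example-pvc are illustrative; the real files used in this chapter appear in the steps that follow). The storageClass names the provisioner, and the pvc names the storageClass:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-sc               # the name the pvc will reference
provisioner: example.com/nfs     # which (external) provisioner creates the pv
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: example-pvc
spec:
  storageClassName: example-sc   # ties this pvc to the storageClass above
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi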

6.5.2 Using NFS for dynamic volume provisioning

An NFS share was configured earlier, and because the configuration is relatively simple, NFS is used here as the backend storage for dynamic volume provisioning.

Step 1: Create a directory /vdisk on the storage server 192.168.26.30 and share this directory.

[root@vms30 ~]# cat /etc/exports
/123      *(rw,async,no_root_squash)
/zz     *(rw,async,no_root_squash)
/vdisk    *(rw,async,no_root_squash)
[root@vms30 ~]# exportfs -avr 
exporting *:/vdisk 
exporting *:/zz 
exporting *:/123
[root@vms30 ~]#
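Before continuing, it helps to confirm that the share is reachable from the kubernetes nodes. A quick check from any node (assuming the NFS client utilities, e.g. nfs-utils, are installed there; the output below is approximate):

[root@vms10 ~]# showmount -e 192.168.26.30
Export list for 192.168.26.30:
/vdisk *
/zz    *
/123   *
[root@vms10 ~]#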

Because kubernetes does not have a built-in NFS provisioner, you need to download the related plugin to create an NFS external provisioner.

Step 2: Install the git client tool on vms10 first.

[root@vms10 volume]# yum install git -y
   ... output ...
[root@vms10 volume]#

Step 3: Clone the project and change into the directory.

[root@vms10 volume]# git clone https://github.com/kubernetes-incubator/external-storage.git
Cloning into 'external-storage'...
   ... output ...
[root@vms10 volume]#
[root@vms10 volume]# cd external-storage/nfs-client/deploy/
[root@vms10 deploy]#

Step 4: Deploy rbac permissions.

You need to replace the namespace specified in rbac.yaml with nsvolume, and then deploy rbac.

[root@vms10 deploy]# sed -i 's/namespace: default/namespace: nsvolume/g' rbac.yaml 
[root@vms10 deploy]#
[root@vms10 deploy]# kubectl apply -f rbac.yaml 
   ... output ...
[root@vms10 deploy]#
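For reference, rbac.yaml in this project defines roughly the following objects (condensed here, with the leader-election Role/RoleBinding omitted; verify against your copy of the file): a ServiceAccount for the provisioner pod, a ClusterRole granting the rights needed to watch pvcs and create pvs, and a ClusterRoleBinding connecting the two.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: nsvolume            # the namespace substituted by sed above
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: nsvolume
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io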

Step 5: Modify /etc/kubernetes/manifests/kube-apiserver.yaml.

In kubernetes v1.20 and later versions, you need to modify /etc/kubernetes/manifests/kube-apiserver.yaml and add the following flag to the kube-apiserver arguments:

- --feature-gates=RemoveSelfLink=false

Then restart the kubelet.
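For clarity, the flag goes into the command list of the kube-apiserver static pod manifest. A condensed sketch of where it lands (all other fields and existing flags are left unchanged):

apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    # ... existing flags stay as they are ...
    - --feature-gates=RemoveSelfLink=false   # the line added in this step
    ...

Because this file is a static pod manifest, the kubelet recreates the kube-apiserver pod once the file changes (restarting the kubelet, as above, forces this).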

6.5.3 Deploying the NFS provisioner

Because the NFS provisioner is not built in, it needs to be created first.

Step 1: Open deployment.yaml with the vim editor and modify the following content.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner 
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed 
  namespace: nsvolume
   ... output ...
    spec:
      serviceAccountName: nfs-client-provisioner 
      containers:
      - name: nfs-client-provisioner 
        image: quay.io/external_storage/nfs-client-provisioner:latest 
        imagePullPolicy: IfNotPresent 
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME 
          value: fuseim.pri/ifs        # must match the provisioner field in class.yaml
        - name: NFS_SERVER 
          value: 192.168.26.30         # the NFS server configured in step 1
        - name: NFS_PATH
          value: /vdisk                # the exported directory
      volumes:
      - name: nfs-client-root 
        nfs:
          server: 192.168.26.30
          path: /vdisk

Step 2: Deploy the NFS provisioner.

[root@vms10 deploy]# kubectl apply -f deployment.yaml 
deployment.apps/nfs-client-provisioner created 
[root@vms10 deploy]# 

Step 3: Check the running status of the pod.

[root@vms10 deploy]# kubectl get pods
NAME                                        READY    STATUS    RESTARTS    AGE
nfs-client-provisioner-7544459d44-dpjtx     1/1      Running   0           5s
[root@vms10 deploy]#  
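If the pod does not reach Running, the provisioner's log is the first place to look. One way to check it (assuming the deployment was created in the nsvolume namespace, as configured above):

[root@vms10 deploy]# kubectl logs -n nsvolume deploy/nfs-client-provisioner
   ... output ...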

6.5.4 Deploying the storageClass

After creating the NFS provisioner, let's create a storageClass that uses it.

Step 1: Create storageClass.

There is a file named class.yaml in the current directory, which is used to create the storageClass. Its content is as follows.

[root@vms10 deploy]# cat class.yaml 
apiVersion: storage.k8s.io/v1
kind: StorageClass 
metadata:
  name: managed-nfs-storage 
provisioner: fuseim.pri/ifs
parameters:
  archiveOnDelete: "false"
[root@vms10 deploy]#

This yaml file creates a storageClass named managed-nfs-storage that uses the provisioner named fuseim.pri/ifs. Note that this name must match the PROVISIONER_NAME environment variable set in deployment.yaml; otherwise the provisioner will not handle pvcs that reference this storageClass.
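As a side note, not part of the book's steps: if you want pvcs that specify no storageClass at all to fall back to this one automatically, the standard kubernetes way is to mark it as the cluster default with an annotation:

apiVersion: storage.k8s.io/v1
kind: StorageClass 
metadata:
  name: managed-nfs-storage 
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"   # marks this class as the cluster default
provisioner: fuseim.pri/ifs
parameters:
  archiveOnDelete: "false"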

Step 2: Check whether any storageClass exists yet.

[root@vms10 deploy]# kubectl get sc 
No resources found 
[root@vms10 deploy]#

Step 3: Deploy and view the storage class.

[root@vms10 deploy]# kubectl apply -f class.yaml
storageclass.storage.k8s.io/managed-nfs-storage created 
[root@vms10 deploy]# 
[root@vms10 deploy]# kubectl get sc
NAME                  PROVISIONER      RECLAIMPOLICY   VOLUMEBINDINGMODE   ... 
managed-nfs-storage   fuseim.pri/ifs   Delete          Immediate           ...
[root@vms10 deploy]#

Step 4: Check whether pvc and pv exist currently.

[root@vms10 deploy]# kubectl get pvc
No resources found in nsvolume namespace.
[root@vms10 deploy]# kubectl get pv 
No resources found
[root@vms10 deploy]#

No pv or pvc currently exists.

Step 5: Prepare the yaml file for the pvc.

[root@vms10 deploy]# cp test-claim.yaml pvc1.yaml 
[root@vms10 deploy]# cat pvc1.yaml 
kind: PersistentVolumeClaim 
apiVersion: v1
metadata:
  name: pvc1
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
spec:
  accessModes:
    - ReadWriteMany 
  resources: 
    requests:
      storage: 1Mi 
[root@vms10 deploy]#

Here, the storageClass to use is specified in the annotations; it can also be specified with the .spec.storageClassName field, as follows.

kind: PersistentVolumeClaim 
apiVersion: v1
metadata:
  name: pvc1
spec:
  accessModes:
    - ReadWriteMany 
  resources:
    requests:
      storage: 1Mi 
  storageClassName: managed-nfs-storage

Step 6: Now create the pvc.

[root@vms10 deploy]# kubectl apply -f pvc1.yaml 
persistentvolumeclaim/pvc1 created
[root@vms10 deploy]#

Step 7: Check whether the pvc has been created.

[root@vms10 deploy]# kubectl get pvc
NAME  STATUS  VOLUME                                     CAPACITY    ACCESS MODES    STORAGECLASS          AGE
pvc1  Bound   pvc-edce9ee1-e0c0-4527-a3e8-15b94bf45fc0   1Mi         RWX             managed-nfs-storage   4s

Step 8: View the pv.

[root@vms10 deploy]# kubectl get pv 
NAME                                       CAPACITY    ACCESS MODES    RECLAIM POLICY    STATUS    CLAIM         STORAGECLASS           REASON  AGE
pvc-edce9ee1-e0c0-4527-a3e8-15b94bf45fc0   1Mi         RWX             Delete            Bound     nsvolume/pvc1   managed-nfs-storage             9s
[root@vms10 deploy]#

It can be seen from here that not only was pvc1 created, but a pv named pvc-edce9ee1-e0c0-4527-a3e8-15b94bf45fc0 was also created and bound to pvc1.

Step 9: View the properties of this pv.

[root@vms10 deploy]# kubectl describe pv pvc-edce9ee1-e0c0-4527-a3e8-15b94bf45fc0
   ... 输出 ...
Source:
    Type:      NFS (an NFS mount that lasts the lifetime of a pod)
    Server:    192.168.26.30
    Path:      /vdisk/nsvolume-pvc1-pvc-edce9ee1-e0c0-4527-a3e8-15b94bf45fc0
    ReadOnly:  false
Events:        <none>
[root@vms10 deploy]#

You can see that the storage type used by this pv is NFS.
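To close the loop, a pvc only does something once a pod mounts it. Below is a minimal sketch of a pod that consumes pvc1 (the pod name, image, and mount path here are illustrative, not from the book):

apiVersion: v1
kind: Pod
metadata:
  name: pod-pvc1               # illustrative name
spec:
  containers:
  - name: demo
    image: nginx
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: data
      mountPath: /usr/share/nginx/html   # the NFS-backed pv appears here inside the container
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: pvc1          # the pvc created in step 6

Anything the pod writes under the mount path lands in the corresponding /vdisk/... directory on 192.168.26.30.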

Originally published at blog.csdn.net/guolianggsta/article/details/130797059