Kubernetes dynamic provisioning series: NFS-based PVs with StorageClass

I. Introduction

A PersistentVolume (PV) is a piece of storage on a storage system that has been provisioned by the cluster administrator. It is an abstraction over the underlying shared storage that turns that storage into a resource user applications can consume, implementing a "storage consumption" mechanism. Through volume plugins, PVs support many back-end network and cloud storage systems, such as NFS, RBD, and Cinder. A PV is a cluster-level resource and does not belong to any namespace. Users request PV resources through a PersistentVolumeClaim (PVC), in which they specify the required size and access mode (for example read-write or read-only); the PVC is bound to a matching PV, and the resulting storage volume is then associated with Pod resources, as shown below:
[Figure: binding relationship between Pod, PVC, and PV]
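As a minimal sketch of this static workflow, an administrator could pre-create an NFS-backed PV and a user would then claim it with a PVC. The names, server address, and export path below are illustrative:

```yaml
# Illustrative static PV/PVC pair; names, server, and path are placeholders.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-demo
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.1.100
    path: /exports/demo
---
# The PVC states only size and access mode; the control plane binds it
# to a PV that satisfies the request.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nfs-demo
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
```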

Although PVCs let users consume storage resources in an abstract way, users often still care about specific PV attributes, such as performance parameters tuned for different scenarios. To cover all these needs the cluster administrator would have to pre-create many PV variants, and any mismatch between supply and demand means some users' needs cannot be met in a timely and effective manner. To address this, Kubernetes 1.4 introduced a new resource object, StorageClass, which defines a class of storage with distinctive characteristics rather than a specific PV, for example "fast" and "slow", or "gold", "silver", and "bronze". The user's PVC simply names the desired class; the claim is then matched against a PV the administrator created in advance, or a PV is dynamically provisioned on demand, which eliminates the need to pre-create PVs at all.
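In this model the user's claim only names a class. As a sketch (the class name "fast" is illustrative, standing in for a class the administrator would have defined):

```yaml
# A PVC that requests storage by class name rather than by a specific PV.
# "fast" is an illustrative StorageClass name, not one defined in this article.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fast-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fast
  resources:
    requests:
      storage: 1Gi
```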
PVs support different storage systems through volume plugins; Kubernetes currently supports the plugin types listed below.
Official documentation: https://kubernetes.io/docs/concepts/storage/storage-classes/
[Figure: table of volume plugins and their internal provisioner support]

As the table above shows, Kubernetes ships no internal provisioner for NFS, but dynamic provisioning for NFS can still be achieved with a third-party plugin, which is what the rest of this article covers.

II. Install the NFS client provisioner plugin

GitHub address: https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client/deploy


1. Download the required files

for file in class.yaml deployment.yaml rbac.yaml  ; do wget https://raw.githubusercontent.com/kubernetes-incubator/external-storage/master/nfs-client/deploy/$file ; done

2. Create RBAC authorization

# cat rbac.yaml
kind: ServiceAccount
apiVersion: v1
metadata:
  name: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

3. Create the StorageClass

# cat class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs # or choose another name; must match the deployment's PROVISIONER_NAME env variable
parameters:
  archiveOnDelete: "false"
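To verify that dynamic provisioning works, you can create a small test PVC against the new class. This is a sketch based on the test-claim.yaml shipped alongside the other files in the same upstream repository (the upstream file uses the older storage-class annotation; `storageClassName` is the modern equivalent), and it is what produced the 1Mi `test-claim` visible in the `kubectl` output later in this article:

```yaml
# Minimal PVC to exercise the managed-nfs-storage class.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
```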

4. Create the NFS client provisioner Deployment, changing the NFS server IP and mount path to match your environment

# cat deployment.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:v2.0.0
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 192.168.1.100
            - name: NFS_PATH
              value: /huoban/k8s
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.1.100
            path: /huoban/k8s

III. Example: dynamically provisioning PVs for an application

The following diagram shows how a StatefulSet application dynamically requests PVs:

[Figure: dynamic PV provisioning flow for a StatefulSet]

For example, create an nginx StatefulSet whose PVs are provisioned dynamically:

# cat nginx.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx"
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      imagePullSecrets:
      - name: huoban-harbor
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: harbor.huoban.com/open/huoban-nginx:v1.1
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "managed-nfs-storage"
      resources:
        requests:
          storage: 1Gi
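`volumeClaimTemplates` creates one PVC per replica, named after the claim template plus the Pod's name (itself the StatefulSet name plus an ordinal). A sketch of that naming convention, using the names from the manifest above:

```shell
# PVC names follow <claim-template-name>-<statefulset-name>-<ordinal>.
claim=www
sts=web
for i in 0 1 2; do
  echo "${claim}-${sts}-${i}"
done
```

This yields `www-web-0`, `www-web-1`, and `www-web-2`, matching the PVCs in the output below.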

After everything starts, we can inspect the resources:

# kubectl get pod,pv,pvc
NAME                                         READY   STATUS    RESTARTS   AGE
pod/nfs-client-provisioner-fcb58977d-l5cs4   1/1     Running   0          20h
pod/web-0                                    1/1     Running   0          175m
pod/web-1                                    1/1     Running   0          175m
pod/web-2                                    1/1     Running   0          175m

NAME                                                                           CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS          REASON   AGE
persistentvolume/default-test-claim-pvc-e5a66781-b46e-4191-8f51-5d1a571ca530   1Mi        RWX            Delete           Bound    default/test-claim   managed-nfs-storage            20h
persistentvolume/default-www-web-0-pvc-0a578ef2-63e3-49bb-87c0-88166d3e0e65    1Gi        RWO            Delete           Bound    default/www-web-0    managed-nfs-storage            20h
persistentvolume/default-www-web-1-pvc-78061eb6-c36b-44db-9472-f2684f85a4b9    1Gi        RWO            Delete           Bound    default/www-web-1    managed-nfs-storage            20h
persistentvolume/default-www-web-2-pvc-ec760344-a35a-4048-b8aa-6452d6a62337    1Gi        RWO            Delete           Bound    default/www-web-2    managed-nfs-storage            20h

NAME                               STATUS   VOLUME                                                        CAPACITY   ACCESS MODES   STORAGECLASS          AGE
persistentvolumeclaim/test-claim   Bound    default-test-claim-pvc-e5a66781-b46e-4191-8f51-5d1a571ca530   1Mi        RWX            managed-nfs-storage   20h
persistentvolumeclaim/www-web-0    Bound    default-www-web-0-pvc-0a578ef2-63e3-49bb-87c0-88166d3e0e65    1Gi        RWO            managed-nfs-storage   20h
persistentvolumeclaim/www-web-1    Bound    default-www-web-1-pvc-78061eb6-c36b-44db-9472-f2684f85a4b9    1Gi        RWO            managed-nfs-storage   20h
persistentvolumeclaim/www-web-2    Bound    default-www-web-2-pvc-ec760344-a35a-4048-b8aa-6452d6a62337    1Gi        RWO            managed-nfs-storage   20h

On the NFS server, three mount directories have been generated automatically; their data remains even after a Pod is deleted:

# ll
drwxrwxrwx 2 root root 4096 Oct 23 17:31 default-www-web-0-pvc-0a578ef2-63e3-49bb-87c0-88166d3e0e65
drwxrwxrwx 2 root root 4096 Oct 23 17:31 default-www-web-1-pvc-78061eb6-c36b-44db-9472-f2684f85a4b9
drwxrwxrwx 2 root root 4096 Oct 23 17:40 default-www-web-2-pvc-ec760344-a35a-4048-b8aa-6452d6a62337
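The provisioner derives each directory name from the namespace, the PVC name, and the PV name. A sketch of that convention, using the first volume above:

```shell
# nfs-client provisioner directory naming: ${namespace}-${pvcName}-${pvName}
namespace=default
pvcName=www-web-0
pvName=pvc-0a578ef2-63e3-49bb-87c0-88166d3e0e65
dir="${namespace}-${pvcName}-${pvName}"
echo "$dir"
```

This reproduces the first directory name in the listing above.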

StatefulSet applications have the following characteristics:

1. A stable, unique network identity

2. Stable DNS names of the form <statefulset-name>-<ordinal>.<service-name>.<namespace>.svc.cluster.local, e.g. web-0.nginx.default.svc.cluster.local

3. Stable, dedicated persistent storage

4. Ordered deployment and deletion
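As a quick sketch of the DNS naming pattern, using the StatefulSet, Service, and namespace from the example above:

```shell
# Pod FQDNs: <statefulset-name>-<ordinal>.<service-name>.<namespace>.svc.cluster.local
sts=web
svc=nginx
ns=default
for i in 0 1 2; do
  echo "${sts}-${i}.${svc}.${ns}.svc.cluster.local"
done
```

This prints the stable names web-0.nginx.default.svc.cluster.local through web-2.nginx.default.svc.cluster.local.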


Origin: blog.51cto.com/79076431/2480870