References: https://yq.aliyun.com/articles/613036
Advantages of dynamic storage relative to static storage:
● The administrator does not need to pre-create a large number of PVs as storage resources;
● With static storage, the user must ensure that the PVC's capacity and access mode exactly match the capacity and access mode of a pre-created PV; with dynamic storage this is not necessary.
First, create the NFS service.
1. Create a ServiceAccount resource
$ vim serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner   # ServiceAccount name, referenced by the ClusterRoleBinding below
  namespace: testing      # ServiceAccount is a namespace-scoped resource
2. Create a ClusterRole resource
$ vim clusterrole.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-provisioner-runner   # ClusterRole name; ClusterRole is a cluster-scoped resource
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["watch", "create", "update", "patch"]
- apiGroups: [""]
  resources: ["services", "endpoints"]
  verbs: ["get"]
- apiGroups: ["extensions"]
  resources: ["podsecuritypolicies"]
  resourceNames: ["nfs-provisioner"]
  verbs: ["use"]
3. Create a ClusterRoleBinding resource to bind the ClusterRole to the ServiceAccount
$ vim clusterrolebinding.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner   # ClusterRoleBinding name
subjects:
- kind: ServiceAccount
  name: nfs-provisioner
  namespace: testing
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
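With the three RBAC manifests in place, they can be applied and verified as follows (a sketch, assuming kubectl is configured against the cluster; the object names match the manifests above):

```shell
# Create the namespace first if it does not exist yet
kubectl create namespace testing

# Apply the RBAC objects defined above
kubectl apply -f serviceaccount.yaml
kubectl apply -f clusterrole.yaml
kubectl apply -f clusterrolebinding.yaml

# Verify that each object was created
kubectl get serviceaccount nfs-provisioner -n testing
kubectl get clusterrole nfs-provisioner-runner
kubectl get clusterrolebinding run-nfs-provisioner
```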
4. Create a provisioner
$ vim deployment-provisioner.yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: nfs-client-provisioner
  namespace: testing
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-provisioner
      containers:
      - name: nfs-client-provisioner
        image: registry.cn-hangzhou.aliyuncs.com/open-ali/nfs-client-provisioner   # Alibaba Cloud mirror of the image
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes   # hard-coded path expected by the provisioner
        env:
        - name: PROVISIONER_NAME
          value: fuseim.pri/ifs            # custom name; must match the StorageClass provisioner below
        - name: NFS_SERVER
          value: 192.168.186.81            # NFS server host
        - name: NFS_PATH
          value: /data/nfs                 # NFS shared path
      volumes:
      - name: nfs-client-root
        nfs:
          server: 192.168.186.81           # NFS server host
          path: /data/nfs                  # NFS shared path
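The provisioner Deployment can then be applied and its pod checked (assuming the NFS export 192.168.186.81:/data/nfs is reachable from the cluster nodes):

```shell
kubectl apply -f deployment-provisioner.yaml

# The pod should reach Running once the image is pulled and the NFS mount succeeds
kubectl get pods -n testing -l app=nfs-client-provisioner

# Inspect the provisioner logs if the pod fails to start
kubectl logs -n testing -l app=nfs-client-provisioner
```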
5. Create a StorageClass resource
$ vim storageclass-nfs.yaml
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: managed-nfs-storage   # storage class name, referenced by the PVC below
provisioner: fuseim.pri/ifs   # must match the PROVISIONER_NAME set in the provisioner deployment
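The StorageClass can be applied and listed like so (a sketch; the name matches the manifest above):

```shell
kubectl apply -f storageclass-nfs.yaml

# The PROVISIONER column should show fuseim.pri/ifs
kubectl get storageclass managed-nfs-storage
```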
6. Create a PVC resource
$ vim pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim            # PVC name
  namespace: testing          # PVC is a namespace-scoped resource
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"   # annotation that associates the PVC with the storage class created above
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
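Once the PVC is applied, the provisioner should dynamically create a matching PV and bind it; no PV is created by hand (a sketch assuming the provisioner pod is running):

```shell
kubectl apply -f pvc.yaml

# STATUS should move from Pending to Bound once the provisioner creates the PV
kubectl get pvc test-claim -n testing

# The dynamically provisioned PV appears automatically
kubectl get pv
```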
7. Create a Pod resource to test the setup
$ vim pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: vol-sc-pod
  namespace: testing
spec:
  containers:
  - name: nginx
    image: nginx:1.12-alpine
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
  - name: alpine
    image: alpine
    volumeMounts:
    - name: html
      mountPath: /html
    command: ["/bin/sh", "-c"]
    args:
    - while true; do echo $(hostname) $(date) >> /html/index.html; sleep 10; done
  terminationGracePeriodSeconds: 30
  volumes:
  - name: html
    persistentVolumeClaim:
      claimName: test-claim   # name of the PVC created above
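Finally, the test pod can be applied and the shared volume checked from the nginx container (a sketch; the alpine container appends a line every 10 seconds and nginx serves the same file from the NFS-backed volume):

```shell
kubectl apply -f pod.yaml
kubectl get pod vol-sc-pod -n testing

# Lines written by the alpine container should be visible through the nginx container
kubectl exec -n testing vol-sc-pod -c nginx -- cat /usr/share/nginx/html/index.html
```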