Deploying automatic PV provisioning in k8s with an NFS-backed StorageClass

When deploying stateful applications in k8s, the application's data normally needs persistent storage.

Back-end storage can be provided in the following ways:

1. Local storage on the host;

  (If the pod restarts and is scheduled to another node, the data on the original node is not lost, but the application cannot reach it from the new node, so the storage is not truly persistent.)

2. Network or cloud storage services, such as NFS, glusterfs, cephfs, awsElasticBlockStore, azureDisk, gcePersistentDisk, etc.;

  (Specify the server address and the shared directory in the resource manifest to mount the volume for persistent storage; a sketch of this appears right after this list.)

3. Storage-class-based automatic PV provisioning;

  (Create a storage class and reference it in the resource manifest; the volume is then provisioned and mounted automatically for persistent storage.)
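For option 2, mounting an NFS export directly in a pod's manifest looks roughly like the minimal sketch below; the server name k8s-nfs and the path /data/volume/v1 match the NFS service that is set up later in this post:

apiVersion: v1
kind: Pod
metadata:
  name: nfs-direct-demo        # illustrative name
spec:
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - name: data                 # mount the NFS share into the container
      mountPath: /usr/share/nginx/html
  volumes:
  - name: data
    nfs:                         # volume definition pointing straight at the NFS server
      server: k8s-nfs            # NFS server IP or resolvable hostname
      path: /data/volume/v1      # exported directory on the server

This works, but the PV/volume details are hard-coded into every manifest; the storage-class approach below removes that duplication.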

 

This post introduces persistent storage implemented with storage-class-based automatic PV provisioning.

Official explanation of the concept:

https://kubernetes.io/docs/concepts/storage/storage-classes/

Project address:

https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client/deploy

Project architecture (diagram by Smbands):

Principle:

  1. Storage engineers create the storage class.

  2. Cluster administrators maintain the storage resources in the cluster.

  3. Users or developers submit their requirements (just declare volumeClaimTemplates in the StatefulSet resource manifest and make sure the manifest is correct; a fragment is sketched just below, and a full example appears in step 5). PVCs do not need to be created manually in this process.
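As a taste of step 3, the only part the user writes is the claim template inside the StatefulSet spec. This is a fragment of the full manifest shown in step 5; the class name managed-nfs-storage is the one created later from class.yaml:

  volumeClaimTemplates:                          # one PVC is stamped out per replica
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "managed-nfs-storage"    # triggers automatic PV provisioning
      resources:
        requests:
          storage: 1Gi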

Of course, the diagram draws a fine-grained division of labor; in reality, countless people end up doing all of these jobs single-handedly. kube — in Chinese pinyin that sounds like 苦逼, "a hard grind" — but there is no harm in learning more.

As can be seen on the official website, Kubernetes does not currently ship an internal provisioner for NFS, but we can use an external plug-in to support it.

The NFS plug-in project's GitHub address: https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client/deploy

 

Set up

1. Set up an NFS service (on the same network segment as the k8s cluster hosts)

# Install the nfs service (every node in the cluster must install it, otherwise NFS volumes are not supported):
yum -y install nfs-utils

# Start nfs and enable it at boot:
systemctl start nfs && systemctl enable nfs

# Create the shared mount directories:
mkdir -pv /data/volume/{v1,v2,v3}

# Edit /etc/exports to export the shared directories to the 192.168.1.0/24 segment:
vi /etc/exports

/data/volume/v1   192.168.1.0/24(rw,no_root_squash)
/data/volume/v2   192.168.1.0/24(rw,no_root_squash)
/data/volume/v3   192.168.1.0/24(rw,no_root_squash)

# Publish the exports:
exportfs -avr

exporting 192.168.1.0/24:/data/volume/v3
exporting 192.168.1.0/24:/data/volume/v2
exporting 192.168.1.0/24:/data/volume/v1

# Verify:
showmount -e

/data/volume/v3 192.168.1.0/24
/data/volume/v2 192.168.1.0/24
/data/volume/v1 192.168.1.0/24
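Without a storage class, each of these exports would need a hand-written PersistentVolume before any PVC could bind to it. A minimal sketch of such a manifest for the v2 export (the name pv-v2 and the 1Gi size are illustrative):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-v2                      # illustrative name
spec:
  capacity:
    storage: 1Gi                   # illustrative size
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: k8s-nfs                # NFS server IP or resolvable hostname
    path: /data/volume/v2          # one of the exports created above

The provisioner deployed next automates exactly this step.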

2. Deploy the NFS plug-in in kubernetes (project address above)

 

 

Download the project files:

for file in class.yaml deployment.yaml rbac.yaml test-claim.yaml ; do wget https://raw.githubusercontent.com/kubernetes-incubator/external-storage/master/nfs-client/deploy/$file ; done

Edit the resource manifest (the places marked with comments need to be modified):

vim deployment.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:v2.0.0  ## the default tag is latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs    ## must match the provisioner name in class.yaml, otherwise deployment fails
            - name: NFS_SERVER
              value: k8s-nfs           ## the NFS server's IP address or a resolvable hostname
            - name: NFS_PATH
              value: /data/volume/v1   ## the shared directory exported by the NFS server (note: this must be the innermost directory of the exported path, otherwise the deployed application has no permission to create directories and stays Pending)
      volumes:
        - name: nfs-client-root
          nfs:
            server: k8s-nfs            ## the NFS server's IP or a resolvable hostname
            path: /data/volume/v1      ## the shared directory exported by the NFS server (same caveat as NFS_PATH above)
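For reference, the class.yaml from the same project defines the StorageClass that the claims below request; a sketch is shown here (depending on the project version, a parameters.archiveOnDelete flag may also be present, controlling whether a claim's data is archived or deleted on removal). The provisioner field must equal the PROVISIONER_NAME value above. rbac.yaml, applied alongside it, grants the provisioner's ServiceAccount access to PVs, PVCs, StorageClasses, and events.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage      # the class name that PVCs reference
provisioner: fuseim.pri/ifs      # must match PROVISIONER_NAME in deployment.yaml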

 

3. Deploy

Change into this project's directory, then:

kubectl apply -f ./

 

4. Verify

Check whether the NFS plug-in's pod deployed successfully:

kubectl get pods

NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-8664fb9f68-57wkf   1/1     Running   0          5m43s

 

5. Test

Deploy a PVC, or an application that declares storage, and check whether a PV is created automatically and bound to the PVC.

Example: a PVC

vim test.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
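The volume.beta.kubernetes.io/storage-class annotation is the legacy way to select a class; on newer clusters the same claim would use spec.storageClassName instead. A minimal equivalent sketch:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: managed-nfs-storage   # replaces the beta annotation
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi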
Example: an nginx application deployed as a StatefulSet

vim nginx-demo.yaml

apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx"
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "managed-nfs-storage"
      resources:
        requests:
          storage: 1Gi
Deploy:

kubectl apply -f test.yaml -f nginx-demo.yaml
Check the pod, svc, pv, and pvc status:

kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS          REASON   AGE
pvc-5d66051e-9674-11e9-9021-000c29cc70d4   1Mi        RWX            Delete           Bound    default/test-claim   managed-nfs-storage            7m6s
pvc-73235c07-9677-11e9-9021-000c29cc70d4   1Gi        RWO            Delete           Bound    default/www-web-1    managed-nfs-storage            6m15s
pvc-8a58037f-9677-11e9-9021-000c29cc70d4   1Gi        RWO            Delete           Bound    default/www-web-2    managed-nfs-storage            5m36s
pvc-ab7fca5a-9676-11e9-9021-000c29cc70d4   1Gi        RWO            Delete           Bound    default/www-web-0    managed-nfs-storage            7m6s

kubectl get pvc
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
test-claim   Bound    pvc-5d66051e-9674-11e9-9021-000c29cc70d4   1Mi        RWX            managed-nfs-storage   28m
www-web-0    Bound    pvc-ab7fca5a-9676-11e9-9021-000c29cc70d4   1Gi        RWO            managed-nfs-storage   12m
www-web-1    Bound    pvc-73235c07-9677-11e9-9021-000c29cc70d4   1Gi        RWO            managed-nfs-storage   6m32s
www-web-2    Bound    pvc-8a58037f-9677-11e9-9021-000c29cc70d4   1Gi        RWO            managed-nfs-storage   5m53s

Each StatefulSet replica received its own automatically provisioned PV, bound to its own PVC (www-web-0 through www-web-2), with no PV or PVC created by hand.

kubectl get pods -o wide
NAME                                     READY   STATUS    RESTARTS   AGE    IP             NODE        NOMINATED NODE   READINESS GATES
nfs-client-provisioner-f9776d996-dpk6z   1/1     Running   0          12m    10.244.1.65    k8s-node1   <none>           <none>
web-0                                    1/1     Running   0          16m    10.244.1.66    k8s-node1   <none>           <none>
web-1                                    1/1     Running   0          10m    10.244.2.181   k8s-node2   <none>           <none>
web-2                                    1/1     Running   0          10m    10.244.2.182   k8s-node2   <none>           <none>

kubectl get svc

Now look at the v1 directory on the NFS server:

default-www-web-0-pvc-c32f532b-968f-11e9-9021-000c29cc70d4   default-www-web-2-pvc-d3944c4a-968f-11e9-9021-000c29cc70d4
default-www-web-1-pvc-ccd2a50b-968f-11e9-9021-000c29cc70d4

These are the directories the k8s cluster maps the volumes to (each is named after the namespace, PVC name, and PV name); the pods mount them over NFS, as the log from pod creation shows:

Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/7d7c45bc-968c-11e9-9021-000c29cc70d4/volumes/kubernetes.io~nfs/default-www-web-0-pvc-e6b67079-968b-11e9-9021-000c29cc70d4 --scope -- mount -t nfs k8s-nfs:/data/volume/v2/default-www-web-0-pvc-e6b67079-968b-11e9-9021-000c29cc70d4 /var/lib/kubelet/pods/7d7c45bc-968c-11e9-9021-000c29cc70d4/volumes/kubernetes.io~nfs/default-www-web-0-pvc-e6b67079-968b-11e9-9021-000c29cc70d4

 

Create a default page in these directories:

cd default-www-web-0-pvc-c32f532b-968f-11e9-9021-000c29cc70d4

echo "<h1>NFS Server</h1>" > index.html

# use curl to access this example's nginx pod:
curl 10.244.1.66

<h1>NFS Server</h1>

Well, that's all.
