Kubernetes (k8s) StorageClass

Storage Class Resources

1. Why use Storage Class?

  • Mounting manually created PVs seems fine at first, but on reflection there is a problem: a PVC binds to a specific PV according to the PV's name, access mode, and capacity. Suppose a PV has a capacity of 20G and is defined with the RWO access mode (ReadWriteOnce, only a single node may mount it for read/write), and a PVC requests 10G of storage. Once that PVC binds to the PV, the remaining 10G of the PV is wasted, because the PV only allows a single node to mount it. Even setting that issue aside, manually creating every PV is tedious, so we need an automated tool to create PVs for us.
  • One such tool is the open source nfs-client-provisioner (the image used here is pulled from an Alibaba Cloud registry). It uses the NFS driver built into Kubernetes to mount a remote NFS server onto a local directory, and then presents itself as the storage provider.

2. What is the role of StorageClass in the cluster?

  • A PVC does not apply to nfs-client-provisioner directly for storage space; instead it goes through the SC resource object. The fundamental role of the SC is to dynamically create PVs according to the PVC's definition, which not only saves administrators time but also lets different types of storage be packaged for PVCs to choose from.

  • Each SC contains the following three important fields, which are used when the SC dynamically provisions a PV (a minimal example follows below):
    Provisioner: the storage system that provides the storage resources.
    ReclaimPolicy: the reclaim policy applied to the PVs; available values are Delete (the default) and Retain.
    Parameters: parameters describing the storage volumes the storage class should create; the valid keys depend on the provisioner.
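A minimal sketch of what such an SC manifest looks like (the provisioner value and the parameters key below are illustrative placeholders; the real example used in this article appears in section 3):

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: example-sc
provisioner: example.com/nfs     # the storage system that provisions the volumes (placeholder name)
reclaimPolicy: Retain            # Delete (default) or Retain
parameters:                      # passed through to the provisioner; valid keys depend on it
  archiveOnDelete: "false"       # e.g. a parameter understood by nfs-client-provisioner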

3. Practicing StorageClass with an NFS service
1) Set up the NFS service (here the master node doubles as the NFS server):

[root@master ~]# yum -y install nfs-utils
[root@master ~]# vim /etc/exports
/nfsdata *(rw,sync,no_root_squash)
[root@master ~]# mkdir /nfsdata    # create the shared directory
[root@master ~]# systemctl start rpcbind
[root@master ~]# systemctl enable rpcbind
[root@master ~]# systemctl start nfs-server
[root@master ~]# systemctl enable nfs-server
[root@master ~]# showmount -e  # confirm the export is active
Export list for master:
/nfsdata *
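Note: every cluster node that may run pods using NFS volumes also needs the NFS client utilities installed so that kubelet can perform the mount. A reasonable extra step (node01 is a placeholder hostname for each worker node):

[root@node01 ~]# yum -y install nfs-utils
[root@node01 ~]# showmount -e 172.16.1.30    # verify the export is reachable from the node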

2) Create RBAC permissions:
RBAC (role-based access control) grants permissions by associating users (or service accounts) with roles.
It is an authentication -----> authorization -----> admission mechanism.

[root@master sc]# vim rbac-rolebind.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-provisioner-runner
  namespace: default
rules:
   -  apiGroups: [""]
      resources: ["persistentvolumes"]
      verbs: ["get", "list", "watch", "create", "delete"]
   -  apiGroups: [""]
      resources: ["persistentvolumeclaims"]
      verbs: ["get", "list", "watch", "update"]
   -  apiGroups: ["storage.k8s.io"]
      resources: ["storageclasses"]
      verbs: ["get", "list", "watch"]
   -  apiGroups: [""]
      resources: ["events"]
      verbs: ["watch", "create", "update", "patch"]
   -  apiGroups: [""]
      resources: ["services", "endpoints"]
      verbs: ["get","create","list", "watch","update"]
   -  apiGroups: ["extensions"]
      resources: ["podsecuritypolicies"]
      resourceNames: ["nfs-provisioner"]
      verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io

// execute yaml file:

[root@master sc]# kubectl apply -f  rbac-rolebind.yaml 
serviceaccount/nfs-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-provisioner created

Above, we created a new ServiceAccount named nfs-provisioner and bound it to a ClusterRole named nfs-provisioner-runner. That ClusterRole declares a set of permissions, including create, delete, get and list on PVs, so the provisioner running under this ServiceAccount can create PVs automatically.
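As an optional sanity check, confirm that the objects exist:

[root@master sc]# kubectl get serviceaccount nfs-provisioner
[root@master sc]# kubectl get clusterrolebinding run-nfs-provisioner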

3) Create the nfs-client-provisioner Deployment
Replace the corresponding parameters inside with your own NFS configuration.

[root@master sc]# vim nfs-deployment.yaml 
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccount: nfs-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/open-ali/nfs-client-provisioner
          volumeMounts:
            - name: nfs-client-root
              mountPath:  /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: nfs-deploy    # name of the provisioner (user-defined)
            - name: NFS_SERVER
              value: 172.16.1.30     # IP address of the NFS server
            - name: NFS_PATH
              value: /nfsdata      # directory exported by NFS
      volumes:  
        - name: nfs-client-root
          nfs:
            server: 172.16.1.30
            path: /nfsdata
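Note: the extensions/v1beta1 Deployment API used above only exists on older clusters; it was removed in Kubernetes 1.16. On a newer cluster, a sketch of the equivalent manifest under apps/v1 (same names and values, plus the selector that apps/v1 requires) would be:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:                      # apps/v1 requires an explicit selector
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-provisioner    # preferred spelling of serviceAccount
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/open-ali/nfs-client-provisioner
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: nfs-deploy
            - name: NFS_SERVER
              value: 172.16.1.30
            - name: NFS_PATH
              value: /nfsdata
      volumes:
        - name: nfs-client-root
          nfs:
            server: 172.16.1.30
            path: /nfsdata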

// Import the nfs-client-provisioner image (every node in the cluster must import it, including the master)

[root@master sc]# docker load --input  nfs-client-provisioner.tar 
5bef08742407: Loading layer  4.221MB/4.221MB
c21787dcfbf0: Loading layer  2.064MB/2.064MB
00376105a0f3: Loading layer  41.08MB/41.08MB
Loaded image: registry.cn-hangzhou.aliyuncs.com/open-ali/nfs-client-provisioner:latest


// Apply the yaml file:
[root@master sc]# kubectl apply -f  nfs-deployment.yaml 
deployment.extensions/nfs-client-provisioner created
// Make sure the pod is running normally:
[root@master sc]# kubectl  get pod 
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-5457694c8b-k8t4m   1/1     Running   0          23s

The role of the nfs-client-provisioner tool: it uses the NFS driver built into K8S to mount the remote NFS server onto a local directory, and it then acts as the storage provider associated with the SC.

4) Create a storage class:

[root@master sc]# vim test-sc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: statefu-nfs
  namespace: default
provisioner: nfs-deploy  
reclaimPolicy: Retain

Here we declare an SC object named statefu-nfs. Note that the value of the provisioner field must be the same as the value of the PROVISIONER_NAME environment variable in the nfs-client-provisioner Deployment above.

// create the resource objects:

[root@master sc]# kubectl apply -f  test-sc.yaml 
storageclass.storage.k8s.io/statefu-nfs created
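Optionally, this SC can also be marked as the cluster's default StorageClass, so that PVCs that omit storageClassName use it as well (this uses the standard Kubernetes annotation):

[root@master sc]# kubectl patch storageclass statefu-nfs -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'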

5) Now that the SC resource object has been created successfully, let's test whether it can dynamically create a PV.
// First we create a PVC object:

[root@master sc]# vim test-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim    
  namespace: default
spec:
  storageClassName: statefu-nfs   # must match the name of the SC created above
  accessModes:
    - ReadWriteMany   # use the ReadWriteMany access mode
  resources:
    requests:
      storage: 50Mi    # request 50Mi of space

// Apply the yaml file to create the PVC:

[root@master sc]# kubectl  apply -f  test-pvc.yaml 
persistentvolumeclaim/test-claim created

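// Check the PVC (a quick sanity check; the exact output depends on your cluster):
[root@master sc]# kubectl get pvc test-claim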

We can see that the PVC was created successfully and its status is already Bound; a corresponding volume object has been produced as well. The most important column is STORAGECLASS, whose value is the name of the SC object we just created, "statefu-nfs".

// Next, let's look at the PV and verify that it was created dynamically:
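// For example (output varies by cluster):
[root@master sc]# kubectl get pv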

We can see that a PV object has been generated automatically and bound. Its access mode is RWX (the mode requested in the PVC), its reclaim policy follows the one set on the SC, and its status is also Bound. It was created dynamically by the SC rather than manually by us.

4. Deploy nginx to exercise the PV and PVC

Let's test the PVC object declared above together with the StorageClass by deploying an nginx service (with persistent data).

[root@master sc]# vim nginx-pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: nginx-pod
  namespace: default
spec:
  containers:
    - name: nginx-pod
      image: nginx
      volumeMounts:    # define data persistence: mount the volume into the container
        - name: nfs-pvc
          mountPath: /usr/share/nginx/html
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:   # specify the PVC; claimName below must match the PVC name defined above
        claimName: test-claim   
//Run nginx and check that the pod is running normally:
[root@master sc]# kubectl apply -f  nginx-pod.yaml 
pod/nginx-pod created

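// A quick check that the pod is up (output varies by cluster):
[root@master sc]# kubectl get pod nginx-pod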

// Enter the pod and create a test page file:

[root@master ~]# kubectl  exec  -it nginx-pod /bin/bash
root@nginx-pod:/# cd /usr/share/nginx/html/
root@nginx-pod:/usr/share/nginx/html# echo "<h1>welcome to Storage Class web</h1>" > index.html
root@nginx-pod:/usr/share/nginx/html# cat index.html 
<h1>welcome to Storage Class web</h1>
root@nginx-pod:/usr/share/nginx/html# exit

// Back on the NFS server, check the shared data directory to see whether the file has been synchronized:
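For example, on the NFS server (the master in this setup):
[root@master ~]# ls /nfsdata/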

In the /nfsdata directory we can see a folder with a very long name; that folder was generated by the provisioner we configured above (its name is made up of the namespace, the PVC name and the PV name).


Entering that directory, we can see that the nginx data has been synchronized and data persistence has been achieved (the full persistence test is left as an exercise; a minimal sketch follows).
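A minimal sketch of that self-test, assuming the manifests above are unchanged: delete and recreate the pod, then confirm the page written earlier is still served from the NFS-backed volume.

[root@master sc]# kubectl delete pod nginx-pod
[root@master sc]# kubectl apply -f nginx-pod.yaml
[root@master sc]# kubectl exec -it nginx-pod -- cat /usr/share/nginx/html/index.html
<h1>welcome to Storage Class web</h1>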

Finally, let's test whether the nginx web page we wrote can be accessed from outside:
// Create a Service resource object associated with the pod above and map its port.
The complete yaml file is as follows:

kind: Pod
apiVersion: v1
metadata:
  name: nginx-pod
  namespace: default
  labels:
    app: web
spec:
  containers:
    - name: nginx-pod
      image: nginx
      volumeMounts:
        - name: nfs-pvc
          mountPath: /usr/share/nginx/html
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  namespace: default
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - name: nginx
    port: 80
    targetPort: 80
    nodePort: 32134

// Re-apply the nginx manifest and visit the page:

[root@master sc]# kubectl  apply -f  nginx-pod.yaml 
pod/nginx-pod configured
service/nginx-svc created
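We can also check from the command line, using any node's IP plus the NodePort (the master's IP from this setup, 172.16.1.30, is used here as an example):

[root@master sc]# curl http://172.16.1.30:32134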


The nginx web page can be accessed normally, so the deployment works. That is the basic use of StorageClass; for a production environment this alone is not enough, and other resource objects will be covered in follow-up learning.
