Linux-K8s storage (data persistence)

K8s storage

1. What are the main categories of K8s storage?

Temporary (ephemeral) storage, semi-persistent storage, and persistent storage.

2. emptyDir

Generally speaking, emptyDir is used as temporary storage. For example, for microservices that do not need to persist data, emptyDir can serve as the storage solution for the microservice Pods.

2.1 What is emptyDir

When a Pod's storage is set to emptyDir, an empty volume is created on the disk of the node where the Pod is scheduled as the Pod starts. Initially it contains nothing; once the Pod is running, data generated by the containers is written into that empty volume, which acts as a temporary volume the Pod's containers can read and write. Once the Pod is deleted, the temporary volume created on the node is destroyed along with it.

2.2 The purpose of emptyDir

  • Acts as temporary scratch space when the data generated by the Pod's containers does not need to be persisted
  • Holds checkpoints so a long computation can recover from a crash without restarting from scratch (a memory-backed variant is sketched below)
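
PS: emptyDir can also be backed by memory (tmpfs) instead of the node's disk, which suits fast scratch data. A minimal sketch (the names here are illustrative, not part of the example below):

kind: Pod
apiVersion: v1
metadata:
  name: emptydir-memory
spec:
  volumes:
  - name: cache-volume
    emptyDir:
      medium: Memory          # back the volume with tmpfs instead of node disk
      sizeLimit: 64Mi         # upper bound on the volume size
  containers:
  - name: app
    image: busybox
    args: ["/bin/sh", "-c", "sleep 30000"]
    volumeMounts:
    - mountPath: /cache
      name: cache-volume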

3. HostPath

3.1 What is HostPath

  • The hostPath type maps a file or directory from the node's file system into the Pod. When using hostPath volumes you can also set the type field; supported types include files, directories, sockets, and more (see the sketch after this list).
  • hostPath is the equivalent of Docker's -v directory mapping, but in k8s a Pod can drift between nodes. When the Pod drifts to another node it cannot read the directory it used on the previous node, so hostPath counts only as a semi-persistent storage method.
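
For reference, a minimal sketch of the type field (DirectoryOrCreate creates the host path if it is missing, while Directory requires it to already exist):

volumes:
- name: host-volume
  hostPath:
    path: /data/hostpath
    type: DirectoryOrCreate   # create the directory on the node if it does not exist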

3.2 Purpose of HostPath

  • When a running container needs to access Docker internals on the node, hostPath can map the host directory into the container, as the sketch below shows
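
As an illustration of that use case, a minimal sketch (not part of the examples below) that mounts the Docker daemon socket so a container can talk to Docker on its node:

kind: Pod
apiVersion: v1
metadata:
  name: docker-tool
spec:
  volumes:
  - name: docker-sock
    hostPath:
      path: /var/run/docker.sock
      type: Socket            # the path must already exist as a UNIX socket
  containers:
  - name: tool
    image: busybox
    args: ["/bin/sh", "-c", "sleep 30000"]
    volumeMounts:
    - mountPath: /var/run/docker.sock
      name: docker-sock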

4. PV and PVC

  • PV (PersistentVolume): storage provisioned from a storage system external to the k8s cluster, usually a fixed amount of space (a directory in a file system). The PV is the producer.

  • PVC (PersistentVolumeClaim): when an application needs persistence, it applies for space from a PV through a PVC. The PVC is the consumer.

  • PV and PVC bind one-to-one. When a PV is claimed by a PVC its status shows Bound, and no other PVC can use that PV. If a PVC finds no suitable PV, it stays in the Pending state. Once a PVC is bound to a PV it behaves like an ordinary storage volume and can be used by Pods (whether it may be accessed by multiple Pods at once depends on its accessModes).

2. Examples

1. emptyDir

1.1 Create the YAML file

[root@master yaml]# vim emptydir.yaml

kind: Pod
apiVersion: v1
metadata:
  name: emptydir-consumer
spec:
  volumes:
  - name: shared-volume
    emptyDir: {}

  containers:
  - name: emptydir
    image:  busybox
    volumeMounts:
    - mountPath:  /empty_dir
      name: shared-volume
    args:
    - /bin/sh
    - -c
    - echo "hello world" > /empty_dir/hello.txt; sleep 30000
  - name: consumer
    image:  busybox
    volumeMounts:
    - mountPath:  /consumer_dir
      name: shared-volume
    args:
    - /bin/sh
    - -c
    - cat /consumer_dir/hello.txt ; sleep 30000

[root@master yaml]# kubectl  apply  -f  emptydir.yaml 
pod/emptydir-consumer created

1.2 View container logs

[root@master yaml]# kubectl  get pod
NAME                READY   STATUS    RESTARTS   AGE
emptydir-consumer   2/2     Running   0          2m18s
[root@master yaml]# kubectl  logs  emptydir-consumer
error: a container name must be specified for pod emptydir-consumer, choose one of: [emptydir consumer]
[root@master yaml]# kubectl  logs  emptydir-consumer  consumer
hello world

1.3 Verify the principle of emptyDir

Check which node it is running on

[root@master yaml]# kubectl  get pod -o wide
NAME                READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
emptydir-consumer   2/2     Running   0          87s   10.244.1.3   node02   <none>           <none>

Inspect the container details on the node02 node

PS: 04ee0cd5f3c6 is the ID of one of the pod's containers on that node

[root@node02 ~]# docker inspect 04ee0cd5f3c6 
......
"Mounts": [
            {
                "Type": "bind",
                "Source": "/var/lib/kubelet/pods/7d0f9cf7-8673-442e-b3c1-46afe6f4fa18/volumes/kubernetes.io~empty-dir/shared-volume",
                "Destination": "/consumer_dir",
                "Mode": "",
                "RW": true,
                "Propagation": "rprivate"
            },
......
[root@node02 ~]# cd /var/lib/kubelet/pods/7d0f9cf7-8673-442e-b3c1-46afe6f4fa18/volumes/kubernetes.io~empty-dir/shared-volume
[root@node02 shared-volume]# ls
hello.txt

Delete the pod and check whether the file still exists on the node

master

[root@master yaml]# ls
emptydir.yaml
[root@master yaml]# kubectl  delete  -f  emptydir.yaml 
pod "emptydir-consumer" deleted

node02

[root@node02 ~]# cd /var/lib/kubelet/pods/7d0f9cf7-8673-442e-b3c1-46afe6f4fa18/volumes/kubernetes.io~empty-dir/shared-volume
-bash: cd: /var/lib/kubelet/pods/7d0f9cf7-8673-442e-b3c1-46afe6f4fa18/volumes/kubernetes.io~empty-dir/shared-volume: No such file or directory

2. HostPath

2.1 Create the YAML file

[root@master yaml]# mkdir -p /data/hostpath
[root@master yaml]# vim hostpath.yaml 

kind: Pod
apiVersion: v1
metadata:
  name: pod
spec:
  volumes:
  - name: share-volume
    hostPath:
      path: "/data/hostpath"
  containers:
  - name: httpd
    image:  httpd
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: share-volume
    args:
    - /bin/bash
    - -c
    - echo "hello httpd" > /usr/share/nginx/html/index.html; sleep 30000
    
[root@master yaml]# kubectl  apply  -f  hostpath.yaml 
pod/pod created

2.2 View pod

Check which node the pod was created on

[root@master yaml]# kubectl  get pod -o wide 
NAME   READY   STATUS    RESTARTS   AGE     IP           NODE     NOMINATED NODE   READINESS GATES
pod    1/1     Running   0          2m51s   10.244.2.4   node01   <none>           <none>

Check whether the mapped directory and files exist on the node01 node

[root@node01 ~]# ls /data/hostpath/
hello.txt  index.html

2.3 Verify HostPath

Delete pod

[root@master yaml]# kubectl get pod
NAME   READY   STATUS    RESTARTS   AGE
pod    1/1     Running   0          5m9s
[root@master yaml]# kubectl  delete  pod pod 
pod "pod" deleted

Check on node01 whether the mapped files still exist

[root@node01 ~]# ls /data/hostpath/
hello.txt  index.html

3. Create PV and PVC based on NFS

  • master: 192.168.1.40
  • node01: 192.168.1.41
  • node02: 192.168.1.42
  • NFS: 192.168.1.43

3.1 Install NFS

PS: Note that the NFS packages must be installed on every server.

[root@nfs ~]# yum -y install nfs-utils rpcbind
[root@nfs ~]# mkdir /nfsdata
[root@nfs ~]# vim /etc/exports
/nfsdata *(rw,sync,no_root_squash)
[root@nfs ~]# systemctl start  nfs
[root@nfs ~]# systemctl start  rpcbind
[root@nfs ~]# systemctl enable  nfs
Created symlink from /etc/systemd/system/multi-user.target.wants/nfs-server.service to /usr/lib/systemd/system/nfs-server.service.
[root@nfs ~]# systemctl enable  rpcbind
[root@nfs ~]# showmount  -e
Export list for nfs:
/nfsdata *
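
Before moving on, you can confirm from any k8s node that the share is reachable and mountable:

[root@node01 ~]# showmount -e 192.168.1.43
Export list for 192.168.1.43:
/nfsdata *
[root@node01 ~]# mount -t nfs 192.168.1.43:/nfsdata /mnt && umount /mnt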

3.2 Create a PV backed by NFS

[root@master yaml]# vim pv.yaml

kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv
spec: 
  capacity: 
    storage:  1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy:  Recycle
  storageClassName: nfs
  nfs:  
    path: /nfsdata/pv1
    server: 192.168.1.43

[root@master yaml]# kubectl  apply  -f  pv.yaml 
persistentvolume/pv created

PS: Create the pv1 directory on the NFS server

[root@nfs ~]# cd /nfsdata/
[root@nfs nfsdata]# ls
[root@nfs nfsdata]# mkdir pv1

Access modes supported by PV:

  • ReadWriteOnce: PV can be mounted to a single node in read-write mode.
  • ReadOnlyMany: PV can be mounted to multiple nodes in read-only mode.
  • ReadWriteMany: PV can be mounted to multiple nodes in read-write mode.

3.3 Create a PVC and bind it to the PV

[root@master yaml]# vim pvc.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage:  200Mi
  storageClassName: nfs
  
[root@master yaml]# kubectl  apply  -f  pvc.yaml 
persistentvolumeclaim/pvc created

3.4 View the PV and PVC

[root@master yaml]# kubectl  get pv,pvc
NAME                  CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM         STORAGECLASS   REASON   AGE
persistentvolume/pv   1Gi        RWO            Recycle          Bound    default/pvc   nfs                     4m47s

NAME                        STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/pvc   Bound    pv       1Gi        RWO            nfs            2m24s

PS: STATUS is Bound, indicating that this pvc has been bound to pv
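
You can inspect the binding from either side with describe (output omitted here). If no PV satisfies a request, for example because the requested storage exceeds every PV's capacity or the storageClassName differs, the PVC stays Pending instead:

[root@master yaml]# kubectl  describe pv pv
[root@master yaml]# kubectl  describe pvc pvc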

3.5 Create a Pod that references the PVC

[root@master yaml]# vim pod.yaml 

kind: Pod
apiVersion: v1
metadata:
  name: pod
spec:
  volumes:
  - name: share-data
    persistentVolumeClaim:
      claimName:  pvc
  containers:
  - name: pod
    image:  busybox
    args:
    - /bin/sh
    - -c
    - sleep 30000
    volumeMounts:
    - mountPath:  "/data"
      name: share-data

[root@master yaml]# kubectl  apply  -f  pod.yaml 
pod/pod created
[root@master yaml]# kubectl get pod
NAME   READY   STATUS    RESTARTS   AGE
pod    1/1     Running   0          5s

3.6 Verify that the storage works

NFS

[root@nfs pv1]# pwd
/nfsdata/pv1
[root@nfs pv1]# echo "hello persistentVolume" > test.txt

Master

PS: /data is the volume's mount path inside the container, so the file appears at /data/test.txt

[root@master yaml]# kubectl get pod
NAME   READY   STATUS    RESTARTS   AGE
pod    1/1     Running   0          100s
[root@master yaml]# kubectl  exec  pod  cat /data/test.txt
hello persistentVolume

4. PV space reclamation

spec:
......
  persistentVolumeReclaimPolicy:  Recycle
......

PV space reclaim policies:

  • Recycle: the data is scrubbed automatically and the PV becomes available again.
  • Retain: the data must be cleaned up manually before the PV can be reused.
  • Delete: the backing storage is deleted through the cloud provider's own mechanism (for cloud volumes).

[root@master yaml]# kubectl  get pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM         STORAGECLASS   REASON   AGE
pv     1Gi        RWO            Recycle          Bound    default/pvc   nfs                     23m
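
The reclaim policy of an existing PV can also be changed in place, without re-creating it, for example:

[root@master yaml]# kubectl patch pv pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'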

Verify the PV reclaim policy

4.1 Delete the Pod and PVC resources

[root@master yaml]# kubectl  delete  pod pod 
pod "pod" deleted
[root@master yaml]# kubectl  delete  pvc pvc 
persistentvolumeclaim "pvc" deleted

4.2 Watch the PV release process

Status transitions: Bound (claimed) → Released (claim deleted, being recycled) → Available (ready to bind again)

[root@master yaml]# kubectl  get pv -w
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM         STORAGECLASS   REASON   AGE
pv     1Gi        RWO            Recycle          Bound       default/pvc   nfs                     33m
pv     1Gi        RWO            Recycle          Released    default/pvc   nfs                     33m
pv     1Gi        RWO            Recycle          Released                  nfs                     33m
pv     1Gi        RWO            Recycle          Available                 nfs                     33m

4.3 Check the NFS server (the directory is now empty)

[root@nfs pv1]# pwd
/nfsdata/pv1
[root@nfs pv1]# ls

4.4 Verify the Retain policy

Edit the PV's YAML file

[root@master yaml]# vim pv.yaml 

......
  persistentVolumeReclaimPolicy:  Retain
......

Apply the PV and Pod YAML files again (pvc.yaml must also be re-applied, since the PVC was deleted above)

[root@master yaml]# kubectl  apply  -f  pv.yaml 
persistentvolume/pv created
[root@master yaml]# kubectl  apply  -f  pod.yaml 
pod/pod created

With the resources re-created, write some data, then delete the Pod and PVC and verify whether the data still exists in the PV directory

master

[root@master yaml]# kubectl  get pod
NAME   READY   STATUS    RESTARTS   AGE
pod    1/1     Running   0          66s
[root@master yaml]# kubectl  exec  pod  touch /data/test.txt

nfs

[root@nfs pv1]# pwd
/nfsdata/pv1
[root@nfs pv1]# ls
test.txt

Delete Pod and PVC again

[root@master yaml]# kubectl  delete  pod pod 
pod "pod" deleted
[root@master yaml]# kubectl  delete  pvc pvc 
persistentvolumeclaim "pvc" deleted

Verify the data stored in the PV directory

[root@nfs pv1]# pwd
/nfsdata/pv1
[root@nfs pv1]# ls
test.txt
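
Note that with Retain, the PV stays in the Released state after its claim is deleted and will not be bound by a new PVC automatically. Besides deleting and re-creating the PV, one way to make it Available again is to clear its claimRef:

[root@master yaml]# kubectl patch pv pv --type json -p '[{"op":"remove","path":"/spec/claimRef"}]'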

5. Automatically create PV and PVC

PS: Use K8s with NFS to automatically provision PVs and PVCs. The namespace is test, the container runs MySQL, and the image is mysql:5.7.

Environment:

  • master: 192.168.1.40
  • node01: 192.168.1.41
  • node02: 192.168.1.42
  • NFS: 192.168.1.43

  • StorageClass: can automatically create PVs
  • volumeClaimTemplates: can automatically create PVCs

5.1 Set up NFS

Complete this step as in step 3.1

5.2 Configure RBAC permissions

RBAC is short for Role-Based Access Control

[root@master yaml]# vim  rbac-rolebind.yaml

kind: Namespace
apiVersion: v1
metadata:
  name: test
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
  namespace:  test
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-provisioner-runner
rules:
   -  apiGroups: [""]
      resources: ["persistentvolumes"]
      verbs: ["get", "list", "watch", "create", "delete"]
   -  apiGroups: [""]
      resources: ["persistentvolumeclaims"]
      verbs: ["get", "list", "watch", "update"]
   -  apiGroups: ["storage.k8s.io"]
      resources: ["storageclasses"]
      verbs: ["get", "list", "watch"]
   -  apiGroups: [""]
      resources: ["events"]
      verbs: ["watch", "create", "update", "patch"]
   -  apiGroups: [""]
      resources: ["services", "endpoints"]
      verbs: ["get","create","list", "watch","update"]
   -  apiGroups: ["extensions"]
      resources: ["podsecuritypolicies"]
      resourceNames: ["nfs-provisioner"]
      verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    namespace: test    # if no namespace was created, use default here, otherwise an error is reported
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io

[root@master yaml]# kubectl  apply  -f  rbac-rolebind.yaml 
namespace/test created
serviceaccount/nfs-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-provisioner created
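
To confirm the binding took effect, impersonate the service account with kubectl auth can-i; given the ClusterRole above, the answer should be yes:

[root@master yaml]# kubectl auth can-i create persistentvolumes --as=system:serviceaccount:test:nfs-provisioner
yes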

5.3 Create the nfs-client-provisioner Deployment

[root@master yaml]# vim nfs-deployment.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  namespace:  test
spec:
  replicas: 1
  strategy:
    type: Recreate  # recreate: stop the old pod before starting a new one
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec: 
      serviceAccount: nfs-provisioner  # use the service account created above
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/open-ali/nfs-client-provisioner
          volumeMounts:
            - name: nfs-client-root
              mountPath:  /persistentvolumes
          env:
            - name: PROVISIONER_NAME  # built-in variable the provisioner reads
              value: test-www  # provisioner name; the StorageClass must reference it
            - name: NFS_SERVER
              value: 192.168.1.43
            - name: NFS_PATH  # the shared NFS directory
              value: /nfsdata  
      volumes:  # the NFS server IP and path mounted into the container
        - name: nfs-client-root
          nfs:
            server: 192.168.1.43
            path: /nfsdata

[root@master yaml]# kubectl  apply -f nfs-deployment.yaml 
deployment.extensions/nfs-client-provisioner created

PS: The nfs-client-provisioner image mounts the remote NFS share into a local directory via the cluster's built-in NFS driver, registers itself as a storage provisioner, and is then referenced by the StorageClass resource.

5.4 Create the StorageClass resource

[root@master yaml]# vim storageclass.yaml 

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: storageclass
provisioner: test-www  # must match the PROVISIONER_NAME env value in the nfs deployment
reclaimPolicy: Retain  # reclaim policy

[root@master yaml]# kubectl  apply -f  storageclass.yaml 
storageclass.storage.k8s.io/storageclass created
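
Verify the StorageClass; optionally, the standard default-class annotation makes PVCs that omit a storage class use it as well (optional here):

[root@master yaml]# kubectl  get storageclass
[root@master yaml]# kubectl patch storageclass storageclass -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'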

5.5 Create the MySQL Service and StatefulSet

PS: Adding the volumeClaimTemplates field to the StatefulSet automatically creates a PVC for each replica

[root@master yaml]# vim mysql.yaml

apiVersion: v1
kind: Service
metadata:
  name: mysql-svc
  namespace:  test
  labels:
    app: mysql-svc
spec:
  type: NodePort
  ports:
  - name: mysql
    port: 3306
  selector:
    app: mysql-pod

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql-statefulset
  namespace:  test
spec:
  serviceName: mysql-svc
  replicas: 1
  selector:
    matchLabels:
      app: mysql-pod
  template:
    metadata:
      labels:
        app: mysql-pod
    spec:
      containers:
      - name: mysql
        image: mysql:5.7
        env:
        - name: MYSQL_ROOT_PASSWORD
          value:  123.com
        volumeMounts:
        - name: share-mysql
          mountPath:  /var/lib/mysql
  volumeClaimTemplates:  # this field automatically creates the PVC
  - metadata:
      name: share-mysql
      annotations:  # select the StorageClass; the name must match the one created above
        volume.beta.kubernetes.io/storage-class: storageclass
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 100Mi


[root@master yaml]# kubectl  apply  -f  mysql.yaml 
service/mysql-svc created
statefulset.apps/mysql-statefulset created
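
Since the Service is of type NodePort, MySQL is also reachable from outside the cluster on the node's assigned port; look it up with (output omitted):

[root@master yaml]# kubectl  get svc -n test mysql-svc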

5.6 View pod, pv, pvc

[root@master yaml]# kubectl  get pod -n test 
NAME                                     READY   STATUS    RESTARTS   AGE
mysql-statefulset-0                      1/1     Running   0          6m9s
[root@master yaml]# kubectl  get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS        CLAIM                                  STORAGECLASS   REASON   AGE
pvc-d5d9079e-3f24-475b-86f5-e0ded774e7f7   100Mi      RWO            Delete           Bound         test/share-mysql-mysql-statefulset-0   storageclass            2m31s
[root@master yaml]# kubectl  get pvc -n test
NAME                              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
share-mysql-mysql-statefulset-0   Bound    pvc-d5d9079e-3f24-475b-86f5-e0ded774e7f7   100Mi      RWO            storageclass   7m28s
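
PS: Each StatefulSet replica gets its own PVC named <volumeClaimTemplate>-<statefulset>-<ordinal>, which is why the claim above is called share-mysql-mysql-statefulset-0. Scaling the StatefulSet up therefore provisions an additional PV and PVC automatically, for example:

[root@master yaml]# kubectl scale statefulset mysql-statefulset -n test --replicas=2
[root@master yaml]# kubectl  get pvc -n test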

5.7 Check that the persistent directory exists on the NFS server

PS: The provisioner names the directory ${namespace}-${pvcName}-${pvName}, hence the long name below

[root@nfs nfsdata]# pwd
/nfsdata
[root@nfs nfsdata]# ls
test-share-mysql-mysql-statefulset-0-pvc-d5d9079e-3f24-475b-86f5-e0ded774e7f7

5.8 Verify data storage

master

[root@master yaml]# kubectl  get pod  -n test 
NAME                                     READY   STATUS    RESTARTS   AGE
mysql-statefulset-0                      1/1     Running   0          11m
[root@master yaml]# kubectl  exec  -it -n test  mysql-statefulset-0  bash
root@mysql-statefulset-0:/# mysql -u root -p123.com
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.7.32 MySQL Community Server (GPL)

Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> create database test;
Query OK, 1 row affected (0.10 sec)

nfs

[root@nfs test-share-mysql-mysql-statefulset-0-pvc-d5d9079e-3f24-475b-86f5-e0ded774e7f7]# pwd 
/nfsdata/test-share-mysql-mysql-statefulset-0-pvc-d5d9079e-3f24-475b-86f5-e0ded774e7f7
[root@nfs test-share-mysql-mysql-statefulset-0-pvc-d5d9079e-3f24-475b-86f5-e0ded774e7f7]# ls
auto.cnf    client-cert.pem  ibdata1      ibtmp1              private_key.pem  server-key.pem
ca-key.pem  client-key.pem   ib_logfile0  mysql               public_key.pem   sys
ca.pem      ib_buffer_pool   ib_logfile1  performance_schema  server-cert.pem  test

5.9 Delete the Pod and check whether the data survives re-creation

[root@master yaml]# kubectl  get pod  -n test 
NAME                                     READY   STATUS    RESTARTS   AGE
mysql-statefulset-0                      1/1     Running   0          16m
[root@master yaml]# kubectl  delete  pod  -n test  mysql-statefulset-0 
pod "mysql-statefulset-0" deleted
[root@master yaml]# kubectl  get pod -n test -w
NAME                                     READY   STATUS    RESTARTS   AGE
mysql-statefulset-0                      1/1     Terminating   0          49s
mysql-statefulset-0                      0/1     Terminating   0          51s
mysql-statefulset-0                      0/1     Terminating   0          52s
mysql-statefulset-0                      0/1     Terminating   0          52s
mysql-statefulset-0                      0/1     Pending       0          0s
mysql-statefulset-0                      0/1     Pending       0          0s
mysql-statefulset-0                      0/1     ContainerCreating   0          0s
mysql-statefulset-0                      1/1     Running             0          1s

5.10 Log in again and verify that the data still exists

[root@master yaml]# kubectl  exec -it  -n test  mysql-statefulset-0  bash
root@mysql-statefulset-0:/# mysql -u root -p123.com
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.7.32 MySQL Community Server (GPL)

Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.


mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
| test               |
+--------------------+
5 rows in set (0.01 sec)
