Kubernetes persistent storage PV and PVC

1. Introduction to PV and PVC

Volumes provide a good data-persistence solution, but they fall short in manageability.
Take the earlier AWS EBS example: to use the Volume, the Pod must know the following in advance:
The Volume comes from AWS EBS.
The EBS volume has been created ahead of time, and its exact volume-id is known.
Pods are usually maintained by application developers, while storage systems are usually maintained by storage administrators. To obtain the information above, a developer must:
either ask the administrator,
or be the administrator.
This creates a management problem: the responsibilities of application developers and system administrators become coupled. That may be acceptable for a small system or a development environment, but as the cluster grows, and especially in production, efficiency and security make this a problem that must be solved.

The solution given by Kubernetes is PersistentVolume and PersistentVolumeClaim.
A PersistentVolume (PV) is a piece of storage in an external storage system, created and maintained by an administrator. Like a Volume, a PV is persistent and has a life cycle independent of any Pod.
A PersistentVolumeClaim (PVC) is a claim on a PV, typically created and maintained by an ordinary user. When storage needs to be allocated for a Pod, the user creates a PVC specifying the required capacity and access mode (such as read-only), and Kubernetes finds and binds a PV that satisfies it.
With PersistentVolumeClaims, users only tell Kubernetes what kind of storage they need, without caring where the space is actually allocated or how it is accessed. Those low-level details of the storage provider are handled by the administrator; only the administrator needs to care about creating PersistentVolumes.

2. Implement persistent storage through NFS

2.1 Configure NFS

Install the NFS packages on all nodes (on CentOS, nfs-utils is the package name; nfs-common is the Debian/Ubuntu equivalent):

[root@k8s-master]# yum install -y nfs-utils

Create a shared directory on the master node

[root@k8s-master]# mkdir /nfsdata

Grant permissions on the shared directory (a directory needs the execute bit to be traversed, so 777 rather than 666):

[root@k8s-master]# chmod 777 /nfsdata

Edit the exports file

[root@k8s-master]# vim /etc/exports
/nfsdata *(rw,no_root_squash,no_all_squash,sync)

Start rpcbind and nfs (note the order: rpcbind must start first)

[root@k8s-master]# systemctl start rpcbind
[root@k8s-master]# systemctl start nfs
# Make the exports configuration take effect
[root@k8s-master]# exportfs -r
# Check that the export is active
[root@k8s-master]# exportfs
/nfsdata      	<world>

With that preparation done, we have an NFS server on the k8s-master node exporting the directory /nfsdata.

2.2 Create PV

Next, create a PV named mypv1; the configuration file nfs-pv1.yml is as follows:

[root@k8s-master ~]# vim nfs-pv1.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mypv1
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfs
  nfs:
    path: /nfsdata
    server: 192.168.119.163  # address of the machine hosting the NFS export
  1. capacity specifies the size of the PV, 1Gi.

  2. accessModes specifies the access mode, here ReadWriteOnce. The supported access modes are:
    ReadWriteOnce – the PV can be mounted read-write by a single node.
    ReadOnlyMany – the PV can be mounted read-only by many nodes.
    ReadWriteMany – the PV can be mounted read-write by many nodes.

  3. persistentVolumeReclaimPolicy specifies the reclaim policy, here Recycle. The supported policies are:
    Retain – the administrator must reclaim the volume manually.
    Recycle – clears the data in the PV, equivalent to running rm -rf /thevolume/*.
    Delete – deletes the corresponding storage resource on the storage provider, e.g. AWS EBS, GCE PD, Azure Disk, OpenStack Cinder Volume.

  4. storageClassName sets the class of the PV to nfs. This is effectively a category label for the PV; a PVC can request a PV of a particular class by specifying the same name.

  5. nfs specifies the server and directory on the NFS server that back this PV.

Create mypv1:

[root@k8s-master ~]# kubectl apply -f nfs-pv1.yml


The STATUS is Available, meaning mypv1 is ready and can be claimed by a PVC.

2.3 Create PVCs

Next, create a PVC named mypvc1; the configuration file nfs-pvc1.yml is as follows:

[root@k8s-master ~]# vim nfs-pvc1.yml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs

The PVC is simple: it only needs to specify the requested capacity, the access mode, and the class.

Execute the command to create mypvc1:

[root@k8s-master ~]# kubectl apply -f nfs-pvc1.yml


From the output of kubectl get pvc and kubectl get pv, we can see that mypvc1 is Bound to mypv1; the claim succeeded.

2.4 Create Pod

With the PV and PVC created, the PVC can now be used directly in a Pod:

[root@k8s-master ~]# vim pod1.yml 
apiVersion: v1
kind: Pod
metadata:
  name: mypod1
spec:
  containers:
    - name: mypod1
      image: busybox
      args:
      - /bin/sh
      - -c
      - sleep 30000
      volumeMounts:
      - mountPath: "/mydata"
        name: mydata
  volumes:
    - name: mydata
      persistentVolumeClaim:
        claimName: mypvc1

The format is similar to using an ordinary Volume: in volumes, reference the claim mypvc1 through a persistentVolumeClaim entry.

Create mypod1:

[root@k8s-master ~]# kubectl apply -f pod1.yml


2.5 Verification

[root@k8s-master ~]# kubectl exec -it mypod1 -- /bin/sh
/ # ls mydata/
/ # echo "hello" > mydata/hello.txt
/ # ls mydata/
hello.txt
/ # exit
[root@k8s-master ~]# ls /nfsdata/    # the file is also visible in the NFS shared directory, so the volume is shared successfully
hello.txt
[root@k8s-master ~]# cat /nfsdata/hello.txt 
hello
As you can see, the file /mydata/hello.txt created in the Pod has indeed been saved to the NFS server directory /nfsdata.
If the PV is no longer needed, delete the PVC to reclaim the PV.

Here you can try deleting the file from either side; it disappears on both ends.


3. Recovery of PV

When a PV is no longer needed, it can be reclaimed by deleting the PVC. Before the PVC is deleted, the state of the PV is Bound.


delete pod

[root@k8s-master yaml]# kubectl delete pod mypod1

delete pvc

[root@k8s-master yaml]# kubectl delete pvc mypvc1

Check the status of pv again

[root@k8s-master yaml]# kubectl get pv

After the PVC is deleted, the status of the PV becomes Available. Now that it is unbound, it can be claimed by a new PVC.

The files in /nfsdata were deleted:


Because the PV's reclaim policy was set to Recycle, the data was cleared.

That may not be what we want. If we wish to preserve the data, we can set the policy to Retain:

[root@k8s-master yaml]# vim nfs-pv1.yml
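For reference, the only change needed in nfs-pv1.yml is the reclaim policy; the rest of the file is the same as in section 2.2:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mypv1
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain   # changed from Recycle
  storageClassName: nfs
  nfs:
    path: /nfsdata
    server: 192.168.119.163
```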


[root@k8s-master yaml]# kubectl apply -f nfs-pv1.yml


The reclaim policy is now Retain. Verify its effect through the following steps:

# Recreate mypvc1
[root@k8s-master yaml]# kubectl apply -f nfs-pvc1.yml
# Recreate the pod, referencing mypvc1
[root@k8s-master yaml]# kubectl apply -f pod1.yml
# Enter the pod and create a file
[root@k8s-master yaml]# kubectl get pod -o wide
NAME                       READY     STATUS    RESTARTS   AGE       IP            NODE
mypod1                     1/1       Running   0          1m        172.17.44.2   192.168.119.164
# On node 192.168.119.164
[root@k8s-node01]# docker exec -it mypod1 /bin/sh
/ # echo '你看我会不会' > mydata/hello.txt
/ # ls mydata/
hello.txt
/ # exit

# Verify in the nfs directory
[root@k8s-master yaml]# ls /nfsdata/
hello.txt
[root@k8s-master yaml]# cat /nfsdata/hello.txt 
你看我会不会

# Delete the pod
[root@k8s-master yaml]# kubectl delete -f pod1.yml 
pod "mypod1" deleted
[root@k8s-master yaml]# ls /nfsdata/
hello.txt
# Delete the pvc (mypvc1)
[root@k8s-master yaml]# kubectl delete pvc mypvc1
persistentvolumeclaim "mypvc1" deleted
[root@k8s-master yaml]# ls /nfsdata/
hello.txt
[root@k8s-master yaml]# cat /nfsdata/hello.txt 
你看我会不会

# The data is still there


Although the data in mypv1 has been preserved, the PV will remain in the Released state and cannot be claimed by other PVCs. To reuse the storage, delete and recreate mypv1. The delete operation only removes the PV object; the data in the storage backend is not deleted.

[root@k8s-master yaml]# ls /nfsdata/
hello.txt
[root@k8s-master yaml]# kubectl delete pv mypv1
persistentvolume "mypv1" deleted
[root@k8s-master yaml]# ls /nfsdata/
hello.txt
[root@k8s-master yaml]# kubectl apply -f nfs-pv1.yml 
persistentvolume/mypv1 created
[root@k8s-master yaml]# kubectl get pod
No resources found in default namespace.
[root@k8s-master yaml]# kubectl get pv


The newly created mypv1 is in the Available state and can be claimed by a PVC.

PV also supports the Delete reclaim policy, which deletes the storage resource backing the PV on the storage provider. NFS PVs do not support Delete; providers that do include AWS EBS, GCE PD, Azure Disk, OpenStack Cinder Volume, etc.
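For illustration only (not runnable against the NFS setup in this article), a PV backed by AWS EBS with the Delete policy might look like the sketch below; the name and volumeID are hypothetical:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ebs-pv                            # hypothetical name
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete   # deleting the PV also deletes the backing EBS volume
  awsElasticBlockStore:
    volumeID: vol-0123456789abcdef0       # hypothetical EBS volume ID
    fsType: ext4
```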

4. Static provisioning of PV/PVC

Install the NFS packages on all nodes:
yum install -y nfs-utils 

The master node acts as the NFS server:
[root@k8s-master]# cat /etc/exports
/data/opv *(rw,no_root_squash,no_all_squash,sync)
[root@k8s-master]# chmod 777 -R /data/opv
# Make the exports configuration take effect
[root@k8s-master]# exportfs -r
# Check that the export is active
[root@k8s-master]# exportfs

On the master node:
# 1. Define the PV
[root@k8s-master yaml]# vim pv-pod.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: /data/opv  # directory exported by the NFS server
    server: 192.168.119.163   # address of the NFS server
[root@k8s-master yaml]# kubectl apply -f pv-pod.yaml

# 2. Define the PVC and the Deployment
[root@k8s-master yaml]# vim pvc-pod.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        # mount the volume named wwwroot at nginx's html directory
        volumeMounts:
        - name: wwwroot
          mountPath: /usr/share/nginx/html
        ports:
        - containerPort: 80
    # define a volume named wwwroot, of type persistentVolumeClaim
      volumes:
      - name: wwwroot
        persistentVolumeClaim:
          claimName: my-pvc


---
# Define the PVC the volume refers to; it matches a PV by capacity and access mode
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  # must match the claimName referenced above
  name: my-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi  # use the same unit as the PV (Gi)
[root@k8s-master yaml]# kubectl apply -f pvc-pod.yaml

# 3. Expose a port for the service
[root@k8s-master yaml]# vim pv-service.yaml 
apiVersion: v1
kind: Service
metadata:
  name: pv-svc
spec:
  type: NodePort
  ports:
    - port: 8080
      nodePort: 30001
      targetPort: 80
  selector:   # select the pods labeled app: nginx
    app: nginx

# 4. Apply the service
[root@k8s-master yaml]# kubectl apply -f pv-service.yaml
# 5. On the NFS server, write a test page
[root@k8s-master yaml]# echo hello >> /data/opv/index.html 
# 6. Access the service on any node at port 30001 and check the result


5. Dynamic supply of PV

In the examples so far, we created the PV in advance and then claimed it through a PVC for use in a Pod. This approach is called static provisioning (Static Provision).

The alternative is dynamic provisioning (Dynamic Provisioning): if no existing PV satisfies the PVC, a PV is created on demand. Compared with static provisioning, this has clear advantages: administrators do not need to create PVs ahead of time, which reduces their workload and improves efficiency.

Dynamic provisioning is implemented through StorageClass, which defines how to create a PV. Below are two examples.

StorageClass standard

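A sketch of what the standard class looks like with the in-tree AWS EBS provisioner (gp2, per the description that follows):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs   # legacy in-tree AWS EBS provisioner
parameters:
  type: gp2                          # general-purpose SSD
```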

StorageClass slow

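Likewise, a sketch of the slow class, which provisions io1 volumes (the iopsPerGB value is an assumption for illustration):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
provisioner: kubernetes.io/aws-ebs   # legacy in-tree AWS EBS provisioner
parameters:
  type: io1                          # provisioned-IOPS SSD
  iopsPerGB: "10"                    # assumed value, shown for illustration
```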

Both StorageClasses dynamically create AWS EBS volumes; the difference is that standard creates gp2 EBS volumes while slow creates io1 EBS volumes. For the parameters supported by each EBS type, refer to the AWS documentation.

StorageClass supports two reclaimPolicy values, Delete and Retain; the default is Delete.

As before, when a PVC requests a PV it only needs to specify the StorageClass, capacity, and access mode, for example:

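Such a claim can be sketched as follows; the name and requested size are hypothetical:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim                # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi            # hypothetical size
  storageClassName: standard   # selects which StorageClass provisions the PV
```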

In addition to AWS EBS, Kubernetes supports other provisioners that dynamically provision PV. For a complete list, please refer to https://kubernetes.io/docs/concepts/storage/storage-classes/#provisioner
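For example, NFS has no in-tree dynamic provisioner, but an external provisioner can be deployed and referenced from a StorageClass; the provisioner string below is hypothetical and must match whatever name the deployed provisioner registers:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs
provisioner: example.com/nfs   # hypothetical; must match the deployed external provisioner
reclaimPolicy: Retain          # override the default Delete
```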

6. Persistent storage with PV and PVC in an application

[root@k8s-master yaml]# kubectl delete -f pod1.yml
[root@k8s-master yaml]# vim pod1.yml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydep
spec:
  selector:
    matchLabels:
      app: busy
  replicas: 1
  template:
    metadata:
      labels:
        app: busy
    spec:
      containers:
        - name: mypod1
          image: busybox
          args:
          - /bin/sh
          - -c
          - sleep 30000
          volumeMounts:
          - mountPath: "/mydata"
            name: mydata
      volumes:
        - name: mydata
          persistentVolumeClaim:
            claimName: mypvc1
            
[root@k8s-master pv]# kubectl apply -f pod1.yml 
[root@k8s-master yaml]# kubectl get pod -o wide
[root@k8s-master yaml]# docker exec -it dc31ac288bfa /bin/sh
/ # echo "我不会了" > mydata/hello.txt
/ # exit

Check which node the pod is running on and shut that node down. The other node takes over: the pod is rescheduled there, and the data still exists.


7. Hands-on: persistent storage for MySQL with PV and PVC

The following demonstrates how to provide persistent storage for the MySQL database. The steps are:

  1. Create PVs and PVCs.
  2. Deploy MySQL.
  3. Add data to MySQL.
  4. Simulate a node failure and watch Kubernetes automatically migrate MySQL to another node.
  5. Verify data consistency.

First create PV and PVC, the configuration is as follows:

mysql-pv.yml

# Create the directory that backs the PV
[root@k8s-master yaml]# mkdir /nfsdata/mysql-pv
[root@k8s-master yaml]# vim mysql-pv.yml 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: /nfsdata/mysql-pv
    server: 192.168.119.163
[root@k8s-master yaml]# kubectl apply -f mysql-pv.yml

mysql-pvc.yml

[root@k8s-master yaml]# vim mysql-pvc.yml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs
  
[root@k8s-master yaml]# kubectl apply -f mysql-pvc.yml


Next deploy MySQL, the configuration file is as follows:

[root@k8s-master yaml]# vim mysqlpod.yml 
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - port: 3306
  selector:
    app: mysql
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.7.5 # pick an image you can actually pull and that honors the env variables below
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pvc
          
[root@k8s-master yaml]# kubectl apply -f mysqlpod.yml

The PVC mysql-pvc is Bound to the PV mysql-pv, which is mounted at MySQL's data directory /var/lib/mysql.

MySQL is deployed to node 163.


Because the pod was scheduled to the master node, which cannot be shut down for this test, delete the pod and let the scheduler place it on another node.


Wait for the image to be pulled and the pod to start.

1. Switch to the database mysql.


2. Create the database company and table t1.

[root@k8s-node01 ~]# docker exec -it e0effdbcddb4 bash
root@mysql-7f65bf577-b2ssv:/# mysql -uroot -p
Enter password: 
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 5.7.5-m15 MySQL Community Server (GPL)

Copyright (c) 2000, 2014, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> create database company;
Query OK, 1 row affected (0.03 sec)

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| company            |
| mysql              |
| performance_schema |
+--------------------+
4 rows in set (0.02 sec)
mysql> use company;
Database changed
mysql> create table t1(Sno char(9) primary key,Sname char(20) not null,Ssex char(2),Sage smallint,Sdept char(20));
Query OK, 0 rows affected (0.07 sec)

3. Insert data.

Add rows to the table:
mysql> insert into t1(Sno,Sname,Ssex,Sage,Sdept)values('202112081','李四','男','20','CS');
Query OK, 1 row affected (0.02 sec)

mysql> insert into t1(Sno,Sname,Ssex,Sage,Sdept)values('202112082','张三','女','19','CS');
Query OK, 1 row affected (0.01 sec)

mysql> insert into t1(Sno,Sname,Ssex,Sage,Sdept)values('202112083','王五','女','18','MA');
Query OK, 1 row affected (0.00 sec)

mysql> insert into t1(Sno,Sname,Ssex,Sage,Sdept)values('202112085','麻子','男','19','IS');
Query OK, 1 row affected (0.01 sec)

4. Confirm that the data has been written.

mysql> show tables;
+-------------------+
| Tables_in_company |
+-------------------+
| t1                |
+-------------------+
1 row in set (0.01 sec)

mysql> select * from t1;
+-----------+-------+------+------+-------+
| Sno       | Sname | Ssex | Sage | Sdept |
+-----------+-------+------+------+-------+
| 202112081 |       |      |   20 | CS    |
| 202112082 |       |      |   19 | CS    |
| 202112083 |       |      |   18 | MA    |
| 202112085 |       |      |   19 | IS    |
+-----------+-------+------+------+-------+
4 rows in set (0.00 sec)

You can see that the data has been written; the Chinese values in Sname and Ssex are simply not displayed because of a character-encoding issue. Is that a problem? Not for this demo.

If it matters to you, adjust the character-set configuration yourself.

Shut down k8s-node01 to simulate a node failure.

# Power off node01
[root@k8s-node01 ~]# poweroff
# On the master, watch the mysql pod; this takes a while
[root@k8s-master yaml]# kubectl get pod -o wide
NAME                       READY     STATUS    RESTARTS   AGE       IP            NODE
mysql-7f65bf577-b2ssv      1/1       Unknown   0          2h        172.17.44.2   192.168.119.164
mysql-7f65bf577-hkvld      1/1       Running   0          9s        172.17.78.2   192.168.119.163

# Verify data consistency:
# Since node01 is down, the pod has been rescheduled; failover takes a while, roughly five minutes


[root@k8s-master yaml]# docker exec -it 7f001e01d786 bash
root@mysql-7f65bf577-hkvld:/# mysql -uroot -p
Enter password: 
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 5.7.5-m15 MySQL Community Server (GPL)

Copyright (c) 2000, 2014, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| company            |
| mysql              |
| performance_schema |
+--------------------+
4 rows in set (0.01 sec)

mysql> use company;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> show tables;
+-------------------+
| Tables_in_company |
+-------------------+
| t1                |
+-------------------+
1 row in set (0.00 sec)

mysql> select * from t1;
+-----------+-------+------+------+-------+
| Sno       | Sname | Ssex | Sage | Sdept |
+-----------+-------+------+------+-------+
| 202112081 |       |      |   20 | CS    |
| 202112082 |       |      |   19 | CS    |
| 202112083 |       |      |   18 | MA    |
| 202112085 |       |      |   19 | IS    |
+-----------+-------+------+------+-------+
4 rows in set (0.00 sec)


Entering the new pod, the data is still there: persistence succeeded and the data survived the node failure.

The MySQL service is restored and the data is intact.

Origin blog.csdn.net/YourMr/article/details/121945077