Kubernetes Storage Management: PV

Use of PVs

In earlier lessons we learned the basic ways of using resource objects, including stateful applications and applications with persistent data. So far we have used hostPath or emptyDir to persist data, but we clearly need more reliable storage to hold an application's persistent data, so that after a container is rebuilt the previous data is still available. Storage resources, however, are very different from CPU and memory resources. To hide the underlying implementation details and make storage easier to consume, Kubernetes introduces two important resource objects for managing storage: PV and PVC. These two objects are the core of this lesson.

Concepts

PV is short for PersistentVolume (persistent volume). It is an abstraction of the underlying shared storage and is created and configured by the administrator. A PV is tied to the implementation of a specific shared storage technology, such as Ceph, GlusterFS, or NFS, all of which are connected to the cluster through a plug-in mechanism.

PVC is short for PersistentVolumeClaim (persistent volume claim). A PVC is a user's request for storage. It is analogous to a Pod: Pods consume node resources, while PVCs consume PV resources; Pods request CPU and memory, while PVCs request a specific amount of storage and an access mode. Users who actually consume storage do not need to care about the underlying storage implementation at all; they simply use a PVC.
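
As a quick illustration (this PVC is not actually created in this article, and the name pvc1 is only a placeholder), a minimal PVC declaration might look like the following sketch: it just requests a storage size and an access mode, without saying anything about the backend.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc1          # placeholder name for illustration
spec:
  accessModes:
  - ReadWriteOnce     # the access mode the application needs
  resources:
    requests:
      storage: 1Gi    # the amount of storage being claimed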

However, requesting a certain amount of storage space through a PVC may not be enough to express all of an application's requirements for a storage device: different applications have different expectations of storage performance, such as read/write speed or concurrency. To solve this, Kubernetes introduces another resource object: StorageClass. With a StorageClass, administrators can define classes of storage resources, for example fast storage and slow storage, and users can tell from the StorageClass description what the characteristics of each class are, and then request storage that fits their application.
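
For illustration only (this article does not create a StorageClass, and the provisioner below is just one example of an in-tree plugin for AWS EBS), a StorageClass definition might look roughly like this sketch:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast                          # an administrator-chosen class name such as "fast" or "slow"
provisioner: kubernetes.io/aws-ebs    # the volume plugin that provisions storage of this class
parameters:
  type: io1                           # backend-specific parameters describing the storage tier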

NFS

For convenience, we will use a relatively simple kind of storage: NFS. Next, we install the NFS service on node 10.151.30.57, using /data/k8s/ as the data directory.

  1. Turn off the firewall
$ systemctl stop firewalld.service
$ systemctl disable firewalld.service
  2. Install and configure NFS
$ yum -y install nfs-utils rpcbind

Set permissions on the shared directory (create /data/k8s first if it does not exist):

$ chmod 755 /data/k8s/

Configure NFS. Its default configuration file is /etc/exports; add the following line to it:

$ vi /etc/exports
/data/k8s  *(rw,sync,no_root_squash) 

Explanation of the configuration:

  • /data/k8s: the shared data directory
  • *: any host may connect; this can also be restricted to a network segment, an IP, or a domain name
  • rw: read and write permissions
  • sync: data is written to disk synchronously; the server replies only after changes have been committed to stable storage
  • no_root_squash: when the client user accessing the share is root, keep root privileges on the share; without this option, root would be squashed to an anonymous user, usually with the UID and GID of nobody

NFS supports many more export options; interested readers can look them up online.

  3. Start the services
    NFS needs to register with rpcbind before starting. If rpcbind is restarted, the registration information it holds is lost, and every service registered with it has to be restarted as well.

Pay attention to the startup order: start rpcbind first.

$ systemctl start rpcbind.service
$ systemctl enable rpcbind
$ systemctl status rpcbind
● rpcbind.service - RPC bind service
   Loaded: loaded (/usr/lib/systemd/system/rpcbind.service; disabled; vendor preset: enabled)
   Active: active (running) since Tue 2018-07-10 20:57:29 CST; 1min 54s ago
  Process: 17696 ExecStart=/sbin/rpcbind -w $RPCBIND_ARGS (code=exited, status=0/SUCCESS)
 Main PID: 17697 (rpcbind)
    Tasks: 1
   Memory: 1.1M
   CGroup: /system.slice/rpcbind.service
           └─17697 /sbin/rpcbind -w

Jul 10 20:57:29 master systemd[1]: Starting RPC bind service...
Jul 10 20:57:29 master systemd[1]: Started RPC bind service.

Seeing Started above proves that the startup was successful.

Then start the nfs service:

$ systemctl start nfs.service
$ systemctl enable nfs
$ systemctl status nfs
● nfs-server.service - NFS server and services
   Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; enabled; vendor preset: disabled)
  Drop-In: /run/systemd/generator/nfs-server.service.d
           └─order-with-mounts.conf
   Active: active (exited) since Tue 2018-07-10 21:35:37 CST; 14s ago
 Main PID: 32067 (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/nfs-server.service

Jul 10 21:35:37 master systemd[1]: Starting NFS server and services...
Jul 10 21:35:37 master systemd[1]: Started NFS server and services.

Seeing Started also proves that the NFS Server has started successfully.

In addition, we can also confirm it with the following command:

$ rpcinfo -p|grep nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100227    3   tcp   2049  nfs_acl
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100227    3   udp   2049  nfs_acl

Check the effective export options of the shared directory:

$ cat /var/lib/nfs/etab
/data/k8s	*(rw,sync,wdelay,hide,nocrossmnt,secure,no_root_squash,no_all_squash,no_subtree_check,secure_locks,acl,no_pnfs,anonuid=65534,anongid=65534,sec=sys,secure,no_root_squash,no_all_squash)
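
One more note: if /etc/exports is edited again later, while the services are already running, the change can usually be applied without restarting NFS, for example:

$ exportfs -r    # re-export everything listed in /etc/exports
$ exportfs -v    # list the currently exported directories and their options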

The NFS server is now set up. Next, we install the NFS client on node 10.151.30.62 to verify it.

  4. Install the NFS client
    As on the server, close the firewall first:
$ systemctl stop firewalld.service
$ systemctl disable firewalld.service

Then install NFS:

$ yum -y install nfs-utils rpcbind

After the installation is complete, start rpcbind first and then NFS, in the same way as above:

$ systemctl start rpcbind.service 
$ systemctl enable rpcbind.service 
$ systemctl start nfs.service    
$ systemctl enable nfs.service      
  5. Mount the data directory
    With the client services started, mount the NFS share on the client for testing.

First check whether the NFS server exports the shared directory:

$ showmount -e 10.151.30.57
Export list for 10.151.30.57:
/data/k8s *

Then we create a new directory on the client:

$ mkdir -p /root/course/kubeadm/data

Mount the NFS shared directory onto the directory we just created:

$ mount -t nfs 10.151.30.57:/data/k8s /root/course/kubeadm/data

After the mount succeeds, create a new file in that directory on the client, and then check whether the file also appears in the shared directory on the NFS server:

$ touch /root/course/kubeadm/data/test.txt

Then check on the nfs server:

$ ls -ls /data/k8s/
total 4
4 -rw-r--r--. 1 root root 4 Jul 10 21:50 test.txt

If test.txt appears as above, the NFS mount works.
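
This client-side mount was only for verification; once the test passes, it can be removed again (when we use the NFS PV later, Kubernetes mounts the share itself), for example:

$ umount /root/course/kubeadm/data    # detach the temporary test mount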

PV

With the NFS shared storage above in place, we can move on to PVs and PVCs. As a storage resource, a PV mainly describes key information such as storage capacity, access mode, storage type, and reclaim policy. Let's create a PV object that uses the nfs type of back-end storage, offers 1Gi of storage space, uses the ReadWriteOnce access mode, and uses the Recycle reclaim policy. The corresponding YAML file is as follows (pv1-demo.yaml):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv1
spec:
  capacity: 
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: /data/k8s
    server: 10.151.30.57

Kubernetes supports many PV types, such as the common Ceph, GlusterFS, and NFS, and even HostPath, although, as we said before, HostPath is only suitable for single-node testing. For the full list of supported types, see the official Kubernetes PV documentation. Each storage type has its own characteristics, so consult the corresponding documentation for the parameters it needs.

Then, as before, create it directly with kubectl:

$ kubectl create -f pv1-demo.yaml
persistentvolume "pv1" created
$ kubectl get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM               STORAGECLASS   REASON    AGE
pv1       1Gi        RWO            Recycle          Available                                                12s

We can see that pv1 was created successfully and its status is Available, meaning it is ready to be claimed by a PVC. Let's look at the attributes above one by one.

Capacity

Generally speaking, a PV object must specify a storage capacity, which is set through the PV's capacity attribute. Currently only the storage size can be set, i.e. storage=1Gi here, but indicators such as IOPS and throughput may become configurable in the future.

AccessModes (access mode)

AccessModes sets the access mode of a PV and describes how user applications may access the storage resource. The following access modes are supported:

  • ReadWriteOnce (RWO): read and write permissions, but can only be mounted by a single node
  • ReadOnlyMany (ROX): read-only permission, can be mounted by multiple nodes
  • ReadWriteMany (RWX): read and write permissions, can be mounted by multiple nodes

Note: a PV may support multiple access modes, but only one of them can be used when it is mounted; multiple access modes cannot take effect at the same time.

The picture below shows the access modes supported by some commonly used Volume plugins:
[Image: access-modes.png — access modes supported by commonly used volume plugins]

persistentVolumeReclaimPolicy (reclaim policy)

The reclaim policy we specified for this PV is Recycle. A PV currently supports three policies:

  • Retain (retain) - keep the data; an administrator must clean it up manually
  • Recycle (recycle) - clear the data in the PV, equivalent to running rm -rf /thevolume/*
  • Delete (delete) - the back-end storage connected to the PV deletes the volume itself; this is common with cloud providers' storage services, such as AWS EBS

Note, however, that at present only NFS and HostPath support the Recycle policy. Generally speaking, the Retain policy is the safer choice.
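
If you prefer the safer Retain policy, only one field of the pv1 example above changes; a sketch:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv1
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain   # keep the data; an administrator cleans it up manually
  nfs:
    path: /data/k8s
    server: 10.151.30.57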

Status

During its life cycle, a PV may be in one of 4 different phases:

  • Available (available): the PV is available and has not been bound by any PVC
  • Bound (bound): the PV has been bound by a PVC
  • Released (released): the PVC was deleted, but the resource has not yet been reclaimed by the cluster
  • Failed (failed): automatic reclamation of the PV failed

That covers how a PV is declared; how to use a PVC will be introduced in a later article.


Origin blog.csdn.net/u010674953/article/details/129557287