Kubernetes Volumes: disks mounted into containers, PV and PVC

6.1 Introduction to Volumes

6.1.1. Types of volumes

emptyDir - simply an empty directory for storing temporary data.

hostPath - used to mount a directory from the worker node's filesystem into the pod.

nfs - an NFS share mounted into the pod.

There are other types as well, such as gitRepo and gcePersistentDisk.

 

6.2 Sharing data between containers through volumes

6.2.1 Using an emptyDir volume

The lifecycle of an emptyDir volume is tied to the lifecycle of the pod, so when the pod is deleted, the contents of the volume are lost.

A sample manifest using emptyDir is as follows:

apiVersion: v1
kind: Pod
metadata:
  name: fortune
spec:
  containers:
  - image: luksa/fortune
    name: html-generator
    volumeMounts:
    - name: html
      mountPath: /var/htdocs
  - image: nginx:alpine
    name: web-server
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
      readOnly: true
  volumes:
  - name: html        # a single emptyDir volume named html, mounted into both containers above
    emptyDir: {}

  

6.3. Accessing files on the worker node's filesystem

6.3.1. hostPath volume

hostPath provides persistent storage; unlike emptyDir, its contents are not deleted when the pod is deleted.

With hostPath you will find that when a pod is deleted and the next pod uses a hostPath volume pointing to the same host path, the new pod sees the data left behind by the previous pod, but only if it is scheduled to the same node as the first pod.

So think carefully when using hostPath: when a pod is restarted, you must make sure it lands on the same node as before.

apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      # directory location on host
      path: /data
      # this field is optional
      type: Directory

 

6.4 Using persistent storage

To ensure that a pod, after a restart, has access to the same data no matter which node it is scheduled to, persistent storage is needed.

The data must be stored on some type of network storage (NAS).

Several backends are supported; for example, GlusterFS requires creating an Endpoints object, while Ceph, NFS and the like have no such extra step.

6.4.1 Using NFS storage

Taking NFS as an example, the YAML is as follows:
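The original snippet is missing here; the following is a minimal sketch of a pod mounting an NFS share directly (the pod name, image, mount path, NFS server address, and export path are assumptions for illustration):

apiVersion: v1
kind: Pod
metadata:
  name: nfs-pod
spec:
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - name: nfs-vol
      mountPath: /usr/share/nginx/html   # the NFS share appears here inside the container
  volumes:
  - name: nfs-vol
    nfs:
      server: 10.10.0.11                 # assumed NFS server address
      path: /nfs/data                    # assumed exported directory on the server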

 

 

 

6.4.2. configMap and secret

configMap and secret can be understood as special storage volumes, but they do not provide storage to the Pod; instead, they provide a way to inject configuration information from outside the cluster into applications running inside it. ConfigMap plays the role of a configuration center for the K8S cluster. A Pod can mount the configuration defined in a ConfigMap as a volume into the directory where the application keeps its configuration files, and the application then reads its configuration from the ConfigMap; configuration values can also be taken from a ConfigMap and injected into the Pod's containers as environment variables. However, a ConfigMap stores its data in plain text, so using it to hold sensitive information such as database account passwords is very unsafe. Such sensitive configuration is usually stored in a secret instead. A secret works the same way as a ConfigMap, except that it stores the configuration information Base64-encoded.

There are two ways to obtain configuration information from a ConfigMap:

  • One is to inject the configuration into the Pod's containers as environment variables. This takes effect only when the Pod is created, which means that later changes to the ConfigMap cannot update the configuration of containers in Pods that have already been created.
  • The other is to mount the ConfigMap into the Pod as a storage volume. In that case, when the configuration in the ConfigMap is modified, the configuration inside the Pod's containers is updated as well, although with a slight delay.

Mounting a ConfigMap into a Pod as a storage volume:

apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-vol-2
  labels:
    name: pod-configmap-vol-2
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    volumeMounts:
    - name: my-www-cm
      mountPath: /etc/nginx/conf.d/   # the ConfigMap is mounted into this directory of the Pod's container
  volumes:
  - name: my-www-cm
    configMap:                        # the storage volume type is configMap
      name: my-www                    # the name of the ConfigMap to mount

  A secret is used in a similar way, except that the data in a secret is Base64-encoded.

 

6.5. Decoupling pods from the underlying storage technology

6.5.1. Introducing persistent volumes and persistent volume claims

  When cluster users need persistent storage in their pods, they first create a PersistentVolumeClaim (PVC) manifest specifying the minimum capacity they require and the access mode, and then submit that claim to the Kubernetes API server. Kubernetes finds a matching persistent volume and binds it to the claim.

  The persistent volume claim can then be used as a volume inside a pod; other users cannot use the same persistent volume until the binding is released by deleting the claim.

6.5.2. Creating a persistent volume

Below we create a PV; the configuration file pv1.yml is as follows:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: yh-pv1
spec:
  capacity:
    storage: 1Gi                            # capacity specifies the PV's capacity, 1Gi
  accessModes:                              # accessModes specifies the access mode, ReadWriteOnce
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle    # the reclaim policy applied when the PV is released
  storageClassName: nfs                     # storageClassName sets the PV's class to nfs; this is like a category for the PV, and a PVC can request a PV of this class by specifying the same class
  nfs:
    path: /nfs/data                         # the directory on the NFS server that backs this PV
    server: 10.10.0.11

1. accessModes specifies the access mode as ReadWriteOnce. The supported access modes are:

  ReadWriteOnce - the PV can be mounted by a single node in read-write mode.
  ReadOnlyMany - the PV can be mounted by multiple nodes in read-only mode.
  ReadWriteMany - the PV can be mounted by multiple nodes in read-write mode.

2. persistentVolumeReclaimPolicy specifies the reclaim policy for the PV as Recycle. The supported policies are:
  Retain - the administrator must reclaim the volume manually.
  Recycle - clears the data in the PV, equivalent to running rm -rf /thevolume/*.
  Delete - deletes the corresponding storage resource on the Storage Provider, such as AWS EBS, GCE PD, Azure Disk, OpenStack Cinder Volume, and so on.

 

Create the PV:

# kubectl apply -f pv1.yml 
persistentvolume/yh-pv1 created

 

View pv:

# kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
yh-pv1   1Gi        RWO            Recycle          Available           nfs                     17m

  

The STATUS value Available indicates that yh-pv1 is ready to be claimed by a PVC.

6.5.3. Obtaining a persistent volume through a persistent volume claim

 

Next, create the PVC; the configuration file pvc1.yml is as follows:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: yh-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs

  

A PVC is very simple: you only need to specify the capacity, access mode, and class of the PV you want.

Run the command to create the PVC:

# kubectl apply -f pvc1.yml 
persistentvolumeclaim/yh-pvc created

View the PVC:

# kubectl get pvc
NAME     STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
yh-pvc   Bound    yh-pv1   1Gi        RWO            nfs            64s

 

From the output of kubectl get pvc and kubectl get pv, you can see that yh-pvc is Bound to yh-pv1, so the claim succeeded.

 

# kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM            STORAGECLASS   REASON   AGE
yh-pv1   1Gi        RWO            Recycle          Bound    default/yh-pvc   nfs                     47m

  

6.5.4 Using a persistent volume claim in a pod

Now that the PV and PVC have been created, a pod can use the PVC directly.

The format is similar to an ordinary volume: in volumes, use persistentVolumeClaim to specify the claim (yh-pvc) that provides the volume.

Create mypod1 with a manifest such as the sketch below:
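The original manifest is not included here; this is a minimal sketch (the busybox image and the sleep command are assumptions; the mount path /mydata and the claim name yh-pvc follow the example above):

apiVersion: v1
kind: Pod
metadata:
  name: mypod1
spec:
  containers:
  - name: mypod1
    image: busybox
    command: ["/bin/sh", "-c", "sleep 30000"]   # keep the container running
    volumeMounts:
    - mountPath: /mydata          # the PVC is mounted here
      name: mydata
  volumes:
  - name: mydata
    persistentVolumeClaim:
      claimName: yh-pvc           # the claim created in 6.5.3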

As you can see, a file created in the Pod at /mydata/hello is saved to /nfsdata on the NFS server.

If you no longer need the PV, you can delete the PVC to reclaim the PV.

 

6.5.5. Reclaiming persistent volumes

When the PV is no longer needed, it can be reclaimed by deleting the PVC.

Before the PVC is deleted, the PV's status is Bound.

After the PVC is deleted, the PV is unbound and its status changes to Available; from this point it can be claimed by a new PVC.

The file under /nfsdata is deleted as well.

 

Because the PV's reclaim policy is set to Recycle, the data is cleared, but this may not be the result we want. If we want to preserve the data, we can set the policy to Retain.

Update the PV with kubectl apply:
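The original snippet is not shown; assuming the reclaim policy in pv1.yml is changed from Recycle to Retain, re-applying the file would look roughly like this:

# kubectl apply -f pv1.yml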

 

The reclaim policy has now become Retain. Verify the effect with the following steps:

 

① Re-create the PVC.

② Create a file hello in the volume.

③ After the PVC is deleted again, the PV's status changes to Released.

④ The data in the PV is intact.

Although the data in the PV has been retained, the PV's status remains Released and it cannot be claimed by another PVC. To reuse the storage resource, you can delete and re-create the PV. Deleting the PV only removes the API object; the data in the underlying storage is not removed.

 

The newly created PV's status is Available, so it can be claimed by a PVC.

PV also supports the Delete reclaim policy, which deletes the corresponding storage space on the Storage Provider. NFS PVs do not support Delete; Providers that do support Delete include AWS EBS, GCE PD, Azure Disk, OpenStack Cinder Volume, and so on.

 

6.6. Dynamic provisioning of persistent volumes

6.6.1. Defining the available storage types with StorageClass

In the previous examples we created the PV in advance and then claimed it with a PVC for use in a Pod; this approach is called static provisioning (Static Provision).

The counterpart is dynamic provisioning (Dynamic Provision): if no existing PV satisfies the PVC's conditions, a PV is created dynamically. Compared with static provisioning, dynamic provisioning has obvious advantages: there is no need to create PVs in advance, which reduces the administrator's workload and is more efficient.

Dynamic provisioning is implemented through StorageClass; a StorageClass defines how to create a PV. Two examples follow.

StorageClass standard
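The original definition is not reproduced here; a sketch consistent with the description below (the in-tree AWS EBS provisioner creating gp2 volumes) might look like:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs   # dynamically provisions AWS EBS volumes
parameters:
  type: gp2                          # gp2-type EBS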

StorageClass slow
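Likewise, a sketch of the slow class creating io1-type EBS (the iopsPerGB value and the explicit reclaimPolicy are assumptions, shown only to illustrate the fields discussed below):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
provisioner: kubernetes.io/aws-ebs
parameters:
  type: io1                          # io1-type EBS
  iopsPerGB: "10"                    # assumed value
reclaimPolicy: Retain                # overrides the default Delete policy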

Both StorageClasses dynamically create AWS EBS volumes; the difference is that standard creates gp2-type EBS, while slow creates io1-type EBS. For the parameters supported by the different EBS types, refer to the official AWS documentation.

StorageClass supports two kinds of reclaimPolicy, Delete and Retain; the default is Delete.

As before, when a PVC claims a PV it only needs to specify the StorageClass, capacity, and access mode, for example:
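The original example is missing; a sketch of such a claim (the claim name and size are assumptions; storageClassName references the standard class above):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: standard    # the StorageClass decides how the PV is created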

 

Besides AWS EBS, Kubernetes supports many other Provisioners for dynamically provisioning PVs; for the complete list see https://kubernetes.io/docs/concepts/storage/storage-classes/#provisioner

 

6.6.2. PV and PVC in practice: persistent storage for MySQL

The following shows how to provide persistent storage for a MySQL database. The steps are:

  1. Create the PV and PVC.

  2. Deploy MySQL.

  3. Add data to MySQL.

  4. Simulate a node failure; Kubernetes automatically migrates MySQL to another node.

  5. Verify data consistency.

 

First create the PV and PVC; the configuration is as follows:

mysql-pv.yml
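The file contents are missing from the original; this is a sketch consistent with the rest of the example (the NFS server address, export path, and capacity are assumptions; Retain is chosen so the data survives reclamation):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 1Gi
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: /nfsdata/mysql-pv      # assumed directory on the NFS server
    server: 10.10.0.11           # assumed NFS server address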

 

mysql-pvc.yml
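Also missing from the original; a matching claim sketch:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs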

Create mysql-pv and mysql-pvc:
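The commands are not shown in the original; creating both objects would look roughly like:

# kubectl apply -f mysql-pv.yml
# kubectl apply -f mysql-pvc.yml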

 

Next, deploy MySQL; the configuration file is as follows:
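The deployment manifest is missing from the original; the sketch below is consistent with the surrounding text (the image mysql:5.6 and the root password "password" are taken from the client command further down; the Service exposes the deployment under the name mysql, and the data directory /var/lib/mysql is mounted from mysql-pvc):

apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - port: 3306
  selector:
    app: mysql
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.6
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - containerPort: 3306
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql          # MySQL data directory backed by the PVC
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pvc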

 

The PVC mysql-pvc, which is Bound to the PV mysql-pv, is mounted at MySQL's data directory /var/lib/mysql.

MySQL is deployed on k8s-node2. Next, access the Service mysql from a client:

kubectl run -it --rm --image=mysql:5.6 --restart=Never mysql-client -- mysql -h mysql -ppassword

 

Update the database:

① Switch to the database mysql.

② Create a table my_id.

③ Insert a row of data.

④ Confirm the data has been written (see the SQL sketch below).
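A sketch of the corresponding SQL (the table name my_id comes from the text; the column and the inserted value are assumptions):

mysql> use mysql;
mysql> create table my_id (id int(4));
mysql> insert into my_id values (111);
mysql> select * from my_id;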

Shut down k8s-node2 to simulate a node failure.

 

Verify the consistency of the data:

Since node2 is down, node1 takes over the task.

Run the kubectl run command again to open a client in the pod, now on node1, and check whether the data is still there (see the sketch below).
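Not shown in the original; this reuses the same client command from above and queries the table from the assumed SQL sketch:

kubectl run -it --rm --image=mysql:5.6 --restart=Never mysql-client -- mysql -h mysql -ppassword
mysql> select * from my_id;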

 

The MySQL service has recovered and the data is intact.


Origin www.cnblogs.com/yaohong/p/11489164.html