k8s storage volumes and PVC

1. Storage Volumes Overview

  Because a pod has a life cycle, once the pod restarts its data is gone, so we need a way to persist data in k8s. A storage volume does not belong to the container but to the pod, which means containers in the same pod can share a storage volume. The storage volume can be a directory on the host, or an external storage device mounted onto the host.

Storage volume types:

emptyDir storage volume: deleted when the pod is removed; generally used as temporary scratch space or a cache;

hostPath storage volume: a directory on the host is used as the storage volume; this is not data persistence in the true sense, since the data stays on a single node;

SAN (iSCSI) or NAS (NFS, CIFS): network storage devices;

Distributed storage: Ceph, GlusterFS, CephFS, RBD;

Cloud storage: Amazon EBS, Azure Disk, Alibaba Cloud; critical data must be backed up off-site.

a. emptyDir storage volumes

vim volume-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
  namespace: default
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v2
    imagePullPolicy: IfNotPresent
    ports:
    - name: http
      containerPort: 80
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
  - name: busybox
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: html
      mountPath: /data/
    command: ["/bin/sh"]
    args: ["-c","while true;do echo $(date) >> /data/index.html; sleep 10;done"]
  volumes:
  - name: html
    emptyDir: {}

volumeMounts: specifies which storage volume is mounted into the container, and at which path.
emptyDir: {} means no limits are set for its two fields (medium and sizeLimit).
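
To verify the shared emptyDir (a quick sketch; the manifest file name is the one assumed above), apply it and read the file that busybox keeps appending to:
kubectl apply -f volume-demo.yaml
kubectl get pods -o wide # note the pod IP
kubectl exec -it pod-demo -c myapp -- cat /usr/share/nginx/html/index.html
curl <pod-ip> # run on a node; returns the accumulated timestamps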

b. hostPath: use a directory on the host as the storage volume

kubectl explain pods.spec.volumes.hostPath.type
DirectoryOrCreate: the mount path is a directory; it is created if it does not exist;
Directory: the directory must already exist on the host, otherwise an error is raised;
FileOrCreate: the mount path is a file; it is created if it does not exist;
File: the file to mount must already exist, otherwise an error is raised.

cat pod-hostpath-vol.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: pod-vol-hostpath
  namespace: default
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v2
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html/
  volumes:
  - name: html
    hostPath:
      path: /data/pod/volume1
      type: DirectoryOrCreate

hostPath: a directory on the host.
The name under volumes can be anything; it is the name of the storage volume. The name given under volumeMounts above must match it exactly, which is how the binding between the two is established.
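
To verify (a sketch): apply the manifest, see which node the pod landed on, and write a test page into the hostPath directory on that node:
kubectl apply -f pod-hostpath-vol.yaml
kubectl get pods -o wide # shows which node pod-vol-hostpath runs on
echo "node test page" > /data/pod/volume1/index.html # run this on that node
curl <pod-ip> # returns "node test page"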

c.nfs do shared storage

For convenience, use the master node as the NFS storage server. Run on all three nodes:
yum -y install nfs-utils # then start nfs on the master only
mkdir /data/volumes
cat /etc/exports
/data/volumes 10.0.0.0/16(rw,no_root_squash)
systemctl start nfs
Test the mount on node1 and node2:
mount -t nfs k8s-master:/data/volumes /mnt
cat pod-vol-nfs.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-vol-nfs
  namespace: default
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v2
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html/
  volumes:
  - name: html
    nfs:
      path: /data/volumes
      server: k8s-master

kubectl apply -f pod-vol-nfs.yaml
Now, no matter which node the pod is scheduled on, the data is not stored on that node; it lives on the NFS host.
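
To verify (a sketch): create an index page inside the exported directory on the master, then curl the pod; the same data survives deleting and recreating the pod, even on a different node:
echo "nfs test page" > /data/volumes/index.html # on the master
kubectl get pods -o wide # note the pod IP
curl <pod-ip> # returns "nfs test page"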

d. PVC and PV

  With a PVC, a user only needs to mount a storage volume into the container, without caring about the underlying storage technology. The relationship between PVC and PV is like that between pod and node: the former consumes the resources of the latter. A PVC requests storage resources from a PV by specifying a size and an access mode.

When defining a pod, we only need to state how large a storage volume we want. The pod's storage volume binds to a PVC in the current namespace, the PVC in turn binds to a PV, and the PV is the actual space on the storage device.

PVC and PV have a one-to-one relationship: once a PV is bound to a PVC, that PV cannot be bound to another PVC. One PVC, however, can be used by multiple pods. PVCs are namespaced, while PVs are cluster-level.
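
You can see this scoping difference directly (a quick check, on a reasonably recent kubectl):
kubectl api-resources | grep -i persistentvolume # NAMESPACED is false for PV, true for PVC
kubectl get pv # cluster-level, no namespace flag
kubectl get pvc -n default # PVCs are listed per namespace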

Use the master as the storage node and create the export directories:

cd /data/volumes && mkdir v{1,2,3,4,5}
cat  /etc/exports
/data/volumes/v1 10.0.0.0/16(rw,no_root_squash)
/data/volumes/v2 10.0.0.0/16(rw,no_root_squash)
/data/volumes/v3 10.0.0.0/16(rw,no_root_squash)
exportfs -arv
showmount -e
kubectl explain pv.spec.nfs

The accessModes are:
ReadWriteOnce: read-write, mountable by a single node; abbreviated RWO;
ReadOnlyMany: read-only, mountable by many nodes; abbreviated ROX;
ReadWriteMany: read-write, mountable by many nodes; abbreviated RWX.

# First, define the storage devices as PVs
cat pv-demo.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001 # do not add a namespace when defining a PV, because PVs are cluster-level
  labels:
    name: pv001
spec:
  nfs:
    path: /data/volumes/v1
    server: k8s-master
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity: # the amount of storage allocated to this volume
    storage: 3Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv002
  labels:
    name: pv002
spec:
  nfs:
    path: /data/volumes/v2
    server: k8s-master
  accessModes: ["ReadWriteOnce"]
  capacity:
    storage: 5Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv003
  labels:
    name: pv003
spec:
  nfs:
    path: /data/volumes/v3
    server: k8s-master
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 8Gi

kubectl apply -f pv-demo.yaml
kubectl get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   
pv001     3Gi        RWO,RWX        Retain           Available
pv002     5Gi        RWO            Retain           Available
pv003     8Gi        RWO,RWX        Retain           Available

Reclaim policy:

If data has been written into a PV through a PVC and the PVC is later deleted, what happens to the data in the PV?

Retain: when the PVC is deleted, the data in the PV is not deleted; it is retained;

Recycle: when the PVC is deleted, the data in the PV is deleted as well;

Delete: when the PVC is deleted, the PV itself is deleted too.
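
The policy is set per PV through spec.persistentVolumeReclaimPolicy (a quick sketch; pv002 is just one of the example PVs above):
kubectl explain pv.spec.persistentVolumeReclaimPolicy
kubectl patch pv pv002 -p '{"spec":{"persistentVolumeReclaimPolicy":"Recycle"}}'
kubectl get pv pv002 # the RECLAIM POLICY column now shows Recycle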

# Create the PVC manifest
kubectl explain pods.spec.volumes.persistentVolumeClaim
cat pod-vol-pvc.yaml 
apiVersion: v1
kind: PersistentVolumeClaim # abbreviated as PVC
metadata:
  name: mypvc
  namespace: default # the PVC must be in the same namespace as the pod
spec:
  accessModes: ["ReadWriteMany"] # must be a subset of the PV's access modes
  resources:
    requests:
      storage: 7Gi # request a PV of at least 7Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-vol-pvc
  namespace: default
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    volumeMounts:
    - name: html # the name of the volume to use
      mountPath: /usr/share/nginx/html/ # mount path
  volumes:
  - name: html
    persistentVolumeClaim:
      claimName: mypvc # which PVC to use
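
After applying, the claim should bind to pv003, the only PV above that satisfies both the 7Gi request and the ReadWriteMany mode (a quick check):
kubectl apply -f pod-vol-pvc.yaml
kubectl get pvc # mypvc shows STATUS Bound, VOLUME pv003
kubectl get pv # pv003 shows STATUS Bound, CLAIM default/mypvc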

So when a pod's storage volume type is PVC, the PVC named in the pod must be bound to a matching PV before it can be mounted into the pod. Since k8s 1.10, the underlying PV cannot be deleted manually while it is still in use.

 

Reference blog: http://blog.itpub.net/28916011/viewspace-2214804/

Origin www.cnblogs.com/fawaikuangtu123/p/11031171.html