Volume types in Kubernetes storage

1. Volumes configuration management

(1) Files in a container are stored on disk only temporarily, which causes problems for non-trivial applications running in containers. First, when a container crashes, the kubelet restarts it, but the files in the container are lost because the container is rebuilt in a clean state. Second, when multiple containers run in the same Pod, they often need to share files. Kubernetes abstracts the Volume object to solve both problems.
(2) A Kubernetes volume has an explicit lifecycle, the same as the Pod that encloses it. A volume therefore outlives any individual container running in the Pod, and its data is preserved across container restarts. Of course, when a Pod ceases to exist, the volume ceases to exist as well. Perhaps more importantly, Kubernetes supports many types of volumes, and a Pod can use any number of volumes at the same time.
(3) Volumes cannot be mounted inside other volumes, nor can they have hard links to other volumes. Each container in the Pod must independently specify where each volume is mounted.
(4) Volume types supported by Kubernetes:
1.emptyDir volume
(1) An emptyDir volume is created when a Pod is first assigned to a node, and it exists for as long as that Pod runs on the node. As the name indicates, the volume is initially empty. The containers in the Pod may mount the emptyDir volume at the same or different paths, but they can all read and write the same files in it. When the Pod is removed from the node for any reason, the data in the emptyDir volume is deleted permanently.
(2) Use cases for emptyDir:
scratch space, such as for a disk-based merge sort;
checkpoints for long-running computations, so a task can easily resume from its state before a crash;
holding files that a content-manager container fetches while a web-server container serves them.
(3) Creation and use

cat emptydir.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: vol1
spec:
  containers:
  - image: busyboxplus
    name: vm1
    stdin: true
    tty: true
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  - name: vm2
    image: myapp:v1
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir:
      medium: Memory
      sizeLimit: 100Mi

kubectl apply -f emptydir.yaml
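Assuming the Pod above is running, the shared mount and the sizeLimit eviction can be observed roughly like this (illustrative commands; they require a live cluster, and the file names are arbitrary):

```shell
# Write through vm1, read through vm2: both containers see the same emptyDir volume
kubectl exec vol1 -c vm1 -- sh -c 'echo hello > /cache/index.html'
kubectl exec vol1 -c vm2 -- cat /usr/share/nginx/html/index.html

# Exceed the 100Mi sizeLimit; the kubelet evicts the Pod on one of its next checks
kubectl exec vol1 -c vm1 -- dd if=/dev/zero of=/cache/bigfile bs=1M count=200
kubectl get pod vol1 -w   # watch for the Evicted status
```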

It can be seen that once the files exceed sizeLimit, the Pod is evicted by the kubelet after a short delay (1-2 minutes). The eviction is not immediate because the kubelet checks periodically, so there is a time lag.
(4) Disadvantages of emptyDir:
users cannot be stopped from over-using memory in time; although the kubelet evicts the Pod within 1-2 minutes, the node is at risk during that window;
it affects Kubernetes scheduling, because a memory-backed emptyDir is not counted against the node's resources, so a Pod can "secretly" use node memory that the scheduler does not know about;
the user cannot perceive in time that the memory is unavailable.
2. hostPath volume
(1) A hostPath volume mounts a file or directory from the host node's filesystem into your Pod. Most Pods do not need this, but it provides a powerful escape hatch for some applications.
(2) Some uses of hostPath:
running a container that needs access to Docker internals: mount /var/lib/docker;
running cAdvisor in a container: mount /sys with a hostPath;
letting a Pod specify whether a given hostPath should exist before the Pod runs, whether it should be created, and in what form it should exist.
In addition to the required path attribute, the user can optionally specify a type for a hostPath volume.
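The supported values for the hostPath type field (from the Kubernetes documentation) can be summarized in a manifest fragment; the volume name and path below are placeholders:

```yaml
# hostPath "type" values:
#   ""                - (default) no check is performed before mounting
#   DirectoryOrCreate - if nothing exists at the path, create an empty directory (0755)
#   Directory         - a directory must already exist at the path
#   FileOrCreate      - if nothing exists at the path, create an empty file (0644)
#   File              - a file must already exist at the path
#   Socket            - a UNIX socket must exist at the path
#   CharDevice        - a character device must exist at the path
#   BlockDevice       - a block device must exist at the path
volumes:
- name: example-volume     # placeholder name
  hostPath:
    path: /data            # placeholder path on the host
    type: Directory
```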
(3) Be careful when using this type of volume, because:
multiple Pods with identical configuration (for example, created from a podTemplate) may behave differently on different nodes, because the files on each node differ;
when Kubernetes adds resource-aware scheduling as planned, that scheduling will not be able to account for the resources used by a hostPath;
files or directories created on the underlying host are writable only by root, so you must either run the process as root in a privileged container or modify the file permissions on the host so the container can write to the hostPath volume.

apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: myapp:v1
    name: vm1
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /webdata
      type: DirectoryOrCreate

# The directory is created automatically on the node where the Pod is scheduled

3. nfs volume
An nfs volume mounts an NFS (Network File System) share into your Pod. Unlike emptyDir, which is erased when the Pod is deleted, the contents of an nfs volume are preserved when the Pod is deleted; the volume is merely unmounted. This means an nfs volume can be pre-populated with data, and that data can be shared between Pods.

# server1 acts as the NFS share host
yum install nfs-utils -y
vim /etc/exports
/nfsdata        *(rw,no_root_squash)
systemctl enable --now rpcbind
systemctl enable --now nfs
[root@server1 harbor]# showmount -e
Export list for server1:
/nfsdata *
vim nfs.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nfs-pd
spec:
  containers:
  - image: myapp:v1
    name: vm1
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: test-volume
  volumes:
  - name: test-volume
    nfs:
      server: 172.25.2.1
      path: /nfsdata
kubectl apply -f nfs.yaml
yum install nfs-utils -y  # the NFS client tools must also be installed on node server3

4. Persistent volume

(1) Static supply

# Create NFS PV volumes. A PersistentVolume (PV) is a piece of storage in the cluster; three PVs are created here with different access modes.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv1
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce  # the volume can be mounted read-write by a single node
  persistentVolumeReclaimPolicy: Recycle  # reclaim policy; currently only NFS and HostPath support Recycle, while AWS EBS, GCE PD, Azure Disk, and OpenStack Cinder volumes support Delete
  storageClassName: nfs
  nfs:
    path: /nfsdata/pv1
    server: 172.25.2.1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv2
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany  # read-write by many nodes
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfs
  nfs:
    path: /nfsdata/pv2
    server: 172.25.2.1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv3
spec:
  capacity:
    storage: 20Gi
  volumeMode: Filesystem
  accessModes:
    - ReadOnlyMany  # read-only by many nodes
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfs
  nfs:
    path: /nfsdata/pv3
    server: 172.25.2.1

kubectl apply -f pv.yml

Status:
Available: an idle resource, not yet bound to a PVC.
Bound: bound to a PVC.
Released: the PVC has been deleted, but the PV has not yet been reclaimed by the cluster.
Failed: automatic reclamation of the PV has failed.
The command line can display the name of the PVC bound to each PV.

# PVC and Pod. A PersistentVolumeClaim (PVC) expresses a user's request for storage; it lets a Pod mount a PV.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc1
spec:
  storageClassName: nfs
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc2
spec:
  storageClassName: nfs
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi

---
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: myapp:v1
    name: nginx
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: nfs-pv
  volumes:
  - name: nfs-pv
    persistentVolumeClaim:
      claimName: pvc1

---
apiVersion: v1
kind: Pod
metadata:
  name: test-pd-2
spec:
  containers:
  - image: myapp:v1
    name: nginx
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: nfs-pv-2
  volumes:
  - name: nfs-pv-2
    persistentVolumeClaim:
      claimName: pvc2

(2) Dynamic provisioning
The NFS Client Provisioner is an automatic provisioner that uses NFS as storage and automatically creates PVs for matching PVCs. It does not provide NFS storage itself; it requires an external NFS server.

On the NFS server, each PV is provisioned as a directory named ${namespace}-${pvcName}-${pvName}.
When a PV is reclaimed, the directory is renamed to archived-${namespace}-${pvcName}-${pvName} (on the NFS server).
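The naming convention above can be sketched in shell; the namespace, PVC, and PV names below are hypothetical:

```shell
# Hypothetical names for a bound claim
ns="default"; pvc="test-claim"; pv="pvc-9f3c"

# Directory the provisioner creates on the NFS server for the PV
dir="${ns}-${pvc}-${pv}"

# Directory name after the PVC is deleted (when archiving is enabled)
archived="archived-${ns}-${pvc}-${pv}"

echo "$dir"
echo "$archived"
```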

NFS dynamic provisioner source code

docker pull heegor/nfs-subdir-external-provisioner:v4.0.0  # then push it to the local registry

In the reference files, rbac.yaml configures authorization, deployment.yaml deploys the NFS Client Provisioner, and class.yaml creates the NFS StorageClass. Modify the image location, NFS server address, and archive settings to match your environment. The provisioner can also be deployed in another namespace to make its Pods easier to manage.
a. Here I merged the three files above into a single deploy.yaml and installed the dynamic provisioner:
kubectl apply -f deploy.yaml

# b. Create the PVC
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi


# c. Use the volume in a Pod
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: myapp:v1
    volumeMounts:
      - name: nfs-pvc
        mountPath: "/usr/share/nginx/html"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim

On the NFS server, the directory for the PV is created automatically.

# d. Dynamically create multiple volumes
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim-2
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 5Gi
---
kind: Pod
apiVersion: v1
metadata:
  name: test-pod-2
spec:
  containers:
  - name: test-pod-2
    image: myapp:v1
    volumeMounts:
      - name: nfs-pvc
        mountPath: "/usr/share/nginx/html"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim-2

5. StorageClass resource
A. A StorageClass provides a way to describe classes of storage; different classes may map to different quality-of-service levels, backup policies, or other policies.
B. Each StorageClass contains the provisioner, parameters, and reclaimPolicy fields, which are used when the StorageClass dynamically provisions a PersistentVolume.
C. StorageClass attribute
Provisioner (storage provisioner): decides which volume plugin is used to provision PVs; this field is required. You can specify an internal provisioner or an external one. The code for external provisioners lives at kubernetes-incubator/external-storage, which includes NFS and Ceph.
Reclaim Policy: the reclaimPolicy field specifies the reclaim policy of the PersistentVolumes created through the class. Reclaim policies are Delete or Retain; if not specified, the default is Delete.
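Putting these fields together, a sketch of the class.yaml used above might look like this; the provisioner name is the upstream default for nfs-subdir-external-provisioner and must match the PROVISIONER_NAME set in deployment.yaml, so adjust it to your deployment:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner  # must match deployment.yaml
parameters:
  archiveOnDelete: "true"  # rename the directory to archived-* instead of deleting it
reclaimPolicy: Delete       # Delete is also the default when omitted
```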
D. The default StorageClass is used to dynamically provision storage for PersistentVolumeClaims that do not request any specific class (there can be only one default StorageClass). If there is no default StorageClass and a PVC does not set storageClassName, the PVC can only bind to PVs whose storageClassName is also "".
kubectl patch storageclass managed-nfs-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

Origin blog.csdn.net/qq_49564346/article/details/114030088