Docker (20) -- Kubernetes storage -- Volume configuration management

1. Introduction

  • On-disk files in a container are ephemeral, which presents problems for non-trivial applications running in containers. First, when a container crashes, the kubelet restarts it, but the container is rebuilt in a clean state and the files inside it are lost. Second, when multiple containers run in the same Pod, they often need to share files. Kubernetes abstracts the Volume object to solve both problems.

  • A Kubernetes volume has a well-defined lifecycle, the same as that of the Pod that encloses it. Consequently, a volume outlives any individual container running in the Pod, and its data is preserved across container restarts. Of course, when the Pod ceases to exist, the volume ceases to exist as well. Perhaps more importantly, Kubernetes supports many types of volumes, and a Pod can use any number of them at the same time.

  • Volumes cannot be mounted inside other volumes, nor can they hold hard links to other volumes. Each container in the Pod must independently specify where it mounts each volume.
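As a minimal sketch of that last point (not from the original article; the Pod name, image, and paths are illustrative), two containers in one Pod each declare their own mount point for the same shared volume:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-demo            # hypothetical name
spec:
  containers:
  - name: writer
    image: busybox             # illustrative image
    command: ["sh", "-c", "echo hello > /data/msg && sleep 3600"]
    volumeMounts:
    - mountPath: /data         # each container chooses its own mountPath
      name: shared
  - name: reader
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - mountPath: /input        # same volume, mounted at a different path
      name: shared
  volumes:
  - name: shared
    emptyDir: {}
```

Both containers see the same files: what `writer` puts in `/data` appears in `reader`'s `/input`.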

  • Kubernetes supports the following volume types:
    awsElasticBlockStore, azureDisk, azureFile, cephfs, cinder, configMap, csi,
    downwardAPI, emptyDir, fc (fibre channel), flexVolume, flocker,
    gcePersistentDisk, gitRepo (deprecated), glusterfs, hostPath, iscsi, local,
    nfs, persistentVolumeClaim, projected, portworxVolume, quobyte, rbd,
    scaleIO, secret, storageos, vsphereVolume

    See the official Kubernetes documentation for details on each type.

2. emptyDir volume

2.1 Introduction

  • When a Pod is assigned to a node, Kubernetes first creates the emptyDir volume, and the volume exists as long as the Pod runs on that node. As its name indicates, the volume is initially empty. The containers in the Pod may mount the emptyDir volume at the same or different paths, but they all read and write the same files in it. When the Pod is removed from the node for any reason, the data in the emptyDir volume is deleted permanently.

  • Typical uses of emptyDir:
    scratch space, such as for a disk-based merge sort;
    checkpointing a long computation so it can resume from the state before a crash;
    holding files that a content-manager container fetches while a web-server container serves them.

  • By default, emptyDir volumes are stored on whatever medium backs the node: disk, SSD, or network storage, depending on your environment. However, you can set the emptyDir.medium field to "Memory" to tell Kubernetes to mount a tmpfs (RAM-backed file system) for you instead. tmpfs is very fast, but note that unlike disk, tmpfs is cleared when the node reboots, and every file you write counts toward the container's memory consumption and is therefore subject to the container's memory limit.

  • If the files written exceed sizeLimit, the Pod is evicted by the kubelet after a short delay (1-2 minutes). The eviction is not immediate because the kubelet checks volume usage periodically, which introduces this time lag.

  • Disadvantages of emptyDir:
    a user's memory use cannot be stopped promptly; although the kubelet evicts the Pod within 1-2 minutes, the node is at risk during that window;
    it distorts Kubernetes scheduling, because a memory-backed emptyDir is not counted as a node resource, so a Pod can "secretly" consume node memory that the scheduler knows nothing about;
    the user cannot detect in time that memory has become unavailable.
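Because writes to a memory-backed emptyDir count against the container's memory, one mitigation (a sketch, not from the original article; the name, image, and sizes are illustrative) is to pair emptyDir.sizeLimit with an explicit container memory limit, so the scheduler at least accounts for the memory the tmpfs can consume:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: memvol-demo            # hypothetical name
spec:
  containers:
  - name: app
    image: busybox             # illustrative image
    command: ["sleep", "3600"]
    resources:
      limits:
        memory: 256Mi          # tmpfs writes count against this limit
    volumeMounts:
    - mountPath: /scratch
      name: mem-volume
  volumes:
  - name: mem-volume
    emptyDir:
      medium: Memory
      sizeLimit: 100Mi         # kubelet evicts the Pod if usage exceeds this
```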


2.2 Example


2.2.1 Normal use

[root@server2 ~]# kubectl get pod
NAME    READY   STATUS    RESTARTS   AGE
mypod   1/1     Running   1          15h
[root@server2 ~]# kubectl  delete pod mypod --force    ## delete the old pod to keep the lab environment clean

[root@server2 ~]# mkdir volumes
[root@server2 ~]# cd volumes/     ## create the working directory and enter it
[root@server2 volumes]# vim emptydir.yaml
[root@server2 volumes]# cat emptydir.yaml    ## write the emptyDir configuration file
apiVersion: v1
kind: Pod
metadata:
  name: vol1
spec:
  containers:
  - image: busyboxplus
    name: vm1
    stdin: true
    tty: true
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  - name: vm2
    image: myapp:v1
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir:
      medium: Memory
      sizeLimit: 100Mi

[root@server2 volumes]# kubectl apply -f emptydir.yaml    ## apply the manifest
[root@server2 volumes]# kubectl get pod   
NAME   READY   STATUS    RESTARTS   AGE
vol1   2/2     Running   0          7s
[root@server2 volumes]# kubectl describe pod vol1    ## view the mount details

[root@server2 volumes]# kubectl attach vol1 -c vm1 -it    ## write data into the empty volume
/ # cd cache/
/cache # echo www.westos.org > index.html
/cache # curl localhost
www.westos.org
/cache # Session ended, resume using 'kubectl attach vol1 -c vm1 -i -t' command when the pod is running
[root@server2 volumes]# kubectl  get pod -o wide
NAME   READY   STATUS    RESTARTS   AGE     IP               NODE      NOMINATED NODE   READINESS GATES
vol1   2/2     Running   1          3m49s   10.244.141.205   server3   <none>           <none>
[root@server2 volumes]# curl 10.244.141.205   ##访问信息
www.westos.org


2.2.2 File exceeds sizeLimit

[root@server2 volumes]# kubectl attach vol1 -c vm1 -it 
/ # cd cache/
/cache # ls
index.html
/cache # dd if=/dev/zero of=bigfile bs=1M count=200    ## write a file larger than the memory limit
200+0 records in
200+0 records out

[root@server2 volumes]# kubectl  get pod   ## check whether the pod has been evicted
NAME   READY   STATUS    RESTARTS   AGE 
vol1   0/2     Evicted   0          7m34s


3. hostPath volume

3.1 Introduction

  • A hostPath volume mounts a file or directory from the host node's filesystem into your Pod. This is not something most Pods need, but it provides a powerful escape hatch for some applications.

  • Some uses of hostPath:
    running a container that needs access to Docker internals, mounting /var/lib/docker;
    running cAdvisor (a monitoring agent) in a container, mounting /sys via hostPath;
    letting a Pod specify whether a given hostPath must exist before the Pod runs, whether it should be created, and in what form it should exist.

  • In addition to the required path attribute, you can optionally specify a type for a hostPath volume. The supported values of type are:
    "" (empty string, the default): no checks are performed before mounting the volume;
    DirectoryOrCreate: if nothing exists at the given path, an empty directory is created there with permissions 0755;
    Directory: a directory must exist at the given path;
    FileOrCreate: if nothing exists at the given path, an empty file is created there with permissions 0644;
    File: a file must exist at the given path;
    Socket: a UNIX socket must exist at the given path;
    CharDevice: a character device must exist at the given path;
    BlockDevice: a block device must exist at the given path.

  • Be careful when using this type of volume, because:
    Pods with identical configuration (for example, created from a podTemplate) may behave differently on different nodes, since the files on each node differ (e.g. server3 reads file1 under its docker directory while server4 reads file2);
    when Kubernetes adds resource-aware scheduling as planned, that scheduling will not be able to account for resources used through hostPath (deleting a Pod does not affect the content under the hostPath, which is why it is called an escape hatch);
    files or directories created on the underlying host are writable only by root, so you must either run the process as root in a privileged container or adjust file permissions on the host so the container can write to the hostPath volume.
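A sketch of the privileged-container approach from the last point (not from the original article; the Pod name and image are hypothetical, and the host path should be adjusted to your environment):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-root-demo     # hypothetical name
spec:
  containers:
  - name: writer
    image: busybox             # illustrative image
    command: ["sh", "-c", "echo ok > /host/out && sleep 3600"]
    securityContext:
      privileged: true         # run with host-level privileges so root-owned paths are writable
    volumeMounts:
    - mountPath: /host
      name: host-volume
  volumes:
  - name: host-volume
    hostPath:
      path: /webdata           # root-owned directory on the node
      type: DirectoryOrCreate
```

Granting privileged is a broad permission; on a real cluster you would prefer fixing ownership or permissions on the host directory instead.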

3.2 Example

[root@server2 volumes]# kubectl  delete pod vol1   ## delete the pod from the previous experiment

[root@server2 volumes]# cat hostpath.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: myapp:v1
    name: vm1
    volumeMounts:
    - mountPath: /usr/share/nginx/html    ## the host's /webdata is mounted at this path
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /webdata
      type: DirectoryOrCreate    ## create the directory if it does not exist
[root@server2 volumes]# kubectl apply -f hostpath.yaml    ## apply the manifest
[root@server2 volumes]# kubectl  get pod       ## view the pod
[root@server2 volumes]# kubectl  get pod -o wide    ## check which node the pod was scheduled to; here it is server3
NAME      READY   STATUS    RESTARTS   AGE   IP               NODE      NOMINATED NODE   READINESS GATES
test-pd   1/1     Running   0          56s   10.244.141.206   server3   <none> 

## check that /webdata exists on server3, and write a test file
[root@server3 ~]# cd /webdata/
[root@server3 webdata]# echo www.westos.org > index.html
[root@server3 webdata]# cat index.html 
www.westos.org

## test
[root@server2 volumes]# curl 10.244.141.206
www.westos.org
[root@server2 volumes]# kubectl delete -f hostpath.yaml  ## after deletion, /webdata on server3 still exists, which is why hostPath is called an escape hatch


4. NFS

## 1. Write the configuration file
[root@server2 volumes]# vim nfs.yaml 
[root@server2 volumes]# cat nfs.yaml   
apiVersion: v1
kind: Pod
metadata:
  name: nfs-pd
spec:
  containers:
  - image: myapp:v1
    name: vm1
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: test-volume
  volumes:
  - name: test-volume
    nfs:
      server: 172.25.13.1
      path: /nfsdata

## 2. Configure nfs on host 172.25.13.1
[root@server1 ~]# mkdir /nfsdata
[root@server1 ~]# cd /nfsdata/
[root@server1 harbor]# yum install nfs-utils -y   ## server1 acts as the nfs server, so install the nfs service
[root@server1 harbor]# vim /etc/exports
[root@server1 harbor]# cat /etc/exports
/nfsdata	*(rw,no_root_squash)
[root@server1 harbor]# systemctl enable --now nfs
Created symlink from /etc/systemd/system/multi-user.target.wants/nfs-server.service to /usr/lib/systemd/system/nfs-server.service.
[root@server1 harbor]# showmount -e
Export list for server1:
/nfsdata *


## 3. Check which node the pod is assigned to, and install nfs on that node
[root@server2 volumes]# kubectl apply -f nfs.yaml 
[root@server2 volumes]# kubectl get pod -o wide 
NAME     READY   STATUS              RESTARTS   AGE   IP       NODE      NOMINATED NODE   READINESS GATES
nfs-pd   0/1     ContainerCreating   0          17s   <none>   server3   <none> 
[root@server3 webdata]# yum install nfs-utils -y 

[root@server2 volumes]# kubectl get pod -o wide    # runs successfully after the installation
NAME     READY   STATUS    RESTARTS   AGE     IP               NODE      NOMINATED NODE   READINESS GATES
nfs-pd   1/1     Running   0          4m12s   10.244.141.207   server3   <none>           <none>

## 4. Test
[root@server1 nfsdata]# echo www.westos.org > index.html
[root@server2 volumes]# curl 10.244.141.207
www.westos.org



Origin: blog.csdn.net/qwerty1372431588/article/details/114056301