[Cloud Native] Kubernetes storage (Volume)

Table of contents

1 Volume

2 types of volumes

3 How to use

4 common types

5 PV & PVC


1 Volume

Official website address: Volume | Kubernetes

Files in a container are stored on disk only temporarily, which causes problems for important applications running in containers. The first problem is that files are lost when a container crashes: the kubelet restarts the container, but it restarts in a clean state. The second problem arises when multiple containers running in the same Pod need to share files. The Kubernetes Volume abstraction solves both of these problems.

2 types of volumes

Kubernetes supports many types of volumes. A Pod can use any number of volume types simultaneously. Ephemeral volume types have the same lifetime as their Pod, while persistent volumes can outlive the Pod: Kubernetes destroys ephemeral volumes when the Pod no longer exists, but does not destroy persistent volumes. For any type of volume in a given Pod, data is preserved across container restarts.

The core of a volume is a directory that may contain data and that can be accessed by containers in a Pod. The different types of volumes used will determine how the directory is formed, what media is used to store the data, and what is stored in the directory. Commonly used volume types include configMap, emptyDir, local, nfs, secret, etc.

  • ConfigMap: configuration can be saved in a ConfigMap as key-value pairs and consumed in a Pod as files or environment variables. ConfigMap is intended for non-sensitive configuration information, such as application configuration files.

  • EmptyDir: an empty directory that can be used to store temporary data in a Pod. When the Pod is deleted, the directory is deleted with it.

  • Local: maps a directory or file on the node's local file system to a Volume in the Pod, which can be used to share files or data within the Pod.

  • NFS: mounts one or more NFS shared directories on the network into a Volume in the Pod, which can be used to share data among multiple Pods.

  • Secret: stores sensitive information and can be consumed in a Pod as files or environment variables. Secret is intended for sensitive information such as usernames, passwords, and certificates. (Note that Secret data is base64-encoded, not encrypted, by default.)
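As a quick illustration of the Secret type above, the sketch below creates a Secret and consumes it in a Pod both as files and as an environment variable. The names (db-credentials, secret-demo) and the key values are hypothetical, for illustration only:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials     # hypothetical name
type: Opaque
stringData:                # stringData accepts plain text; Kubernetes stores it base64-encoded
  username: admin
  password: s3cr3t
---
apiVersion: v1
kind: Pod
metadata:
  name: secret-demo
spec:
  containers:
    - name: demo
      image: busybox:1.28
      command: ["/bin/sh", "-c", "cat /secrets/username && sleep 3600"]
      env:
        - name: DB_PASSWORD          # key exposed as an environment variable
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: password
      volumeMounts:
        - name: secret-volume        # keys exposed as files under /secrets
          mountPath: /secrets
          readOnly: true
  volumes:
    - name: secret-volume
      secret:
        secretName: db-credentials
```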

3 How to use

When using a volume, set the volumes provided for the Pod in the .spec.volumes field, and declare where each volume is mounted inside the containers in the .spec.containers[*].volumeMounts field. The file system seen by processes in a container is composed of the initial contents of the container image plus the volumes mounted in the container (if any are defined). The root file system matches the contents of the container image; any write to this file system, if allowed, affects what subsequent processes in the container see when they access it.

apiVersion: v1
kind: Pod
metadata:
  name: configmap-pod
spec:
  containers:
    - name: test
      image: busybox:1.28
      volumeMounts:
        ..........
  volumes:
    ............
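One way the skeleton above might be completed is by mounting a ConfigMap as files; the ConfigMap name log-config is an assumption and must already exist in the same namespace:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: configmap-pod
spec:
  containers:
    - name: test
      image: busybox:1.28
      command: ["/bin/sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: config-vol
          mountPath: /etc/config   # the ConfigMap's keys appear here as files
  volumes:
    - name: config-vol
      configMap:
        name: log-config           # assumed ConfigMap
```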

4 common types

4.1 emptyDir

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-example
spec:
  containers:
    - name: writer
      image: busybox
      command: ["/bin/sh", "-c", "echo 'Hello World!' > /data/hello.txt ; sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
    - name: reader
      image: busybox
      command: ["/bin/sh", "-c", "cat /data/hello.txt ; sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
  volumes:
    - name: shared-data
      emptyDir: {}

Summary: emptyDir is a temporary directory created on the host. Its advantage is that it conveniently provides shared storage for the containers in a Pod without any extra configuration. It is not persistent: once the Pod is gone, the emptyDir is gone with it. This makes emptyDir particularly suitable for scenarios where the containers in a Pod need temporary shared storage, such as the producer-consumer example above.
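A variation worth knowing: emptyDir can be backed by memory (tmpfs) instead of the node's disk by setting medium: Memory. A minimal sketch, with illustrative pod and volume names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: memory-cache-example
spec:
  containers:
    - name: app
      image: busybox
      command: ["/bin/sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: cache
          mountPath: /cache
  volumes:
    - name: cache
      emptyDir:
        medium: Memory   # back the volume with tmpfs (RAM) instead of node disk
        sizeLimit: 64Mi  # optional cap; exceeding it can get the Pod evicted
```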

4.2 hostPath

apiVersion: v1
kind: Pod
metadata:
  name: busybox-hostpath
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["/bin/sh", "-c", "echo 'hello' > /data/data.txt && sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    hostPath:
      path: /data/hostpath

Summary: if the Pod is destroyed, the directory that hostPath points to is still preserved, so hostPath is more durable than emptyDir. However, once the host crashes, the hostPath becomes inaccessible. This approach also brings another problem: it increases the coupling between the Pod and the node.
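hostPath also supports an optional type field that checks or prepares the host path before mounting. For example, the volumes section of the example above could be written with DirectoryOrCreate so the directory is created on the host if it is missing:

```yaml
  volumes:
  - name: data
    hostPath:
      path: /data/hostpath
      type: DirectoryOrCreate   # create the directory on the host if it does not exist
```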

4.3 nfs

NFS (Network File System) is a network-based shared file storage system.
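Before Pods can mount NFS volumes, each node needs an NFS client installed and the NFS server must export the shared directory. For example, an /etc/exports entry on the server might look like the following; the path and the subnet are placeholders to adapt to your environment:

```
/path/to/nfs/share 192.168.0.0/16(rw,sync,no_subtree_check)
```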

apiVersion: v1
kind: Pod
metadata:
  name: nfs-test
spec:
  containers:
  - name: busybox
    image: busybox
    command: [ "/bin/sh", "-c", "while true; do sleep 3600; done" ]
    volumeMounts:
    - name: nfs-volume
      mountPath: /mnt/nfs
  volumes:
  - name: nfs-volume
    nfs:
      server: <NFS_SERVER_IP>
      path: /path/to/nfs/share

Summary: compared with emptyDir and hostPath, the biggest feature of this volume type is that it does not depend on the Kubernetes cluster itself. The volume's underlying infrastructure is managed by an independent storage system, separate from the Kubernetes cluster. Once data is persisted, it survives even if the entire Kubernetes cluster crashes. Of course, operating such a storage system is usually not a simple job, especially when there are high requirements for reliability, availability, and scalability.

5 PV & PVC

5.1 Questions

Volume provides a good data persistence solution, but it still has shortcomings in manageability. In the nfs example above, to use the Volume, the Pod must know the following in advance:

  • The current Volume type, and confirmation that the Volume has already been created.

  • The specific address information of the Volume.

However, Pods are usually maintained by application developers, while Volumes are usually maintained by storage system administrators. To obtain the above information, developers must either ask the administrator or become an administrator themselves. This brings about a management problem: the responsibilities of application developers and system administrators are coupled together. If the system size is small or for a development environment, this situation is acceptable. When the cluster size becomes large, especially for a production environment, considering efficiency and security, this becomes a problem that must be solved.

5.2 PV & PVC

The solution Kubernetes provides is PersistentVolume and PersistentVolumeClaim.

PersistentVolume (PV) is a storage space in an external storage system that is created and maintained by the administrator. Like Volume, PV is persistent and its life cycle is independent of Pod.

A PersistentVolumeClaim (PVC) is a claim for a PV, typically created and maintained by ordinary users. When storage needs to be allocated to a Pod, the user creates a PVC specifying the required capacity, access mode (such as read-only), and other information, and Kubernetes finds a PV that meets the conditions and provides it. With PersistentVolumeClaim, users only need to tell Kubernetes what storage they need, without caring about underlying details such as where the real space is allocated or how it is accessed. Those details of the storage provider are left to the administrator; only the administrator needs to care about creating PersistentVolumes.

5.3 Basic use

  • Create PV

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 1Gi # specify the capacity
  accessModes: # access modes
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: /{nfs-server directory}
    server: {nfs-server IP address}

  • accessModes: three access modes are supported:

    • ReadWriteOnce means the PV can be mounted in read-write mode by a single node.

      • Only one node can mount this PV in read-write mode: the PV can be mounted only by a Pod on that node, and that Pod can read from and write to it. Attempting to mount the PV on another node will fail.

    • ReadOnlyMany means that PV can be mounted to multiple nodes in read-only mode.

      • This PV can be mounted in read-only mode by multiple nodes, which means that this PV can be mounted on multiple nodes by multiple Pods.

    • ReadWriteMany means that PV can be mounted to multiple nodes in read-write mode.

      • This PV can be mounted by multiple nodes in read-write mode, which means that this PV can be mounted by multiple Pods on multiple nodes.

  • persistentVolumeReclaimPolicy: specifies the PV's reclaim policy; three policies are supported:

    • Retain: after the PVC is deleted, the PV and its data are retained; the data must be cleaned up manually.

    • Delete: after the PVC is deleted, the PV and its data are deleted automatically.

    • Recycle: after the PVC is deleted, the data in the PV is scrubbed so the PV can be reused. (Recycle is deprecated in current Kubernetes versions in favor of dynamic provisioning.)

      Note that persistentVolumeReclaimPolicy only applies to some PV types, such as NFS, HostPath, and iSCSI. For storage provided by some cloud platforms, such as AWS EBS and Azure Disk, the attribute does not apply, because the underlying provider handles PV reclamation automatically.

  • storageClassName: specifies the class of the PV as nfs. This is equivalent to assigning the PV a category; a PVC can specify a class to request a PV of the corresponding class.

  • Create PVC

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs # select the PV by class name
  #selector:  # alternatively, select the PV by label
  #  matchLabels:
  #    pv-name: nfs-pv

  • Use PVC

apiVersion: v1
kind: Pod
metadata:
  name: busybox-nfs
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo 'Hello NFS!' >> /data/index.html; sleep 1; done"]
    volumeMounts:
    - name: nfs-volume
      mountPath: /data
  volumes:
  - name: nfs-volume
    persistentVolumeClaim:
      claimName: nfs-pvc

5.4 Dynamic supply

In the previous example, we created the PV in advance, then requested it through a PVC and used it in a Pod. This approach is called static provisioning (Static Provision). Its counterpart is dynamic provisioning (Dynamic Provision): if no existing PV satisfies the PVC's conditions, a PV is created dynamically. Compared with static provisioning, dynamic provisioning has an obvious advantage: PVs do not need to be created in advance, which reduces the administrator's workload and improves efficiency. Dynamic provisioning is implemented through StorageClass. A StorageClass defines how to create a PV; note that every StorageClass has a provisioner that decides which volume plugin to use to provision PVs, and this field must be specified (see Storage Classes | Kubernetes). Let's take NFS as an example:

  • Define NFS Provider

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nfs-client-provisioner
      labels:
        app: nfs-client-provisioner
      # replace with namespace where provisioner is deployed
      namespace: kube-system
    spec:
      replicas: 1
      strategy:
        type: Recreate
      selector:
        matchLabels:
          app: nfs-client-provisioner
      template:
        metadata:
          labels:
            app: nfs-client-provisioner
        spec:
          serviceAccountName: nfs-client-provisioner
          containers:
            - name: nfs-client-provisioner
              image: chronolaw/nfs-subdir-external-provisioner:v4.0.2
              volumeMounts:
                - name: nfs-client-root
                  mountPath: /persistentvolumes
              env:
                - name: PROVISIONER_NAME
                  value: k8s-sigs.io/nfs-subdir-external-provisioner
                - name: NFS_SERVER
                  value: 10.15.0.25
                - name: NFS_PATH
                  value: /root/nfs/data
          volumes:
            - name: nfs-client-root
              nfs:
                server: 10.15.0.25
                path: /root/nfs/data
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: nfs-client-provisioner
      # replace with namespace where provisioner is deployed
      namespace: kube-system
    ---
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: nfs-client-provisioner-runner
    rules:
      - apiGroups: [""]
        resources: ["nodes"]
        verbs: ["get", "list", "watch"]
      - apiGroups: [""]
        resources: ["persistentvolumes"]
        verbs: ["get", "list", "watch", "create", "delete"]
      - apiGroups: [""]
        resources: ["persistentvolumeclaims"]
        verbs: ["get", "list", "watch", "update"]
      - apiGroups: ["storage.k8s.io"]
        resources: ["storageclasses"]
        verbs: ["get", "list", "watch"]
      - apiGroups: [""]
        resources: ["events"]
        verbs: ["create", "update", "patch"]
    ---
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: run-nfs-client-provisioner
    subjects:
      - kind: ServiceAccount
        name: nfs-client-provisioner
        # replace with namespace where provisioner is deployed
        namespace: kube-system
    roleRef:
      kind: ClusterRole
      name: nfs-client-provisioner-runner
      apiGroup: rbac.authorization.k8s.io
    ---
    kind: Role
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: leader-locking-nfs-client-provisioner
      # replace with namespace where provisioner is deployed
      namespace: kube-system
    rules:
      - apiGroups: [""]
        resources: ["endpoints"]
        verbs: ["get", "list", "watch", "create", "update", "patch"]
    ---
    kind: RoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: leader-locking-nfs-client-provisioner
      # replace with namespace where provisioner is deployed
      namespace: kube-system
    subjects:
      - kind: ServiceAccount
        name: nfs-client-provisioner
        # replace with namespace where provisioner is deployed
        namespace: kube-system
    roleRef:
      kind: Role
      name: leader-locking-nfs-client-provisioner
      apiGroup: rbac.authorization.k8s.io

  • Define StorageClass

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: mysql-nfs-sc
    provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
    parameters:
      onDelete: "retain"

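Besides the volumeClaimTemplates used in the StatefulSet example, any standalone PVC that references this StorageClass will also trigger dynamic provisioning. A minimal sketch, with a hypothetical claim name:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dynamic-pvc              # hypothetical name
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: mysql-nfs-sc # the StorageClass defined above
  resources:
    requests:
      storage: 1Gi
```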
  • Create dynamically using StorageClass

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: mysql
      labels:
        app: mysql
    spec:
      serviceName: mysql # headless Service name; guarantees a unique, stable network identity; must exist
      replicas: 1
      template:
        metadata:
          name: mysql
          labels:
            app: mysql
        spec:
          containers:
            - name: mysql
              image: mysql/mysql-server:8.0
              imagePullPolicy: IfNotPresent
              env:
                - name: MYSQL_ROOT_PASSWORD
                  value: root
              volumeMounts:
                - mountPath: /var/lib/mysql # directory the container writes its data to
                  name: data    # refer to the volume claim named data
              ports:
                - containerPort: 3306
          restartPolicy: Always
      volumeClaimTemplates:  # template for dynamically creating volume claims
        - metadata:
            name: data      # name of the volume claim
          spec:
            accessModes:    # access modes
              - ReadWriteMany
            storageClassName: mysql-nfs-sc # which StorageClass to use for storing the data
            resources:
              requests:
                storage: 2G
      selector:
        matchLabels:
          app: mysql


Origin blog.csdn.net/weixin_53678904/article/details/132308948