k8s practice | configmap&secret&pv&pvc


One, ConfigMap

When we deploy applications on Kubernetes, we often need to pass some configuration to them, such as database addresses, usernames, and passwords. There are many ways to do this, for example:

  • We can write it into the application's configuration file when packaging the image, but the disadvantage of this approach is obvious: every configuration change means rebuilding the image.
  • We can pass it in through env environment variables, but then modifying a value means editing the yaml file, and all the containers need to be restarted.
  • We can fetch it from a database or some fixed location when the application starts. That works, but first, it is troublesome to implement; second, what if the location of the configuration changes?

Of course there are other solutions, but each solution has its own problems.

Moreover, there is another problem: if one configuration needs to be used by multiple applications, none of the options above except the third allows it to be shared. In other words, to change the configuration I have to change it manually in each application. If we have 100 applications, we have to change 100 configurations, and so on...

Kubernetes provides a good solution to this problem: ConfigMap and Secret.

The ConfigMap feature was introduced in Kubernetes 1.2. Many applications read configuration information from configuration files, command-line parameters, or environment variables. The ConfigMap API provides a mechanism to inject configuration information into containers. A ConfigMap can hold a single property, an entire configuration file, or a JSON blob.

A ConfigMap object is a collection of configuration entries. Kubernetes injects this collection into the corresponding Pod object so that containers can use it at startup. There are generally two injection methods: mounting it as a storage volume, or passing it in as variables. A ConfigMap must exist before it is referenced; it is namespace-scoped, cannot be used across namespaces, and its contents are displayed in plain text. After a ConfigMap is modified, the corresponding pod must be restarted or must reload its configuration (applications that support hot reloading do not need a restart).
A Secret is similar to a ConfigMap, but its values are Base64-encoded (an encoding, not real encryption) and displayed as ciphertext; it is generally used for sensitive data. There are two common ways to create one: with kubectl create, or from a Secret configuration file.

1. Application scenarios

Application scenarios: an image is often the basis of an application, and around it there are many parameters or configurations that need to be customized, such as resource limits or log locations and levels. There may be many such settings, so they cannot all be baked into the image. Kubernetes provides ConfigMap to supply configuration files or environment variables to containers, decoupling configuration from the image itself so that containerized applications do not depend on environment-specific configuration.

We often need to configure our applications with some special data, such as keys, tokens, database connection addresses, or other private information. Your application may be configured through specific configuration files, such as a settings.py file, or it may read environment variables or certain flags in its business logic to handle configuration.

Pass parameters to the container:

Docker       Kubernetes   Description
ENTRYPOINT   command      The executable run in the container
CMD          args         The arguments passed to the executable

If you need to pass parameters to the container, you can use command and args, or environment variables, in the YAML file.

apiVersion: v1
kind: Pod
metadata:
  name: print-greeting
spec:
  containers:
  - name: env-print-demo
    image: hub.kaikeba.com/java12/bash:v1
    env:
    - name: GREETING
      value: "Warm greetings to"
    - name: HONORIFIC
      value: "The Most Honorable"
    - name: NAME
      value: "Kubernetes"
    command: ["echo"]
    args: ["$(GREETING) $(HONORIFIC) $(NAME)"]
    
# After creation, the command `echo Warm greetings to The Most Honorable Kubernetes` runs in the
# container; that is, the environment variable values are passed into the container.
# This can be seen in the pod logs:
kubectl logs print-greeting

2. Create a ConfigMap

2.1. Help document

[root@k8s-master-155-221 configmap]# kubectl create  configmap --help
......
Aliases:
configmap, cm  # cm can be used as a short alias

Examples:
  # Create a new configmap named my-config based on folder bar
  kubectl create configmap my-config --from-file=path/to/bar  # create from a directory: file names become keys, file contents become values
  
  # Create a new configmap named my-config with specified keys instead of file basenames on disk
  kubectl create configmap my-config --from-file=key1=/path/to/bar/file1.txt --from-file=key2=/path/to/bar/file2.txt # create from files: key1 is the key, the file content is the value
  
  # Create a new configmap named my-config with key1=config1 and key2=config2
  kubectl create configmap my-config --from-literal=key1=config1 --from-literal=key2=config2   # given directly on the command line: the key is key1 and the value is config1
  
  # Create a new configmap named my-config from the key=value pairs in the file
  kubectl create configmap my-config --from-file=path/to/bar   # create from a file: the file name is the key, the file content is the value
  
  # Create a new configmap named my-config from an env file
  kubectl create configmap my-config --from-env-file=path/to/bar.env

2.2. Create from a directory



# The source directory
ls /docs/user-guide/configmap
# create game.properties and ui.properties in it, as below
# game.properties
# game.properties
enemies=aliens
lives=3
enemies.cheat=true
enemies.cheat.level=noGoodRotten
secret.code.passphrase=UUDDLRLRBABAS
secret.code.allowed=true
secret.code.lives=30

# ui.properties
color.good=purple
color.bad=yellow
allow.textmode=true
how.nice.to.look=fairlyNice

# Create the configmap:
# game-config: the name of the configmap
# --from-file: specify a directory; everything under it is turned into key-value pairs
# each file under the directory becomes one key-value pair in the ConfigMap: the key is the file name, the value is the file content
kubectl create configmap game-config --from-file=docs/user-guide/configmap

# List configmaps
kubectl get cm 

# View details
kubectl get cm game-config -o yaml   

kubectl describe cm

2.3. Create from a file

A ConfigMap can be created from a single file simply by specifying that file:

# Just point --from-file at the file
kubectl create configmap game-config-2 --from-file=/docs/user-guide/configmap/game.properties

# View it
kubectl get configmaps game-config-2 -o yaml

--from-file can be used multiple times; you can specify game.properties and ui.properties separately. The effect is the same as specifying the entire directory, as the sketch below shows.
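
A minimal sketch, assuming the same directory layout as above (the name game-config-3 is illustrative):

# pass --from-file once per file; each file becomes one key
kubectl create configmap game-config-3 \
  --from-file=docs/user-guide/configmap/game.properties \
  --from-file=docs/user-guide/configmap/ui.properties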

2.4. Create from literals

# Create a configmap directly with --from-literal
# Create the ConfigMap
$ kubectl create configmap my-config --from-literal=key1=value1 --from-literal=key2=value2
configmap "my-config" created 

# Get the ConfigMap Details for my-config
$ kubectl get configmaps my-config -o yaml
apiVersion: v1
data:
  key1: value1
  key2: value2
kind: ConfigMap
metadata:
  creationTimestamp: 2017-05-31T07:21:55Z
  name: my-config
  namespace: default
  resourceVersion: "241345"
  selfLink: /api/v1/namespaces/default/configmaps/my-config
  uid: d35f0a3d-45d1-11e7-9e62-080027a46057
  
# Literal style
kubectl create configmap special-config --from-literal=special.how=very --from-literal=special.type=charm

# View it
kubectl get configmaps special-config -o yaml 

When creating from literals, use the --from-literal parameter to pass configuration information; the parameter can be used multiple times.

2.5. Create from a manifest file


# Create the ConfigMap directly from a manifest file
# (the configuration is written declaratively and applied with kubectl apply -f)
apiVersion: v1
data:
  game.properties: |
    enemies=aliens
    lives=3
    enemies.cheat=true
    enemies.cheat.level=noGoodRotten
    secret.code.passphrase=UUDDLRLRBABAS
    secret.code.allowed=true
    secret.code.lives=30
  ui.properties: |
    color.good=purple
    color.bad=yellow
    allow.textmode=true
    how.nice.to.look=fairlyNice
kind: ConfigMap
metadata:
  name: game-config
  namespace: default

2.6. Using a ConfigMap in a Pod

# Create a configMap; in special.how: very, special.how is the key and very is the value
apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
  namespace: default
data:
  special.how: very
  special.type: charm
  
# Create a second configMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: env-config
  namespace: default
data:
  log_level: INFO

# Method 1: use the configmap in a pod to populate environment variables
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
    - name: test-container
      image: hub.kaikeba.com/library/myapp:v1 
      command: ["/bin/sh", "-c", "env"]
      env:
        - name: SPECIAL_LEVEL_KEY 
          valueFrom:
            configMapKeyRef: 
              name: special-config  # first import style: key by key, under env
              key: special.how              
        - name: SPECIAL_TYPE_KEY
          valueFrom: 
            configMapKeyRef: 
              name: special-config 
              key: special.type 
      envFrom:                      # second import style: import the whole configmap via envFrom
        - configMapRef: 
            name: env-config 
  restartPolicy: Never
  
# The logs show the environment variables were injected into the container; the container just prints env and exits
kubectl  logs  test-pod 
...
SPECIAL_TYPE_KEY=charm
SPECIAL_LEVEL_KEY=very
log_level=INFO
  
  
# Method 2: use a ConfigMap to set command-line arguments
# When ConfigMap data is used as command-line arguments, it must first be stored in environment variables and then referenced with $(VAR_NAME).
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
    - name: test-container
      image: hub.kaikeba.com/library/myapp:v1
      command: [ "/bin/sh", "-c", "echo $(SPECIAL_LEVEL_KEY) $(SPECIAL_TYPE_KEY)" ] 
      env:
        - name: SPECIAL_LEVEL_KEY 
          valueFrom: 
            configMapKeyRef: 
              name: special-config 
              key: special.how 
        - name: SPECIAL_TYPE_KEY 
          valueFrom: 
            configMapKeyRef: 
              name: special-config 
              key: special.type 
  restartPolicy: Never
  
  
# Method 3: consume the ConfigMap through a volume plugin
# There are several options for using a ConfigMap in a volume. The most basic is to project it as files: each key becomes a file name and each value becomes the file content.

apiVersion: v1
kind: Pod
metadata:
  name: test-pod3
spec:
  containers:
    - name: test-container
      image: hub.kaikeba.com/library/myapp:v1
      command: [ "/bin/sh", "-c", "sleep 600s" ] 
      volumeMounts:
        - name: config-volume
          mountPath: /etc/config # mount the config-volume volume at /etc/config in the container
  volumes:    # define the volume backed by the external configmap
    - name: config-volume
      configMap:
        name: special-config
  restartPolicy: Never
  
  # Exec into the container and verify the mount under /etc/config:
  # kubectl exec -it test-pod3 -- ls /etc/config
  

2.7. Hot update

# Hot update of a ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: log-config
  namespace: default
data:
  log_level: INFO
---
apiVersion: extensions/v1beta1 
kind: Deployment 
metadata: 
  name: my-nginx 
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
        - name: my-nginx
          image: hub.kaikeba.com/java12/myapp:v1 
          ports:
            - containerPort: 80 
          volumeMounts:
            - name: config-volume 
              mountPath: /etc/config 
      volumes:
        - name: config-volume 
          configMap:
            name: log-config

# Read the value
kubectl exec my-nginx-7b55868ff4-wqh2q -it -- cat /etc/config/log_level
# Output
INFO

Modify the ConfigMap:

$ kubectl edit configmap log-config
Change the value of log_level to DEBUG, wait about 10 seconds, and check the mounted value again, as sketched below.
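
A minimal sketch of the re-check, reusing the pod name from above (your generated pod name will differ):

kubectl exec my-nginx-7b55868ff4-wqh2q -it -- cat /etc/config/log_level
# DEBUG

Note that values consumed through volume mounts are hot-updated this way; values injected as environment variables are not refreshed until the pod is restarted.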

Two, Secret

Secret objects store data as key-value pairs. Pods consume Secrets through environment variables or storage volumes, which solves the problem of configuring sensitive data such as passwords, tokens, and keys without exposing them in images or Pod specs.

In addition, Secret data is stored and printed as Base64-encoded strings, so users must also provide data in this encoding when creating a Secret object. When accessed in a container as an environment variable or storage volume, it is automatically decoded into plain text. Note that on the Master node, Secret objects are stored in etcd in unencrypted form, so access to and permissions on etcd must be strictly controlled.
There are 4 types of Secret:

  • Service Account: used to access the Kubernetes API, automatically created by Kubernetes, and automatically mounted to the /run/secrets/kubernetes.io/serviceaccount directory of the Pod;
  • Opaque: a base64-encoded Secret used to store passwords, keys, certificates, and other such information; the imperative creation type identifier is generic;
  • kubernetes.io/dockerconfigjson: used to store authentication information for a private docker registry; the imperative creation type identifier is docker-registry;
  • kubernetes.io/tls: used to store the certificate and private key files for SSL communication; the imperative creation type identifier is tls. (See the creation sketch after this list.)
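
As a sketch of creating the latter two types imperatively (the registry address, credentials, and file paths below are placeholders):

# docker-registry type, for pulling from a private registry
kubectl create secret docker-registry my-registry-key \
  --docker-server=hub.kaikeba.com \
  --docker-username=admin \
  --docker-password=123456

# tls type, from an existing certificate/key pair
kubectl create secret tls my-tls --cert=path/to/tls.crt --key=path/to/tls.key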

1、Service Account

Service Account is used to access the kubernetes API, automatically created by Kubernetes, and will be automatically mounted to the Pod's /run/secrets/kubernetes.io/serviceaccount directory.

We do not need to manage Service Accounts ourselves; these credentials are maintained and managed by Kubernetes itself.

# Create a pod
kubectl run my-nginx --image=hub.kaikeba.com/java12/nginx:v1 

# Inspect the credentials
kubectl exec -it podName -- sh
# inside the container, look at /run/secrets/kubernetes.io/serviceaccount
ls /run/secrets/kubernetes.io/serviceaccount
ca.crt
namespace
token

2、Opaque Secret

2.1. Creation example

Opaque data is a map whose values must be base64-encoded.

# Demonstrate base64-encoding the username and password
echo -n "admin" | base64
YWRtaW4=

echo -n "abcdefgh" | base64
YWJjZGVmZ2g=


# secret.yaml: the configuration-file approach
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  password: YWJjZGVmZ2g=
  username: YWRtaW4=
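
Equivalently, a sketch of creating the same Secret imperatively (kubectl base64-encodes the literals for you):

kubectl create secret generic mysecret \
  --from-literal=username=admin \
  --from-literal=password=abcdefgh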

2.2. Usage

# Mount the secret into a volume
apiVersion: v1
kind: Pod
metadata:
  name: secret-test
  labels:
    name: secret-test
spec:
  volumes:
  - name: secrets
    secret:
      secretName: mysecret
  containers:
  - image: hub.kaikeba.com/java12/myapp:v1
    name: db
    volumeMounts:
    - name: secrets
      mountPath: "/etc/secrets"
      readOnly: true
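
Each key in the Secret becomes a file under the mount path, already decoded to plain text; a sketch of the check:

kubectl exec -it secret-test -- cat /etc/secrets/username
# admin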
   


# Expose the secret through environment variables
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: secret-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: pod-deployment
    spec:
      containers:
      - name: pod-1
        image: hub.kaikeba.com/java12/myapp:v1
        ports:
        - containerPort: 80
        env:
        - name: TEST_USER
          valueFrom:
            secretKeyRef:
              name: mysecret
              key: username
        - name: TEST_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysecret
              key: password

Three, k8s volumes

1. Why use volumes?

The on-disk files of a container in k8s are ephemeral, which brings a series of problems:

  1. When a container crashes, kubelet restarts it, but the files inside the container are lost: the container restarts in a clean state.
  2. When several containers run in the same pod, they often need to share data files.
  3. Because pods are distributed across different nodes, persistent data cannot be shared between nodes, and when a node fails, data may be permanently lost.

Volumes are used to solve the above problems.

The life cycle of a Volume is independent of the container. The container in the Pod may be destroyed and rebuilt, but the Volume will be retained.

Note: data written through a docker volume mapping is preserved when the container is removed, which is somewhat different from kubernetes.

2. What is a volume?

A volume is mounted into a container to store data the container needs at runtime. When the container is recreated, the data in the mounted volume is unchanged.

A volume in kubernetes has a defined lifespan, the same as the pod that encapsulates it. The volume therefore outlives any individual container in the pod, and data is preserved across container restarts.

Of course, when the pod ceases to exist, the volume ceases to exist too. Perhaps more importantly, kubernetes supports many types of volumes, and a pod can use any number of volumes at the same time.


3. Volume type

Types of kubenetes volumes:


The first is the local volume

Like docker's bind mount, the hostPath type mounts a host file or directory directly into the container.
emptyDir is another local volume, similar to docker's volume type.
Both of these are bound to a specific node.

The second is the network data volume

For example NFS, GlusterFS, and Ceph: external storage systems that can be mounted into k8s.

The third is the cloud disk

Such as AWS EBS and Microsoft azureDisk.

The fourth is the resources of k8s itself

Such as secret, configmap, downwardAPI

4、emptyDir

Let's look at the local volume first. emptyDir is similar to docker's volume, except that when docker deletes a container its data volume still exists, whereas an emptyDir is lost when its pod is deleted. It is generally used only as a temporary data volume.

An empty volume is created and mounted into the Pod's containers; when the Pod is deleted, the volume is deleted with it.

Application scenario: data sharing between containers in Pod

When a pod is assigned to a node, the emptyDir volume is created first and exists as long as the pod runs on that node. As the name says, it is initially empty. The containers in the pod can all read and write the same files in the emptyDir volume, though the volume may be mounted at the same or different paths in each container. When a pod is removed from a node for any reason, the data in the emptyDir is deleted permanently.

Note: a container crash does not remove a pod from its node, so data in an emptyDir volume is safe across container crashes.


Usage of emptyDir

1. Scratch space, e.g. for a disk-based merge sort
2. A checkpoint for recovering a long computation from a crash
3. Holding files that a content-manager container fetches while a web-server container serves them

5. An example

apiVersion: v1
kind: Pod
metadata:
  name: test-pod1
spec:
  containers:
  - image: hub.kaikeba.com/library/myapp:v1
    name: test-container
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}
    

---
apiVersion: v1
kind: Pod
metadata:
  name: test-pod2
spec:
  containers:
  - image: hub.kaikeba.com/library/myapp:v1
    name: test-container
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  - name: test-1
    image: hub.kaikeba.com/library/busybox:v1
    command: ["/bin/sh","-c","sleep 3600s"]
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}
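
A quick sketch to confirm the two containers share the volume (assuming the manifest above is saved as test-pod2.yaml):

kubectl apply -f test-pod2.yaml
# write a file through one container and read it from the other
kubectl exec test-pod2 -c test-container -- touch /cache/hello
kubectl exec test-pod2 -c test-1 -- ls /cache
# hello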

6、HostPath

Mount a file or directory on the Node file system to a container in a Pod.

Application scenario: containers in a Pod need to access host files.

7. An example

apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - image: hub.kaikeba.com/library/myapp:v1
    name: test-container
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    hostPath:
      path: /data
      type: Directory

The data created here is shared with the node the pod is scheduled to, and changes are visible on both sides. Deleting the container does not delete the data in the volume.

The type field

In addition to the required path attribute, users can also specify a type for the hostPath volume:

Value               Behavior
(empty)             No check is performed before mounting the volume
DirectoryOrCreate   If nothing exists at the path, an empty directory is created there
Directory           A directory must already exist at the given path
FileOrCreate        If nothing exists at the path, an empty file is created there
File                A file must already exist at the given path
Socket              A UNIX socket must exist at the given path
CharDevice          A character device must exist at the given path
BlockDevice         A block device must exist at the given path

8. NFS network storage

Advanced Kubernetes: static provisioning of PersistentVolumes backed by NFS network storage.

NFS is a long-established technology. Single-machine storage is still very mainstream on servers, but NFS's big disadvantage is that there is no clustered version: clustering it is fairly laborious and it cannot provide a distributed file system, so large-scale deployments still need to choose distributed storage. NFS is a network file storage server: after installing NFS, you share a directory, other servers can mount that directory locally, and files written locally into that directory are synchronized to the remote server, implementing shared storage. It is typically used for sharing data, for example among multiple web servers that must serve consistent data. With NFS mounted on every web server, the website root directory and program live on the NFS server, every web server reads the same directory, and every node is guaranteed to serve a consistent program.

1) Use a separate server as the nfs server. Here we first build an NFS server to store our web page root directory:

yum install nfs-utils -y

2) Expose the directory so that other servers can mount this directory

mkdir /opt/k8s
vim /etc/exports
/opt/k8s 192.168.30.0/24(rw,no_root_squash)

This grants that network segment read-write access.

[root@nfs ~]# systemctl start nfs

Find a node to mount and test on; any machine that mounts this share must have the nfs client installed.

# other nodes also need the nfs client installed
yum install nfs-utils -y
mount -t nfs 192.168.30.27:/opt/k8s /mnt
cd /mnt
df -h
192.168.30.27:/opt/k8s    36G  5.8G   30G   17% /mnt
touch a.txt

Go to the server to check that the data has been shared

Deleting data on the nfs server deletes it for the clients as well.
Next, how do we use it from k8s?
We put all the webpage directories under this share.

# mkdir wwwroot
# vim nfs.yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nfs
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: hub.kaikeba.com/library/myapp:v1
        volumeMounts:
        - name: wwwroot
          mountPath: /usr/share/nginx/html
        ports:
        - containerPort: 80
      volumes:
      - name: wwwroot
        nfs:
          server: 192.168.66.13
          path: /opt/k8s/wwwroot
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx
  type: NodePort

We write data into the web directory through one of the pods, and check that it is also shared into our nfs server directory, as sketched below.
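
A sketch of the round trip (the pod name below is a placeholder; read the real one from kubectl get pods):

kubectl apply -f nfs.yaml
kubectl get pods -l app=nginx
# write a page through one pod...
kubectl exec <nfs-pod-name> -- sh -c 'echo hello > /usr/share/nginx/html/index.html'
# ...and it appears on the NFS server:
cat /opt/k8s/wwwroot/index.html
# hello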

Four, PV & PVC

1. Description of PV & PVC

Managing storage is a distinctly different problem from managing compute. The PersistentVolume subsystem provides a set of APIs for users and administrators that abstracts away the details of how storage is provided and consumed. Here, we introduce two new API resources: PersistentVolume (PV for short) and PersistentVolumeClaim (PVC for short).

  • PersistentVolume (Persistent Volume, PV for short) is a piece of network storage provisioned by the administrator in the cluster. Like a node, a PV is a cluster resource. It is also a volume plugin like Volume, but its life cycle is independent of any Pod that uses it. A PV is an API object that captures the implementation details of NFS, iSCSI, or other cloud storage systems.
  • PersistentVolumeClaim (Persistent Volume Claim, PVC for short) is a user's request for storage. It is similar to a Pod: Pods consume node resources, and PVCs consume PV resources. Pods can request specific resources (such as CPU and memory); PVCs can request a specific size and access mode (e.g. mounted once read-write or many times read-only).

PVCs let users consume abstract storage resources, but users often need PVs with particular properties (such as performance). Cluster administrators need to offer PVs of various sizes and access modes without exposing to users the details of how those volumes are implemented. Out of this need, the StorageClass resource was born.

A StorageClass provides a way for administrators to describe the classes of storage they offer. Cluster administrators can map different classes to different service levels and different backend policies. A minimal sketch follows.
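
A minimal StorageClass manifest, as a sketch (the name and provisioner here are illustrative; a real class names the provisioner for your storage backend):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
provisioner: kubernetes.io/no-provisioner   # placeholder: no dynamic provisioning
volumeBindingMode: WaitForFirstConsumer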

In k8s storage orchestration, the persistent volume objects (PersistentVolume, pv, together with pvc) are mainly used to orchestrate container storage:

• PersistentVolume (PV): an abstraction over the creation and use of storage resources, so that storage is managed as a resource in the cluster. PVs are the concern of operations and are used to manage external storage.

• Static: PVs are created in advance, for example a 100G PV and a 200G PV, for whoever needs them. A PVC is then matched against these PVs, so you need to know beforehand how many PVs to create, of what sizes, and with what names; matching is only approximate.

• Dynamic: PVs are provisioned on demand, based on a StorageClass (see the life cycle below).

• PersistentVolumeClaim (PVC): lets users define the capacity they need without caring about the specific Volume implementation details. For example, when developing and deploying a service that needs 10G, you can define a PVC resource object requesting 10G and not worry about anything else.

The difference between pv and pvc

PersistentVolume (persistent volume) and PersistentVolumeClaim (persistent volume claim) are the two API resources k8s provides to abstract storage details.

Administrators focus on how to provide storage functions through pv without paying attention to how users use it. Similarly, users only need to mount pvc into containers without paying attention to the technology used to implement storage volumes.

The relationship between pvc and pv is similar to that between pod and node, the former consumes the resources of the latter. PVC can apply to PV for storage resources of a specified size and set the access mode, which can control storage resources through Provision -> Claim.

2. Life cycle

The life cycle of volume and claim: PVs are resources in the cluster, and PVCs are requests for those resources that also act as claim checks on them. The interaction between PV and PVC follows this life cycle:

  • Provisioning

    PVs can be provisioned in two ways: statically or dynamically.

  • Static

    Cluster administrators create multiple PVs that carry details of the real storage that is available to cluster users. They exist in the Kubernetes API and are available for storage usage.

  • Dynamic

    When none of the static PVs created by the administrator match the user's PVC, the cluster may attempt to provision volumes exclusively to the PVC. The provisioning is based on StorageClass: the PVC must request such a class, and the administrator must have created and configured such a class for this dynamic provisioning to occur. Requesting a PVC configured with class "" effectively disables its own dynamic provisioning functionality.

  • Binding

    The user creates a PVC (or has previously created it for dynamic provisioning), specifying the required storage size and access mode. There is a control loop in the master that monitors for new PVCs, finds matching PVs (if any), and binds the PVC to the PV. If a PV was dynamically provisioned to a new PVC, the loop will always bind the PV to the PVC. In addition, users always get at least the storage they request, but the volume may exceed their request. Once bound, PVC bindings are exclusive, regardless of their binding mode.

    If no matching PV is found, the PVC remains unbound indefinitely and becomes bound once a matching PV is available. For example, a cluster provisioned with many 50G PVs will not match a PVC requesting 100G; that PVC is bound only when a 100G PV is added to the cluster.

  • Using

    Pods use PVCs just like volumes. The cluster checks the PVC, finds the bound PV, and maps the PV to the Pod. For PVs that support multiple access modes, users can specify the mode they want to use. Once a user owns a PVC and the PVC is bound, the PV will always belong to the user as long as the user needs it. Users schedule Pods and access PVs by including PVCs in the Pod's volume block.

  • Releasing

    When a user is done with a PV, they can delete the PVC object through the API. When a PVC is deleted, the corresponding PV is considered "released", but it cannot yet be claimed by another PVC: the previous claimant's data remains on the volume and must be handled according to the reclaim policy.

  • Reclaiming

    A PV's reclaim policy tells the cluster what to do with the volume after it is released. Currently, volumes can be Retained, Recycled, or Deleted. Retain allows the resource to be reclaimed manually. For volumes that support it, Delete removes both the PV object from Kubernetes and the associated external storage (such as an AWS EBS, GCE PD, Azure Disk, or Cinder volume). Dynamically provisioned volumes are always deleted. A sketch of setting the policy follows this list.
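
A sketch of setting the reclaim policy on a PV, reusing the NFS server from this section (Retain keeps the data for manual cleanup after release):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv-retain
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain   # or Delete; Recycle is deprecated
  nfs:
    path: /opt/k8s/demo1
    server: 192.168.66.13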

3、POD&PVC

First create a container application

#vim pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: nginx
    image: nginx:latest
    ports:
    - containerPort: 80
    volumeMounts:
      - name: www
        mountPath: /usr/share/nginx/html
  volumes:
    - name: www
      persistentVolumeClaim:
        claimName: my-pvc

The claim needs its own yaml, and the claimName here must match the claim's name. Generally, the two manifests are kept together.

# vim pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi

The next step belongs to operations: create the PVs in advance.

# vim pv1.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv1
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: /opt/k8s/demo1
    server: 192.168.66.13

Create the PV in advance, along with its mount directory.

I will create another pv: create its directory on the nfs server in advance, and change the name.

# vim pv2.yaml 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv2
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: /opt/k8s/demo2
    server: 192.168.66.13

Now create our pod and pvc; here I write them together:

# vim pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: nginx
    image: nginx:latest
    ports:
    - containerPort: 80
    volumeMounts:
      - name: www
        mountPath: /usr/share/nginx/html
  volumes:
    - name: www
      persistentVolumeClaim:
        claimName: my-pvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
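
A sketch of verifying the binding (typically the claim binds the smallest PV that satisfies it, my-pv1 here):

kubectl apply -f pod.yaml
kubectl get pv,pvc
# my-pvc should show STATUS Bound against the 5Gi PV (my-pv1),
# and my-pod should mount it at /usr/share/nginx/html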
