K8s (Kubernetes) Learning (4): Controllers: Deployment, StatefulSet, DaemonSet, Job

  • What a Controller is and its role
  • Common Controllers
  • How a Controller manages Pods
  • Basic Deployment operations and applications
  • Pod upgrade, rollback, and elastic scaling through Controllers
  • Basic StatefulSet operations and applications
  • Basic DaemonSet operations and applications
  • Basic Job operations and applications
  • Problems a Controller cannot solve

1 Controllers

Official website: http://kubernetes.p2hp.com/docs/concepts/architecture/controller.html

1.1 What is a Controller

Kubernetes usually does not create Pods directly; instead, it manages Pods through Controllers. The deployment characteristics of the Pods are defined in the Controller, such as how many replicas there are, which Nodes they run on, and so on. In layman's terms, a Controller can be thought of as an object used to manage Pods. Its core role can be summed up in one sentence: it monitors the shared state of the cluster and works to move the current state toward the desired state.

In plain terms: a Controller manages Pods so that they are easier to operate and maintain.

1.2 Common Controllers

  • Deployment is the most commonly used Controller. A Deployment manages multiple replicas of a Pod and ensures that the Pods run in the expected state.

    • ReplicaSet implements multi-replica management of Pods. A ReplicaSet is created automatically when you use a Deployment; in other words, a Deployment manages Pod replicas through a ReplicaSet, and we usually do not need to use ReplicaSets directly.
  • DaemonSet is used in scenarios where each Node runs at most one replica of a Pod. As its name suggests, a DaemonSet is typically used to run daemons.

  • StatefulSet ensures that each Pod replica keeps the same name throughout its lifecycle, which other Controllers do not provide. With other Controllers, when a Pod fails and has to be deleted and restarted, its name changes; a StatefulSet also guarantees that replicas are started, updated, and deleted in a fixed order.

  • Job is used for applications that run to completion and are then removed, whereas Pods managed by other Controllers usually keep running for a long time.

1.3 How does a Controller manage Pods

Note: a Controller is associated with the Pods it manages through labels (a label selector).

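As a minimal sketch of this relationship: Pods created by a controller carry the labels defined in its Pod template, and the controller finds them through its label selector (app=nginx is the example label used throughout this article):

# List the Pods a controller manages, by label
$ kubectl get pods -l app=nginx
# Show the selector recorded in the Deployment itself
$ kubectl get deployment nginx-deployment -o jsonpath='{.spec.selector.matchLabels}'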

2 Deployment

Official address: http://kubernetes.p2hp.com/docs/concepts/workloads/controllers/deployment.html

A Deployment provides declarative update capabilities for Pods and ReplicaSets.

You describe the desired state in a Deployment, and the Deployment Controller changes the actual state to the desired state at a controlled rate. You can define a Deployment to create a new ReplicaSet, or to remove an existing Deployment and adopt all of its resources with a new Deployment.

2.1 Creating a Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.19
          ports:
            - containerPort: 80

2.2 Viewing a Deployment

# Deploy the application
$ kubectl apply -f app.yaml
# View deployments
$ kubectl get deployment
# View pods
$ kubectl get pod -o wide
# View pod details
$ kubectl describe pod pod-name
# View deployment details
$ kubectl describe deployment deployment-name
# View logs
$ kubectl logs pod-name
# Open a terminal in the Pod's container; -c container-name selects which container to enter
$ kubectl exec -it pod-name -- bash
# Export the Deployment to a file
$ kubectl get deployment nginx-deployment -o yaml >> test.yaml
  • NAME lists the names of the Deployments in the namespace.
  • READY shows the number of available replicas of the application, in the pattern "ready/desired".
  • UP-TO-DATE shows the number of replicas that have been updated to reach the desired state.
  • AVAILABLE shows the number of replicas of the application available to users.
  • AGE shows how long the application has been running.

Note that the desired number of replicas is 3, according to the .spec.replicas field.
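
With the manifest above (3 replicas), once every Pod is ready the output of kubectl get deployment should look roughly like the following (the AGE value is illustrative):

$ kubectl get deployment
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3/3     3            3           2m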

2.3 Scaling a Deployment

# View ReplicaSets
$ kubectl get rs        # rs is short for replicaset
# Scale the number of replicas
$ kubectl scale deployment nginx-deployment --replicas=5
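
Scaling can also be done declaratively: change spec.replicas in the manifest from section 2.1 and re-apply it. A short sketch (assuming the manifest file is app.yaml, as used earlier):

# After editing replicas: 3 -> 5 in app.yaml
$ kubectl apply -f app.yaml
# Confirm the new replica count
$ kubectl get deployment nginx-deployment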

2.4 Rolling back a Deployment

Note:

A Deployment rollout is triggered only when the Deployment's Pod template (that is, .spec.template) changes, for example when the template's labels or container image are updated. Other updates (such as scaling the Deployment up or down) do not trigger a rollout.
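
For example, updating the container image changes the Pod template and therefore starts a new rollout. A sketch based on the nginx-deployment from section 2.1 (the target tag 1.20 is only an example):

# Imperatively change the image of the container named nginx
$ kubectl set image deployment/nginx-deployment nginx=nginx:1.20
# Or declaratively: edit image: in app.yaml and re-apply
$ kubectl apply -f app.yaml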

# View rollout status
$ kubectl rollout status deployment nginx-deployment   # or: deployment/nginx-deployment
# View rollout history
$ kubectl rollout history deployment nginx-deployment
# View the details of a specific revision
$ kubectl rollout history deployment/nginx-deployment --revision=2
# Roll back to the previous revision
$ kubectl rollout undo deployment nginx-deployment
# Roll back to a specific revision
$ kubectl rollout undo deployment nginx-deployment --to-revision=2
# Restart the rollout (redeploy)
$ kubectl rollout restart deployment nginx-deployment
# Pause the rollout; while paused, changes to the Deployment do not take effect until it is resumed
$ kubectl rollout pause deployment nginx-deployment
# Resume the rollout
$ kubectl rollout resume deployment nginx-deployment

2.5 Deleting a Deployment

# Delete the Deployment
$ kubectl delete deployment nginx-deployment
$ kubectl delete -f nginx-deployment.yml
# Delete all resources in the default namespace
$ kubectl delete all --all
# Delete all resources in a specified namespace
$ kubectl delete all --all -n namespace-name

3 StatefulSet

3.1 What is a StatefulSet

Official address: https://kubernetes.io/zh-cn/docs/concepts/workloads/controllers/statefulset/

A StatefulSet is the workload API object used to manage stateful applications.

Stateless application: an application that does not store any data in the application itself is called a stateless application.

Stateful application: an application that needs to store its own data is called a stateful application.

Example of a blog system: a Vue front end, a Java back end, MySQL, Redis, ES…

Example of data collection: a collection program that stores the data it gathers is a stateful application.

StatefulSet is used to manage the deployment and expansion of a Pod collection, and provide persistent storage and persistent identifiers for these Pods.

Similar to a Deployment, a StatefulSet manages a group of Pods based on the same container spec. But unlike a Deployment, a StatefulSet maintains a sticky identity for each of its Pods. These Pods are created from the same spec, but they are not interchangeable: each Pod has a persistent identifier that it keeps no matter how it is rescheduled.

If you want to use storage volumes to provide persistent storage for your workloads, you can use StatefulSets as part of your solution. Although individual Pods in a StatefulSet can still fail, persistent Pod identifiers make it easier to match existing volumes with new Pods that replace failed Pods.

3.2 StatefulSet Features

StatefulSets are valuable for applications that need to meet one or more of the following requirements:

  • Stable, unique network identifier.
  • Stable, durable storage.
  • Orderly and graceful deployment and scaling.
  • Orderly, automatic rolling updates.

In the description above, "stable" means persistence across Pod scheduling and rescheduling. If an application does not require any stable identifiers or ordered deployment, deletion, or scaling, it should be deployed with a workload object that provides a set of stateless replicas; a Deployment or ReplicaSet may be better suited to such stateless needs.

3.3 Restrictions

  • Storage for a given Pod must either be provisioned by a PersistentVolume provisioner based on the requested storage class, or pre-provisioned by an administrator.
  • Deleting or scaling a StatefulSet does not delete its associated storage volumes. This is done to keep data safe, and it is usually more valuable than automatically clearing all related resources of the StatefulSet.
  • StatefulSets currently require a headless service to be responsible for the pod's network identity. You are responsible for creating this service.
  • When a StatefulSet is deleted, the StatefulSet does not provide any guarantees of terminating Pods. To achieve an orderly and graceful termination of Pods in a StatefulSet, the StatefulSet can be scaled down to 0 before deletion.
  • Using rolling updates with the default Pod management policy (OrderedReady) can get into a broken state that requires manual intervention to fix.

3.4 Using StatefulSets

1 Build the NFS server
# Install nfs-utils
$ yum install -y rpcbind nfs-utils
# Create the NFS directory
mkdir -p /root/nfs/data
# Append the export entry below to /etc/exports
# insecure: accept requests from ports above 1024  rw: read-write  sync: write to the share synchronously  no_root_squash: the root user keeps full access to the exported directory
echo "/root/nfs/data *(insecure,rw,sync,no_root_squash)" >> /etc/exports
# Start the services and enable them at boot
systemctl start rpcbind
systemctl start nfs-server
systemctl enable rpcbind
systemctl enable nfs-server
# Re-export so /etc/exports takes effect
exportfs -r
# Show the current exports
exportfs
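
To confirm from any machine with nfs-utils installed which directories the server exports, showmount can also be used (the server IP below is the address used by the provisioner later in this article and is only an example):

$ showmount -e 10.15.0.10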
2 Test from a client
# 1. Install the NFS client on all nodes
$ yum install -y nfs-utils
# 2. Create a local directory
$ mkdir -p /root/nfs
# 3. Mount the remote NFS directory locally
$ mount -t nfs 10.15.0.9:/root/nfs /root/nfs
# 4. Write a test file
$ echo "hello nfs server" > /root/nfs/test.txt
# 5. Check the file in the remote NFS directory
$ cat /root/nfs/test.txt

# Unmount
$ umount -f -l <nfs-directory>
3 Use the StatefulSet
  • class.yml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner # or choose another name; must match the PROVISIONER_NAME env in the provisioner Deployment
parameters:
  archiveOnDelete: "false"
  • nfs-client-provisioner.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: kube-system
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: chronolaw/nfs-subdir-external-provisioner:v4.0.2
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 10.15.0.10
            - name: NFS_PATH
              value: /root/nfs/data
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.15.0.10
            path: /root/nfs/data
  • rbac.yml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: kube-system
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: kube-system
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: kube-system
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
  • mysql.yml
apiVersion: v1
kind: Namespace
metadata:
  name: ems
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mysql-nfs-sc
  namespace: ems
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  onDelete: "remain"
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
  labels:
    app: mysql
  namespace: ems
spec:
  serviceName: mysql # headless Service; guarantees a unique network identifier; it must exist
  replicas: 1
  template:
    metadata:
      name: mysql
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql/mysql-server:8.0
          imagePullPolicy: IfNotPresent
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: root
          volumeMounts:
            - mountPath: /var/lib/mysql # directory inside the container where MySQL writes its data
              name: data    # refers to the volume claim template named data below
          ports:
            - containerPort: 3306
      restartPolicy: Always
  volumeClaimTemplates:  # templates for dynamically created volume claims
    - metadata:
        name: data      # name of the volume claim
        namespace: ems  # namespace in which the volume claim is created
      spec:
        accessModes:    # access mode of the volume
          - ReadWriteMany
        storageClassName: mysql-nfs-sc # which StorageClass is used to provision the storage
        resources:
          requests:
            storage: 2G
  selector:
    matchLabels:
      app: mysql
---
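
The StatefulSet above references serviceName: mysql, but the headless Service itself is not included in the manifest. A minimal sketch of such a Service, assuming the same ems namespace and app: mysql label used above:

apiVersion: v1
kind: Service
metadata:
  name: mysql          # must match the StatefulSet's serviceName
  namespace: ems
  labels:
    app: mysql
spec:
  clusterIP: None      # clusterIP: None is what makes the Service headless
  selector:
    app: mysql
  ports:
    - port: 3306

After applying the StorageClass, RBAC, provisioner, and MySQL manifests (for example with kubectl apply -f <file> for each), the Pod should come up with the stable name mysql-0, and the dynamically provisioned claim can be checked with kubectl get pvc -n ems.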

4 DaemonSet

4.1 What is a DaemonSet

https://kubernetes.io/zh-cn/docs/concepts/workloads/controllers/daemonset/

A DaemonSet ensures that all (or some) nodes run a copy of a Pod. When nodes join the cluster, a Pod is added for them; when a node is removed from the cluster, its Pod is garbage-collected. Deleting a DaemonSet deletes all the Pods it created.

Some typical uses of DaemonSet:

  • Run cluster daemons on each node
  • Run a log collection daemon on each node
  • Run a monitoring daemon on each node

A simple usage is to start one DaemonSet on all nodes for each type of daemon. A slightly more complex usage is to deploy multiple DaemonSets for the same kind of daemon, each with different flags and different memory and CPU requirements for different hardware types.

4.2 Using DaemonSets

apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.19
        imagePullPolicy: IfNotPresent
        name: nginx
        resources: {}
      restartPolicy: Always
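
After applying this manifest, each schedulable node should run exactly one copy of the Pod. A quick way to check (the file name daemonset.yml is only an assumption for this example):

$ kubectl apply -f daemonset.yml
# DESIRED/CURRENT/READY should equal the number of schedulable nodes
$ kubectl get daemonset nginx
# One Pod per node, visible in the NODE column
$ kubectl get pods -l app=nginx -o wide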

5 Job

5.1 What is a Job

https://kubernetes.io/zh-cn/docs/concepts/workloads/controllers/job/

A Job creates one or more Pods and keeps retrying their execution until a specified number of Pods terminate successfully. As Pods complete successfully, the Job tracks how many have succeeded; when that number reaches the specified completion count, the task (that is, the Job) ends. Deleting a Job cleans up all the Pods it created. Suspending a Job deletes its active Pods until the Job is resumed.

In a simple use case, you create one Job object in order to reliably run one Pod to completion. The Job object starts a new Pod if the first Pod fails or is deleted (for example because of a node hardware failure or a node reboot).

You can also use Jobs to run multiple Pods in parallel.
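
As a minimal sketch of a parallel Job (the field values here are illustrative), completions sets how many Pods must terminate successfully and parallelism caps how many run at the same time:

apiVersion: batch/v1
kind: Job
metadata:
  name: pi-parallel
spec:
  completions: 6    # the Job succeeds after 6 Pods finish successfully
  parallelism: 2    # run at most 2 Pods at the same time
  template:
    spec:
      containers:
      - name: pi
        image: perl:5.34.0
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never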

5.2 Using Jobs

apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl:5.34.0
        command: ["perl",  "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
  # maximum number of retries if the Job fails
  backoffLimit: 4
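
A sketch of running this Job and reading its result (the file name job-pi.yml is only an assumption for this example):

$ kubectl apply -f job-pi.yml
# COMPLETIONS changes to 1/1 once the Pod finishes successfully
$ kubectl get jobs
# Print the digits of pi computed by the Job's Pod
$ kubectl logs job/pi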

5.3 Jobs that are automatically cleaned up

Completed Jobs generally do not need to be kept in the system; keeping them around puts extra pressure on the API server. If a Job is managed by a higher-level controller such as a CronJob, the Job can be cleaned up by the CronJob based on its capacity-based cleanup policy.

  • TTL mechanism for finished Jobs
    • Another way to automatically clean up finished Jobs (with status Complete or Failed) is to use the TTL mechanism provided by the TTL controller. Set the Job's .spec.ttlSecondsAfterFinished field, and the controller will clean up the finished resource. When the TTL controller cleans up a Job, it deletes the Job in cascade; in other words, it deletes all dependent objects, including the Pods, together with the Job itself. Note that when the Job is deleted, its lifecycle guarantees, such as finalizers, are honored.
apiVersion: batch/v1
kind: Job
metadata:
  name: pi-with-ttl
spec:
  ttlSecondsAfterFinished: 100
  template:
    spec:
      containers:
      - name: pi
        image: perl:5.34.0
        command: ["perl",  "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never

The Job pi-with-ttl can be deleted automatically 100 seconds after it finishes. If the field is set to 0, the Job becomes eligible for automatic deletion immediately after it finishes. If the field is not set, the Job will not be cleaned up by the TTL controller after it completes.

6 Problems a Controller cannot solve

  • How to provide network services for pods
  • How to achieve load balancing among multiple Pods
