Kubernetes Workload Controllers

Table of contents

1. Overview

2. Deployment controller

2.1 Deploying an application with Deployment

2.2 Deployment rolling upgrade

2.2.1 Deploy the application

2.2.2 Three ways to update the image

2.3 Rolling back a failed Deployment release

2.4 Deployment horizontal scaling

3. DaemonSet controller

4. Job controller

4.1 One-time Job execution

4.2 Scheduled tasks (CronJob)

5. Summary


1. Overview


In Kubernetes, a Pod is the smallest unit of management: a group of closely related containers.

However, a single Pod cannot always be guaranteed to be available. For example, if an nginx Pod we created is accidentally deleted for some reason, we would like a new Pod with the same attributes to be created automatically. Unfortunately, a bare Pod cannot meet this need.

To this end, Kubernetes implements a series of controllers to manage Pods, keeping the Pods' desired state and actual state consistent.

Workload controllers are a higher-level abstraction that K8s uses to deploy and manage Pods. Common workload controllers:

• Deployment: stateless application deployment

• StatefulSet: stateful application deployment

• DaemonSet: ensures every Node runs a copy of the same Pod

• Job: one-off tasks

• CronJob: scheduled tasks

The role of a controller:

• Manage Pod objects

• Associate with Pods through labels

• Carry out Pod operations such as rolling updates, scaling, replica management, and maintaining Pod status

The functions of Deployment:

• Manage Pods and ReplicaSets

• Online deployment, replica management, rolling upgrades, rollbacks, etc.

• Provide declarative updates, for example updating only the image. Application scenarios: websites, APIs, microservices.


2. Deployment controller


2.1 Deploying an application with Deployment

The following Deployment deploys an application with 3 replicas:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  namespace: default
spec:
  replicas: 3  # number of replicas
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx  # label applied to the Pod replicas
    spec:
      containers:
      - name: web
        image: nginx:1.15

The 3 Pod replicas are successfully scheduled onto two nodes, node1 and node2.

Expose the application as a Service and query the port it uses:

kubectl expose deployment web --port=80 --target-port=80 --type=NodePort

Any of the three node IPs plus NodePort 31812 can be used to access the application; the assigned port can be confirmed as shown below.
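
To confirm which NodePort was assigned (31812 in this example; the value is cluster-specific), the Service can be queried:

# The PORT(S) column shows a mapping such as 80:31812/TCP, where 31812 is the NodePort
kubectl get svc web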

2.2 Deployment rolling upgrade

Business applications are mostly deployed in Kubernetes through Deployments. Application updates and rollbacks are routine tasks, especially in Internet companies, where rapid iteration is an important way to win users.

However, not every iteration goes 100% smoothly. If something goes wrong, recovering quickly also has to be considered. To support this scenario, Deployment provides rolling update and fast rollback capabilities.

The default update method of a Deployment is rolling update; the strategy can be specified with strategy.type.

  • Recreate: delete all old Pods first, then create new ones
  • RollingUpdate: start new Pods first, then gradually replace the old ones

2.2.1 Deploy the application

## Deploy the application
kubectl apply -f deployment.yaml

## Expose the application's service port
kubectl expose deployment web --port=80 --target-port=80 --type=NodePort

2.2.2 Three ways to update the image

• kubectl apply -f xxx.yaml

• kubectl set image deployment/web web=nginx:1.16   (the container in this Deployment is named web)

• kubectl edit deployment/web

Rolling upgrade: Kubernetes' default strategy for upgrading Pods is to gradually replace old-version Pods with new-version Pods, achieving a zero-downtime release that is transparent to users.
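
While an upgrade is in progress, the rollout can be followed from the command line until it completes, for example:

# Watch the rolling update of the web Deployment until it finishes
kubectl rollout status deployment/web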

View the endpoint IPs and ports of the three Pods backing the Service:

[root@k8s-master1 ~]# kubectl get ep
NAME         ENDPOINTS                                            AGE
kubernetes   192.168.2.117:6443,192.168.2.119:6443                7d2h
pod-check    <none>                                               6h
web          10.244.159.134:80,10.244.224.10:80,10.244.36.74:80   32m

Upgrade the image from nginx:1.15 to nginx:1.16:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: web
        image: nginx:1.16

Replicas are upgraded gradually: a new Pod is created, then an old one is deleted, one at a time. The exact strategy can be configured, and the rollout can be observed as shown below.
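
During the rolling upgrade the Deployment creates a new ReplicaSet and gradually scales down the old one; one way to observe this is:

# Watch the old and new ReplicaSets scale up/down during the rollout
kubectl get rs -l app=nginx -w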

Export the web Deployment to view the complete Deployment configuration:

kubectl get deployment web -o yaml > deploy.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "7"
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{"kubernetes.io/change-cause":"web.v1-nginx-1.17"},"name":"web","namespace":"default"},"spec":{"replicas":3,"selector":{"matchLabels":{"app":"nginx"}},"template":{"metadata":{"labels":{"app":"nginx"}},"spec":{"containers":[{"image":"nginx:1.17","name":"web"}]}}}}
    kubernetes.io/change-cause: web.v1-nginx-1.17
  creationTimestamp: "2022-11-12T09:11:15Z"
  generation: 8
  managedFields:
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:kubectl.kubernetes.io/last-applied-configuration: {}
      f:spec:
        f:progressDeadlineSeconds: {}
        f:replicas: {}
        f:revisionHistoryLimit: {}
        f:selector:
          f:matchLabels:
            .: {}
            f:app: {}
        f:strategy:
          f:rollingUpdate:
            .: {}
            f:maxSurge: {}
            f:maxUnavailable: {}
          f:type: {}
        f:template:
          f:metadata:
            f:labels:
              .: {}
              f:app: {}
          f:spec:
            f:containers:
              k:{"name":"web"}:
                .: {}
                f:imagePullPolicy: {}
                f:name: {}
                f:resources: {}
                f:terminationMessagePath: {}
                f:terminationMessagePolicy: {}
            f:dnsPolicy: {}
            f:restartPolicy: {}
            f:schedulerName: {}
            f:securityContext: {}
            f:terminationGracePeriodSeconds: {}
    manager: kubectl-client-side-apply
    operation: Update
    time: "2022-11-12T10:33:02Z"
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          f:kubernetes.io/change-cause: {}
      f:spec:
        f:template:
          f:spec:
            f:containers:
              k:{"name":"web"}:
                f:image: {}
    manager: kubectl
    operation: Update
    time: "2022-11-12T10:33:12Z"
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          f:deployment.kubernetes.io/revision: {}
      f:status:
        f:availableReplicas: {}
        f:conditions:
          .: {}
          k:{"type":"Available"}:
            .: {}
            f:lastTransitionTime: {}
            f:lastUpdateTime: {}
            f:message: {}
            f:reason: {}
            f:status: {}
            f:type: {}
          k:{"type":"Progressing"}:
            .: {}
            f:lastTransitionTime: {}
            f:lastUpdateTime: {}
            f:message: {}
            f:reason: {}
            f:status: {}
            f:type: {}
        f:observedGeneration: {}
        f:readyReplicas: {}
        f:replicas: {}
        f:updatedReplicas: {}
    manager: kube-controller-manager
    operation: Update
    time: "2022-11-12T10:33:21Z"
  name: web
  namespace: default
  resourceVersion: "1262530"
  selfLink: /apis/apps/v1/namespaces/default/deployments/web
  uid: 7b334d04-b47e-4023-bb3f-8043fd1474e2
spec:
  progressDeadlineSeconds: 600
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: nginx
  strategy:
    rollingUpdate:
      maxSurge: 25%            # max number of extra Pods during the update
      maxUnavailable: 25%      # max number of unavailable Pods during the update
    type: RollingUpdate        # rolling update strategy
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.17
        imagePullPolicy: IfNotPresent
        name: web
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 3      # all replicas available after the rollout
  conditions:
  - lastTransitionTime: "2022-11-12T09:23:15Z"
    lastUpdateTime: "2022-11-12T09:23:15Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2022-11-12T09:11:15Z"
    lastUpdateTime: "2022-11-12T10:33:21Z"
    message: ReplicaSet "web-76f5f6d7f5" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 8
  readyReplicas: 3
  replicas: 3
  updatedReplicas: 3
  • maxSurge: the maximum number of extra Pods that can be created during the upgrade. For example, with maxSurge=1 and replicas: 3, Kubernetes starts one new Pod first and then deletes an old one, so at most 3+1 Pods exist during the whole upgrade.
  • maxUnavailable: the maximum number of Pods that may be unavailable during the upgrade. maxUnavailable=1 means at most one Pod is unavailable at any point during the upgrade. maxSurge and maxUnavailable cannot both be 0.
  • minReadySeconds: the minimum time a newly created Pod must be ready before it is considered available. If this is not set, Kubernetes treats the container as available as soon as it starts, which in extreme cases may mean the service is not actually working yet. The default is 0.
  • type: the update strategy, either Recreate or RollingUpdate. Recreate means all Pods are recreated at once; the default is RollingUpdate. A consolidated example of these settings follows this list.
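
As a reference, here is a minimal sketch of how these fields fit together in a Deployment spec (the values are illustrative, not taken from the exported configuration above):

spec:
  replicas: 3
  minReadySeconds: 5            # a new Pod must stay Ready for 5s before it counts as available
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1               # at most one extra Pod during the update
      maxUnavailable: 1         # at most one Pod unavailable during the update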

Of course, a health check should also be added to the Pod so that the application's own health is checked, rather than relying only on whether the container is running. Configuring appropriate health checks helps ensure service reliability and continuity; a sketch follows.
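
A minimal readiness-probe sketch for the nginx container above; the probe path and timings are assumptions to adapt to the real application:

    spec:
      containers:
      - name: web
        image: nginx:1.16
        readinessProbe:              # traffic is only sent to the Pod once this probe succeeds
          httpGet:
            path: /                  # assumed health endpoint
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 10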

2.3 Rolling back a failed Deployment release

If a release fails, roll back to a known-good version:

# View release history
kubectl rollout history deployment/web
# Roll back to the previous version
kubectl rollout undo deployment/web
# Roll back to a specific historical revision
kubectl rollout undo deployment/web --to-revision=2

Feel free to change the nginx version and practice upgrading and rolling back:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  namespace: default
  annotations:       # record the change cause for rollback reference
    kubernetes.io/change-cause: "web.v1-nginx-1.17"   # content recorded into the revision history; marks the version
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: web
        image: nginx:1.17

Note: a rollback redeploys a previous revision of the Deployment, that is, the complete configuration recorded for that version. A specific revision can be inspected before rolling back, as shown below.
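
To see exactly what configuration a given revision contains before rolling back to it, the revision can be inspected (revision 2 here is only an example number):

# Show the Pod template recorded for revision 2
kubectl rollout history deployment/web --revision=2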

View the running service version; the image is nginx:1.17.

After rolling back to the previous version, the image is nginx:1.16.

2.4 Deployment horizontal scaling

The purpose of the ReplicaSet controller:

• Manage the number of Pod replicas by constantly comparing the current number of Pods with the desired number

• A Deployment creates an RS as a record for each release, which is used to implement rollback

# View RS records
kubectl get rs
# Revision history corresponding to the RS records
kubectl rollout history deployment web

# Change the replica count directly with a command, or modify the yaml and re-apply it
kubectl scale deployment web --replicas=5
# output: deployment.apps/web scaled

## You can also edit the Deployment directly and change spec.replicas
kubectl edit deployment web

Two more Pods are added, going from 3 to 5 replicas; this can be verified as shown below.
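
A quick way to confirm the new replica count (the label app=nginx matches the Deployment's selector above):

# All 5 Pods should be listed as Running
kubectl get pods -l app=nginx
# The DESIRED and CURRENT columns of the ReplicaSet should both show 5
kubectl get rs -l app=nginx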


3. DaemonSet controller


A DaemonSet ensures that a copy of a Pod runs on every Node. When a Node is added to the cluster, the Pod is also started on the new Node; when the DaemonSet is deleted, the Pods it created are cleaned up. It is often used for cluster-wide applications such as log collection and monitoring.

DaemonSet also supports update and rollback, and the specific operation is similar to that of Deployment.

Common scenarios are as follows:

1. Run a cluster storage daemon, such as ceph or glusterd;

2. Run a log collection daemon, such as logstash, fluentd, etc.;

3. Run monitoring daemons, such as Prometheus Node Exporter, collectd, New Relic agent, Ganglia gmond, etc.;

Deploy a log collection program. Because the cluster was installed with kubeadm, the master node carries a taint; the filebeat DaemonSet below configures a toleration so that it can also be scheduled onto the master node.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: filebeat
  template:
    metadata:
      labels:
        name: filebeat
    spec:
      tolerations:
      - effect: NoSchedule
        operator: Exists
      containers:
      - name: log
        image: elastic/filebeat:7.3.2

filebeat is successfully deployed on all three nodes; this can be verified as shown below.
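
One way to verify that a filebeat Pod is running on every node (the label name=filebeat comes from the manifest above):

# -o wide shows the node that each Pod was scheduled onto
kubectl get pods -n kube-system -l name=filebeat -o wide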


4. Job controller


4.1 One-time Job execution

A Job that calculates pi to 2000 digits:

apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl  # custom image for the one-off task to run
        command: ["perl",  "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
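
After applying the manifest, the Job's completion and its output can be checked, for example:

# COMPLETIONS shows 1/1 once the Job has finished
kubectl get jobs
# Print the computed value of pi from the Job's Pod
kubectl logs job/pi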

4.2 Scheduled tasks (CronJob)

Application scenarios: offline data processing, video decoding and other services

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello kangll
          restartPolicy: OnFailure
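
Note that CronJob graduated to apiVersion: batch/v1 in Kubernetes 1.21, and batch/v1beta1 was removed in 1.25, so newer clusters should use batch/v1. Once created, the CronJob can be watched spawning a Job every minute:

# Shows the schedule, the last scheduled time, and whether the CronJob is suspended
kubectl get cronjob hello
# Watch Jobs being created by the CronJob every minute
kubectl get jobs --watch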

It should be noted that, because of how cron works, a new scheduled run may sometimes start while the previous one has not yet finished. This behavior can be controlled with the spec.concurrencyPolicy field, for example (a sketch follows the list below):

  • concurrencyPolicy=Allow: the Jobs are allowed to run at the same time
  • concurrencyPolicy=Forbid: no new Job is created, i.e. this scheduled run is skipped
  • concurrencyPolicy=Replace: the newly created Job replaces the old one
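
A minimal sketch of where the field sits in the CronJob spec, using Forbid purely as an example; the rest is the same hello CronJob as above:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  concurrencyPolicy: Forbid      # skip this run if the previous Job is still running
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello kangll
          restartPolicy: OnFailure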

5. Summary


The above are the controllers most commonly used in day-to-day work, with Deployment and DaemonSet used most often. Master these controllers and learn when to choose which one; using them appropriately will maximize efficiency.


Source: blog.csdn.net/qq_35995514/article/details/128062173