[Cloud Native] Using the DaemonSet and Job controllers in Kubernetes

 

 

Table of contents

DaemonSet

1 What is DaemonSet

2 Using DaemonSet

Job

1 What is Job

2 Using Job

3 Jobs that are automatically cleaned up

Problems the controllers cannot solve


DaemonSet

1 What is DaemonSet

Reference: DaemonSet, in the official Kubernetes documentation

A DaemonSet ensures that all (or some) nodes run a copy of a Pod. When a node joins the cluster, a Pod is added to it; when a node is removed from the cluster, that Pod is garbage-collected. Deleting a DaemonSet deletes all of the Pods it created.

Some typical uses of DaemonSet:

  • Run cluster daemons on each node

  • Run a log collection daemon on each node

  • Run a monitoring daemon on each node

A simple usage is to start one DaemonSet, covering all nodes, for each type of daemon. A slightly more complex usage is to deploy multiple DaemonSets for the same kind of daemon, each with different flags and different CPU and memory requirements to match different hardware types.

2 Using DaemonSet

apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx          # must match the Pod template labels below
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.19
        imagePullPolicy: IfNotPresent
        name: nginx
        resources: {}
      restartPolicy: Always   # the only restart policy a DaemonSet allows
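As an illustration of the "different hardware types" usage mentioned above, the following sketch restricts a DaemonSet to a subset of nodes via a nodeSelector. The node label disktype: ssd is an assumed example; any label your nodes actually carry works the same way.

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ssd
spec:
  selector:
    matchLabels:
      app: nginx-ssd
  template:
    metadata:
      labels:
        app: nginx-ssd
    spec:
      # only nodes carrying this (assumed) label run a copy of the Pod
      nodeSelector:
        disktype: ssd
      containers:
      - name: nginx
        image: nginx:1.19
        resources:
          requests:           # tuned per hardware class
            cpu: 100m
            memory: 128Mi
```

A second DaemonSet with a different nodeSelector and different resource requests would then cover the remaining node classes.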

Job

1 What is Job

Reference: Job, in the official Kubernetes documentation

A Job creates one or more Pods and retries their execution until a specified number of them terminate successfully. The Job tracks how many Pods have completed successfully; when that number reaches the specified completion count, the task (i.e. the Job) is done. Deleting a Job cleans up the Pods it created. Suspending a Job deletes its active Pods until the Job is resumed.

In a simple use case, you create one Job object to reliably run a Pod to completion. The Job object starts a new Pod if the first Pod fails or is deleted (e.g., because of a node hardware failure or a node reboot).

You can also use Jobs to run multiple Pods in parallel.

2 Using Job

apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl:5.34.0
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
  # maximum number of retries before the Job is marked as failed
  backoffLimit: 4
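The "multiple Pods in parallel" case mentioned above can be sketched like this: completions sets how many successful Pods the Job needs in total, and parallelism caps how many of them run at the same time. The manifest below is an assumed variation of the pi example, not from the original post.

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi-parallel
spec:
  completions: 6      # the Job succeeds after 6 Pods complete successfully
  parallelism: 2      # at most 2 Pods run concurrently
  template:
    spec:
      containers:
      - name: pi
        image: perl:5.34.0
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(200)"]
      restartPolicy: Never
```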

3 Jobs that are automatically cleaned up

Finished Jobs generally do not need to be kept in the system; keeping them around indefinitely puts extra pressure on the API server. If a Job is managed by a higher-level controller, such as a CronJob, the CronJob can clean it up according to its history-based cleanup policy.
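A minimal sketch of such higher-level cleanup: a CronJob keeps only a limited history of the Jobs it creates, controlled by the two history-limit fields below (the schedule and name here are assumed examples).

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: pi-every-minute
spec:
  schedule: "*/1 * * * *"          # standard cron syntax: run every minute
  successfulJobsHistoryLimit: 3    # keep at most 3 successful finished Jobs
  failedJobsHistoryLimit: 1        # keep at most 1 failed Job
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: pi
            image: perl:5.34.0
            command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(100)"]
          restartPolicy: Never
```

Jobs beyond these limits are deleted by the CronJob controller, oldest first.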

  • The TTL mechanism of the completed job

    • Another way to automatically clean up finished Jobs (status Complete or Failed) is the TTL mechanism provided by the TTL-after-finished controller. By setting the Job's .spec.ttlSecondsAfterFinished field, you let the controller clean up the finished resource. When the TTL controller cleans up a Job, it deletes the Job in cascade: the dependent objects (the Pods) are deleted together with the Job itself. Note that when a Job is deleted, its lifecycle guarantees, such as finalizers, are honored.

apiVersion: batch/v1
kind: Job
metadata:
  name: pi-with-ttl
spec:
  ttlSecondsAfterFinished: 100
  template:
    spec:
      containers:
      - name: pi
        image: perl:5.34.0
        command: ["perl",  "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never

The Job pi-with-ttl becomes eligible for automatic deletion 100 seconds after it finishes. If the field is set to 0, the Job becomes eligible for automatic deletion immediately after it finishes. If the field is unset, the Job is not cleaned up by the TTL controller after completion.

Problems the controllers cannot solve

  • How to provide network services for pods

  • How to achieve load balancing among multiple Pods


Origin blog.csdn.net/weixin_53678904/article/details/132233724