Cloud Native Technology Open Course Study Notes: Application Orchestration and Management: Job and DaemonSet, Application Configuration Management

5. Application orchestration and management: Job and DaemonSet

1、Job

1) Source of demand


2) Use case interpretation

1) Job syntax


The figure above shows the simplest YAML for a Job. It introduces a new kind called Job, which is handled by the job-controller. metadata.name specifies the name of the Job, and spec.template below is simply a Pod spec

The content of the template is the same as an ordinary Pod; a Job only adds two things:

  • The first is restartPolicy. In a Job the restart policy can be set to Never or OnFailure (Always, which ordinary Pods allow, is not valid for a Job). Use Never when a failed container should not be restarted in place — the Job controller will create a replacement Pod instead; use OnFailure when the Pod should be restarted on the same node after a failure.
  • In addition, a Job cannot retry infinitely while it is running, so a parameter is needed to control the number of retries. backoffLimit specifies how many times a Job may be retried before it is marked failed.

So in a Job, the main things to focus on are the restartPolicy restart strategy and the backoffLimit retry limit

2) Job status


After the Job is created, you can use the command kubectl get jobs to view its running status. The output basically shows the name of the Job, how many Pods have completed, and how long it has run:

  • AGE is the time elapsed since the Job was created — the current time minus the creation time. It tells you the Job's history: how long ago it was created.
  • DURATION is how long the actual workload in the Job ran. This parameter is very useful when tuning performance.
  • COMPLETIONS shows how many Pods the task requires in total and how many of them have reached the completed state.
3) View Pod


The final execution unit of a Job is still a Pod. The Job just created spawns a Pod called pi, whose task is to compute π. The Pod's name follows the pattern ${job-name}-${random-suffix}

Compared with an ordinary Pod, it has one extra field, ownerReferences, which declares which upper-level controller this Pod belongs to. You can see that the ownerReference here points to a batch/v1 Job — the Job created above. The field declares who the Pod's controller is, so you can trace from a Pod back to its controller, and, conversely, list which Pods a given Job owns.
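As a sketch of what this looks like, the Pod's metadata would contain an ownerReferences entry along these lines (the uid below is a made-up placeholder, not taken from the demo):

```yaml
# Illustrative metadata of the Pod created by the Job "pi".
# The ownerReferences entry points back at the Job that manages this Pod.
apiVersion: v1
kind: Pod
metadata:
  name: pi-hltwn
  ownerReferences:
  - apiVersion: batch/v1
    kind: Job
    name: pi
    uid: 00000000-0000-0000-0000-000000000000   # hypothetical placeholder
    controller: true
    blockOwnerDeletion: true
```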

4) Run Job in parallel


Sometimes there is a requirement like this: you want the Job to run with as much parallelism as possible, spawning n Pods at once so it finishes quickly. At the same time, because the number of nodes is limited, you may not want too many Pods running in parallel at the same time. This is the idea of a pipeline: you specify a maximum degree of parallelism, and the Job controller enforces it for you.

Two parameters matter here: one is completions, the other is parallelism

  • The first parameter specifies the total number of executions in this Pod queue — the total number of times this Job runs. Setting it to 8 here means the task will be executed 8 times in all.
  • The second parameter is the number of parallel executions — effectively the size of the buffer in the pipeline. Setting it to 2 means the Job's 8 executions run 2 Pods at a time, so there will be 4 batches in total.
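The batch arithmetic just described can be sketched in a few lines of Python (purely illustrative — this is not part of any Kubernetes API):

```python
import math

def job_batches(completions: int, parallelism: int) -> int:
    """Number of sequential batches a Job needs when it must finish
    `completions` Pods with at most `parallelism` Pods running at once."""
    return math.ceil(completions / parallelism)

# The example from the text: 8 completions with parallelism 2 -> 4 batches.
print(job_batches(8, 2))  # -> 4
```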
5) View parallel job running


The picture above shows the overall effect after the Job has run. First you can see the Job's name, then that it created a total of 8 Pods, which took 2 minutes 23 seconds to execute from the creation time.

Next, look at the actual Pods. There are 8 in total, each in the Completed state. Looking at their AGE values — 73s, 40s, 110s and 2m26s in the picture — each pair of Pods shares the same age: the pair at 40s was created last, and the pair at 2m26s was created first. In other words, two Pods were always created at the same time, ran in parallel, finished, and were then replaced by the next pair, until all 8 runs were done

6) CronJob syntax


Compared with Job, CronJob (timed task) has several different fields:

  • schedule : this field sets the schedule, and its time format is the same as the Linux crontab format

  • startingDeadlineSeconds : how long past its scheduled time each run of the Job may still be started. Sometimes a Job may fail to start for a long time after its scheduled point; if the delay exceeds this deadline, the CronJob skips that run

  • concurrencyPolicy : whether concurrent runs are allowed. For example, the CronJob fires every minute, but one run may take two minutes to finish, so when the next run is due the previous Job is still not complete. With Allow, a new Job is started every minute regardless of whether the previous one has finished; with Forbid, the new run is skipped while the previous Job is still running (the third option, Replace, cancels the running Job and starts the new one)

  • successfulJobsHistoryLimit / failedJobsHistoryLimit : after each run, a CronJob keeps the history of previous Jobs so you can inspect them later. Of course this amount cannot be unlimited, so these fields set how many finished Jobs to retain (the defaults are 3 successful and 1 failed).
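Since schedule uses the standard crontab notation, a few examples may help (these are ordinary crontab expressions, not taken from the course):

```yaml
# The five crontab-style fields of spec.schedule, left to right:
#   minute (0-59)  hour (0-23)  day-of-month (1-31)  month (1-12)  day-of-week (0-6)
schedule: "*/1 * * * *"   # every minute
# schedule: "0 * * * *"   # at minute 0 of every hour
# schedule: "30 2 * * 0"  # at 02:30 every Sunday
```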

3) Operation demonstration

1) Job creation and operation verification

job.yaml

apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl","-Mbignum=bpi","-wle","print bpi(2000)"]
      restartPolicy: Never
  backoffLimit: 4

hanxiantaodeMBP:yamls hanxiantao$ kubectl create -f job.yaml 
job.batch/pi created
hanxiantaodeMBP:yamls hanxiantao$ kubectl get jobs
NAME   COMPLETIONS   DURATION   AGE
pi     1/1           2m1s       2m26s
hanxiantaodeMBP:yamls hanxiantao$ kubectl get pods 
NAME       READY   STATUS      RESTARTS   AGE
pi-hltwn   0/1     Completed   0          2m34s
hanxiantaodeMBP:yamls hanxiantao$ kubectl logs pi-hltwn
3.141592653589793238462643383279502884197169399375105820974944592307816406286208998628034825342117067982148086513282306647093844609550582231725359408128481117450284102701938521105559644622948954930381964428810975665933446128475648233786783165271201909145648566923460348610454326648213393607260249141273724587006606315588174881520920962829254091715364367892590360011330530548820466521384146951941511609433057270365759591953092186117381932611793105118548074462379962749567351885752724891227938183011949129833673362440656643086021394946395224737190702179860943702770539217176293176752384674818467669405132000568127145263560827785771342757789609173637178721468440901224953430146549585371050792279689258923542019956112129021960864034418159813629774771309960518707211349999998372978049951059731732816096318595024459455346908302642522308253344685035261931188171010003137838752886587533208381420617177669147303598253490428755468731159562863882353787593751957781857780532171226806613001927876611195909216420198938095257201065485863278865936153381827968230301952035301852968995773622599413891249721775283479131515574857242454150695950829533116861727855889075098381754637464939319255060400927701671139009848824012858361603563707660104710181942955596198946767837449448255379774726847104047534646208046684259069491293313677028989152104752162056966024058038150193511253382430035587640247496473263914199272604269922796782354781636009341721641219924586315030286182974555706749838505494588586926995690927210797509302955321165344987202755960236480665499119881834797753566369807426542527862551818417574672890977772793800081647060016145249192173217214772350141441973568548161361157352552133475741849468438523323907394143334547762416862518983569485562099219222184272550254256887671790494601653466804988627232791786085784383827967976681454100953883786360950680064225125205117392984896084128488626945604241965285022210661186306744278622039194945047123713786960956364371917287467764657573962413890865832645995813390478027590
1

2) Parallel Job creation and operation verification

job1.yaml

apiVersion: batch/v1
kind: Job
metadata:
  name: paral-1
spec:
  completions: 8
  parallelism: 2
  template:
    spec:
      containers:
      - name: param
        image: ubuntu
        command: ["/bin/bash"]
        args: ["-c","sleep 30;date"]
      restartPolicy: OnFailure

hanxiantaodeMBP:yamls hanxiantao$ kubectl create -f job1.yaml 
job.batch/paral-1 created
hanxiantaodeMBP:yamls hanxiantao$ kubectl get jobs
NAME      COMPLETIONS   DURATION   AGE
paral-1   0/8           10s        10s
hanxiantaodeMBP:yamls hanxiantao$ kubectl get pods
NAME            READY   STATUS    RESTARTS   AGE
paral-1-9gs52   1/1     Running   0          22s
paral-1-vjc5v   1/1     Running   0          22s
hanxiantaodeMBP:yamls hanxiantao$ kubectl get pods
NAME            READY   STATUS      RESTARTS   AGE
paral-1-7t6qf   0/1     Completed   0          102s
paral-1-9gs52   0/1     Completed   0          2m31s
paral-1-fps7x   0/1     Completed   0          107s
paral-1-hflsd   0/1     Completed   0          39s
paral-1-qfnk9   0/1     Completed   0          37s
paral-1-t5dqw   0/1     Completed   0          70s
paral-1-vjc5v   0/1     Completed   0          2m31s
paral-1-vqh7d   0/1     Completed   0          73s

3) Cronjob creation and operation verification

cron.yaml

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
       spec:
        containers:
        - name: hello
          image: busybox
          args: 
          - /bin/sh
          - -c
          - date;echo Hello from the Kubernetes Cluster
        restartPolicy: OnFailure
  startingDeadlineSeconds: 10
  concurrencyPolicy: Allow
  successfulJobsHistoryLimit: 3

hanxiantaodeMBP:yamls hanxiantao$ kubectl create -f cron.yaml 
cronjob.batch/hello created
hanxiantaodeMBP:yamls hanxiantao$ kubectl get jobs
No resources found in default namespace.
hanxiantaodeMBP:yamls hanxiantao$ kubectl get jobs
NAME               COMPLETIONS   DURATION   AGE
hello-1609464960   1/1           4s         5s
hanxiantaodeMBP:yamls hanxiantao$ kubectl get jobs
NAME               COMPLETIONS   DURATION   AGE
hello-1609464960   1/1           4s         66s
hello-1609465020   1/1           3s         6s

Since this CronJob fires once a minute, running kubectl get jobs immediately after creating it may show no Job yet, so you need to wait a little while

4) Architecture design


The Job Controller's main work is still to create the corresponding Pods, and then to track the Job's status, retrying or continuing to create Pods in time according to the configuration we submitted. Each Pod carries a label that ties it back to the Job that owns it, and the controller also creates Pods in parallel or serially according to the configured parallelism


The figure above shows the main flow of the Job controller. All Jobs are handled by one controller, which watches the API Server. Every time a Job is submitted, its YAML is written to etcd through the API Server; the Job Controller registers several handlers, and whenever there is an add, update, or delete event, it is delivered to the controller through an in-memory message queue

The Job Controller then checks whether there are currently running Pods. If not, it creates Pods (scale up); if there are more than the desired number, it scales down. If a Pod's state changes in the meantime, the controller updates its status in time

At the same time, the controller checks whether the Job is parallel or serial and creates the right number of Pods according to the configured parallelism. Finally, it updates the Job's overall status to the API Server, so that we can see the final result
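The scale-up/scale-down decision described above can be sketched as a toy function (a deliberately simplified illustration, not the real controller code — the function name and signature are made up):

```python
def reconcile_job(completions, parallelism, active, succeeded):
    """Toy version of the Job controller's scale decision: given the desired
    totals and the Pods that currently exist, return how many Pods to
    create (positive) or delete (negative)."""
    remaining = completions - succeeded      # work still to be done
    if remaining <= 0:
        return -active                       # job finished: clean up extras
    desired_active = min(parallelism, remaining)
    return desired_active - active

# 8 completions, parallelism 2, nothing running yet -> create 2 Pods.
print(reconcile_job(8, 2, 0, 0))  # -> 2
```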

2、DaemonSet

1) Source of demand


2) Use case interpretation

1) DaemonSet syntax


First comes kind: DaemonSet. The syntax of DaemonSet shares a lot with Deployment: it has matchLabels, through which it manages the corresponding Pods — the Pods' labels must match the DaemonSet controller's selector so that the controller can find the Pods it manages. Everything in spec.template's containers below is the same as an ordinary Pod spec

Here fluentd is used as the example. The most common use cases for DaemonSet are the following:

  • First, storage. Systems like GlusterFS or Ceph need to run an agent-like process on every node, and DaemonSet meets this need well.

  • Second, log collection, such as logstash or fluentd — the same requirement: every node needs to run an agent, so that each node's state can be collected easily and its information reported upstream in time.

  • Finally, monitoring: every node needs to run the same monitoring components, such as Prometheus, which also relies on the support of DaemonSet.

2) View the status of DaemonSet


After creating the DaemonSet, you can use kubectl get ds (ds is the short name for DaemonSet) to view it. The return values of a DaemonSet are very similar to a Deployment's: how many are currently running, how many are desired, and how many are READY. Note that everything a DaemonSet ultimately creates is a Pod, so READY counts Pods.

There are several fields here: the number of Pods needed, the number already created, the number ready, the number available — those that have passed the health check — and NODE SELECTOR. NODE SELECTOR is the node-selection label: sometimes we want only some nodes, rather than all of them, to run this Pod, so if some nodes are labeled, the DaemonSet runs only on those nodes
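As a sketch of that node-selection behaviour, a DaemonSet can restrict itself to labeled nodes via nodeSelector in its Pod template (the disktype label below is a made-up example, not from the course):

```yaml
# Hypothetical fragment: only nodes labeled "disktype=ssd"
# would run this DaemonSet's Pod.
spec:
  template:
    spec:
      nodeSelector:
        disktype: ssd
```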

3) Update DaemonSet


DaemonSet and Deployment are in fact very similar: a DaemonSet also has two update strategies, RollingUpdate and OnDelete

  • RollingUpdate updates Pods one by one: the first Pod is updated, the old Pod is removed, and after the new one passes its health check the second Pod is updated, so the business upgrades smoothly without interruption

  • OnDelete is also a very useful update strategy: after the template is updated, nothing happens to the existing Pods — updates are controlled manually. If we delete the Pod on a certain node, it is rebuilt from the new template; if we don't delete it, it is left alone. This works especially well for special cases that need manual control.
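These two strategies map onto spec.updateStrategy; a minimal sketch (RollingUpdate with maxUnavailable 1 is the Kubernetes default, the commented alternative shows OnDelete):

```yaml
spec:
  updateStrategy:
    type: RollingUpdate        # update Pods one by one, node after node
    rollingUpdate:
      maxUnavailable: 1        # at most one node's Pod is down at a time
# or:
#  updateStrategy:
#    type: OnDelete            # only rebuild a Pod after it is deleted manually
```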

3) Operation demonstration

1) Orchestration of DaemonSet

daemonSet.yaml

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
     containers:
     - name: fluentd-elasticsearch
       image: fluent/fluentd:v1.4-1

hanxiantaodeMBP:yamls hanxiantao$ kubectl create -f daemonSet.yaml 
daemonset.apps/fluentd-elasticsearch created
hanxiantaodeMBP:yamls hanxiantao$ kubectl get ds
NAME                    DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
fluentd-elasticsearch   1         1         1       1            1           <none>          35s
hanxiantaodeMBP:yamls hanxiantao$ kubectl get pods
NAME                          READY   STATUS    RESTARTS   AGE
fluentd-elasticsearch-h9jb9   1/1     Running   0          2m17s

2) Update of DaemonSet

hanxiantaodeMBP:yamls hanxiantao$ kubectl set image ds/fluentd-elasticsearch fluentd-elasticsearch=fluent/fluentd:v1.4
daemonset.apps/fluentd-elasticsearch image updated
hanxiantaodeMBP:yamls hanxiantao$ kubectl rollout status ds/fluentd-elasticsearch
daemon set "fluentd-elasticsearch" successfully rolled out

4) Architecture design


DaemonSet is also a controller, and its final real business unit is also a Pod. DaemonSet is very similar to the Job controller: it also watches the API Server and creates Pods in time. The difference is that it monitors node status: when a node is added or disappears, it creates the corresponding Pod on that node, selecting nodes according to the labels you configured.

The DaemonSet controller does much the same thing as the Job controller: both watch state through the API Server. The only difference is that the DaemonSet controller also needs to watch node state — though node state, too, reaches etcd through the API Server.

When a node's state changes, the event is delivered through an in-memory message queue, and the DaemonSet controller checks whether each node has the corresponding Pod. If not, it creates one. If the Pod exists, it compares versions and then, depending on whether RollingUpdate is configured, updates it; with OnDelete, it waits for the Pod to be deleted and then checks whether to update or recreate the corresponding Pod.

Finally, when all updates are complete, it updates the status of the whole DaemonSet to the API Server, finishing the update.
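The per-node reconciliation described above can also be sketched as a toy function (an illustration only — the function name and data shapes are made up, not the real controller code):

```python
def reconcile_daemonset(nodes, pods_by_node, node_selector=None):
    """Toy version of the DaemonSet control loop: make sure every matching
    node runs exactly one Pod.  Returns the node names that need a new Pod
    and the node names whose Pods should be removed."""
    wanted = {name for name, labels in nodes.items()
              if node_selector is None
              or all(labels.get(k) == v for k, v in node_selector.items())}
    have = set(pods_by_node)
    return sorted(wanted - have), sorted(have - wanted)

# node-1 matches the selector but has no Pod; node-2 has a Pod but no longer matches.
nodes = {"node-1": {"disktype": "ssd"}, "node-2": {"disktype": "hdd"}}
create, delete = reconcile_daemonset(nodes, {"node-2": "pod-x"},
                                     node_selector={"disktype": "ssd"})
print(create, delete)  # -> ['node-1'] ['node-2']
```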

6. Application configuration management

1. Source of demand


2、ConfigMap

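The slides only sketch ConfigMap; as a minimal standard example of a ConfigMap and a Pod consuming it through an environment variable (the names app-config, log.level and so on are made up for illustration):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  log.level: debug
---
# Consuming the ConfigMap in a Pod as an environment variable:
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: busybox
    env:
    - name: LOG_LEVEL
      valueFrom:
        configMapKeyRef:
          name: app-config
          key: log.level
```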

3、Secret

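Likewise, a minimal Secret sketch (standard Kubernetes usage; the name and values are made up — data values must be base64-encoded, and YWRtaW4= here decodes to admin):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  username: YWRtaW4=
  password: cGFzc3dvcmQ=
```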

4、ServiceAccount


5、Resource


6、SecurityContext


7、InitContainer


Course address : https://edu.aliyun.com/roadmap/cloudnative?spm=5176.11399608.aliyun-edu-index-014.4.dc2c4679O3eIId#suit

Origin blog.csdn.net/qq_40378034/article/details/112061346