Cloud Native Technology Open Course Study Notes: Application Orchestration and Management: Core Principles, Deployment

3. Application Orchestration and Management: Core Principles

1. Resource metadata


A Kubernetes resource object consists of two main parts: Spec and Status. Spec describes the desired state, and Status describes the observed state

The metadata part of a Kubernetes resource mainly includes Labels, used to identify resources; Annotations, used to describe resources; and OwnerReferences, used to describe the relationship between resources

1)、Labels


Labels are identifying key:value metadata. Labels are mainly used to filter and group resources: you can query related resources based on labels, much like a SQL SELECT with a WHERE clause.

2)、Selector

The most common Selector is the equality-based Selector


Suppose there are four Pods in the system, and each Pod has labels identifying its system tier and environment. With the Tie:front label, the Pods in the left column can be matched. An equality-based Selector can also include multiple equality conditions, which are combined with a logical AND


With the Selector Tie=front,Env=dev, you can filter out all Pods that have both Tie=front and Env=dev, which is the Pod in the upper left corner of the figure. The other kind of Selector is the set-based Selector. In the example, the Selector filters all Pods whose environment is test or gray.


Besides the in set operation, there is also notin, for example tie notin (front, back), which filters all Pods whose tie is neither front nor back. You can also filter on the mere existence of a label: for example, the Selector release matches all Pods that carry a release label, regardless of its value. Set-based and equality-based Selector terms can be combined with commas, which express a logical AND
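As a rough sketch of these selector semantics, the snippet below evaluates equality, in, notin, and existence terms combined with logical AND. The Pods and labels are made up for illustration; real selectors are evaluated by the Kubernetes API server.

```python
def matches(labels, equals=None, in_set=None, notin=None, exists=None):
    """Return True if a label dict satisfies all selector terms (logical AND)."""
    for k, v in (equals or {}).items():        # equality terms, e.g. tie=front
        if labels.get(k) != v:
            return False
    for k, vals in (in_set or {}).items():     # set terms, e.g. env in (test, gray)
        if labels.get(k) not in vals:
            return False
    for k, vals in (notin or {}).items():      # e.g. tie notin (front, back)
        if labels.get(k) in vals:
            return False
    for k in (exists or []):                   # existence terms, e.g. release
        if k not in labels:
            return False
    return True

pods = {
    "pod1": {"tie": "front", "env": "dev"},
    "pod2": {"tie": "back", "env": "test"},
    "pod3": {"tie": "front", "env": "gray", "release": "stable"},
}

# tie=front,env=dev -> only pod1
print([n for n, l in pods.items() if matches(l, equals={"tie": "front", "env": "dev"})])
# env in (test, gray) -> pod2 and pod3
print([n for n, l in pods.items() if matches(l, in_set={"env": ["test", "gray"]})])
# release exists -> pod3
print([n for n, l in pods.items() if matches(l, exists=["release"])])
```

The same AND rule applies when mixing term types: adding more terms can only narrow the result set.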

3)、Annotations

Annotations are generally used by systems or tools to store non-identifying information about a resource, and can be used to extend the description of the resource's spec/status


4)、OwnerReference


An OwnerReference generally points to a collection resource. For Pods, collection resources include ReplicaSet and StatefulSet

The controller of a collection resource creates the corresponding owned resources. For example, the ReplicaSet controller creates Pods while it runs, and the ownerReferences of each created Pod points back to the ReplicaSet that created it. OwnerReferences make it easy to find the object that created a resource, and can also be used to implement cascading deletion.
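The lookup direction here can be sketched with simplified object shapes: given a Pod, find its controlling owner by scanning metadata.ownerReferences for the entry with controller: true. The names and uids below are invented for illustration.

```python
# Simplified stand-ins for API object metadata; not real client objects.
replicaset = {"kind": "ReplicaSet", "metadata": {"name": "rsA", "uid": "rs-uid-1"}}

pod = {
    "kind": "Pod",
    "metadata": {
        "name": "rsA-x7k2p",
        "ownerReferences": [
            {"kind": "ReplicaSet", "name": "rsA", "uid": "rs-uid-1", "controller": True}
        ],
    },
}

def controller_of(obj):
    """Return the ownerReference marked controller: true, if any."""
    for ref in obj["metadata"].get("ownerReferences", []):
        if ref.get("controller"):
            return ref
    return None

owner = controller_of(pod)
print(owner["kind"], owner["name"])  # the Pod is controlled by ReplicaSet rsA
```

Cascading deletion works in the opposite direction: the garbage collector deletes objects whose owners no longer exist.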

2. Viewing and modifying Kubernetes metadata with kubectl

pod1.yaml

apiVersion: v1
kind: Pod
metadata:
  name: nginx1
  namespace: default
  labels:
    env: dev
    tie: front  
spec:
  containers:
  - name: nginx
    image: nginx:1.8
    ports:
    - containerPort: 80      

pod2.yaml

apiVersion: v1
kind: Pod
metadata:
  name: nginx2
  namespace: default
  labels:
    env: dev
    tie: front  
spec:
  containers:
  - name: nginx
    image: nginx:1.8
    ports:
    - containerPort: 80      

Create two pods

hanxiantaodeMBP:yamls hanxiantao$ kubectl get pods
No resources found in default namespace.
hanxiantaodeMBP:yamls hanxiantao$ kubectl apply -f pod1.yaml 
pod/nginx1 created
hanxiantaodeMBP:yamls hanxiantao$ kubectl apply -f pod2.yaml 
pod/nginx2 created

View the labels of all Pods

hanxiantaodeMBP:yamls hanxiantao$ kubectl get pods --show-labels
NAME     READY   STATUS    RESTARTS   AGE   LABELS
nginx1   1/1     Running   0          62s   env=dev,tie=front
nginx2   1/1     Running   0          58s   env=dev,tie=front

View the detailed information of the nginx1 pod

hanxiantaodeMBP:yamls hanxiantao$ kubectl get pods nginx1 -o yaml | less

Change the Pod's env label to env=test

hanxiantaodeMBP:yamls hanxiantao$ kubectl label pods nginx1 env=test
error: 'env' already has a value (dev), and --overwrite is false
hanxiantaodeMBP:yamls hanxiantao$ kubectl label pods nginx1 env=test --overwrite
pod/nginx1 labeled
hanxiantaodeMBP:yamls hanxiantao$ kubectl get pods --show-labels
NAME     READY   STATUS    RESTARTS   AGE     LABELS
nginx1   1/1     Running   0          5m39s   env=test,tie=front
nginx2   1/1     Running   0          5m35s   env=dev,tie=front

Remove the tie label from the Pod

hanxiantaodeMBP:yamls hanxiantao$ kubectl label pods nginx1 tie-
pod/nginx1 labeled
hanxiantaodeMBP:yamls hanxiantao$ kubectl get pods --show-labels
NAME     READY   STATUS    RESTARTS   AGE     LABELS
nginx1   1/1     Running   0          6m58s   env=test
nginx2   1/1     Running   0          6m54s   env=dev,tie=front

Filter pods by label

hanxiantaodeMBP:yamls hanxiantao$ kubectl get pods --show-labels -l env=test
NAME     READY   STATUS    RESTARTS   AGE     LABELS
nginx1   1/1     Running   0          7m33s   env=test
hanxiantaodeMBP:yamls hanxiantao$ kubectl get pods --show-labels -l env=test,env=dev
No resources found in default namespace.
hanxiantaodeMBP:yamls hanxiantao$ kubectl get pods --show-labels -l env=dev,tie=front
NAME     READY   STATUS    RESTARTS   AGE    LABELS
nginx2   1/1     Running   0          8m4s   env=dev,tie=front
hanxiantaodeMBP:yamls hanxiantao$ kubectl get pods --show-labels -l 'env in (test,dev)'
NAME     READY   STATUS    RESTARTS   AGE     LABELS
nginx1   1/1     Running   0          8m36s   env=test
nginx2   1/1     Running   0          8m32s   env=dev,tie=front

Add an annotation

hanxiantaodeMBP:yamls hanxiantao$ kubectl annotate pods nginx1 my-annotate='my comment, ok'
pod/nginx1 annotated
hanxiantaodeMBP:yamls hanxiantao$ kubectl get pods -o yaml | less
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"labels":{"env":"dev","tie":"front"},"name":"nginx1","namespace":"default"},"spec":{"containers":[{"image":"nginx:1.8","name":"nginx","ports":[{"containerPort":80}]}]}}
    my-annotate: my comment, ok
  creationTimestamp: "2020-12-24T00:18:18Z"
  labels:
    env: test

rs.yaml

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-replicasets
  namespace: default
  labels:
    env: prod
spec:
  replicas: 2
  selector:
   matchLabels:
    env: prod
  template:
    metadata:
      labels:
        env: prod
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80       

Create Pods by creating a ReplicaSet object; the created Pods carry ownerReferences information

hanxiantaodeMBP:yamls hanxiantao$ kubectl apply -f rs.yaml 
replicaset.apps/nginx-replicasets created
hanxiantaodeMBP:yamls hanxiantao$ kubectl get pods 
NAME                      READY   STATUS    RESTARTS   AGE
nginx-replicasets-ld4n6   1/1     Running   0          2m3s
nginx-replicasets-xvr6k   1/1     Running   0          2m3s
nginx1                    1/1     Running   0          27m
nginx2                    1/1     Running   0          27m
hanxiantaodeMBP:yamls hanxiantao$ kubectl get pods nginx-replicasets-ld4n6 -o yaml | less
  name: nginx-replicasets-ld4n6
  namespace: default
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: nginx-replicasets
    uid: 5beab4c4-3aae-4c6c-a3f2-3822a2e3ba3a
  resourceVersion: "567337"
  selfLink: /api/v1/namespaces/default/pods/nginx-replicasets-ld4n6
  uid: 9dbcafce-72ec-4088-a0ab-3eccaba95c2d
spec:
  containers:
  - image: nginx:1.7.9
    imagePullPolicy: IfNotPresent
    name: nginx
    ports:
    - containerPort: 80
      protocol: TCP
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-6c86p
      readOnly: true

3. The controller pattern

1)、Control loop


The core of the controller pattern is the control loop, which consists of three logical components: the controller, the controlled system, and a sensor that observes the system

The outside world controls a resource by modifying its spec. The controller compares the resource's spec and status to compute a diff, and the diff determines what control operation to perform on the system. The control operation causes the system to produce new output, which the sensor reports back in the form of resource status. Each component of the controller runs independently, and the system continuously converges toward the end state declared in the spec.
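A minimal sketch of this loop: the controller repeatedly diffs the desired replica count (spec) against what the sensor observes (status) and acts on the system until they converge. Real controllers are event-driven; the System class and polling style here are only illustrative.

```python
class System:
    """The controlled system, as observed by the sensor."""
    def __init__(self):
        self.running = 0           # observed state (status)

    def start_instance(self):
        self.running += 1

    def stop_instance(self):
        self.running -= 1

def reconcile(spec_replicas, system):
    """One pass of the control loop: diff spec against status, then act."""
    diff = spec_replicas - system.running   # desired minus observed
    if diff > 0:
        for _ in range(diff):
            system.start_instance()         # scale up toward the spec
    elif diff < 0:
        for _ in range(-diff):
            system.stop_instance()          # scale down toward the spec

system = System()
reconcile(3, system)   # spec says 3, status says 0 -> start 3
print(system.running)  # 3
reconcile(2, system)   # spec lowered to 2 -> stop 1
print(system.running)  # 2
```

Note that the controller never stores the diff: it recomputes it on every pass, so it self-corrects even if an earlier operation was lost.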

2)、Sensor


The logical sensor in the control loop is mainly composed of three components: Reflector, Informer, and Indexer

The Reflector obtains resource data from the Kubernetes API server via List and Watch. List is used to refresh the full set of system resources when the controller restarts or a Watch is interrupted, while Watch performs incremental updates between Lists. After the Reflector obtains new resource data, it inserts a Delta record into the Delta queue; the record contains the resource object itself and the event type. The Delta queue guarantees there is at most one record per object in the queue, avoiding duplicate records when the Reflector re-Lists and re-Watches

The Informer component continuously pops Delta records from the Delta queue and hands each resource object to the Indexer, which records the resource in a cache. The cache is indexed by the resource's namespace by default and can be shared by the Controller Manager or by multiple controllers. The Informer then passes the event to the registered event callback

The controller components in the control loop mainly consist of event handlers and workers. The event handlers watch for add, update, and delete events on resources and decide, according to the controller's logic, whether each event needs handling. For events that need handling, the namespace and name of the related resource are put into a work queue, to be processed later by a worker from a worker pool. The work queue de-duplicates stored items, avoiding the situation where multiple workers handle the same resource

When a worker processes a resource object, it generally uses the resource name to retrieve the latest resource data, creates or updates the resource object, or calls other external services. If the worker fails to process a resource, it generally puts the resource name back into the work queue so it can be retried later
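The de-duplicating work queue described above can be sketched as follows. Keys (namespace/name strings) are queued at most once until a worker takes them, loosely mirroring client-go's workqueue; this version is single-threaded and simplified for illustration.

```python
from collections import deque

class WorkQueue:
    def __init__(self):
        self._queue = deque()
        self._pending = set()   # keys currently waiting in the queue

    def add(self, key):
        if key not in self._pending:   # duplicate adds collapse into one
            self._pending.add(key)
            self._queue.append(key)

    def get(self):
        key = self._queue.popleft()
        self._pending.discard(key)     # key may be re-added after this point
        return key

    def __len__(self):
        return len(self._queue)

q = WorkQueue()
q.add("nsA/rsA")
q.add("nsA/rsA")    # second event for the same resource: de-duplicated
q.add("nsA/rsB")
print(len(q))       # 2, not 3
print(q.get())      # nsA/rsA
q.add("nsA/rsA")    # after a processing failure, the worker re-queues for retry
```

Because only keys are queued, the worker always fetches the latest object from the cache, so collapsing duplicate events loses nothing.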

3)、Control loop example: scaling up


ReplicaSet is a resource that describes the scaling behavior of stateless applications. The ReplicaSet controller watches ReplicaSet resources to maintain the desired number of replicas, and a ReplicaSet uses a selector to match its associated Pods. Consider a ReplicaSet rsA whose replicas field is changed from 2 to 3


First, the Reflector watches for changes to both ReplicaSet and Pod resources. When it sees that the ReplicaSet has changed, it inserts a record into the delta queue whose object is rsA and whose type is Updated

The Informer, on the one hand, updates the new ReplicaSet in the cache, indexed by the namespace nsA; on the other hand, it calls the Update callback. On discovering that the ReplicaSet has changed, the ReplicaSet controller inserts the string nsA/rsA into the work queue. A worker behind the work queue takes the key nsA/rsA from it and retrieves the latest ReplicaSet data from the cache

The worker compares the values in the ReplicaSet's spec and status and finds that the ReplicaSet needs to scale up, so it creates a Pod whose ownerReferences points to the ReplicaSet rsA.


The Reflector's Watch then sees the Pod-added event and adds an Added-type delta record to the delta queue. On the one hand, the new Pod record is stored in the cache through the Indexer; on the other hand, the ReplicaSet controller's Add callback is called. The Add callback finds the corresponding ReplicaSet by checking the Pod's ownerReferences, and puts the ReplicaSet's namespace/name string into the work queue

After getting the new work item, the ReplicaSet's worker fetches the latest ReplicaSet record from the cache and lists all the Pods it has created. Because the ReplicaSet's status is no longer up to date (the count of created Pods has changed), the worker updates the status so that spec and status agree
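The walkthrough above can be condensed into a toy reconcile function: diff spec.replicas against the Pods owned by the ReplicaSet, create or delete Pods accordingly, then sync status. The object shapes are simplified stand-ins, not real API types.

```python
def reconcile_replicaset(rs, pods):
    """Create or delete Pods so the count matches spec.replicas, then sync status."""
    owned = [p for p in pods if p["owner"] == rs["name"]]
    want, have = rs["spec"]["replicas"], len(owned)
    if want > have:
        for i in range(want - have):              # scale up: create missing Pods
            pods.append({"name": f"{rs['name']}-new{i}", "owner": rs["name"]})
    elif want < have:
        for p in owned[want:]:                    # scale down: delete extras
            pods.remove(p)
    rs["status"]["replicas"] = rs["spec"]["replicas"]   # status catches up to spec
    return pods

# rsA's replicas was just changed from 2 to 3; two Pods already exist.
rsA = {"name": "rsA", "spec": {"replicas": 3}, "status": {"replicas": 2}}
pods = [{"name": "rsA-1", "owner": "rsA"}, {"name": "rsA-2", "owner": "rsA"}]
pods = reconcile_replicaset(rsA, pods)
print(len(pods), rsA["status"]["replicas"])   # 3 Pods, status synced to 3
```

In the real controller the two halves happen on separate loop iterations (create the Pod, then observe it and update status), but the end state is the same.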

4. Summary of the controller pattern


The controller pattern adopted by Kubernetes is driven by the declarative API; more precisely, it is driven by modifications to Kubernetes resource objects

Behind each Kubernetes resource are the controllers that watch that resource. These controllers asynchronously drive the controlled system toward the declared end state

These controllers operate autonomously, making automated, unattended operation of the system possible

Because Kubernetes controllers and resources can be customized, the controller pattern is easy to extend. Especially for stateful applications, we often automate operations by defining custom resources and controllers; this is the operator scenario

4. Application Orchestration and Management: Deployment

1. Motivation


If you manage all Pods in the cluster directly, the Pods of applications A, B, and C end up scattered across the cluster.

Now there are the following problems:

  • First, how do we guarantee the number of available Pods in the cluster? If application A needs four Pods, how do we keep that many available when a host fails or the network has problems?
  • How do we update the image version for all Pods? Do we have to rebuild each Pod by hand to roll out a new version?
  • During the update, how do we keep the service available?
  • And if a problem is found during the update, how do we quickly roll back to the previous version?


With Deployment, applications A, B, and C are organized into separate Deployments. Each Deployment manages a group of Pods of the same application, and this group of Pods is treated as identical replicas. So what does Deployment do for us?

1) First, Deployment defines an expected number of Pods. For application A, for example, we expect four Pods. The controller continuously maintains this expected number: when a Pod becomes unavailable due to a network or host problem, the controller recovers it by creating a replacement Pod, keeping the number of available Pods consistent with the expectation

2) It configures the Pod release strategy: the controller updates Pods according to the policy given by the user, and during the update you can also bound the number of unavailable Pods

3) If a problem occurs during an update, there is so-called one-click rollback: with one command (or one line of modification) you can roll all Pods under the Deployment back to an old version.

2. Use case interpretation

1), Deployment syntax


apiVersion: apps/v1 means that the group the Deployment belongs to is apps, and the version is v1

As a K8s resource, a Deployment has its own metadata. The metadata.name defined here is nginx-deployment

In Deployment.spec, the first core field is replicas, which defines the expected number of Pods as three. The selector is a Pod selector: the labels on the Pod template (template.labels) must match the selector's matchLabels, here app: nginx

2)、Viewing Deployment status


You can see the overall status of the Deployment with kubectl get deployment

  • DESIRED : the expected number of Pods, here 3
  • CURRENT : the current actual number of Pods, here 3
  • UP-TO-DATE : the number of Pods that have reached the latest expected version
  • AVAILABLE : the number of Pods actually available while running. AVAILABLE is not simply Ready: it counts only Pods that have remained available for longer than a certain period
  • AGE : how long ago the Deployment was created; in the figure above, 80 minutes

3)、Viewing Pods


The first segment of a Pod's name, nginx-deployment, is the Deployment.name the Pod belongs to. The middle segment is the pod-template-hash; it is the same for all three Pods because they were created from the same template. The last segment is a random string

Via kubectl get pod -o yaml you can see that the Pod's ownerReferences, i.e. the controller resource the Pod belongs to, is not a Deployment but a ReplicaSet. The ReplicaSet's name is nginx-deployment plus the pod-template-hash. All Pods are created by the ReplicaSet, and each ReplicaSet corresponds to a specific version of the Deployment's template

4)、Updating the image


First, kubectl set image is a fixed phrase that refers to setting the image. Next, deployment.v1.apps is also a fixed form that specifies the type of resource to operate on: deployment is the resource name, v1 is the resource version, apps is the resource group. It can also be abbreviated as deployment or deployment.apps; for example, when written as deployment, the apps group and v1 version are used by default

The third part is the name of the Deployment to update, here nginx-deployment. The nginx that follows refers to the template, i.e. the container.name in the Pod. Note that a Pod may have multiple containers, so we specify the container.name of the image we want to update, which is nginx

Finally, specify the image version the container should be updated to, here nginx:1.9.1. As shown in the figure above, after this command runs, the template.spec in the Deployment has been updated to nginx:1.9.1

5), fast rollback


With kubectl, kubectl rollout undo rolls the Deployment back to the previous version; adding --to-revision to rollout undo lets you roll back to a specific revision

6)、DeploymentStatus


The three conditions described in DeploymentStatus represent its states: Processing, Complete, and Failed

Take Processing as an example: Processing means the Deployment is in the middle of scaling or releasing. When all replicas of a processing Deployment have reached the latest version and are available, it enters the Complete state. If a Complete Deployment then undergoes some scaling change, it re-enters the Processing state.

If problems are encountered during processing, e.g. the image pull fails or a readiness probe check fails, the Deployment enters the Failed state. Likewise, if a Pod's readiness probe fails while the Deployment is in the Complete state, the Deployment also enters Failed. From Failed, the Deployment re-enters Complete only once all replicas become available and are updated to the latest version
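These transitions can be encoded as a toy state function. This is a simplification for illustration: the real DeploymentStatus expresses state via Conditions (such as Progressing and Available) rather than a single enum.

```python
def deployment_state(spec_replicas, updated, available, probe_failed):
    """Classify a Deployment as Failed, Complete, or Processing."""
    if probe_failed:
        return "Failed"        # e.g. image pull or readiness probe failure
    if updated == spec_replicas and available == spec_replicas:
        return "Complete"      # every replica is new-version and available
    return "Processing"        # still scaling up/down or rolling out

# Mid-rollout: only 1 of 3 replicas updated so far.
print(deployment_state(3, updated=1, available=2, probe_failed=False))  # Processing
# Rollout finished: all replicas updated and available.
print(deployment_state(3, updated=3, available=3, probe_failed=False))  # Complete
# A readiness probe broke after completion.
print(deployment_state(3, updated=3, available=2, probe_failed=True))   # Failed
```

Once the probe recovers and all replicas are available again, the same function returns Complete, matching the recovery path described above.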

3. Operation demonstration

deployment-case.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
   matchLabels:
    app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80

Create the Deployment
hanxiantaodeMBP:yamls hanxiantao$ kubectl create -f deployment-case.yaml 
deployment.apps/nginx-deployment created
hanxiantaodeMBP:yamls hanxiantao$ kubectl get deployment nginx-deployment
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3/3     3            3           15s
hanxiantaodeMBP:yamls hanxiantao$ kubectl get pod
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-5d59d67564-528s8   1/1     Running   0          30s
nginx-deployment-5d59d67564-6znl8   1/1     Running   0          30s
nginx-deployment-5d59d67564-q47cp   1/1     Running   0          30s
hanxiantaodeMBP:yamls hanxiantao$ kubectl get replicaset
NAME                          DESIRED   CURRENT   READY   AGE
nginx-deployment-5d59d67564   3         3         3       99s

Upgrade the nginx image to nginx:1.9.1

hanxiantaodeMBP:yamls hanxiantao$ kubectl set image deployment nginx-deployment nginx=nginx:1.9.1
deployment.apps/nginx-deployment image updated
hanxiantaodeMBP:yamls hanxiantao$ kubectl edit deployment nginx-deployment
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.9.1
        imagePullPolicy: IfNotPresent
        name: nginx
        ports:
        - containerPort: 80
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
hanxiantaodeMBP:yamls hanxiantao$ kubectl get pod
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-69c44dfb78-6v2gv   1/1     Running   0          2m2s
nginx-deployment-69c44dfb78-gzbnf   1/1     Running   0          83s
nginx-deployment-69c44dfb78-j7wwl   1/1     Running   0          85s
hanxiantaodeMBP:yamls hanxiantao$ kubectl get replicaset
NAME                          DESIRED   CURRENT   READY   AGE
nginx-deployment-5d59d67564   0         0         0       8m33s
nginx-deployment-69c44dfb78   3         3         3       5m47s

Roll back to the previous version

hanxiantaodeMBP:yamls hanxiantao$ kubectl rollout undo deployment nginx-deployment
deployment.apps/nginx-deployment rolled back
hanxiantaodeMBP:yamls hanxiantao$ kubectl get replicaset
NAME                          DESIRED   CURRENT   READY   AGE
nginx-deployment-5d59d67564   3         3         3       12m
nginx-deployment-69c44dfb78   0         0         0       9m15s

4. Architecture design

1)、Management mode


Deployment is only responsible for managing different versions of ReplicaSets; each ReplicaSet manages the specific number of Pod replicas, and each ReplicaSet corresponds to one version of the Deployment's template.

As shown in the figure above: the Deployment creates ReplicaSets, and ReplicaSets create Pods; each object's ownerReferences points to its controlling resource

2), Deployment controller


All controllers register handlers and watch events through the Informer. The Deployment controller watches Deployment and ReplicaSet events and puts them into a queue when received. After the Deployment controller takes an item out of the queue, its logic checks Paused. Paused indicates whether the Deployment needs to perform a new rollout: if Paused is true, the Deployment only maintains the replica count and does not perform a new rollout

If Check Paused is Yes (true), the controller only syncs replicas: it synchronizes the replicas field to the corresponding ReplicaSet, then updates the Deployment status, and this round of reconciliation is done

If Paused is false, it performs a Rollout, i.e. an update via the Create or Rolling strategy; the update itself is implemented by creating, updating, or deleting ReplicaSets.
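The branch just described can be sketched as a decision function. The action names and object shape below are invented for illustration; the real controller lives in kube-controller-manager.

```python
def sync_deployment(deployment):
    """Return the ordered actions one reconciliation pass would take."""
    actions = []
    if deployment["spec"].get("paused", False):
        actions.append("sync-replicas")   # quantity maintenance only, no rollout
    else:
        actions.append("rollout")         # create/update/delete ReplicaSets
    actions.append("update-status")       # both paths end by updating status
    return actions

print(sync_deployment({"spec": {"paused": True, "replicas": 3}}))
print(sync_deployment({"spec": {"paused": False, "replicas": 3}}))
```

Pausing is useful for batching several spec edits (image, env, resources) into a single rollout once the Deployment is resumed.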

3), ReplicaSet controller


After the Deployment creates ReplicaSets, the ReplicaSet controller itself also watches events from the Informer, including both ReplicaSet and Pod events. After taking an item from the queue, the ReplicaSet controller's logic is very simple: it only manages the replica count. If it finds that replicas is larger than the number of Pods, it scales up; if the actual number exceeds the expected number, it deletes Pods.

As the Deployment controller diagram above shows, the Deployment controller does the more complex work, including version management, and delegates maintaining the replica count for each version to the ReplicaSet

4)、Scale-up simulation


Suppose a Deployment has replicas of 2, and the corresponding ReplicaSet has Pod1 and Pod2. If we modify the Deployment's replicas to 3, the controller synchronizes replicas to the current version's ReplicaSet. The ReplicaSet finds there are currently only 2 Pods, which does not satisfy the expectation of 3, so it creates a new Pod3.

5)、Release simulation


Initially, the Deployment's template is, say, template1, and the ReplicaSet corresponding to template1 has three Pods: Pod1, Pod2, Pod3

If the image of a container in the template is then modified, the Deployment controller creates a new ReplicaSet corresponding to template2. After that, the Deployment gradually adjusts the replica counts of the two ReplicaSets: it gradually increases the expected replica count of ReplicaSet2 and gradually decreases the number of Pods in ReplicaSet1

The final result: the new-version Pods are Pod4, Pod5, and Pod6, the old-version Pods have been deleted, and a release is complete.
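The gradual replica shift can be sketched as follows: each step moves one replica from the old ReplicaSet to the new one until the new version owns all of them. The step size and pacing are simplified; the real controller respects maxSurge and maxUnavailable from the rolling-update strategy.

```python
def rolling_update(old_rs, new_rs, total):
    """Shift replicas one at a time from old_rs to new_rs; return each step."""
    steps = []
    while new_rs["replicas"] < total:
        new_rs["replicas"] += 1          # scale up the new-version ReplicaSet
        old_rs["replicas"] -= 1          # then scale down the old-version one
        steps.append((old_rs["replicas"], new_rs["replicas"]))
    return steps

old = {"name": "rs-template1", "replicas": 3}
new = {"name": "rs-template2", "replicas": 0}
print(rolling_update(old, new, total=3))  # [(2, 1), (1, 2), (0, 3)]
```

A rollback is the same motion run in the other direction: the old-template ReplicaSet is scaled back up while the new one is scaled down to zero.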

6)、Rollback simulation


From the release simulation above, Pod4, Pod5, and Pod6 have been released. Now suppose the current version turns out to have a problem. Whether you roll back with the rollout undo command or by modifying the template directly, the template is rolled back to the old version, template1.

The Deployment then sets the expected Pod count in ReplicaSet1 back to 3, and gradually reduces the replica count of the new version's ReplicaSet2. The final effect is that Pods are recreated from the old version.

As the release simulation diagram shows, in the initial version Pod1, Pod2, and Pod3 were the old version, but after the rollback the Pods are Pod7, Pod8, and Pod9. That is, the rollback does not bring back the previous Pods; it recreates new Pods that conform to the old version's template.

7), spec field analysis

(course figure: Deployment spec field analysis; not reproduced here)

8), update strategy field analysis

(course figure: Deployment update strategy field analysis; not reproduced here)

Course address : https://edu.aliyun.com/roadmap/cloudnative?spm=5176.11399608.aliyun-edu-index-014.4.dc2c4679O3eIId#suit

Origin blog.csdn.net/qq_40378034/article/details/111808756