[Cloud Native] What is deployment in Kubernetes?


Table of contents

Deployments

Update Deployments

Rollback Deployment

Scale Deployments

Deployment Status

Cleanup Strategy

Canary Deployment

Write the Deployment specification


Deployments

A Deployment provides declarative update capabilities for Pods and ReplicaSets.

You describe a desired state in a Deployment, and the Deployment controller changes the actual state to the desired state at a controlled rate. You can define Deployments to create new ReplicaSets, or to remove existing Deployments and adopt all their resources with new Deployments.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80

In this example:

  • A Deployment named nginx-deployment is created, as indicated by the .metadata.name field. This name will become the basis for the ReplicaSets and Pods that are created later. See Writing a Deployment Specification for more details.

  • The Deployment creates a ReplicaSet that creates three replicated Pods, as indicated by the .spec.replicas field.

  • The .spec.selector field defines how the created ReplicaSet finds which Pods to manage. Here, you select the label that is defined in the Pod template (app: nginx). However, more sophisticated selection rules are possible, as long as the Pod template itself satisfies the rule.

    Note:

    The .spec.selector.matchLabels field is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions whose key field is "key", whose operator is "In", and whose values array contains only "value". All of the requirements, from both matchLabels and matchExpressions, must be satisfied in order to match.

  • The template field contains the following sub-fields:
    • The Pods are labeled app: nginx using the .metadata.labels field.
    • The Pod template's specification (the .template.spec field) indicates that the Pods run one container, nginx, which runs version 1.14.2 of the nginx image from Docker Hub.
    • The container is created and named nginx using the .spec.template.spec.containers[0].name field.
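The note on selectors above describes an equivalence between matchLabels and matchExpressions. As a sketch, the app: nginx selector from this example can be written in either form:

```yaml
# matchLabels form, as used in the example above:
selector:
  matchLabels:
    app: nginx
---
# Equivalent matchExpressions form: each {key,value} pair in
# matchLabels maps to one element with operator "In" and a
# single-entry values array.
selector:
  matchExpressions:
  - key: app
    operator: In
    values:
    - nginx
```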

Before starting, make sure your Kubernetes cluster is up and running. Follow the steps below to create the above Deployment:

1. Create a Deployment by running the following command:

kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml

2. Run  kubectl get deployments to check whether the Deployment has been created. If the Deployment is still being created, the output will be similar to:

NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   0/3     0            0           1s

3. When checking the Deployment in the cluster, the displayed fields are:

Note that the desired number of replicas is 3, as set by the .spec.replicas field.

  • NAME lists the names of the Deployments in the namespace.
  • READY displays how many replicas of the application are available, in the pattern "ready count/desired count".
  • UP-TO-DATE displays the number of replicas that have been updated to achieve the desired state.
  • AVAILABLE displays how many replicas of the application are available to your users.
  • AGE displays the amount of time that the application has been running.

4. To view the rollout status of the Deployment, run  kubectl rollout status deployment/nginx-deployment.

The output is similar to:

Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
deployment "nginx-deployment" successfully rolled out

5. Run  kubectl get deployments again a few seconds later. The output is similar to:

NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3/3     3            3           18s

Notice that the Deployment has created all three replicas, and all replicas are up to date (they contain the latest pod templates) and available.

6. To see the ReplicaSet (rs) created by the Deployment, run  kubectl get rs. The output is similar to:

NAME                          DESIRED   CURRENT   READY   AGE
nginx-deployment-75675f5897   3         3         3       18s

7. The ReplicaSet output contains the following fields:

Note that the name of a ReplicaSet is always formatted as  [Deployment name]-[hash]. This name will become the basis for the Pods that are created. The hash string matches the  pod-template-hash label on the ReplicaSet.

  • NAME lists the names of the ReplicaSets in the namespace;
  • DESIRED displays the desired number of replicas of the application, which you define when you create the Deployment. This is the desired state;
  • CURRENT displays how many replicas are currently running;
  • READY displays how many replicas of the application are available to your users;
  • AGE displays the amount of time that the application has been running.

8. To see the automatically generated labels for each Pod, run  kubectl get pods --show-labels. The output is similar to:

NAME                                READY     STATUS    RESTARTS   AGE       LABELS
nginx-deployment-75675f5897-7ci7o   1/1       Running   0          18s       app=nginx,pod-template-hash=75675f5897
nginx-deployment-75675f5897-kzszj   1/1       Running   0          18s       app=nginx,pod-template-hash=75675f5897
nginx-deployment-75675f5897-qqcnn   1/1       Running   0          18s       app=nginx,pod-template-hash=75675f5897

The created ReplicaSet ensures that there are always three  nginx Pods.

Note:

You must specify an appropriate selector and Pod template labels in a Deployment (in this case,  app: nginx). Do not overlap labels or selectors with other controllers (including other Deployments and StatefulSets). Kubernetes doesn't stop you from overlapping, but if multiple controllers have overlapping selectors, they may conflict and behave unexpectedly.

The pod-template-hash label

Note:

Do not change this label.

The Deployment controller adds a  pod-template-hash label to every ReplicaSet that a Deployment creates or adopts.

This label ensures that child ReplicaSets of a Deployment do not overlap. It is generated by hashing the  PodTemplate of the ReplicaSet, and the resulting hash is added to the ReplicaSet selector, the Pod template labels, and any existing Pods that the ReplicaSet might have.

Update Deployments

Note:

A Deployment rollout is triggered if and only if the Deployment's Pod template (that is,  .spec.template) is changed, for example if the labels or container images of the template are updated. Other updates, such as scaling the Deployment up or down, do not trigger a rollout.

Follow the steps below to update the Deployment:

1. Let's update the nginx Pods to use the  nginx:1.16.1 image instead of the  nginx:1.14.2 image.

kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.16.1

        Or use the following command:

kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1

        Here, deployment/nginx-deployment indicates the name of the Deployment, nginx indicates the container to update, and  nginx:1.16.1 indicates the new image and its tag.

        The output is similar to:

deployment.apps/nginx-deployment image updated

2. Alternatively, you can  edit the Deployment and change  .spec.template.spec.containers[0].image from  nginx:1.14.2 to  nginx:1.16.1.

kubectl edit deployment/nginx-deployment

        The output is similar to:

deployment.apps/nginx-deployment edited

        To check the rollout status, run:

kubectl rollout status deployment/nginx-deployment

        The output is similar to:

Waiting for rollout to finish: 2 out of 3 new replicas have been updated...

        or

deployment "nginx-deployment" successfully rolled out

Get more information about the updated Deployment:

  • After the rollout succeeds, you can view the Deployment by running  kubectl get deployments. The output is similar to:

    NAME               READY   UP-TO-DATE   AVAILABLE   AGE
    nginx-deployment   3/3     3            3           36s
    
  • Run  kubectl get rs to see that the Deployment updated the Pods by creating a new ReplicaSet and scaling it up to 3 replicas, as well as scaling down the old ReplicaSet to 0 replicas:

    kubectl get rs
    

    The output is similar to:

    NAME                          DESIRED   CURRENT   READY   AGE
    nginx-deployment-1564180365   3         3         3       6s
    nginx-deployment-2035384211   0         0         0       36s
    
  • Now running  get pods should only show new pods:

    kubectl get pods
    

    The output is similar to:

    NAME                                READY     STATUS    RESTARTS   AGE
    nginx-deployment-1564180365-khku8   1/1       Running   0          14s
    nginx-deployment-1564180365-nacti   1/1       Running   0          14s
    nginx-deployment-1564180365-z9gth   1/1       Running   0          14s
    

    The next time you want to update these Pods, you only need to update the Deployment's Pod template again.

    A Deployment ensures that only a certain number of Pods are down while they are being updated. By default, it ensures that at least 75% of the desired number of Pods are up (25% max unavailable).

    A Deployment also ensures that only a certain number of Pods are created above the desired number of Pods. By default, it ensures that at most 125% of the desired number of Pods are up (25% max surge).

    For example, if you look at the above Deployment closely, you will see that it first creates a new Pod, then deletes an old Pod, and creates another new one. It does not kill old Pods until a sufficient number of new Pods have come up, and does not create new Pods until a sufficient number of old Pods have been killed. It makes sure that at least 3 Pods are available and that at most 4 Pods in total are running. In the case of a Deployment with 4 replicas, the number of Pods would be between 3 and 5.

  • Get more information about a Deployment

    kubectl describe deployments
    

    The output is similar to:

    Name:                   nginx-deployment
    Namespace:              default
    CreationTimestamp:      Thu, 30 Nov 2017 10:56:25 +0000
    Labels:                 app=nginx
    Annotations:            deployment.kubernetes.io/revision=2
    Selector:               app=nginx
    Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
    StrategyType:           RollingUpdate
    MinReadySeconds:        0
    RollingUpdateStrategy:  25% max unavailable, 25% max surge
    Pod Template:
      Labels:  app=nginx
       Containers:
        nginx:
          Image:        nginx:1.16.1
          Port:         80/TCP
          Environment:  <none>
          Mounts:       <none>
        Volumes:        <none>
      Conditions:
        Type           Status  Reason
        ----           ------  ------
        Available      True    MinimumReplicasAvailable
        Progressing    True    NewReplicaSetAvailable
      OldReplicaSets:  <none>
      NewReplicaSet:   nginx-deployment-1564180365 (3/3 replicas created)
      Events:
        Type    Reason             Age   From                   Message
        ----    ------             ----  ----                   -------
        Normal  ScalingReplicaSet  2m    deployment-controller  Scaled up replica set nginx-deployment-2035384211 to 3
        Normal  ScalingReplicaSet  24s   deployment-controller  Scaled up replica set nginx-deployment-1564180365 to 1
        Normal  ScalingReplicaSet  22s   deployment-controller  Scaled down replica set nginx-deployment-2035384211 to 2
        Normal  ScalingReplicaSet  22s   deployment-controller  Scaled up replica set nginx-deployment-1564180365 to 2
        Normal  ScalingReplicaSet  19s   deployment-controller  Scaled down replica set nginx-deployment-2035384211 to 1
        Normal  ScalingReplicaSet  19s   deployment-controller  Scaled up replica set nginx-deployment-1564180365 to 3
        Normal  ScalingReplicaSet  14s   deployment-controller  Scaled down replica set nginx-deployment-2035384211 to 0
    

    You can see that when the Deployment was first created, it created a ReplicaSet (nginx-deployment-2035384211) and scaled it up to 3 replicas directly. When the Deployment was updated, it created a new ReplicaSet (nginx-deployment-1564180365), scaled it up to 1, and waited for it to come up. It then scaled down the old ReplicaSet to 2 and scaled up the new ReplicaSet to 2, so that at least 3 Pods were available and at most 4 Pods existed at all times. It then continued scaling up the new ReplicaSet and scaling down the old one, following the same rolling update strategy. In the end, you will have 3 available replicas in the new ReplicaSet, and the old ReplicaSet is scaled down to 0.

Note:

Kubernetes doesn't count terminating Pods when calculating  availableReplicas, which must be between  replicas - maxUnavailable and  replicas + maxSurge. As a result, you might notice that there are more Pods than expected during a rollout, and that the total resources consumed by the Deployment are more than  replicas + maxSurge until the  terminationGracePeriodSeconds of the terminating Pods expires.
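The maxUnavailable and maxSurge values referred to above default to 25% each, and can be set explicitly in the Deployment spec. A minimal sketch (the values shown are the defaults):

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%  # at most 25% of the desired Pods may be unavailable
      maxSurge: 25%        # at most 25% extra Pods may exist above the desired count
```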

Rollover (multiple updates in-flight)

Each time the Deployment controller observes a new Deployment, it creates a ReplicaSet to bring up the desired Pods. If the Deployment is updated, the existing ReplicaSet that controls Pods whose labels match  .spec.selector but whose template does not match  .spec.template is scaled down. Eventually, the new ReplicaSet is scaled to  .spec.replicas replicas and all old ReplicaSets are scaled to 0 replicas.

If you update a Deployment while an existing rollout is in progress, the Deployment creates a new ReplicaSet for the update and starts scaling it up, and rolls over the ReplicaSet that it was scaling up previously: it adds that ReplicaSet to the list of old ReplicaSets and starts scaling it down.

For example, suppose you create a Deployment to create 5 replicas of  nginx:1.14.2, but then update the Deployment to create 5 replicas of  nginx:1.16.1 when only 3 replicas of  nginx:1.14.2 have been created. In that case, the Deployment immediately starts killing the 3  nginx:1.14.2 Pods that it had created and starts creating  nginx:1.16.1 Pods. It does not wait for all 5 replicas of  nginx:1.14.2 to be created before changing course.

Change label selector

Updating a label selector is generally discouraged; it is recommended that you plan your selectors up front. In any case, if you need to update a label selector, exercise great caution and make sure you understand all of the implications.

Note:

In API version  apps/v1, a Deployment's label selector is immutable after it gets created.

  • Adding a selector requires that the Pod template labels in the Deployment spec also be updated with the new label, otherwise a validation error is returned. This change is non-overlapping, meaning that the new selector does not select ReplicaSets and Pods created with the old selector, resulting in all old ReplicaSets being orphaned while a new ReplicaSet is created.
  • Updating a selector: changing the key name of a selector results in the same behavior as adding one.
  • Removing a selector removes an existing key from the Deployment selector. Doing so does not require any changes to the Pod template labels. Existing ReplicaSets are not orphaned, and a new ReplicaSet is not created, but note that the removed label still exists in existing Pods and ReplicaSets.

Rollback Deployment

Sometimes, you may want to roll back a Deployment; for example, when the Deployment is not stable, such as crash looping. By default, all of the Deployment's rollout history is kept in the system so that you can roll back anytime you want (you can change that by modifying the revision history limit).

Note:

A Deployment's revision is created when the Deployment's rollout is triggered. This means that a new revision is created if and only if the Deployment's Pod template (.spec.template) is changed -- for example, if you update the labels or container images of the template. Other updates, such as scaling the Deployment, do not create a Deployment revision, so that you can facilitate simultaneous manual or automatic scaling. This means that when you roll back to an earlier revision, only the Deployment's Pod template part is rolled back.
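The revision history limit mentioned above is the .spec.revisionHistoryLimit field, which controls how many old ReplicaSets (and therefore revisions) are retained for rollback; it defaults to 10. A minimal sketch:

```yaml
spec:
  revisionHistoryLimit: 5  # keep only the 5 most recent old ReplicaSets available for rollback
```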

  • Suppose that you made a typo while updating the Deployment, setting the image name to  nginx:1.161 instead of  nginx:1.16.1:

    kubectl set image deployment/nginx-deployment nginx=nginx:1.161
    

    The output is similar to:

    deployment.apps/nginx-deployment image updated
    
  • The rollout gets stuck. You can verify it by checking the rollout status:

    kubectl rollout status deployment/nginx-deployment
    

    The output is similar to:

    Waiting for rollout to finish: 1 out of 3 new replicas have been updated...
    
  • Press Ctrl-C to stop the above rollout status watch. For more information on stuck rollouts, see the Deployment Status section.
  • You can see that the number of old replicas (from  nginx-deployment-1564180365 and  nginx-deployment-2035384211) is 3, and the number of new replicas (from  nginx-deployment-3066724191) is 1:

    kubectl get rs
    

    The output is similar to:

    NAME                          DESIRED   CURRENT   READY   AGE
    nginx-deployment-1564180365   3         3         3       25s
    nginx-deployment-2035384211   0         0         0       36s
    nginx-deployment-3066724191   1         1         0       6s
    
  • Looking at the Pods created, you will notice that the 1 Pod created by the new ReplicaSet is stuck in an image pull loop.

    kubectl get pods
    

    The output is similar to:

    NAME                                READY     STATUS             RESTARTS   AGE
    nginx-deployment-1564180365-70iae   1/1       Running            0          25s
    nginx-deployment-1564180365-jbqqo   1/1       Running            0          25s
    nginx-deployment-1564180365-hysrc   1/1       Running            0          25s
    nginx-deployment-3066724191-08mng   0/1       ImagePullBackOff   0          6s
    
    Note:

    The Deployment controller stops the bad rollout automatically and stops scaling up the new ReplicaSet. This depends on the rollingUpdate parameters (maxUnavailable specifically) that you have specified. Kubernetes by default sets the value to 25%.

  • Get Deployment description information:

    kubectl describe deployment
    

    The output is similar to:

    Name:           nginx-deployment
    Namespace:      default
    CreationTimestamp:  Tue, 15 Mar 2016 14:48:04 -0700
    Labels:         app=nginx
    Selector:       app=nginx
    Replicas:       3 desired | 1 updated | 4 total | 3 available | 1 unavailable
    StrategyType:       RollingUpdate
    MinReadySeconds:    0
    RollingUpdateStrategy:  25% max unavailable, 25% max surge
    Pod Template:
      Labels:  app=nginx
      Containers:
       nginx:
        Image:        nginx:1.161
        Port:         80/TCP
        Host Port:    0/TCP
        Environment:  <none>
        Mounts:       <none>
      Volumes:        <none>
    Conditions:
      Type           Status  Reason
      ----           ------  ------
      Available      True    MinimumReplicasAvailable
      Progressing    True    ReplicaSetUpdated
    OldReplicaSets:     nginx-deployment-1564180365 (3/3 replicas created)
    NewReplicaSet:      nginx-deployment-3066724191 (1/1 replicas created)
    Events:
      FirstSeen LastSeen    Count   From                    SubObjectPath   Type        Reason              Message
      --------- --------    -----   ----                    -------------   --------    ------              -------
      1m        1m          1       {deployment-controller }                Normal      ScalingReplicaSet   Scaled up replica set nginx-deployment-2035384211 to 3
      22s       22s         1       {deployment-controller }                Normal      ScalingReplicaSet   Scaled up replica set nginx-deployment-1564180365 to 1
      22s       22s         1       {deployment-controller }                Normal      ScalingReplicaSet   Scaled down replica set nginx-deployment-2035384211 to 2
      22s       22s         1       {deployment-controller }                Normal      ScalingReplicaSet   Scaled up replica set nginx-deployment-1564180365 to 2
      21s       21s         1       {deployment-controller }                Normal      ScalingReplicaSet   Scaled down replica set nginx-deployment-2035384211 to 1
      21s       21s         1       {deployment-controller }                Normal      ScalingReplicaSet   Scaled up replica set nginx-deployment-1564180365 to 3
      13s       13s         1       {deployment-controller }                Normal      ScalingReplicaSet   Scaled down replica set nginx-deployment-2035384211 to 0
      13s       13s         1       {deployment-controller }                Normal      ScalingReplicaSet   Scaled up replica set nginx-deployment-3066724191 to 1
    

    To fix this, you need to roll back to a previous revision of the Deployment that is stable.

Check Deployment Rollout History

1. Follow the steps given below to check the rollout history:

        First, check the Deployment revision history:

kubectl rollout history deployment/nginx-deployment

        The output is similar to:

deployments "nginx-deployment"
REVISION    CHANGE-CAUSE
1           kubectl apply --filename=https://k8s.io/examples/controllers/nginx-deployment.yaml
2           kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1
3           kubectl set image deployment/nginx-deployment nginx=nginx:1.161

CHANGE-CAUSE is copied from the Deployment annotation  kubernetes.io/change-cause to its revisions upon creation. You can specify the  CHANGE-CAUSE message by:

  • Annotating the Deployment with  kubectl annotate deployment/nginx-deployment kubernetes.io/change-cause="image updated to 1.16.1".
  • Manually editing the manifest of the resource.
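Setting the message by editing the manifest amounts to adding the kubernetes.io/change-cause annotation under metadata; a sketch:

```yaml
metadata:
  name: nginx-deployment
  annotations:
    kubernetes.io/change-cause: "image updated to 1.16.1"
```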

2. To view the details of the revision history, run:

kubectl rollout history deployment/nginx-deployment --revision=2

        The output is similar to:

deployments "nginx-deployment" revision 2
  Labels:       app=nginx
          pod-template-hash=1159050644
  Annotations:  kubernetes.io/change-cause=kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1
  Containers:
   nginx:
    Image:      nginx:1.16.1
    Port:       80/TCP
     QoS Tier:
        cpu:      BestEffort
        memory:   BestEffort
    Environment Variables:      <none>
  No volumes.

Rollback to a previous revision

1. Follow the steps given below to roll back the Deployment from the current version to the previous version (that is, version 2).

        Suppose now you have decided to undo the current rollout and roll back to the previous revision:

kubectl rollout undo deployment/nginx-deployment

        The output is similar to:

deployment.apps/nginx-deployment rolled back

        Alternatively, you can roll back to a specific revision by specifying it with  --to-revision:

kubectl rollout undo deployment/nginx-deployment --to-revision=2

        The output is similar to:

deployment.apps/nginx-deployment rolled back

        For more details about rollout-related commands, read  kubectl rollout.

The Deployment is now rolled back to the previous stable revision. As you can see, a  DeploymentRollback event for rolling back to revision 2 is generated by the Deployment controller.

2. To check that the rollback was successful and that the Deployment is running, run:

kubectl get deployment nginx-deployment

        The output is similar to:

NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3/3     3            3           30m

Get Deployment description information:

kubectl describe deployment nginx-deployment

The output is similar to:

Name:                   nginx-deployment
Namespace:              default
CreationTimestamp:      Sun, 02 Sep 2018 18:17:55 -0500
Labels:                 app=nginx
Annotations:            deployment.kubernetes.io/revision=4
                        kubernetes.io/change-cause=kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1
Selector:               app=nginx
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:        nginx:1.16.1
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   nginx-deployment-c4747d96c (3/3 replicas created)
Events:
  Type    Reason              Age   From                   Message
  ----    ------              ----  ----                   -------
  Normal  ScalingReplicaSet   12m   deployment-controller  Scaled up replica set nginx-deployment-75675f5897 to 3
  Normal  ScalingReplicaSet   11m   deployment-controller  Scaled up replica set nginx-deployment-c4747d96c to 1
  Normal  ScalingReplicaSet   11m   deployment-controller  Scaled down replica set nginx-deployment-75675f5897 to 2
  Normal  ScalingReplicaSet   11m   deployment-controller  Scaled up replica set nginx-deployment-c4747d96c to 2
  Normal  ScalingReplicaSet   11m   deployment-controller  Scaled down replica set nginx-deployment-75675f5897 to 1
  Normal  ScalingReplicaSet   11m   deployment-controller  Scaled up replica set nginx-deployment-c4747d96c to 3
  Normal  ScalingReplicaSet   11m   deployment-controller  Scaled down replica set nginx-deployment-75675f5897 to 0
  Normal  ScalingReplicaSet   11m   deployment-controller  Scaled up replica set nginx-deployment-595696685f to 1
  Normal  DeploymentRollback  15s   deployment-controller  Rolled back deployment "nginx-deployment" to revision 2
  Normal  ScalingReplicaSet   15s   deployment-controller  Scaled down replica set nginx-deployment-595696685f to 0

Scale Deployments

You can scale a Deployment using the following command:

kubectl scale deployment/nginx-deployment --replicas=10

The output is similar to:

deployment.apps/nginx-deployment scaled

Assuming horizontal Pod autoscaling is enabled in your cluster, you can set up an autoscaler for your Deployment and choose the minimum and maximum number of Pods you want to run based on the CPU utilization of your existing Pods.

kubectl autoscale deployment/nginx-deployment --min=10 --max=15 --cpu-percent=80

The output is similar to:

deployment.apps/nginx-deployment scaled
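The kubectl autoscale command above is shorthand for creating a HorizontalPodAutoscaler object targeting the Deployment. An equivalent manifest might look like this sketch (using the autoscaling/v2 API):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-deployment
spec:
  scaleTargetRef:          # the workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment
  minReplicas: 10
  maxReplicas: 15
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80   # scale to keep average CPU utilization near 80%
```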

Proportional scaling

RollingUpdate Deployments support running multiple versions of an application at the same time. When you or an autoscaler scales a RollingUpdate Deployment that is in the middle of a rollout (either in progress or paused), the Deployment controller balances the additional replicas across the existing active ReplicaSets (ReplicaSets with Pods) in order to mitigate risk. This is called  proportional scaling.

For example, you are running a 10-replica Deployment with  maxSurge = 3 and maxUnavailable = 2.
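The scenario above (10 replicas, maxSurge = 3, maxUnavailable = 2) corresponds to a Deployment spec along these lines, as a sketch:

```yaml
spec:
  replicas: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 3        # up to 13 Pods may exist during a rollout
      maxUnavailable: 2  # at least 8 Pods must remain available
```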

  • Make sure that all 10 replicas of the Deployment are running.

    kubectl get deploy
    

    The output is similar to:

    NAME                 DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
    nginx-deployment     10        10        10           10          50s
    
  • Update the Deployment to use the new image, which happens to be unresolvable from within the cluster.

    kubectl set image deployment/nginx-deployment nginx=nginx:sometag
    

    The output is similar to:

    deployment.apps/nginx-deployment image updated
    
  • The image update starts a new rollout with the ReplicaSet  nginx-deployment-1989198191, but it's blocked due to the  maxUnavailable requirement mentioned above. Check the rollout status:

    kubectl get rs
    

    The output is similar to:

    NAME                          DESIRED   CURRENT   READY     AGE
    nginx-deployment-1989198191   5         5         0         9s
    nginx-deployment-618515232    8         8         8         1m
    
  • Then, a new scaling request for the Deployment comes along. The autoscaler increments the Deployment replicas to 15. The Deployment controller needs to decide where to add these 5 new replicas. Without proportional scaling, all 5 of them would be added to the new ReplicaSet. With proportional scaling, the additional replicas are spread across all ReplicaSets: bigger proportions go to the ReplicaSets with the most replicas, and lower proportions go to ReplicaSets with fewer replicas. Any leftovers are added to the ReplicaSet with the most replicas. ReplicaSets with zero replicas are not scaled up.

In the example above, 3 replicas are added to the old ReplicaSet and 2 replicas are added to the new ReplicaSet. The rollout process should eventually move all replicas to the new ReplicaSet, assuming the new replicas become healthy. To confirm this, run:

kubectl get deploy

The output is similar to:

NAME                 DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment     15        18        7            8           7m

The rollout status confirms how the replicas were added to each ReplicaSet.

kubectl get rs

The output is similar to:

NAME                          DESIRED   CURRENT   READY     AGE
nginx-deployment-1989198191   7         7         0         7m
nginx-deployment-618515232    11        11        11        7m

Pause and resume a Deployment rollout

When you update a Deployment, or plan to update it, you can pause the Deployment's rollout process before triggering one or more updates. When you are ready to apply the changes, you can resume the Deployment rollout process again. Doing this allows you to apply multiple patches between pausing and resuming execution without triggering unnecessary rollouts.

  • For example, for a newly created Deployment:

    Get the Deployment information:

    kubectl get deploy
    

    The output is similar to:

    NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
    nginx     3         3         3            3           1m
    

    Get the rollout status:

    kubectl get rs
    

    The output is similar to:

    NAME               DESIRED   CURRENT   READY     AGE
    nginx-2142116321   3         3         3         1m
    
  • Pause the rollout with the following command:

    kubectl rollout pause deployment/nginx-deployment
    

    The output is similar to:

    deployment.apps/nginx-deployment paused
    
  • Next update the Deployment image:

    kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1
    

    The output is similar to:

    deployment.apps/nginx-deployment image updated
    
  • Notice that no new rollout was triggered:

    kubectl rollout history deployment/nginx-deployment
    

    The output is similar to:

    deployments "nginx"
    REVISION  CHANGE-CAUSE
    1   <none>
    
  • Get the rollout status to verify that the existing ReplicaSet has not changed:

    kubectl get rs
    

    The output is similar to:

    NAME               DESIRED   CURRENT   READY     AGE
    nginx-2142116321   3         3         3         2m
    
  • You can make as many updates as you want; for example, update the resources that will be used:

    kubectl set resources deployment/nginx-deployment -c=nginx --limits=cpu=200m,memory=512Mi
    

    The output is similar to:

    deployment.apps/nginx-deployment resource requirements updated
    

    The initial state of the Deployment prior to pausing will continue to function, but new updates to the Deployment will not have any effect as long as the Deployment rollout is paused.

  • Finally, resume the Deployment rollout and observe a new ReplicaSet coming up with all the updates applied:

    kubectl rollout resume deployment/nginx-deployment
    

    The output is similar to this:

    deployment.apps/nginx-deployment resumed
    
  • Watch the status of the rollout until it is complete.

    kubectl get rs -w
    

    The output is similar to:

    NAME               DESIRED   CURRENT   READY     AGE
    nginx-2142116321   2         2         2         2m
    nginx-3926361531   2         2         0         6s
    nginx-3926361531   2         2         1         18s
    nginx-2142116321   1         2         2         2m
    nginx-2142116321   1         2         2         2m
    nginx-3926361531   3         2         1         18s
    nginx-3926361531   3         2         1         18s
    nginx-2142116321   1         1         1         2m
    nginx-3926361531   3         3         1         18s
    nginx-3926361531   3         3         2         19s
    nginx-2142116321   0         1         1         2m
    nginx-2142116321   0         1         1         2m
    nginx-2142116321   0         0         0         2m
    nginx-3926361531   3         3         3         20s
    
  • Get the latest rollout status:

    kubectl get rs
    

    The output is similar to:

    NAME               DESIRED   CURRENT   READY     AGE
    nginx-2142116321   0         0         0         2m
    nginx-3926361531   3         3         3         28s
    

Note:

You cannot roll back a suspended Deployment without first resuming its execution.

Deployment Status

A Deployment enters various states during its lifecycle. While rolling out a new ReplicaSet it can be  Progressing (in progress) , it can be  Complete (completed) , or it can be  Failed (failed)  and unable to continue.

Deployment in progress

Kubernetes marks a Deployment as Progressing while any of the following tasks is being performed:

  • Deployment creates a new ReplicaSet
  • Deployment is scaling up its latest ReplicaSet
  • Deployment is scaling down its old ReplicaSet(s)
  • New Pods become ready or available (ready for at least  MinReadySeconds  seconds).

When the rollout enters the "Progressing" state, the Deployment controller adds a condition with the following attributes to the Deployment's  .status.conditions:

  • type: Progressing
  • status: "True"
  • reason: NewReplicaSetCreated | reason: FoundNewReplicaSet | reason: ReplicaSetUpdated

You can monitor the progress of a Deployment by using  kubectl rollout status.

Completed Deployment

Kubernetes marks a Deployment as Complete when it has the following characteristics:

  • All replicas associated with the Deployment have been updated to the latest version specified, which means that all previously requested updates have been completed.
  • All replicas associated with the Deployment are available.
  • No old replicas for the Deployment are running.

When the rollout enters the "Complete" state, the Deployment controller adds a condition with the following attributes to the Deployment's  .status.conditions:

  • type: Progressing
  • status: "True"
  • reason: NewReplicaSetAvailable

This  Progressing condition will retain a status value of  "True" until a new rollout is initiated. The condition holds even when the availability of replicas changes (which does instead affect the  Available condition).

You can check if a Deployment has completed by using  kubectl rollout status. If the rollout completed successfully,  kubectl rollout status returns a zero exit code.

kubectl rollout status deployment/nginx-deployment

The output is similar to:

Waiting for rollout to finish: 2 of 3 updated replicas are available...
deployment "nginx-deployment" successfully rolled out

And the exit status from the  kubectl rollout command is 0 (success):

echo $?
0

Failed Deployments

Your Deployment may get stuck trying to deploy its newest ReplicaSet without ever completing. This can occur due to some of the following factors:

  • Insufficient quota
  • Readiness Probe failed
  • Image pull error
  • Insufficient permissions
  • Limit Ranges problem
  • Configuration errors at application runtime

One way you can detect this condition is to specify a deadline parameter in your Deployment spec:  .spec.progressDeadlineSeconds. This field denotes the number of seconds the Deployment controller waits before indicating (in the Deployment status) that the Deployment progress has stalled.

The following  kubectl command sets  progressDeadlineSeconds in the spec to make the controller report lack of progress of a rollout for the Deployment after 10 minutes:

kubectl patch deployment/nginx-deployment -p '{"spec":{"progressDeadlineSeconds":600}}'

The output is similar to:

deployment.apps/nginx-deployment patched
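
The same deadline can also be set declaratively in the Deployment manifest instead of patching. A minimal sketch, reusing the nginx example from earlier in this article (the value 600 matches the patch above):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  progressDeadlineSeconds: 600   # report ProgressDeadlineExceeded after 10 minutes without progress
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
```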

Once the deadline has been exceeded, the Deployment controller adds a DeploymentCondition with the following attributes to the Deployment's  .status.conditions:

  • type: Progressing
  • status: "False"
  • reason: ProgressDeadlineExceeded

This condition can also fail earlier, in which case its status value is set to  "False" with a reason of  ReplicaSetCreateError. Also, the deadline is no longer taken into account once the Deployment rollout completes.

See  the Kubernetes API Conventions  for more information on status conditions.

Note:

Kubernetes takes no action on a stalled Deployment other than reporting a status condition with  reason: ProgressDeadlineExceeded. Higher-level orchestrators can take advantage of this design and act accordingly; for example, roll back the Deployment to its previous version.

Note:

If you pause a Deployment rollout, Kubernetes no longer checks the progress of that rollout against the specified deadline. You can safely pause a Deployment in the middle of a rollout and resume it without triggering the condition for exceeding the deadline.

A Deployment may fail transiently, either because of a timeout set too low or because of any other kind of error that can be treated as temporary. For example, suppose the problem encountered is insufficient quota. If you describe the Deployment, you will notice the following section:

kubectl describe deployment nginx-deployment

The output is similar to:

<...>
Conditions:
  Type            Status  Reason
  ----            ------  ------
  Available       True    MinimumReplicasAvailable
  Progressing     True    ReplicaSetUpdated
  ReplicaFailure  True    FailedCreate
<...>

If you run  kubectl get deployment nginx-deployment -o yaml, the Deployment status output will look like this:

status:
  availableReplicas: 2
  conditions:
  - lastTransitionTime: 2016-10-04T12:25:39Z
    lastUpdateTime: 2016-10-04T12:25:39Z
    message: Replica set "nginx-deployment-4262182780" is progressing.
    reason: ReplicaSetUpdated
    status: "True"
    type: Progressing
  - lastTransitionTime: 2016-10-04T12:25:42Z
    lastUpdateTime: 2016-10-04T12:25:42Z
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: 2016-10-04T12:25:39Z
    lastUpdateTime: 2016-10-04T12:25:39Z
    message: 'Error creating: pods "nginx-deployment-4262182780-" is forbidden: exceeded quota:
      object-counts, requested: pods=1, used: pods=3, limited: pods=2'
    reason: FailedCreate
    status: "True"
    type: ReplicaFailure
  observedGeneration: 3
  replicas: 2
  unavailableReplicas: 2

Eventually, once the Deployment progress deadline is exceeded, Kubernetes updates the status and the reason for the Progressing condition:

Conditions:
  Type            Status  Reason
  ----            ------  ------
  Available       True    MinimumReplicasAvailable
  Progressing     False   ProgressDeadlineExceeded
  ReplicaFailure  True    FailedCreate

You can address an issue of insufficient quota by scaling down your Deployment or other running controllers, or by increasing the quota in your namespace. If the quota conditions are satisfied and the Deployment controller then completes the Deployment rollout, the Deployment status is updated with a successful condition ( Status=True and  Reason=NewReplicaSetAvailable).

Conditions:
  Type          Status  Reason
  ----          ------  ------
  Available     True    MinimumReplicasAvailable
  Progressing   True    NewReplicaSetAvailable

type: Available with  status: True means the Deployment has minimum availability. Minimum availability is dictated by the parameters specified in the Deployment strategy. type: Progressing with  status: True means that the Deployment is either in the middle of a rollout and progressing, or that it has successfully completed its progress and the minimum required new replicas are available. See the Reason of the condition for the particulars; in our case  reason: NewReplicaSetAvailable means that the Deployment is complete.

You can check if a Deployment has failed to make progress by using  kubectl rollout status. kubectl rollout status returns a non-zero exit code if the Deployment has exceeded the progress deadline.

kubectl rollout status deployment/nginx-deployment

The output is similar to:

Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
error: deployment "nginx" exceeded its progress deadline

And the exit status from the  kubectl rollout command is 1 (indicating an error occurred):

echo $?
1

Actions on failed Deployments

All actions that apply to a complete Deployment also apply to a failed Deployment. You can scale it up or down, roll back to a previous revision, or even pause it if you need to apply multiple tweaks to its Pod template.

cleanup strategy

You can set the  .spec.revisionHistoryLimit field in a Deployment to specify how many old ReplicaSets for this Deployment you want to retain. The rest will be garbage-collected in the background. By default, this value is 10.

Note:

Explicitly setting this field to 0 results in cleaning up all of the Deployment's history, and as a result the Deployment will not be able to roll back.
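
As a sketch, the field sits at the top level of the Deployment spec (the value 5 here is illustrative):

```yaml
spec:
  revisionHistoryLimit: 5   # retain at most 5 old ReplicaSets for rollback
```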

canary deployment

If you want to use a Deployment to roll out a version to a subset of users or a subset of servers, you can follow the canary pattern described in Resource Management and create multiple Deployments, one for each version.
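
One common shape for this pattern is two Deployments that differ only in a track label and an image tag, fronted by a Service whose selector omits track so that it spans both. The names, labels, and images below are hypothetical:

```yaml
# Stable track: serves most of the traffic
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-stable
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
      track: stable
  template:
    metadata:
      labels:
        app: frontend
        track: stable
    spec:
      containers:
      - name: frontend
        image: example/frontend:v1
---
# Canary track: a single replica running the new version
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
      track: canary
  template:
    metadata:
      labels:
        app: frontend
        track: canary
    spec:
      containers:
      - name: frontend
        image: example/frontend:v2
```

A Service selecting only app: frontend would send roughly a quarter of requests to the canary here; promoting the canary is then a matter of updating the stable Deployment's image and deleting the canary Deployment.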

Write the Deployment specification

Like other Kubernetes configurations, a Deployment needs  .apiVersion, .kind, and  .metadata fields. For additional information on configuration files, refer to the documentation on deploying a Deployment, configuring containers, and managing resources with kubectl.

When the control plane creates new Pods for a Deployment, the Deployment's  .metadata.name is part of the basis for naming those Pods. The name of a Deployment must be a valid  DNS subdomain  value, but this can produce unexpected results for the Pod hostnames. For best compatibility, the name should follow the more restrictive rules for a  DNS label.

A Deployment also needs a  .spec section.

Pod template

.spec.template and  .spec.selector are the only required fields of the  .spec.

.spec.template is a  Pod template . It has exactly the same schema as a  Pod , except that it is nested and therefore does not need an  apiVersion or  kind.

In addition to the Pod's required fields, a Pod template in a Deployment must specify appropriate labels and an appropriate restart policy. For labels, make sure not to overlap with other controllers. See Selectors.

Only a  .spec.template.spec.restartPolicy  equal to  Always  is allowed, which is the default if not specified.

Replicas

.spec.replicas is an optional field that specifies the number of desired Pods. It defaults to 1.

If you manually scale a Deployment (for example, via  kubectl scale deployment deployment --replicas=X) and then update that Deployment based on a manifest (for example, by running  kubectl apply -f deployment.yaml), applying that manifest overwrites the manual scaling you previously did.

Do not set  .spec.replicas if a  HorizontalPodAutoscaler  (or any similar API performing horizontal scaling) is managing scaling for the Deployment.

Instead, allow the Kubernetes  control plane  to manage the  .spec.replicas field automatically.
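
A sketch of this arrangement: the Deployment omits .spec.replicas entirely, and an autoscaling/v2 HorizontalPodAutoscaler manages the replica count. The CPU target and the min/max bounds below are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  # .spec.replicas intentionally omitted: the HPA below manages it
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
```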

selector

.spec.selector is a required field that specifies a label selector for the Pods targeted by this Deployment.

.spec.selector must match  .spec.template.metadata.labels, or it will be rejected by the API.

In API version  apps/v1, .spec.selector and  .metadata.labels do not default to  .spec.template.metadata.labels if unset, so they must be set explicitly. Also note that in  apps/v1, a Deployment's  .spec.selector is immutable after it is created.

A Deployment may terminate Pods whose labels match the selector but whose template does not match  .spec.template, or whose total number exceeds  .spec.replicas. If the number of Pods is less than the desired number, the Deployment brings up new Pods based on  .spec.template.

Note:

You should not create other Pods whose labels match this selector, whether directly, by creating another Deployment, or by creating another controller such as a ReplicaSet or ReplicationController. If you do so, the first Deployment will think that it created these other Pods. Kubernetes does not stop you from doing this.

If you have multiple controllers with overlapping selectors, those controllers will conflict with one another and will not behave correctly.

Strategy

.spec.strategy specifies the strategy used to replace old Pods with new ones. .spec.strategy.type can be "Recreate" or "RollingUpdate". "RollingUpdate" is the default value.

Recreate the Deployment

If  .spec.strategy.type==Recreate, all existing Pods are killed before new Pods are created.

Note:

This only guarantees that other Pods are terminated before new Pods are created for the upgrade. If you upgrade a Deployment, all Pods of the old revision will be terminated immediately. The controller waits for those Pods to be successfully removed before creating Pods of the new revision. If you manually delete a Pod, its lifecycle is controlled by the ReplicaSet, which immediately creates a replacement Pod (even if the old Pod is still in the Terminating state). If you need an "at most n Pods" guarantee, consider using a  StatefulSet  instead.
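
As a fragment, the strategy is set like this:

```yaml
spec:
  strategy:
    type: Recreate   # terminate all old Pods before creating any new ones
```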

Rolling updates to Deployments

The Deployment updates Pods in a rolling update fashion when  .spec.strategy.type==RollingUpdate. You can specify  maxUnavailable and  maxSurge to control the rolling update process.

Max Unavailable

.spec.strategy.rollingUpdate.maxUnavailable is an optional field that specifies the maximum number of Pods that can be unavailable during the update process. The value can be an absolute number (for example, 5) or a percentage of desired Pods (for example, 10%). The absolute number is calculated from the percentage by rounding down. The value cannot be 0 if  .spec.strategy.rollingUpdate.maxSurge is 0. The default value is 25%.

For example, when this value is set to 30%, the old ReplicaSet can be scaled down to 70% of desired Pods immediately when the rolling update starts. Once new Pods are ready, the old ReplicaSet can be scaled down further, followed by scaling up the new ReplicaSet, ensuring that the total number of Pods available at all times during the update is at least 70% of the desired number of Pods.

Max Surge

.spec.strategy.rollingUpdate.maxSurge is an optional field that specifies the maximum number of Pods that can be created over the desired number of Pods. The value can be an absolute number (for example, 5) or a percentage of desired Pods (for example, 10%). The value cannot be 0 if  maxUnavailable is 0. The absolute number is calculated from the percentage by rounding up. The default value is 25%.

For example, when this value is set to 30%, the new ReplicaSet can be scaled up immediately when the rolling update starts, such that the total number of old and new Pods does not exceed 130% of desired Pods. Once old Pods have been killed, the new ReplicaSet can be scaled up further, ensuring that the total number of Pods running at any time during the update is at most 130% of the desired number of Pods.
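
Both parameters can be sketched together. With 10 desired replicas, 30% works out to 3 Pods for each bound, since maxUnavailable rounds down and maxSurge rounds up:

```yaml
spec:
  replicas: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 30%   # at least 7 of 10 Pods stay available during the update
      maxSurge: 30%         # at most 13 Pods (old + new) exist at any moment
```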

Progress Deadline Seconds

.spec.progressDeadlineSeconds is an optional field that specifies the number of seconds you want to wait for your Deployment to progress before the system reports that the Deployment has failed progressing. This is surfaced in the resource status as a condition with  type: Progressing, status: False, and  reason: ProgressDeadlineExceeded. The Deployment controller will keep retrying the Deployment; if this field is not set, it defaults to 600 (ten minutes). In the future, once automatic rollback is implemented, the Deployment controller will roll back the Deployment as soon as it observes such a condition.

If specified, this field value needs to be greater than  .spec.minReadySeconds the value.

Min Ready Seconds

.spec.minReadySeconds is an optional field that specifies the minimum number of seconds for which a newly created Pod should be ready, without any of its containers crashing, for it to be considered available. This defaults to 0 (the Pod is considered available as soon as it is ready). To learn more about when a Pod is considered ready, see Container Probes.
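
As a fragment (the value 10 is illustrative):

```yaml
spec:
  minReadySeconds: 10   # a new Pod must stay Ready for 10s before it counts as available
```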

Revision History Limits

A Deployment's revision history is stored in the ReplicaSets it controls.

.spec.revisionHistoryLimit is an optional field that specifies the number of old ReplicaSets to retain to allow rollback. These old ReplicaSets consume resources in etcd and crowd the output of  kubectl get rs. The configuration of each Deployment revision is stored in its ReplicaSets; therefore, once an old ReplicaSet is deleted, you lose the ability to roll back to that revision of the Deployment. By default, the system keeps 10 old ReplicaSets, but the ideal value depends on the frequency and stability of new Deployments.

More specifically, setting this field to 0 means that all old ReplicaSets with 0 replicas will be cleaned up. In this case, a new Deployment rollout cannot be undone, since its revision history is cleaned up.

Paused

.spec.paused is an optional boolean field for pausing and resuming a Deployment. The only difference between a paused Deployment and one that is not paused is that any changes to the PodTemplateSpec of the paused Deployment will not trigger new rollouts as long as it is paused. A Deployment is not paused by default when it is created.
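
As a fragment:

```yaml
spec:
  paused: true   # Pod template changes will not trigger a rollout while this is true
```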

 

 

Origin blog.csdn.net/weixin_53678904/article/details/132059226