A rolling update, driven by kubectl rollout, upgrades or downgrades an application without users ever noticing.
1. Define the application version
In Kubernetes, version updates are not performed through a dedicated API object but with two commands: kubectl apply and kubectl rollout. Of course, they also need the YAML files that deploy the application, such as Deployment and DaemonSet.
We often casually assume that the "version" is the application's "version number", or the "tag" of the container image, but don't forget that in Kubernetes the application is managed in the form of Pods, so a "version update" actually updates the entire Pod.
What, then, determines a Pod? A Pod is determined by its YAML description, or more precisely, by the template field in objects such as Deployment.
Therefore, in Kubernetes a version change of an application is a change to the template: even if only a single field in the template changes, it produces a new Pod and counts as a version change.
But the template contains too much content, and using such a long string as the "version number" is not realistic, so Kubernetes takes a "summary": it runs the template through a digest algorithm and uses the resulting hash value as the version number. The hash is not easy to read, but it is very practical.
The string of random characters such as "6796..." in a Pod's name is the hash of the Pod template, i.e. the Pod's "version number". If you change the Pod's YAML description, for example change the image to nginx:stable-alpine, or change the container name to nginx-test, a new application version is generated, and kubectl apply will then recreate the Pods:
You can see that the hash value in the Pod names has changed to "7c6c...", which means the Pod version has been updated.
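The idea that "the version number is a digest of the template" can be illustrated with a short sketch. Note that this is only an illustration: Kubernetes internally computes an FNV-based hash of the PodTemplate, and the SHA-256 digest and the hypothetical template dicts below are just stand-ins to show that changing any field produces a new "version":

```python
import hashlib
import json

def template_hash(template: dict) -> str:
    # Serialize the template deterministically, then take a short digest.
    # Kubernetes really uses an FNV-based hash; SHA-256 here only
    # illustrates the idea of "template content -> version number".
    data = json.dumps(template, sort_keys=True).encode()
    return hashlib.sha256(data).hexdigest()[:10]

# Two hypothetical Pod templates that differ only in the image tag.
v1 = {"containers": [{"name": "nginx", "image": "nginx:1.21-alpine"}]}
v2 = {"containers": [{"name": "nginx", "image": "nginx:1.22-alpine"}]}

print(template_hash(v1))
print(template_hash(v2))
```

Changing any field yields a different digest, so the hash works as a compact version identifier even though it is not human-readable.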
2. Implement app updates
To take a closer look at the application update process, let's modify the Nginx Deployment object slightly and see how Kubernetes implements a version update.
First, modify the ConfigMap so that Nginx outputs its version number, which lets us check the version with curl:
# ngx-conf.yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ngx-conf
data:
  default.conf: |
    server {
      listen 80;
      location / {
        default_type text/plain;
        return 200
          'ver : $nginx_version\nsrv : $server_addr:$server_port\nhost: $hostname\n';
      }
    }
Then we modify the Pod image, explicitly pinning the version with nginx:1.21-alpine, and set the number of instances to 4:
# ngx-v1.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ngx-dep
spec:
  replicas: 4
  selector:
    matchLabels:
      app: ngx-dep
  template:
    metadata:
      labels:
        app: ngx-dep
    spec:
      volumes:
      - name: ngx-conf-vol
        configMap:
          name: ngx-conf
      containers:
      - image: nginx:1.21-alpine
        name: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /etc/nginx/conf.d
          name: ngx-conf-vol
Name it ngx-v1.yml, and deploy the application with kubectl apply:
kubectl apply -f ngx-conf.yml
kubectl apply -f ngx-v1.yml
We can also create a Service object, and then use kubectl port-forward to forward requests so that we can check its status:

# ngx-svc.yml
apiVersion: v1
kind: Service
metadata:
  name: ngx-svc
spec:
  selector:
    app: ngx-dep
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
kubectl apply -f ngx-svc.yml
kubectl port-forward svc/ngx-svc 8080:80 &
curl 127.1:8080
As you can see from the output of the curl command, the deployed version is 1.21.6. Now, let's write a new version object, ngx-v2.yml, upgrading the image to nginx:1.22-alpine and keeping everything else the same.
Because Kubernetes acts too fast to observe the update process, we also add a minReadySeconds field, which makes Kubernetes wait a while during the update and confirm that a Pod is healthy before continuing to create the remaining Pods.
As a reminder, minReadySeconds does not belong to the Pod template, so it does not affect the Pod version:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ngx-dep
spec:
  minReadySeconds: 15      # wait time to confirm a Pod is ready
  replicas: 4
  ... ...
      containers:
      - image: nginx:1.22-alpine
  ... ...
Now run kubectl apply to update the application. Because the image name has changed, the Pod template has changed, which triggers a "version update". Then use a new command, kubectl rollout status, to watch the update's progress:
kubectl apply -f ngx-v2.yml
kubectl rollout status deployment ngx-dep
After the update completes, running kubectl get pod again shows that all Pods have been replaced with the new version "d575...", and when Nginx is accessed with curl, the output now reports "1.22.0":
If you look carefully at the output of kubectl rollout status, you can see that Kubernetes does not destroy all the old Pods at once and then create new ones; instead, it creates new Pods one by one while destroying old ones at the same time, ensuring there are always enough Pods running and that no "window" interrupts the service.
The number of new Pods grows a bit like a snowball, starting from zero and getting bigger and bigger, which is why this is called a "rolling update".
Use kubectl describe to see the Pod changes more clearly:
kubectl describe deploy ngx-dep
1. At the beginning, there are 4 V1 Pods (ngx-dep-54b865d75).
2. When the rolling update starts, Kubernetes creates 1 V2 Pod (ngx-dep-d575d5776) and reduces the V1 Pods to 3.
3. Then the V2 Pods increase to 2 and the V1 Pods drop to 1.
4. Finally, the V2 Pods reach the desired count of 4 and the V1 Pods drop to 0; the whole update is complete.
Seeing this, do you get the idea? A "rolling update" is really two simultaneous "application scaling" operations controlled by the Deployment: the old version scales down to 0 while the new version scales up to the specified count, one ebbing as the other flows.
I drew a picture of this rolling update process; you can refer to it to get a better feel:
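The ebb and flow described above can also be sketched as a tiny simulation. This is purely conceptual, not how the controller is implemented: the real Deployment controller batches Pods according to maxSurge and maxUnavailable, while this sketch simply swaps one Pod at a time:

```python
def rolling_update(replicas):
    """Yield (old, new) Pod counts, replacing one Pod per step.

    Conceptual only: the real controller scales two ReplicaSets
    in batches controlled by maxSurge / maxUnavailable.
    """
    old, new = replicas, 0
    yield old, new
    while old > 0:
        new += 1          # scale the new version up...
        old -= 1          # ...while the old version scales down
        yield old, new

steps = list(rolling_update(4))
print(steps)  # [(4, 0), (3, 1), (2, 2), (1, 3), (0, 4)]
```

At every step the total Pod count stays at the desired value, which is exactly why there is no service "window" during the update.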
3. Manage App Updates
Kubernetes' "rolling update" is certainly convenient, but what if something goes wrong during the update, or a problem is discovered after it finishes? To solve these two problems, we still turn to the kubectl rollout command.
While an update is in progress, you can use kubectl rollout pause to pause it temporarily, check or modify the Pods, or run tests to verify them, and then use kubectl rollout resume to continue the update:

kubectl rollout pause deployment ngx-dep
kubectl rollout resume deployment ngx-dep
These two commands are simple, so I won't say more about them. Note, however, that they are only supported by Deployment and cannot be used on DaemonSet or StatefulSet (the latest 1.24 does support rolling updates for StatefulSet).
For problems found after the update, Kubernetes provides us with "regret medicine": the update history. You can inspect previous update records and roll back to any point, much like the version-control tool Git that we commonly use in development.
The command to view the update history is kubectl rollout history:
kubectl rollout history deploy ngx-dep
It outputs a list of versions; because we created one version of the Nginx Deployment and then updated to another, there are two history records here.
But the list that kubectl rollout history outputs carries too little useful information. You can add the --revision parameter to see the details of each version, including labels, image names, environment variables, storage volumes and so on, which gives you a general idea of which key fields changed in each version:

kubectl rollout history deploy ngx-dep --revision=2
Suppose we decide that the newly updated nginx:1.22-alpine is no good and want to roll back to the previous version. We can use kubectl rollout undo, optionally adding the --to-revision parameter to roll back to any historical version:
kubectl rollout undo deploy ngx-dep
kubectl rollout undo operates just like kubectl apply, and the implementation is still a "rolling update", except that this time it uses the old version's Pod template: the new version's Pod count is scaled down to 0 while the old version is scaled back up to the specified value.
I also drew a picture of the "version downgrade" process from V2 to V1. It is exactly the same as the "version upgrade" from V1 to V2; the only difference is the direction of the version change:
4. Add update description
Do you feel that the version list from kubectl rollout history is a bit too bare? There is only a revision number, while CHANGE-CAUSE is always displayed as "<none>". Can we attach a description to each update, the way we do with Git?
This is of course possible, and the method is very simple: we just add a new annotations field under the Deployment's metadata.
The annotations field means "annotation" or "comment". Its form is the same as labels, Key-Value pairs, and both attach extra information to an API object, but their usage is quite different.
Information added via annotations is generally consumed by various objects inside Kubernetes, a bit like "extended attributes", while labels mainly face users outside Kubernetes and are used to select and filter objects.
To use a simple metaphor, annotations are the product manual inside the box, while labels are the sticker on the outside.
With annotations, any extra information can be attached to an API object. This is the classic "open-closed principle" (OCP) of object-oriented design, which makes objects more extensible and flexible.
Values in annotations can be written freely, and Kubernetes automatically ignores Key-Value pairs it does not understand; but to write an update description, you need to use the specific key kubernetes.io/change-cause.
Let's do it: we create 3 versions of the Nginx application, adding an update description to each:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ngx-dep
  annotations:
    kubernetes.io/change-cause: v1, ngx=1.21
... ...

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ngx-dep
  annotations:
    kubernetes.io/change-cause: update to v2, ngx=1.22
... ...

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ngx-dep
  annotations:
    kubernetes.io/change-cause: update to v3, change name
... ...
Note that in each YAML it is the kubernetes.io/change-cause annotation in the metadata section that describes the version update, which is much easier to understand than the mass of information listed by kubectl rollout history --revision.
After applying these in sequence with kubectl apply to create and update the object, pause briefly between each to make sure all the Pods are established, and then use kubectl rollout history to look at the update history:
5. Summary
A rolling update automatically scales the Pod count and upgrades or downgrades a service without users noticing, turning what used to be complicated and delicate operations work into something simple and easy.
- The version of an application in Kubernetes is not just the container image but the entire Pod template; for ease of processing, a digest algorithm computes a hash of the template to serve as the version number.
- Kubernetes updates applications with a rolling-update strategy, scaling down the old Pods while scaling up the new Pods, so the service remains available throughout the update.
- The command for managing application updates is kubectl rollout, with subcommands such as status, history, and undo.
- Kubernetes records an application's update history; you can use history --revision to inspect the details of each version, and you can attach a note to each update with kubernetes.io/change-cause.
In addition, the Deployment has other fields that control the rolling-update process in finer detail. They all live under spec.strategy.rollingUpdate, such as maxSurge and maxUnavailable, which control the maximum number of extra Pods and the maximum number of unavailable Pods, respectively. The defaults are usually good enough.
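As a sketch, these fields sit under the Deployment's spec like this (the values here are illustrative; both fields default to 25%):

```yaml
spec:
  strategy:
    type: RollingUpdate      # the default update strategy
    rollingUpdate:
      maxSurge: 1            # at most 1 Pod above the desired count
      maxUnavailable: 0      # never fall below the desired count
```

Setting maxUnavailable to 0 trades a slower rollout for a guarantee that capacity never dips during the update.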
When a Deployment's version changes, it creates a ReplicaSet object; different versions get different ReplicaSets, and the ReplicaSet in turn scales the Pod count.
Besides kubectl apply, you can trigger an application update by modifying the API object in any other way, for example with the commands kubectl edit, kubectl patch, or kubectl set image.
Kubernetes does not record the entire update history, which would waste resources; by default it keeps only the 10 most recent revisions, but this value can be adjusted with the revisionHistoryLimit field.
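For example, to keep a shorter history (the value 3 here is just an illustration):

```yaml
spec:
  revisionHistoryLimit: 3   # keep only the 3 most recent old ReplicaSets
```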
6. Q&A
What are the similarities and differences between the Kubernetes "rolling update" we learned today and the "grayscale release" we often talk about?
"Rolling update" is a capability, while "grayscale release" is a feature: building on its rolling-update capability and horizontal Pod scaling, k8s can provide features similar to "grayscale release" and "canary release".
Grayscale release means multiple application versions coexist, with traffic distributed among them by some ratio; a rolling update is a release method that gradually replaces the "old" version with the "new" one. Grayscale release is also called canary release: during the grayscale period the "new" and "old" versions exist at the same time, and this release method can be used for A/B testing.
Version rollback could also be achieved by directly redeploying the old version's YAML, so what are the benefits of the kubectl rollout undo command?
Before discussing this question, we first need to understand the k8s controller model and introduce the concept of a ReplicaSet. What does it mean? A Deployment does not control Pods directly; a Pod belongs to a ReplicaSet. In other words, the Deployment controls the ReplicaSet (which we can roughly equate with a "version"), and the ReplicaSet in turn controls the number of Pods. We can look at the details with kubectl get rs:
With that, the difference between "version rollback" and "directly deploying the old version's YAML" is easy to understand. A revision here is like a tag: the snapshot correctly records the most original information about what was deployed at the time, so rolling back guarantees correctness to the greatest extent (k8s guarantees this for us). If instead we redeploy an old yaml file, we cannot be sure the file has not been changed in the meantime; the variables are large, so deploying directly from yaml considerably increases the risk of the deployment.
Also, in an experimental environment I may not even have the YAML files; sometimes I just make a small adjustment and release. In that case, undo is easier to use, and it truly realizes version rollback.