This article has been included in the column " Learn k8s from scratch "
Previous article: k8s core technology - Controller
Deployment Controller
What is a Deployment Controller
- Deploys stateless applications
- Manages Pods and ReplicaSets
- Supports deployment, rolling upgrades, and more
- Typical scenarios: web services, microservices
Deployment represents an update operation that the user applies to the K8s cluster. It is an API object with a broader application model than the ReplicaSet (RS): it can create a new service, update an existing one, or upgrade a service in a rolling manner. A rolling upgrade is actually a composite operation: create a new RS, gradually raise the new RS's replica count to the desired number, and reduce the old RS's replica count to 0.
Such a composite operation is not well described by a single RS, so the more general Deployment is used to describe it. Given the direction K8s is heading, all long-running services will eventually be managed through Deployments.
Deployment overview
Deployment is the most commonly used resource object in Kubernetes. It provides a declarative way to define ReplicaSets and Pods. When a desired state is described in a Deployment object, the Deployment controller changes the actual state toward that desired state at a controlled rate. Defining a Deployment creates a new ReplicaSet, and the ReplicaSet in turn creates the Pods; when the Deployment is deleted, its ReplicaSet and Pods are deleted along with it.
We use a Deployment instead of creating a ReplicaSet directly because Deployment objects have many features that ReplicaSets lack, such as rolling upgrades and rollbacks.
Extension: a declarative definition means modifying the resource's YAML manifest directly and then applying it with kubectl apply -f; the resource is changed accordingly. The Deployment controller is built on top of ReplicaSets and can manage multiple of them: every time the image version is updated, a new RS is generated to replace the old one. Multiple RSs exist at the same time, but only one RS is active.
For example, RS v1 controls three Pods; one Pod is deleted and a new one is created under RS v2, and so on until all Pods are controlled by RS v2. If RS v2 has a problem, it can be rolled back. A Deployment sits on top of its RSs: multiple RSs belong to one Deployment, but only one RS is active.
How Deployments Work: How to Manage rs and Pods?
A Deployment can be managed declaratively, and you can also modify the live resource directly from the command line, i.e. by patching it. Deployments provide rolling, self-controlled updates, and when performing an update we can also control the update rhythm and update logic.
What is the update rhythm and update logic?
For example, suppose a Deployment controls 5 Pod replicas, so the desired Pod count is 5. During an upgrade the controller is allowed to run a few extra replicas beyond the 5. With "at most one more, never fewer", the upgrade adds one new Pod, then deletes an old one, adds one and deletes one, always keeping 5 replicas available. Another option is "at most one more, at least one fewer", i.e. at most 6 and at least 4 Pods: add one the first time, delete two, add two the second time, delete two, and so on. In this way you control the update method yourself. A rolling update relies on readinessProbe and livenessProbe checks to ensure that the application in the new Pod's container has started correctly before the old Pod is deleted.
You can also update only the first batch and then pause; or, if the target is 5, at most one fewer is allowed, and a maximum of 10 Pods is allowed, then add 5 at a time. This is how we control the rhythm and method of the update ourselves.
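The "one more, never fewer" rhythm described above maps onto the Deployment's rollingUpdate strategy fields. A minimal sketch (the probe values here are illustrative, not taken from the article's manifests):

```yaml
spec:
  replicas: 5
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # at most one extra Pod during the update
      maxUnavailable: 0  # never fewer than the desired 5 Pods
  template:
    spec:
      containers:
      - name: nginx
        image: nginx:1.15
        readinessProbe:  # old Pods are removed only after new ones pass this check
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 10
```

Setting maxSurge and maxUnavailable to other values (absolute numbers or percentages) gives the other rhythms discussed above.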
With the Deployment object, you can easily do the following:
1. Create ReplicaSet and Pod
2. Rolling upgrade (upgrade without stopping the old service) and rollback application (roll back the application to the previous version)
4. Scale smoothly up and down
4. Pause and resume Deployment
Simple use of Deployment
We can try generating a Deployment manifest with the following command [a dry run only; nothing is actually created]
kubectl create deployment web --image=nginx --dry-run=client -o yaml > nginx.yaml
This writes a YAML configuration file, nginx.yaml, with the following content:
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: web
  name: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: web
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
status: {}
The selector and labels we see here are the bridge between our Pod and the Controller.
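That bridge is a label match: the Deployment's selector must match the labels in its Pod template. Pulled out of the manifest above, with comments added:

```yaml
spec:
  selector:
    matchLabels:
      app: web      # the Deployment manages Pods carrying this label
  template:
    metadata:
      labels:
        app: web    # every Pod created from this template gets the label
```

If the two did not match, the Deployment would be rejected, because it could never adopt the Pods it creates.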
Create Pods with YAML
Now quickly create the Deployment (and its Pod) from the manifest we just generated
kubectl apply -f nginx.yaml
But a Deployment created this way can only be accessed from inside the cluster, so we also need to expose a port to the outside world
kubectl expose deployment web --port=80 --type=NodePort --target-port=80 --name=web1
Regarding the above command, there are several parameters
- --port: the Service's port inside the cluster
- --target-port: the port the container listens on inside the Pod
- --name: the Service's name
- --type: the Service type; NodePort opens a port on every node, which is what makes the Service reachable from outside the cluster
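How the three ports relate can be seen in a stripped-down Service manifest (the nodePort value here is illustrative; by default Kubernetes picks one from the 30000-32767 range):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web1
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - port: 80        # Service port inside the cluster (--port)
    targetPort: 80  # container port in the Pod (--target-port)
    nodePort: 30438 # port opened on every node for external access
```

Traffic arriving at any node on the nodePort is forwarded to the Service's port, and from there to the targetPort of a matching Pod.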
In the same way, we can also export the corresponding configuration file
kubectl expose deployment web --port=80 --type=NodePort --target-port=80 --name=web1 -o yaml > web1.yaml
The resulting web1.yaml looks like this
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2022-07-11T07:21:58Z"
  labels:
    app: web
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          .: {}
          f:app: {}
      f:spec:
        f:externalTrafficPolicy: {}
        f:ports:
          .: {}
          k:{"port":80,"protocol":"TCP"}:
            .: {}
            f:port: {}
            f:protocol: {}
            f:targetPort: {}
        f:selector:
          .: {}
          f:app: {}
        f:sessionAffinity: {}
        f:type: {}
    manager: kubectl
    operation: Update
    time: "2022-07-11T07:21:58Z"
  name: web-5dcb957ccc-246f9
  namespace: default
  resourceVersion: "49814"
  selfLink: /api/v1/namespaces/default/services/web-5dcb957ccc-246f9
  uid: 05dbb353-3b46-4907-a6fe-7f345d178d93
spec:
  clusterIP: 10.105.31.251
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 30438
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: web
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
Then we can view the Pods and the exposed Service with the following command
kubectl get pods,svc
Then we visit the corresponding URL, e.g. http://192.168.11.139:32639/, and we can see the nginx welcome page.
Upgrade rollback and elastic scaling
- Upgrade: suppose the version is upgraded from 1.14 to 1.15; this is called an application upgrade [an upgrade can keep the service uninterrupted]
- Rollback: from version 1.15 to 1.14, this is called application rollback
- Elastic scaling: We change the number of Pods to provide external services according to different business scenarios, which is elastic scaling
Application upgrades and rollbacks
First, let's create a Pod of version 1.14
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: web
  name: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: web
    spec:
      containers:
      - image: nginx:1.14
        name: nginx
        resources: {}
status: {}
We first specify the version as 1.14, and then start creating our Pod
kubectl apply -f nginx.yaml
Then we check the Pods; nginx is scheduled on node2. Going over there and running the docker images command, we can see that a version 1.14 image was pulled successfully
nginx 1.14 295c7be07902 3 years ago 109MB
We can upgrade nginx from 1.14 to 1.15 using the following command
kubectl set image deployment web nginx=nginx:1.15
After we execute the command, we can see the upgrade process
[root@k8smaster node]# kubectl get pods
NAME                    READY   STATUS              RESTARTS   AGE
web-65b7447c7-9rp9v     1/1     Running             0          107s
web-7d9697b7f8-fdtbv    0/1     ContainerCreating   0          21s
[root@k8smaster node]# kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
nginx-f89759699-5qdtn   1/1     Running   2          14d
web-7d9697b7f8-fdtbv    1/1     Running   0          37s
- First, the 1.14 Pod keeps running while the 1.15 Pod is being created
- After the 1.15 Pod is up, the 1.14 Pod is stopped
- Finally, the 1.14 Pod is removed, which completes the upgrade
While the 1.15 image is downloading, the new container is in the ContainerCreating state; only after the download completes is the 1.14 Pod replaced by the 1.15 Pod. The advantage of this is that the upgrade does not interrupt the service.
We go to our node2 node and check docker images again; we can see that the 1.15 version of nginx has been pulled successfully.
Check upgrade status
The upgrade status can be checked with the following command
kubectl rollout status deployment web
View Historical Versions
We can also view historical versions
kubectl rollout history deployment web
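The CHANGE-CAUSE column in the history output is populated from the kubernetes.io/change-cause annotation on the Deployment; without it the column shows &lt;none&gt;. A sketch of setting it in the manifest (the message text is just an example):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  annotations:
    kubernetes.io/change-cause: "upgrade nginx to 1.15"  # shown by kubectl rollout history
```

Updating the annotation on each change gives each revision a human-readable description.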
Apply rollback
We can use the following command to complete the rollback operation, that is, roll back to the previous version
kubectl rollout undo deployment web
Then we can check the status
kubectl rollout status deployment web
At the same time, we can also roll back to a specified revision
kubectl rollout undo deployment web --to-revision=2
Elastic scaling
Elastic scaling means changing the number of replicas with a single command
kubectl scale deployment web --replicas=10
It can be clearly seen that 10 replicas were created at once.
Write at the end
Creating content is not easy. If you found this article helpful, please like, favorite, and follow to support me! If there are any mistakes, please point them out in the comments and I will fix them promptly!
The series currently being updated: Learn k8s from scratch.
Thank you for reading. The article includes my personal understanding; if there are any errors, please contact me and point them out~