Pod controller
Controller concept
Kubernetes has many built-in controllers; each is equivalent to a state machine that drives Pods toward a desired state and controls their behavior.
Classification by life cycle
- Autonomous Pods: if such a Pod exits, it will not be recreated
- Controller-managed Pods: throughout the controller's life cycle, the desired number of Pod replicas must always be maintained
Controller types
- ReplicationController and ReplicaSet
- Deployment
- DaemonSet
- StatefulSet
- Job/CronJob
- Horizontal Pod Autoscaling
See previous articles for details
Controller examples
1. RC, RS, and their relation to Deployment
RC (ReplicationController): its main function is to ensure that the number of replicas of a containerized application always stays at the user-defined count. That is, if a container exits abnormally, a new Pod is automatically created to replace it; surplus containers are likewise automatically reclaimed.
Kubernetes officially recommends using RS (ReplicaSet) instead of RC (ReplicationController) for deployments. RS is not essentially different from RC apart from the name, but RS additionally supports set-based selectors.
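As a sketch of what "set-based selectors" adds, an RS selector can use matchExpressions in addition to matchLabels. The label values here are illustrative, not from the example below:

```yaml
# Hypothetical selector fragment: matches Pods whose "tier" label
# is either frontend or frontend-v2 (set-based selection)
selector:
  matchExpressions:
    - key: tier
      operator: In        # other operators: NotIn, Exists, DoesNotExist
      values:
        - frontend
        - frontend-v2
```

A plain matchLabels selector can only express exact equality; matchExpressions is what RS adds over RC.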
View RS full template information: kubectl explain rs
vim rs.yaml
apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
  name: frontend
spec:
  replicas: 3              # 3 replicas
  selector:                # label selector
    matchLabels:           # Pods whose labels match belong to this RS
      tier: frontend       # key: value
  template:                # Pod template; everything below is Pod properties
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - name: myapp
        image: hub.atguigu.com/library/myapp:v1
        env:               # inject environment variables
        - name: GET_HOSTS_FROM
          value: dns
        ports:
        - containerPort: 80
kubectl delete pod --all
kubectl create -f rs.yaml
kubectl get pod
kubectl get pod --show-labels
kubectl label pod frontend-2q74g tier=frontend1 --overwrite=true
kubectl get pod --show-labels
You will now find four Pods: the relabeled one plus three that match tier=frontend. Because the RS must keep 3 replicas and only two Pods still match its selector after the relabel, the RS creates a new Pod to reach the desired count of 3.
Then delete the RS; one Pod is left behind because, after the relabel, it is no longer managed by the RS.
kubectl delete rs --all
kubectl get pod --show-labels
2. The association between RS and Deployment
Deployment provides a declarative way to manage Pods and ReplicaSets, replacing the earlier ReplicationController and making applications easier to manage. Typical use cases include:
- Define Deployment to create Pod and ReplicaSet
- Rolling upgrade and rollback
- Application expansion and contraction
- Pause and resume deployment
★Deploy a simple Nginx application
1. Create
vim deployment.yaml
kubectl apply -f deployment.yaml --record
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: myapp
        image: hub.atguigu.com/library/myapp:v1
        ports:
        - containerPort: 80
kubectl create -f https://kubernetes.io/docs/user-guide/nginx-deployment.yaml --record  ## the --record flag records the command, making it easy to review what changed in each revision
Creating the Deployment also creates a corresponding RS; the RS is tied to the Deployment by a pod-template-hash suffix in its name (e.g. 7bb4cb8f64) and by labels.
2. Scaling
kubectl scale deployment nginx-deployment --replicas 10
Stateless applications scale out and in this easily.
3. If the cluster supports horizontal Pod autoscaling, you can also set up autoscaling for the Deployment:
kubectl autoscale deployment nginx-deployment --min=10 --max=15 --cpu-percent=80
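The same autoscaling policy can also be declared as an object instead of via the CLI; a minimal sketch using the autoscaling/v1 API, assuming it targets the nginx-deployment created above:

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-deployment
spec:
  scaleTargetRef:           # the workload this HPA scales
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment
  minReplicas: 10
  maxReplicas: 15
  targetCPUUtilizationPercentage: 80   # scale out when average CPU exceeds 80%
```

This mirrors the kubectl autoscale flags above, one to one.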
4. Updating the image is also straightforward:
kubectl set image deployment/nginx-deployment myapp=wangyanglinux/myapp:v2  # myapp is the container name
kubectl get rs  # changing the image triggers the creation of a new RS, as mentioned before; the update replaces Pods one by one
5. Rollback
kubectl rollout undo deployment/nginx-deployment
3. Update Deployment
Deployment update strategy
- Deployment can guarantee that only a certain number of Pods are down during the upgrade. By default, it will ensure that at least one less than the expected number of Pods is up (at most one is unavailable)
- Deployment can also ensure that only a certain number of Pods that exceed the expected number are created. By default, it will ensure that at most one more Pod than the expected number of Pods is up (at most 1 surge)
- In a future version of Kubernetes, the defaults change from 1 and 1 to 25% and 25%
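The two bounds above correspond to the maxUnavailable and maxSurge fields of the rolling-update strategy; a sketch of how they could be set explicitly on a Deployment (the values shown are the 25%/25% defaults):

```yaml
# Fragment of a Deployment spec
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%   # at most this many Pods below the desired count
      maxSurge: 25%         # at most this many Pods above the desired count
```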
kubectl describe deployment
Rollover (multiple rollouts in parallel)
Suppose you create a Deployment with 5 replicas of nginx:1.7.9, but when only 3 nginx:1.7.9 replicas have been created, you update the Deployment to 5 replicas of nginx:1.9.1. In this case, the Deployment immediately kills the 3 nginx:1.7.9 Pods that were already created and starts creating nginx:1.9.1 Pods; it does not wait for all 5 nginx:1.7.9 Pods to be created before starting the update.
Roll back a Deployment
kubectl set image deployment/nginx-deployment nginx=nginx:1.91
Check the current rollout status: kubectl rollout status deployment/nginx-deployment
View revision history: kubectl rollout history deployment/nginx-deployment
CHANGE-CAUSE is only recorded if --record was added at creation time
kubectl rollout undo deployment/nginx-deployment
You can use the --to-revision parameter to specify a historical revision: kubectl rollout undo deployment/nginx-deployment --to-revision=2
Pause updates to a Deployment: kubectl rollout pause deployment/nginx-deployment
You can use the kubectl rollout status command to check whether the Deployment rollout is complete. If the rollout finished successfully, kubectl rollout status returns an exit code of 0.
kubectl rollout status deployments nginx-deployment
echo $?
Cleanup policy
You can set .spec.revisionHistoryLimit to specify the maximum number of revision history records the Deployment keeps. By default, all revisions are kept; if this is set to 0, the Deployment cannot be rolled back.
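For example, to keep only the last 10 revisions (the value here is illustrative; pick a limit that balances rollback depth against clutter):

```yaml
# Fragment of a Deployment spec
spec:
  revisionHistoryLimit: 10   # old ReplicaSets beyond this count are garbage-collected
```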
4. DaemonSet
daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemonset-example
  labels:
    app: daemonset
spec:
  selector:
    matchLabels:
      name: daemonset-example    # must match the Pod template labels below
  template:
    metadata:
      labels:
        name: daemonset-example
    spec:
      containers:
      - name: daemonset-example
        image: wangyanglinux/myapp:v1
# the template can be richer: init containers, postStart/preStop hooks, readiness probes, etc.
kubectl create -f daemonset.yaml
kubectl get pod -o wide
By default, no Pod is created on the master node (it carries a NoSchedule taint).
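If you do want the DaemonSet to run on the master as well, the Pod template can tolerate the master taint. A sketch; note the taint key is node-role.kubernetes.io/master on older clusters and node-role.kubernetes.io/control-plane on newer ones:

```yaml
# Fragment of the DaemonSet spec
spec:
  template:
    spec:
      tolerations:
        - key: node-role.kubernetes.io/master
          operator: Exists      # tolerate the taint regardless of its value
          effect: NoSchedule
```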
kubectl delete pod daemonset-example-t5fzc
5. Job
vim job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    metadata:
      name: pi
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]  # compute pi to 2000 decimal places
      restartPolicy: Never
kubectl create -f job.yaml
kubectl describe pod pi-9vt9h  # view execution status
kubectl get job  # list the current Jobs
Verify the result: the Pod log should contain pi to 2000 decimal places.
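Beyond the template, a Job spec has a few control fields worth knowing; a sketch using standard batch/v1 fields (the values are illustrative):

```yaml
# Fragment of a Job spec
spec:
  completions: 1              # Pods that must finish successfully
  parallelism: 1              # Pods allowed to run at the same time
  backoffLimit: 4             # retries before the Job is marked failed
  activeDeadlineSeconds: 600  # overall time limit for the whole Job
```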
6. CronJob
Just as a Deployment manages Pods by creating an RS, a CronJob manages time-based Jobs by creating Job objects, namely:
- Run once at a given point in time
- Run periodically at given points in time
Prerequisite: the Kubernetes cluster version is >= 1.8 (for CronJob). For clusters older than 1.8, start the API Server with the option --runtime-config=batch/v2alpha1=true to enable the batch/v2alpha1 API.
The typical usage is as follows:
- Schedule Job to run at a given point in time
- Create a job that runs periodically, for example: database backup, sending email
vim cronjob.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"    # schedule, in cron format
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
kubectl apply -f cronjob.yaml
kubectl get cronjob
kubectl get jobs
Then check the log to verify.
[root@k8s-master01 ~]# vim cronjob.yaml
[root@k8s-master01 ~]# kubectl apply -f cronjob.yaml
cronjob.batch/hello created
[root@k8s-master01 ~]# kubectl get cronjob
NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE
hello */1 * * * * False 0 <none> 15s
[root@k8s-master01 ~]# kubectl get job
NAME COMPLETIONS DURATION AGE
hello-1601467260 1/1 9s 90s
hello-1601467320 1/1 10s 30s
[root@k8s-master01 ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
hello-1601467260-466fs 0/1 Completed 0 108s
hello-1601467320-c5b5b 0/1 Completed 0 48s
[root@k8s-master01 ~]# kubectl log hello-1601467260-466fs
log is DEPRECATED and will be removed in a future version. Use logs instead.
Wed Sep 30 12:01:13 UTC 2020
Hello from the Kubernetes cluster
[root@k8s-master01 ~]# kubectl log hello-1601467320-c5b5b
log is DEPRECATED and will be removed in a future version. Use logs instead.
Wed Sep 30 12:02:14 UTC 2020
[root@k8s-master01 ~]# kubectl delete cronjob --all
cronjob.batch "hello" deleted
Some limitations of CronJob itself: the operation a Job performs should be idempotent, because the CronJob may occasionally create a Job more than once. The success of a CronJob is not easy to judge directly: the CronJob is only responsible for creating Jobs, and whether a run succeeded must be judged from the Job itself.
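Because a CronJob only creates Jobs, and may occasionally create a duplicate or miss a run, the spec offers fields to bound that behavior; a sketch using standard CronJob fields (the values are illustrative):

```yaml
# Fragment of a CronJob spec
spec:
  schedule: "*/1 * * * *"
  concurrencyPolicy: Forbid        # Allow | Forbid | Replace overlapping runs
  startingDeadlineSeconds: 100     # how late a missed run may still be started
  successfulJobsHistoryLimit: 3    # finished Jobs kept around for inspection
  failedJobsHistoryLimit: 1
```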
StatefulSet and HPA require background we have not covered yet; they will be discussed in a later article.