K8S Resource Controller

1. What is a controller

Kubernetes ships with many built-in controllers. Each one acts like a state machine that drives Pods toward a desired state and governs their behavior.

2. Controller types

  • ReplicationController and ReplicaSet
  • Deployment
  • DaemonSet
  • StatefulSet
  • Job/CronJob
  • Horizontal Pod Autoscaling

2.1 ReplicationController and ReplicaSet

A ReplicationController (RC) ensures that the number of replicas of a containerized application always matches the user-defined replica count: if a container exits abnormally, a new Pod is created automatically to replace it, and surplus Pods are automatically reclaimed.

In newer versions of Kubernetes, ReplicaSet (RS) is the recommended replacement for ReplicationController. There is no essential difference between RS and RC, except that RS supports set-based selectors.
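
For illustration, a minimal ReplicaSet sketch using a set-based selector (matchExpressions), which is exactly what RC's equality-based selectors cannot express; the name, labels, and image are illustrative:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-rs              # illustrative name
spec:
  replicas: 3
  selector:
    matchExpressions:         # set-based selector, not supported by RC
      - key: app
        operator: In
        values: [nginx, web]
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.7.9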

2.2 Deployment

Deployment provides declarative management of Pods and ReplicaSets, replacing the older RC and making applications easy to manage. A Deployment does not create Pods directly; it creates them by controlling a ReplicaSet.
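
A minimal Deployment sketch (names and image are illustrative); applying it makes the Deployment create a ReplicaSet, which in turn creates the Pods:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 5
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.7.9   # illustrative image/tag

# kubectl apply -f nginx-deployment.yaml   (declarative creation)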

2.2.1 Typical application scenarios include:

  • Define a Deployment to create Pods and a ReplicaSet
  • Roll out upgrades and roll back the application
  • Scale out and scale in
  • Pause and resume a Deployment

2.2.2 Update strategy

During an upgrade, a Deployment guarantees that only a certain number of Pods are down at any one time. By default, it ensures that at least the desired number of Pods minus one are up (at most one unavailable).

A Deployment likewise ensures that only a certain number of Pods beyond the desired count are created. By default, at most one Pod more than the desired number is up (at most one extra).

In newer versions of Kubernetes, the 1/1 default (update one, delete one) changes to 25%/25% (surge up to 25% of the Pods, take down up to 25%).
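
Both limits are configurable on the Deployment; a sketch of the relevant spec fields, with the one-at-a-time defaults described above made explicit (the values may also be percentages):

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most 1 Pod below the desired count during the upgrade
      maxSurge: 1         # at most 1 Pod above the desired count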

2.2.3 Rollover (multiple rollouts in parallel)

Suppose we create a Deployment with 5 replicas of nginx:1.7.9, but when only 3 of those replicas have been created we decide to update nginx to 1.9.1. In that case, the Deployment immediately kills the 3 nginx:1.7.9 Pods it has already created and starts creating nginx:1.9.1 Pods; it does not wait until all 5 nginx:1.7.9 Pods exist before starting the update.

2.2.4 Rolling back a Deployment

# 1. Check the rollout status
kubectl rollout status deployment/nginx-deployment
# 2. View the revision history
kubectl rollout history deployment/nginx-deployment
# 3. Roll back to a specific revision, e.g. from the current revision 3 back to revision 1
kubectl rollout undo deployment/nginx-deployment --to-revision=1
# 4. Pause the rollout
kubectl rollout pause deployment/nginx-deployment
# 5. 'kubectl rollout status' reports whether the Deployment has completed; on a
#    successful rollout it returns exit code 0, which can be checked afterwards with:
echo $?

2.2.5 Cleanup strategy

You can specify the maximum number of revision history records a Deployment keeps via the spec.revisionHistoryLimit field. By default, all revisions are kept; if this field is set to 0, the Deployment cannot be rolled back.
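
For example, to keep only the last 10 revisions (the value is illustrative):

spec:
  revisionHistoryLimit: 10   # set to 0 to keep no history, which disables rollback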

Aside:

  • Imperative programming focuses on how to implement the program: as when we first learn to program, we work out the program's logic and write down the implementation step by step
  • Declarative programming focuses on defining what you want, then telling the computer/engine to achieve it for you

2.3 DaemonSet

A DaemonSet ensures that all (or some) Nodes run a copy of a Pod. As Nodes join the cluster, a Pod is added on them; as Nodes are removed from the cluster, those Pods are reclaimed. Deleting a DaemonSet deletes all the Pods it created.

Some typical usages of DaemonSet (a minimal manifest sketch follows the list):

  • Run a cluster storage daemon on every Node, for example glusterd or ceph
  • Run a log collection daemon on every Node, such as fluentd or logstash
  • Run a monitoring daemon on every Node, such as Prometheus Node Exporter, collectd, Datadog agent, New Relic agent, or Ganglia gmond
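
A minimal DaemonSet sketch for the log-collection case; the namespace, labels, and image tag are illustrative:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: fluentd
  template:
    metadata:
      labels:
        name: fluentd
    spec:
      containers:
        - name: fluentd
          image: fluentd:latest   # illustrative tag

No replicas field is needed: the DaemonSet schedules exactly one Pod per matching Node.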

2.4 Job

A Job is responsible for batch tasks, i.e. tasks that are executed only once; it ensures that one or more Pods of the batch task complete successfully.
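
A minimal Job sketch using the classic pi-calculation example (image and command are illustrative):

apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  completions: 1             # number of Pods that must finish successfully
  backoffLimit: 4            # retries before the Job is marked as failed
  template:
    spec:
      restartPolicy: Never   # Job Pods must use Never or OnFailure
      containers:
        - name: pi
          image: perl
          command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]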

2.5 CronJob

CronJob manages time-based Jobs:

  • Run only once at a given point in time
  • Run periodically at a given point in time

Prerequisite: the Kubernetes cluster version must be >= 1.8 (for CronJob). On clusters older than 1.8, the batch/v2alpha1 API can be enabled by passing the --runtime-config=batch/v2alpha1=true option when starting the API Server.

Typical usage is as follows:

  • Schedule Job to run at a given point in time
  • Create a job that runs periodically, such as: database backup, sending emails
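
A minimal CronJob sketch that prints a message every minute (the schedule and command are illustrative):

apiVersion: batch/v1   # batch/v1 since Kubernetes 1.21; older clusters use batch/v1beta1 or batch/v2alpha1 (see above)
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"    # standard cron syntax
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: hello
              image: busybox
              args: ["/bin/sh", "-c", "date; echo Hello from the CronJob"]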

2.6 StatefulSet

As a controller, StatefulSet provides a stable, unique identity for each Pod and guarantees the ordering of deployment and scaling.
StatefulSet is designed to solve the problem of stateful services (whereas Deployment and ReplicaSet are designed for stateless services). Its application scenarios include the following (a combined manifest sketch follows the list):

  • Stable persistent storage: the same persisted data can be reached after a Pod is rescheduled; implemented with PVCs
  • Stable network identity: a Pod's name and hostname remain unchanged after it is rescheduled; implemented with a headless Service (i.e. a Service without a ClusterIP)
  • Ordered deployment and ordered scaling: Pods are ordered, and deployment or scale-out proceeds in the defined order (from 0 to N-1; every earlier Pod must be Running and Ready before the next one starts); implemented with init containers
  • Ordered scale-in and ordered deletion (i.e. from N-1 to 0)
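
A combined sketch of a StatefulSet and the headless Service it requires (clusterIP: None); names, image, and storage size are illustrative:

apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  clusterIP: None            # headless: each Pod gets a stable DNS name
  selector:
    app: nginx
  ports:
    - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: nginx         # must reference the headless Service
  replicas: 3                # Pods start in order: web-0, web-1, web-2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.7.9
          volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:      # one PVC per Pod, giving stable storage
    - metadata:
        name: www
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi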

2.7 Horizontal Pod Autoscaling

Horizontal Pod Autoscaling (HPA) is not strictly a controller in its own right; it achieves its goal by controlling other controllers (such as Deployments or ReplicaSets). An application's resource usage typically has peaks and valleys; when you want to smooth these out, raise the cluster's overall resource utilization, and adjust the number of Pods in a service automatically, HPA is what you need.
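
A minimal HPA sketch targeting the Deployment above and scaling on CPU utilization (the thresholds are illustrative); kubectl autoscale deployment nginx-deployment --min=2 --max=10 --cpu-percent=50 is the equivalent one-liner:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  scaleTargetRef:            # the controller whose replica count HPA adjusts
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50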

Source: blog.csdn.net/sinat_34241861/article/details/113104000