Introducing k8s: kube-controller-manager

1. Controller Manager Overview

The Controller Manager is the internal management and control center of the cluster. It is responsible for managing Nodes, Pod replicas, service endpoints (Endpoints), namespaces (Namespace), service accounts (ServiceAccount), and resource quotas (ResourceQuota) within the cluster. When a Node goes down unexpectedly, the Controller Manager detects the failure and runs an automated repair process to keep the cluster in the expected operating state.
Each controller monitors the current state of its resource objects in real time through the interfaces provided by the API Server. When a failure changes the state of the system, the controllers try to repair that state back to the "desired state".
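
As a quick sanity check on a running cluster, the health of the controller-manager component can be viewed with kubectl; the output below is only illustrative and depends on the cluster setup:

kubectl get componentstatuses
# Illustrative output:
# NAME                 STATUS    MESSAGE              ERROR
# controller-manager   Healthy   ok
# scheduler            Healthy   ok
# etcd-0               Healthy   {"health": "true"}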

2. Replication Controller

To avoid confusion, the Replication Controller resource object is abbreviated as RC in this article, while the Replication Controller inside the Controller Manager is called the replication controller. The core duty of the replication controller is to make sure that the number of Pod replicas associated with an RC in the cluster always stays at the preset value.

The replication controller manages a Pod's lifecycle (creation, destruction, restarts, etc.) only when the Pod's restart policy is Always (RestartPolicy=Always). 
The Pod template in an RC works like a mold: once a Pod comes out of the mold, it no longer has any relationship with it. No matter how the template is changed later, Pods that have already been created are not affected. 
A Pod can be detached from the control of its RC by modifying its labels; this can be used to move a Pod out of the RC's control for data repair and debugging (see the example below). 
Deleting an RC does not affect the Pods it has created; to delete those Pods, set the RC's replica count to 0 first. 
Do not bypass the RC to create Pods directly, because the RC automates Pod control and improves disaster-recovery capability.
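
A minimal sketch of the last two points, assuming an RC named frontend whose selector is app=frontend and a Pod named frontend-abc12 created by it (all names are hypothetical):

# Detach the Pod from its RC by changing the label the RC selector matches;
# the RC immediately creates a replacement, and the detached Pod can be
# inspected and debugged in isolation.
kubectl label pod frontend-abc12 app=frontend-debug --overwrite

# To delete the Pods an RC manages, scale its replica count to 0 first,
# then delete the RC itself.
kubectl scale rc frontend --replicas=0
kubectl delete rc frontend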

2.1. Replication Controller duties

Ensure that there are exactly N Pod instances in the cluster, where N is the number of Pod replicas defined in the RC.
Scale the system up or down by adjusting the spec.replicas value in the RC (see the sample manifest below).
Perform rolling upgrades of the system by changing the Pod template in the RC.
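
Both spec.replicas and the Pod template mentioned above live in the RC manifest. A minimal sketch, with hypothetical names and image:

apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend            # hypothetical RC name
spec:
  replicas: 3               # N: the desired number of Pod replicas
  selector:
    app: frontend
  template:                 # the Pod template (the "mold" described earlier)
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: nginx:1.9    # hypothetical image
        ports:
        - containerPort: 80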

2.2. Replication Controller usage scenarios

The main usage scenarios, with the command used in each case:

Rescheduling: when a node fails or a Pod is terminated unexpectedly, Pods are rescheduled so that the specified number of replicas keeps running in the cluster. 
Elastic scaling: modify the RC's spec.replicas value, manually or automatically, to scale the system up or down. Command: kubectl scale 
Rolling update: create a new RC through kubectl or the API; new replicas are added while old replicas are deleted one at a time, and when the old replica count reaches 0 the old RC is deleted. Command: kubectl rolling-update
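
A minimal elastic-scaling example, assuming the hypothetical frontend RC sketched earlier:

# Manually scale the RC to 5 replicas; the replication controller creates or
# deletes Pods until the actual count matches spec.replicas.
kubectl scale rc frontend --replicas=5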

For the details of rolling upgrades, see kubectl rolling-update --help and the official documentation: https://kubernetes.io/docs/tasks/run-application/rolling-update-replication-controller/

[root@node5 ~]# kubectl rolling-update --help
Perform a rolling update of the given ReplicationController.
Replaces the specified replication controller with a new replication controller by updating one pod at a time to use the
new PodTemplate. The new-controller.json must specify the same namespace as the
existing replication controller and overwrite at least one (common) label in its replicaSelector.
Usage:
  kubectl rolling-update OLD_CONTROLLER_NAME ([NEW_CONTROLLER_NAME] --image=NEW_CONTAINER_IMAGE | -f NEW_CONTROLLER_SPEC) [flags]
Aliases:
  rolling-update, rollingupdate

Examples:

Update pods of frontend-v1 using new replication controller data in frontend-v2.json.

 kubectl rolling-update frontend-v1 -f frontend-v2.json

Update pods of frontend-v1 using JSON data passed into stdin.

cat frontend-v2.json | kubectl rolling-update frontend-v1 -f -

Update the pods of frontend-v1 to frontend-v2 by just changing the image, and switching the 
name of the replication controller.

 kubectl rolling-update frontend-v1 frontend-v2 --image=image:v2

Update the pods of frontend by just changing the image, and keeping the old name

kubectl rolling-update frontend --image=image:v2

3. Node Controller

At startup, the kubelet registers its node information with the API Server and then reports its status to the API Server periodically; on receiving these updates, the API Server writes the information to etcd.
(Figure: Node Controller workflow)
The Node Controller obtains node-related information in real time through the API Server and uses it to manage and monitor every Node in the cluster. The process is as follows:

1. If the --cluster-cidr parameter was set when the Controller Manager started, it generates a CIDR address for every node whose Spec.PodCIDR is not yet set and writes that CIDR into the node's Spec.PodCIDR property, preventing the CIDR addresses of different nodes from conflicting.

2. The detailed process is shown in the flowchart above.

3. Node information is read one by one; if a node's status has changed to not "Ready", the node is added to the queue of nodes to be deleted, otherwise it is removed from that queue.
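
A hedged illustration of point 1, with example values; node5 is the host seen earlier, and the CIDR values are hypothetical:

# Flag passed to kube-controller-manager at startup:
#   --cluster-cidr=10.244.0.0/16
#
# Inspect the per-node CIDR that was written into the node's spec:
kubectl get node node5 -o jsonpath='{.spec.podCIDR}'
# e.g. 10.244.0.0/24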

4. ResourceQuota Controller

Resource quota management ensures that specified resource objects never occupy an excessive amount of the system's physical resources at any point in time.
It supports three levels of resource configuration management:

1) Container level: limit CPU and Memory usage
2) Pod level: limit the total resources available to all containers in a single Pod
3) Namespace level: limits include

    the number of Pods
    the number of Replication Controllers
    the number of Services
    the number of ResourceQuotas
    the number of Secrets
    the number of Persistent Volumes (PV) that can be held
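
A minimal sketch of a Namespace-level quota covering the object counts listed above (namespace name and values are hypothetical):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: object-counts           # hypothetical name
  namespace: demo               # hypothetical namespace
spec:
  hard:
    pods: "10"
    replicationcontrollers: "5"
    services: "5"
    resourcequotas: "1"
    secrets: "10"
    persistentvolumeclaims: "4" # PV holding is limited via PersistentVolumeClaims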

Description:

Quota management in k8s is enforced through Admission Control;
Admission Control provides two quota-constraint mechanisms: LimitRanger and ResourceQuota;
LimitRanger acts on Pods and Containers;
ResourceQuota acts on a Namespace, limiting the total usage of each type of resource within that Namespace.
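
A minimal LimitRanger sketch for the container and Pod levels described above (all values are illustrative):

apiVersion: v1
kind: LimitRange
metadata:
  name: limits                  # hypothetical name
  namespace: demo               # hypothetical namespace
spec:
  limits:
  - type: Container             # container-level CPU/Memory limits
    default:
      cpu: 500m
      memory: 256Mi
    defaultRequest:
      cpu: 100m
      memory: 128Mi
  - type: Pod                   # cap on the sum across all containers in a Pod
    max:
      cpu: "2"
      memory: 1Gi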

ResourceQuota Controller flow chart: 

5. Namespace Controller

Users can create new Namespaces through the API Server, and they are stored in etcd; the Namespace Controller periodically reads this Namespace information through the API Server.

If a Namespace is marked for graceful deletion through the API (i.e. its DeletionTimestamp is set), its state is set to "Terminating" and saved back to etcd. At the same time, the Namespace Controller deletes the ServiceAccount, RC, Pod, and other resource objects under that Namespace.
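
A hedged illustration with a hypothetical namespace; after the delete request, the namespace typically stays in the Terminating phase until all of its resources have been cleaned up:

kubectl delete namespace demo
kubectl get namespace demo
# Illustrative output:
# NAME   STATUS        AGE
# demo   Terminating   5m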

6. Endpoint Controller

Service, Endpoint, Pod relationship:

An Endpoints object holds the access addresses of all Pod replicas behind the corresponding Service, and the Endpoints Controller is responsible for generating and maintaining all Endpoints objects. It watches for changes in Services and their corresponding Pod replicas:

If a Service is deleted, the Endpoints object with the same name as that Service is deleted;
If a new Service is created or an existing one is modified, the related Pod list is obtained from the Service's information, and the Endpoints object corresponding to that Service is created or updated;
If a Pod event is observed, the Endpoints object of the Service the Pod belongs to is updated.

The kube-proxy process on every node obtains the Endpoints of each Service and uses them to implement load balancing for that Service.
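
A minimal sketch of the relationship, reusing the hypothetical frontend labels from earlier; the selector decides which Pod replicas end up in the Endpoints object:

apiVersion: v1
kind: Service
metadata:
  name: frontend              # hypothetical Service name
spec:
  selector:
    app: frontend             # matches the labels on the Pod replicas
  ports:
  - port: 80
    targetPort: 80

Once the Service exists, the Endpoint Controller creates an Endpoints object with the same name; kubectl get endpoints frontend lists the Pod IP:port pairs it has collected.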

7. Service Controller

The Service Controller acts as an interface controller between the Kubernetes cluster and the external cloud platform. It listens for Service changes; for a Service of type LoadBalancer, it ensures that the corresponding LoadBalancer instance on the external cloud platform is created or deleted and that its routing tables are updated accordingly.
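
A minimal sketch of a LoadBalancer-type Service (only meaningful on a supported cloud platform; names are hypothetical):

apiVersion: v1
kind: Service
metadata:
  name: frontend-lb           # hypothetical Service name
spec:
  type: LoadBalancer          # the Service Controller provisions a cloud load balancer for it
  selector:
    app: frontend
  ports:
  - port: 80
    targetPort: 80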
