Kubernetes container orchestration

1. How does Kubernetes orchestrate containers?

In a Kubernetes cluster, the container is not the smallest unit of scheduling; the Pod is. Containers are encapsulated inside Pods, so a single Pod may contain one or more containers.
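As a minimal sketch (all names and images here are illustrative), a Pod manifest that encapsulates two containers might look like:

```yaml
# Hypothetical Pod with two containers that share the same network namespace.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar      # illustrative name
spec:
  containers:
  - name: web
    image: nginx:1.25
    ports:
    - containerPort: 80
  - name: log-agent           # sidecar container in the same Pod
    image: busybox:1.36
    command: ["sh", "-c", "tail -f /dev/null"]
```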
2. How is a Pod created?
A Pod does not appear out of nowhere; it is a logical abstraction. So how is a Pod created? Pods are created and managed by Pod controllers, such as Deployment and StatefulSet.
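For example, a Deployment controller sketch (names are illustrative) that stamps out Pods from a template:

```yaml
# Hypothetical Deployment: the controller keeps three replicas of the Pod
# template running at all times.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deploy
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:                   # the Pod template the controller creates Pods from
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
```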
3. How do the applications composed of Pods provide external access?
Applications composed of Pods expose themselves inside and outside the cluster through the Service abstraction. External access via a Service, however, requires port mapping, which makes operations tedious and cumbersome. For this reason there is another resource for providing external access, called Ingress.
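A hedged sketch of an Ingress that routes external HTTP traffic to a backend Service without per-Service port mapping (the host and service names are assumptions for illustration):

```yaml
# Hypothetical Ingress: routes requests for web.example.com to the
# Service named web-svc on port 80.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
  - host: web.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-svc
            port:
              number: 80
```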
4. How is a Service associated with Pods?
As mentioned above, Pods are managed by Pod controllers, which automatically drive Pod resource objects toward their desired state. In a Pod controller, the Pod resource object is defined in a YAML file. That file also attaches labels to the Pod resource object for identification, and the Service uses a label selector to associate itself with all Pod resource objects carrying matching labels. This realizes the chain service -> pod -> container.
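A minimal sketch of this association, assuming Pods labeled `app: web` as in the examples above:

```yaml
# Hypothetical Service: the label selector below associates the Service
# with every Pod that carries the label app: web.
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  selector:
    app: web          # matches the labels set in the Pod template
  ports:
  - port: 80
    targetPort: 80
```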
5. What is the logical flow of creating a Pod?
(1) The client submits a creation request to the API Server through its RESTful API, or by using the kubectl command-line tool. Supported data formats include JSON and YAML.
(2) The API Server processes the user request and stores the Pod data in etcd.
(3) The scheduler watches, through the API Server, for Pods that are not yet bound to a node, and tries to assign a host to each such Pod.
(4) Host filtering (predicates): a set of scheduling rules filters out hosts that do not meet the requirements. For example, if the Pod specifies the amount of resources it needs, hosts with fewer available resources than the Pod requires are filtered out.
(5) Host scoring (priorities): the hosts that passed the filtering step are scored. In the scoring stage, the scheduler applies cluster-wide optimization strategies, such as spreading the replicas of a Replication Controller across different hosts, or preferring the host with the lowest load.
(6) Host selection: the highest-scoring host is chosen, the binding operation is performed, and the result is stored in etcd.
(7) The kubelet creates the Pod according to the scheduling result: after binding succeeds, the scheduler calls the API Server's API to create a bound-pod object in etcd, describing all the information about the pod bound to run on that worker node. The kubelet running on each worker node also synchronizes bound-pod information with etcd periodically; once it finds a bound-pod object that should be running on its node but has not been started, it calls the Docker API to create and launch the containers in the pod.
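The steps above can be connected to fields in the Pod spec. In this sketch (names are illustrative), `resources.requests` is what drives host filtering in step (4), and after binding in step (6) the scheduler records the chosen node in `spec.nodeName`:

```yaml
# Hypothetical Pod spec annotated with the scheduling steps it touches.
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
  - name: app
    image: nginx:1.25
    resources:
      requests:
        cpu: "500m"       # step (4): hosts with less free CPU are filtered out
        memory: "256Mi"
  # nodeName: node-1      # step (6): filled in by the scheduler once binding succeeds
```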
 
Third, how are the various interfaces called?
When we create an RC resource object with the kubectl create command, kubectl submits the data to the create-RC interface of the REST API server.
The API server then writes the data to etcd for persistence. Meanwhile, the controller manager is watching all RC resource objects, so as soon as the RC object is written to etcd, the controller manager receives a notification.
It reads the definition of the RC, compares the actual number of pod replicas controlled by the RC against the expected value, and then takes the corresponding action.
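A sketch of such an RC resource object (names are illustrative): the controller manager compares the actual pod count against `spec.replicas` and creates any missing pods from `spec.template`:

```yaml
# Hypothetical ReplicationController definition.
apiVersion: v1
kind: ReplicationController
metadata:
  name: web-rc
spec:
  replicas: 2          # expected value the controller manager reconciles toward
  selector:
    app: web
  template:            # pod template used to create missing replicas
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
```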
 
At this moment, the controller manager finds no corresponding pod instances in the cluster, so according to the pod template defined in the RC,
it creates pods and saves them to etcd through the API server.
 
Similarly, the scheduler process watches all pods. Once it finds that the system has produced a newborn pod,
it starts its scheduling logic to arrange a new home (node) for it. If all goes well, the pod is assigned to a node, that is, bound to a node.
 
Next, the scheduler process updates the pod's status in etcd with this binding information. The kubelet on the target node, listening for new pods placed on its node,
follows the definition in the pod to pull the container image and create the corresponding containers. When the containers are created successfully, the kubelet process updates the pod's status to running and writes the update to etcd through the API server.
 
If this pod has a corresponding service, then it is kube-proxy's turn. The kube-proxy process on each node listens for changes to all services and to the pod instances behind each service. When it finds a change, it adds or deletes the corresponding NAT forwarding rules in iptables on its node, ultimately implementing intelligent load balancing for the service. All of this happens automatically, without human intervention.
 
What happens if a node goes down? If a node is down for some time, the kubelet process on that node stops regularly reporting pod status, so all pod instances on that node are judged to be in a failed state. The controller manager then deletes those pods and generates new pod instances, which are scheduled onto other nodes, and the system recovers automatically.
 
 

Origin www.cnblogs.com/muzinan110/p/11105794.html