Talking about your understanding of Kubernetes (simple)

Overall Summary

k8s is a scheduling tool. It was built by Google engineers as a simplified version of Google's internal Borg system; in plain terms, companies use it to pool all of their container resources and schedule and manage those containers in a fully unified way. Kubernetes (k8s) is an open-source platform for automated container operations; these operations include deployment, scheduling, and scaling across node clusters.

pod workflow

  1. Autonomous pods: a pod can be self-managed, but it still has to be submitted to the APIserver after being created. Once accepted, the scheduler assigns it to a specific node, and the kubelet on that node starts the pod. If a container in the pod fails and needs to be restarted, the kubelet handles that; but if the node itself fails, the pod simply disappears and is never run again.
  2. Controller-managed pods: once a controller is used, the pod becomes an object with a life cycle. It is scheduled onto the k8s cluster by the scheduler, started, and cleaned up when its task terminates. But some tasks need to run continuously, like a daemon, inside a container; when a failure occurs we want it to be discovered and the container rebuilt or restarted automatically, which is not easy to achieve by hand. For this, k8s provides pod controllers: the Deployment controller is the new generation that replaced the old controller, and together with the HPA (Horizontal Pod Autoscaler) component inside it, it can scale pods horizontally.
    1. The service component was originally manually configured iptables port forwarding; newer versions of the service component have added an IPVS scheduling option.
    2. The core k8s infrastructure components are the pod, the service, and the controllers; pods run the containers. Services and controllers identify the pods under their control through labels and label selectors, and controllers can add, delete, and otherwise manage pods.
    3. Services give clients name-based access and discovery of backends, so a DNS service is needed. The DNS service is itself a pod, so it in turn needs a service and a controller to manage it; it is a system-level infrastructure pod that k8s itself depends on.
  • Monitoring: use grafana + prometheus, which are deployed as add-ons
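As a minimal sketch of a controller-managed pod (the names and image below are illustrative, not from the original), a Deployment uses a label selector to identify the pods it owns and keeps the requested number of replicas running, rebuilding any that fail:

```yaml
# Illustrative Deployment: the controller finds its pods via the label
# selector and maintains 3 replicas, restarting/rebuilding failed pods.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp              # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp           # must match the pod template's labels
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: nginx:1.25  # any image; nginx used only as an example
```

An HPA object can then target this Deployment to scale the replica count up and down automatically.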

k8s network

Network model: each pod runs in its own network namespace; a service is a network too, but a service IP is virtual; and the nodes form their own network. So there are three network layers: the node network, the cluster network, and the pod network.

When a request comes in, the node network proxies it to the cluster network, and the cluster network in turn proxies it to the pod network.
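As a sketch of how these three networks are typically declared (the CIDRs below are common defaults, not values from the original), a kubeadm ClusterConfiguration gives the pod network and the cluster (service) network their own, non-overlapping address ranges; the node network is whatever the hosts already use:

```yaml
# Hypothetical kubeadm configuration fragment: the pod network and the
# cluster (service) network each get a dedicated CIDR.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: 10.244.0.0/16      # pod network (flannel's common default)
  serviceSubnet: 10.96.0.0/12   # cluster network of virtual service IPs
```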

  1. Communication between multiple containers in the same pod

    All Docker containers in a Pod share the same IP and port address space, and because they are in the same network namespace, they can reach each other via localhost.
    What mechanism lets multiple docker containers in one Pod communicate with each other? It is actually Docker's own network model: --net=container

  2. Communication between individual pods: they are directly reachable from one another through an overlay network, although strictly speaking the communication between pods is not direct

  3. Communication between pods and a service: a service is a virtual IP address, and each host carries the corresponding iptables rules. Docker's gateway points to the host, and when the host receives a packet it forwards it to the service according to those iptables rules. But how do the other nodes learn about service address updates and similar changes? This is where kube-proxy comes in: every node runs this component, which is responsible for maintaining communication with the APIserver at all times. Whenever a service is updated on some node, the APIserver is informed and then notifies every kube-proxy, which updates its rules accordingly, so the whole process completes without human involvement.
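A minimal sketch of the service side of this (names and ports are illustrative): the Service receives a virtual cluster IP, and its label selector decides which pods the iptables/IPVS rules programmed by kube-proxy forward traffic to:

```yaml
# Illustrative Service: kube-proxy on every node programs forwarding
# rules so traffic to the virtual IP reaches pods matching the selector.
apiVersion: v1
kind: Service
metadata:
  name: myapp-svc        # hypothetical name
spec:
  selector:
    app: myapp           # forwards to pods carrying this label
  ports:
  - port: 80             # virtual-IP port clients connect to
    targetPort: 8080     # port the pod's container actually listens on
```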

flannel

A well-supported CNI plugin; it supports network configuration

calico: based on the BGP protocol; supports both network configuration and network policy

canal: combines the advantages of the two, and is currently the most widely used

Network policy, network proxy

Namespaces in k8s and in docker are different things. A k8s namespace applies to the whole cluster: it assigns a group of pods a name, and that namespace only provides a management boundary; it does nothing to restrict communication between pods. With network policies, however, you can define access rules between these namespaces, implemented by setting iptables rules.
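A minimal sketch of such a policy (namespace and label names are hypothetical): it allows the selected pods to receive traffic only from pods in namespaces carrying a given label, denying everything else on ingress:

```yaml
# Illustrative NetworkPolicy: only pods in namespaces labeled
# team=frontend may reach pods labeled app=myapp in this namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend    # hypothetical name
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: myapp
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          team: frontend
```

Note that enforcement requires a CNI plugin with network-policy support, such as calico or canal; flannel alone ignores these objects.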

Understanding the several sets of certificates

To secure communication:

  1. Communication between the APIserver and etcd requires mutual (two-way) certificate authentication
  2. The APIserver and its clients also require certificates

Components

Components on the master (management) node

Regarding the master components: the APIserver holds a great deal of data and configuration, none of which is stored locally; it all lives in the shared etcd store. To avoid a single point of failure, there are generally at least three master nodes.

  1. APIserver: this component runs as a daemon and provides the interface service; users manage the entire k8s cluster through the APIserver. Interaction with the APIserver is the core of the whole cluster: all queries and management operations go through its API. The component modules do not call each other directly; each does its own part of the work and deals with the APIserver, which persists all resulting state into etcd. It is the gateway of the entire cluster.
  2. Controller manager: runs the background control loops of the entire system, handling things such as node status, the number of Pods, the association between Pods and Services, and the like.
  3. Scheduler: on a platform for deploying and managing containerized applications at scale, after the APIserver accepts a request to create a pod object, the scheduler must make a scheduling decision based on the available resources of each node in the cluster and the pod's resource requirements.
  4. etcd: responsible for service discovery and configuration sharing between nodes.

Components on the node

  1. kubelet: acts as the node's agent; it receives task assignments from the APIserver, manages the Pods and their containers on the node, periodically collects container status, and reports it back to the APIserver.
  2. A container runtime environment, responsible for downloading images and running containers; at minimum this includes a docker environment.
  3. kube-proxy: runs on every compute node and is responsible for proxying the Pod network. It periodically obtains service information (from etcd, via the APIserver) and applies the corresponding forwarding policy.
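Tying back to the IPVS note earlier, a minimal sketch of switching kube-proxy's forwarding mode (the scheduler value is illustrative) uses its own configuration object:

```yaml
# Illustrative kube-proxy configuration: selects IPVS instead of the
# default iptables mode for programming the service forwarding rules.
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  scheduler: "rr"   # round-robin; other IPVS schedulers also exist
```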



Source: www.cnblogs.com/joinbestgo/p/11298688.html