kubernetes-1: Online installation of a stand-alone Kubernetes and an introduction to its components

1 Introduction

The rapid development of cloud computing:

  • IaaS
  • PaaS
  • SaaS

Rapid advances in Docker technology:

  • Build once, run everywhere
  • Fast and lightweight containers
  • A complete ecosystem

Kubernetes is also called k8s. It is an open-source system mainly used for the automated deployment, scaling, and management of containerized applications. It groups the containers that make up an application into logical units for easy management and discovery.

First of all, it is a new, leading solution for distributed architectures based on container technology. Kubernetes (k8s) is Google's open-source container cluster management system (known internally at Google as Borg). On top of Docker, it provides a complete set of functions for containerized applications, such as deployment and operation, resource scheduling, service discovery, and dynamic scaling, which makes managing large container clusters far more convenient.

Kubernetes is a complete distributed system support platform. It offers full cluster management capabilities, multiple expansion mechanisms, multi-level security protection and access control, multi-tenant application support, transparent service registration and discovery, a built-in intelligent load balancer, powerful fault discovery and self-repair capabilities, rolling service upgrades and online scaling, an extensible automatic resource scheduling mechanism, and multi-granularity resource quota management. At the same time, Kubernetes provides comprehensive management tools covering development, deployment, testing, and operations monitoring.

1.1 Service

(1) In Kubernetes, the Service is the core concept: each Service needs a unique name and an IP:port through which it provides service to the outside world.

In Kubernetes, Service is the core of the distributed cluster architecture. A Service object has the following key characteristics:

It has a uniquely assigned name.
It has a virtual IP (Cluster IP, Service IP, or VIP) and a port number.
It provides some kind of remote service capability.
It is mapped to a group of container applications that provide this service capability.

A Service's processes currently provide their external service over socket communication, for example Redis, Memcached, MySQL, a web server, or a specific TCP server process that implements some business logic. Although a Service is usually backed by multiple related server processes, each with its own Endpoint (IP + port) access point, Kubernetes lets us reach them all through the single specified Service.

With Kubernetes' built-in transparent load balancing and failure recovery, it does not matter how many server processes there are in the backend, or whether a process is redeployed to another machine after a failure; normal service calls are unaffected. More importantly, once the Service itself is created it does not change, which means that in a Kubernetes cluster we never have to worry about the service's IP address changing. A minimal Service definition is sketched below.
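As a concrete illustration (not from the original article), here is a minimal Service manifest; the name mysql and port 3306 are placeholder values chosen for this sketch:

apiVersion: v1
kind: Service
metadata:
  name: mysql            # the unique Service name
spec:
  selector:
    name: mysql          # Label Selector: route to Pods labeled name=mysql
  ports:
  - port: 3306           # port exposed on the Service's Cluster IP
    targetPort: 3306     # port of the backend containers

The Cluster IP is assigned automatically when the Service is created and does not change afterwards.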

1.2 Pod

(2) A service is provided by containers. To keep the service highly available, it cannot be backed by a single container; a group of containers is needed. We call such a group of containers a Pod. The Pod is the most basic operating unit of Kubernetes.

Containers provide strong isolation, so the group of processes that serve a Service is put into containers for isolation. For this purpose Kubernetes designed the Pod object: each service process is packaged into a corresponding Pod and becomes a container running inside that Pod. To establish the association between Service and Pod, Kubernetes attaches a Label to each Pod, for example name=mysql for the Pod running MySQL and name=php for the Pod running PHP, and then defines a Label Selector on the corresponding Service; this neatly solves the association problem between Service and Pod.

(3) To manage the relationship between Services and Pods there is the concept of the Label: Pods with the same function are given the same Label. For example, all Pods that provide the MySQL service can be labeled name=mysql, so that the MySQL Service works on all Pods carrying the name=mysql label (see the Pod sketch below).
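For illustration, a minimal Pod manifest carrying the name=mysql label might look like the following sketch (the image, tag, and environment values are placeholders, not from the original article); the Service above selects this Pod through its Label Selector:

apiVersion: v1
kind: Pod
metadata:
  name: mysql
  labels:
    name: mysql          # the Label that the mysql Service selects on
spec:
  containers:
  - name: mysql
    image: mysql:5.6     # placeholder image/tag
    ports:
    - containerPort: 3306
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: "123456"    # placeholder password for the sketch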

(4) Pods run on Nodes. A Node can be a physical machine or a virtual machine, and usually hundreds of Pods run on a single Node. Each Pod runs a special container called Pause; the other containers are business containers. The business containers share the Pause container's network stack and mounted Volumes, so communication and data exchange between the business containers of the same Pod are more efficient.

1.3 Cluster Management

(5) For cluster management, Kubernetes divides the machines in a cluster into a Master node and a group of working nodes (Nodes). kube-apiserver, kube-controller-manager, and kube-scheduler run on the Master; together they implement resource management, Pod scheduling, elastic scaling, security control, system monitoring, error correction, and other functions.

A Node is a working node: it runs applications and provides services, and the smallest unit on a Node is the Pod. The kubelet and kube-proxy service processes of Kubernetes run on each Node; they are responsible for creating, starting, monitoring, restarting, and destroying Pods, and for implementing load balancing.

In terms of cluster management, Kubernetes divides the machines in the cluster into a Master node and a group of working nodes. A group of cluster-management processes (kube-apiserver, kube-controller-manager, and kube-scheduler) runs on the Master node. These processes realize the management capabilities of the entire cluster, such as resource management, Pod scheduling, elastic scaling, security control, system monitoring, and error correction, and they are all fully automated.

A Node, as a working node in the cluster, runs the real applications. The smallest operating unit managed by Kubernetes on a Node is the Pod. The kubelet and kube-proxy service processes of Kubernetes run on each Node; they are responsible for creating, starting, monitoring, restarting, and destroying Pods, as well as implementing the software-mode load balancer.

1.4 Service expansion and upgrade

(6) Expansion and upgrade rely on a key object, the Replication Controller (RC). An RC definition contains three key pieces of information:

(1) The definition of the target Pod.
(2) The number of replicas the target Pod should run (replicas).
(3) The Label of the target Pods to monitor.

In a Kubernetes cluster, this solves the two big problems of service expansion and upgrade found in traditional IT systems: you only need to create a Replication Controller (RC) for the Pods associated with the Service that needs to be expanded, and the expansion of the Service, and its subsequent upgrades, are taken care of.

Working process: the three items above are defined in the RC. Kubernetes filters out the corresponding Pods according to the Label defined in the RC and monitors their status and number in real time. When the number of running instances is smaller than the defined number of replicas, it creates new Pods from the Pod template defined in the RC and schedules them to suitable Nodes to start and run. The whole process is fully automated, with no manual intervention. A minimal RC is sketched below.
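A minimal RC containing the three pieces of information above might look like this sketch (names, image, and replica count are placeholder values):

apiVersion: v1
kind: ReplicationController
metadata:
  name: mysql-rc
spec:
  replicas: 2                # (2) desired number of Pod replicas
  selector:
    name: mysql              # (3) Label of the target Pods to monitor
  template:                  # (1) definition (template) of the target Pod
    metadata:
      labels:
        name: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.6
        ports:
        - containerPort: 3306

Expanding the Service is then just a matter of changing replicas, for example with kubectl scale rc mysql-rc --replicas=3.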

2 Architecture


2.1 Master component

The Master is the control center of the entire cluster: all Kubernetes control commands are sent to the Master, which is responsible for carrying them out. The Master usually occupies a dedicated physical machine or virtual machine, and its importance is self-evident.
(1) All cluster control commands are sent to the Master components and executed there.
(2) Every Kubernetes cluster has at least one set of Master components (currently one by default).
(3) Each set of Master components includes three core components (apiserver, controller-manager, and scheduler) plus the cluster's configuration data store, etcd.

2.1.1 kube-apiserver

As the entry point of the Kubernetes system, it encapsulates the add, delete, modify, and query operations on core objects and exposes them to external clients and internal components as a RESTful API. The REST objects it maintains are persisted to etcd.

It is the key service process providing the HTTP REST interface: the only entry point for all add, delete, modify, and query operations on resources, and also the entry process for cluster control. It is the only Kubernetes component that talks directly with etcd.
(1) The only entry point for cluster control, providing the core RESTful API for controlling the Kubernetes cluster (see the sketch below).
(2) The hub for data interaction and communication between the various components of the cluster.
(3) Provides the cluster's security mechanisms (authentication, authorization, and admission control).
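As a rough illustration of this single REST entry point (assuming the stand-alone setup described later, where the apiserver listens on the local insecure port 8080), resources can be queried either directly over HTTP or through kubectl, which talks to the same API:

# query the apiserver's REST interface directly
curl http://localhost:8080/api/v1/nodes
curl http://localhost:8080/api/v1/namespaces/default/pods
# kubectl goes through the same single entry point
kubectl get nodes
kubectl get pods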

2.1.2 kube-controller-manager

It is the automated control center for all resources and can be understood as the housekeeper of resource objects.
(1) The core manager of the various resource controllers in the cluster.
(2) For each specific resource there is a corresponding Controller.
(3) Ensures that every resource managed by the Controllers under it is always in its "desired state".


(1) Replication Controller
Manages and maintains Replication Controllers, associates Replication Controllers with Pods, and ensures that the number of replicas defined by a Replication Controller matches the number of Pods actually running.
(2) Node Controller
Manages and maintains Nodes, periodically checks their health, and marks Nodes as failed or healthy.
(3) Namespace Controller
Manages and maintains Namespaces and periodically cleans up invalid Namespaces, including the API objects under them such as Pods and Services.
(4) Service Controller
Manages and maintains Services, providing load balancing and service proxying.
(5) Endpoints Controller
Manages and maintains Endpoints, associates Services with Pods, creates Endpoints as the backends of a Service, and updates the Endpoints in real time when Pods change.
(6) Service Account Controller
Manages and maintains Service Accounts, creates a default Service Account for each Namespace, and creates a Service Account Secret for each Service Account.
(7) Persistent Volume Controller
Manages and maintains Persistent Volumes and Persistent Volume Claims, binds a Persistent Volume to each new Persistent Volume Claim, and cleans up and reclaims released Persistent Volumes.
(8) Daemon Set Controller
Manages and maintains Daemon Sets, is responsible for creating Daemon Pods, and ensures that Daemon Pods run normally on the specified Nodes.
(9) Deployment Controller
Manages and maintains Deployments, associates Deployments with Replication Controllers, and ensures that the specified number of Pods is running. When a Deployment is updated, it drives the update of the Replication Controllers and Pods.
(10) Job Controller
Manages and maintains Jobs, creates one-off task Pods for a Job, and ensures that the number of completions the Job specifies is reached.
(11) Pod Autoscaler Controller
Implements automatic Pod scaling: it periodically fetches monitoring data, matches it against the policy, and performs the scaling action when the conditions are met.

2.1.3 kube-scheduler

Performs node selection (i.e. machine allocation) for newly created Pods and is responsible for the cluster's resource scheduling. The component is decoupled and can easily be replaced with another scheduler.

Responsible for resource scheduling (Pod scheduling); it acts as the bus company's "dispatch room".
(1) Watches for newly created Pod replicas through the API Server's Watch interface and selects the most suitable Node for each Pod using the scheduling algorithm.
(2) Supports custom scheduling-algorithm providers.
(3) The default scheduling algorithm has built-in predicate (pre-selection) and priority (optimization) policies; its decisions consider resource requirements, quality of service, hardware and software constraints, affinity, data locality, and other parameters (a small example follows).
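As one small, hedged example of how constraints enter the pre-selection stage, a Pod can restrict which Nodes it may land on with a nodeSelector; the disktype=ssd label below is a made-up value used only for this sketch:

apiVersion: v1
kind: Pod
metadata:
  name: mysql-ssd
spec:
  nodeSelector:
    disktype: ssd        # only Nodes labeled disktype=ssd pass the predicate stage
  containers:
  - name: mysql
    image: mysql:5.6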

2.1.4 etcd Server

All resource object data in Kubernetes is stored in etcd; it is the database in which Kubernetes keeps its state.
(1) The main database of the Kubernetes cluster, storing all resource objects and their states.
(2) Deployed on the same node as the Master components by default.
(3) All changes to etcd's data go through the API Server (see the sketch below).
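For a rough look at what is stored, assuming the etcd v2 API used by the packaged etcd in the installation below (paths and tooling differ in newer versions), the serialized objects live under the /registry prefix:

# list the top-level Kubernetes keys in etcd (etcd v2 API assumed)
etcdctl ls /registry
# e.g. Pods of the default namespace
etcdctl ls /registry/pods/default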

2.2 Node components

In addition to the Master, the other machines in a Kubernetes cluster are called Nodes (Minions in earlier versions). A Node can be a physical machine or a virtual machine. Workloads (i.e. Docker containers) are allocated to each Node; when a Node goes down, the applications running on it are transferred to other Nodes.
Node: the real workload-bearing node in the Kubernetes cluster.
(1) The Nodes of a Kubernetes cluster jointly carry the workload, and each Pod is assigned to a specific Node for execution.
(2) Kubernetes manages node resources through the node controller and supports dynamically adding and removing Nodes from the cluster.
(3) kubelet and kube-proxy are deployed on every Node in the cluster.

2.2.1 Kubelet

Responsible for managing containers: the kubelet receives Pod creation requests from the Kubernetes API Server, starts and stops containers, monitors their running status, and reports back to the Kubernetes API Server.

Responsible for creating, starting, and stopping the containers of each Pod, and works closely with the Master node to implement the basic functions of cluster management.
(1) A non-containerized service process on every Node in the cluster; the bridge between the Master and the Node.
(2) Handles management tasks issued by the Master to this Node, such as creating, starting, and stopping Pods, and registers the Node's information with the API Server.
(3) Monitors the containers and node resources on the Node and periodically reports resource usage to the Master.

2.2.2 kube-proxy

Responsible for creating proxy services for Pods: kube-proxy obtains all Service information from the Kubernetes API Server and, based on it, creates proxies that route and forward requests from Services to Pods, implementing a Kubernetes-level virtual forwarding network.

An important component that implements the communication and load-balancing mechanism of Kubernetes Services.
kube-proxy runs on every Node.
(1) Implements the Service abstraction: requests to a Service are distributed to the backend Pods (Endpoints) according to the load-balancing policy.
(2) Uses iptables mode by default.
(3) Supports NodePort mode so that a Service can be accessed from outside the cluster (see the sketch below).
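A hedged sketch of a NodePort Service (the name and ports are placeholder values); kube-proxy on every Node then forwards traffic arriving on the nodePort to the backend Pods:

apiVersion: v1
kind: Service
metadata:
  name: mysql-external
spec:
  type: NodePort
  selector:
    name: mysql
  ports:
  - port: 3306           # Cluster IP port inside the cluster
    targetPort: 3306     # container port
    nodePort: 30306      # opened on every Node for external access (default range 30000-32767)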

2.2.3 Docker Engine

The Docker engine is responsible for creating and managing the local containers; this container runtime must be running on every Node.

3 Stand-alone installation

(1) Turn off the CentOS built-in firewall service:
# systemctl disable firewalld
# systemctl stop firewalld
(2) Install the etcd and Kubernetes software (Docker is installed automatically as a dependency):
# yum install -y etcd kubernetes
(3) After installing the software, modify two configuration files.
Docker configuration file /etc/sysconfig/docker, setting OPTIONS to:

OPTIONS='--selinux-enabled=false --insecure-registry gcr.io'

Kubernetes apiserver configuration file /etc/kubernetes/apiserver:

Remove ServiceAccount from the --admission_control parameter (a sketch of the change follows).
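For reference, the admission-control line in /etc/kubernetes/apiserver typically looks roughly like the following before and after the change; the exact default list, and whether the flag is spelled --admission-control or --admission_control, may differ between package versions, so treat this as a sketch:

# before (typical packaged default)
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
# after removing ServiceAccount
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"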

(4) Start all the services in order (a few sanity checks follow below):

systemctl start etcd
systemctl start docker
systemctl start kube-apiserver
systemctl start kube-controller-manager
systemctl start kube-scheduler
systemctl start kubelet
systemctl start kube-proxy
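A few hedged sanity checks once the services are up (they assume the apiserver is listening on the default local insecure port 8080 and that kubectl was installed by the kubernetes package):

# the node should register itself and report Ready
kubectl get nodes
# controller-manager, scheduler and etcd should report Healthy
kubectl get componentstatuses
# optional: run a test container to confirm Pods can be scheduled
kubectl run nginx --image=nginx --port=80
kubectl get pods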
