Basic concepts of getting started with Kubernetes

Kubernetes overview

Kubernetes (k8s) is an open source project released by Google in 2014. Internally, Google had developed a system called Borg (later followed by a redesigned successor named Omega) to schedule a huge number of containers and workloads. Drawing on years of experience with these systems, Google decided to rewrite its container management system and contribute it to the open source community to benefit the world. That project is Kubernetes. Simply put, Kubernetes can be seen as an open source descendant of Borg and Omega.

Kubernetes (k8s) is an open source platform for automating container operations, including deployment, scheduling, and scaling across a cluster of nodes. If you have ever used Docker to deploy containers, you can think of Docker as a low-level component used internally by Kubernetes. Kubernetes supports not only Docker but also rkt (formerly Rocket), another container runtime.

Most concepts in Kubernetes, such as Node, Pod, Replication Controller, and Service, can be regarded as resource objects. Almost all resource objects can be created, viewed, modified, and deleted through the kubectl tool (or API calls) provided by Kubernetes, and are persisted in etcd. From this perspective, Kubernetes is a highly automated resource control system: by tracking and comparing the "desired resource state" saved in etcd with the "actual resource state" in the current environment, it achieves advanced features such as automatic control and automatic error correction.
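The desired-state-versus-actual-state comparison described above can be sketched as a simple control loop. This is a toy illustration with hypothetical names, not the real controller code:

```python
# Toy sketch of Kubernetes-style reconciliation: compare the desired state
# (as recorded in etcd) with the observed state and emit corrective actions
# until the two match. Function and action names are illustrative only.

def reconcile(desired_replicas: int, actual_replicas: int) -> list[str]:
    """Return the corrective actions needed to reach the desired state."""
    if actual_replicas < desired_replicas:
        return ["create-pod"] * (desired_replicas - actual_replicas)
    if actual_replicas > desired_replicas:
        return ["delete-pod"] * (actual_replicas - desired_replicas)
    return []  # already converged, nothing to do

# e.g. desired 3 replicas, only 1 running -> create 2 more
print(reconcile(3, 1))  # ['create-pod', 'create-pod']
```

The real controllers run loops of exactly this shape, continuously, for every kind of resource object.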

Kubernetes is a complete distributed system support platform. It has comprehensive cluster management capabilities, including multi-level security protection and access mechanisms, multi-tenant application support, transparent service registration and service discovery, built-in intelligent load balancing, powerful fault detection and self-repair, rolling upgrades and online scaling of services, an extensible automatic resource scheduling mechanism, and multi-granularity resource quota management. Kubernetes also provides comprehensive management tools covering development, deployment, testing, and operations monitoring. Kubernetes is therefore a new distributed architecture solution based on container technology, and a one-stop platform for developing and supporting complete distributed systems.

Features

(1) Automated container deployment and replication.
(2) Expand or shrink the container scale at any time.
(3) Organize containers into groups and provide load balancing among containers.
(4) Easily upgrade the new version of the application container.
(5) Provide container resilience: if a container fails, replace it.

Basic concepts

Cluster

Cluster is a collection of computing, storage, and network resources. Kubernetes uses these resources to run various container-based applications.

Master

Master refers to the cluster control node. Each Kubernetes cluster needs a Master that is responsible for managing and controlling the entire cluster. Essentially all Kubernetes control commands are sent to the Master, which is responsible for carrying them out.

The following key processes are running on the Master.

Kubernetes API Server (kube-apiserver): the service process that exposes the HTTP REST interface; it is the only entry point for creating, viewing, modifying, and deleting all resources in Kubernetes.

Kubernetes Controller Manager (kube-controller-manager): the automation control center for all resource objects in Kubernetes.

Kubernetes Scheduler (kube-scheduler): the process responsible for resource scheduling (Pod scheduling).

In addition, the etcd service usually also needs to be deployed on the Master; the data for all resource objects in Kubernetes is stored in etcd.
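The pattern described here, where every resource operation goes through one entry point and is persisted in a key-value store, can be sketched with a minimal in-memory stand-in. This is a hypothetical illustration, not the real etcd or API server:

```python
# Minimal sketch of the API-server-plus-etcd pattern: one entry point for all
# resource CRUD operations, backed by a key-value store. Class and key names
# are illustrative assumptions, not the real Kubernetes API.

class ResourceStore:
    def __init__(self):
        self._data = {}  # stands in for etcd's persistent key-value storage

    def put(self, kind: str, name: str, spec: dict) -> None:
        """Create or update a resource object."""
        self._data[f"/{kind}/{name}"] = spec

    def get(self, kind: str, name: str):
        """Query a resource object, or None if it does not exist."""
        return self._data.get(f"/{kind}/{name}")

    def delete(self, kind: str, name: str) -> None:
        """Delete a resource object."""
        self._data.pop(f"/{kind}/{name}", None)

store = ResourceStore()
store.put("pods", "web-0", {"image": "nginx"})
print(store.get("pods", "web-0"))  # {'image': 'nginx'}
```

In the real system, controllers watch this store for changes rather than polling it, but the single-entry-point CRUD shape is the same.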

Node

Except for the Master, the other machines in a Kubernetes cluster are called Nodes. A Node can be a physical host or a virtual machine. Nodes are the workload nodes in a Kubernetes cluster: the Master assigns workloads (Docker containers) to each Node, and when a Node goes down, the Master automatically transfers its workloads to other Nodes.

The following key processes are running on each Node.

kubelet: responsible for creating, starting, and stopping the containers that belong to each Pod, and works closely with the Master to implement the basic functions of cluster management.

kube-proxy: an important component that implements the communication and load-balancing mechanism of Kubernetes Services.

Docker Engine (docker): the Docker engine, responsible for creating and managing the containers on the local machine.

Nodes can be dynamically added to a Kubernetes cluster at runtime. The kubelet process periodically reports information to the Master, so the Master knows the resource usage of each Node and can implement an efficient, balanced resource scheduling strategy. When a Node does not report for longer than a specified period, the Master judges it to have "lost connection" and marks its status as Not Ready.
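The heartbeat timeout just described can be sketched in a few lines. This is a simplified, hypothetical version; the real node controller logic and grace periods are more involved:

```python
# Simplified sketch of heartbeat-based node health: if the last report from a
# Node's kubelet is older than a grace period, mark the Node NotReady.
# The constant and function are illustrative assumptions, not real defaults.

GRACE_PERIOD = 40.0  # seconds allowed without a heartbeat

def node_status(last_heartbeat: float, now: float) -> str:
    """Classify a Node from the age of its last kubelet report."""
    return "Ready" if now - last_heartbeat <= GRACE_PERIOD else "NotReady"

print(node_status(last_heartbeat=100.0, now=120.0))  # Ready (20s old)
print(node_status(last_heartbeat=100.0, now=200.0))  # NotReady (100s old)
```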

Pod

Pod is the smallest unit of work in Kubernetes. Each Pod contains one or more containers, and the containers in a Pod are scheduled by the Master to run on a Node as a whole. All containers in a Pod share the same network namespace, that is, the same IP address and port space, so they can communicate directly over localhost. Similarly, these containers can share storage: when Kubernetes mounts a Volume into a Pod, it essentially mounts the Volume into every container in the Pod.
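A Pod with two containers can be written as a manifest; below it is expressed as a Python dict with the same structure as the YAML you would feed to kubectl. The names and images are illustrative:

```python
# A Pod manifest as a Python dict (structurally identical to the YAML form).
# Both containers are scheduled together onto one Node and share the Pod's
# network namespace (one IP), so they can talk over localhost.

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "web-with-sidecar"},  # illustrative name
    "spec": {
        "containers": [
            {"name": "web", "image": "nginx", "ports": [{"containerPort": 80}]},
            {"name": "log-agent", "image": "busybox"},  # shares the Pod's IP
        ],
    },
}

# One Pod, two containers, one network namespace.
print(len(pod["spec"]["containers"]))  # 2
```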

Controller

Kubernetes manages Pods through Controllers. A Controller defines the deployment characteristics of a Pod, such as how many replicas there are and which Nodes they run on. To cover different business scenarios, Kubernetes provides a variety of Controllers, including Deployment, ReplicaSet, DaemonSet, StatefulSet, and Job.

Deployment is the most commonly used Controller. It can manage multiple replicas of a Pod and ensure that the Pods run in the desired state.

ReplicaSet implements the management of multiple Pod replicas. When you use a Deployment, a ReplicaSet is created automatically, which means the Deployment manages its Pod replicas through the ReplicaSet. We usually do not need to use ReplicaSets directly.

DaemonSet is used in scenarios where at most one replica of a Pod runs on each Node. As its name suggests, a DaemonSet is usually used to run daemons.

StatefulSet ensures that the name of each Pod replica stays the same throughout its life cycle, which the other Controllers do not provide: with them, when a Pod fails and must be deleted and restarted, its name changes. A StatefulSet also ensures that replicas are started, updated, and deleted in a fixed order. Job is used for applications whose Pods are deleted when they finish running, whereas Pods managed by the other Controllers usually run continuously for a long time.
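A Deployment manifest makes the Controller-manages-replicas idea concrete. Below it is written as a Python dict with the same structure as the YAML form; the name, labels, and image are illustrative:

```python
# A Deployment manifest as a Python dict (same structure as the YAML you would
# apply with kubectl). `replicas: 3` is the desired state; the Deployment
# keeps three copies of the Pod running via an automatically created ReplicaSet.

deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "web"},  # illustrative name
    "spec": {
        "replicas": 3,  # desired number of Pod replicas
        "selector": {"matchLabels": {"app": "web"}},
        "template": {  # the Pod template the replicas are stamped from
            "metadata": {"labels": {"app": "web"}},
            "spec": {"containers": [{"name": "web", "image": "nginx:1.25"}]},
        },
    },
}

print(deployment["spec"]["replicas"])  # 3
```

Changing `replicas` and re-applying the manifest is all it takes to scale; the Controller reconciles the rest.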

Service

A Kubernetes Service defines a way for the outside world to access a specific set of Pods. A Service has its own IP and port, and it provides load balancing for the Pods behind it. Kubernetes' two tasks of running containers (Pods) and accessing containers (Pods) are handled by Controllers and Services respectively.
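What a Service does for its Pods can be illustrated with a toy round-robin balancer. This is a hypothetical sketch; the real kube-proxy programs iptables/IPVS rules rather than proxying in application code:

```python
# Toy round-robin load balancer illustrating the Service idea: one stable
# front address, requests spread across the Pod IPs behind it.
# The class and addresses are illustrative assumptions.
from itertools import cycle

class Service:
    def __init__(self, pod_ips: list[str]):
        self._backends = cycle(pod_ips)  # rotate through the backing Pod IPs

    def route(self) -> str:
        """Pick the next backend Pod for an incoming request."""
        return next(self._backends)

svc = Service(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
print([svc.route() for _ in range(4)])
# ['10.0.0.1', '10.0.0.2', '10.0.0.3', '10.0.0.1']
```

Because clients only ever see the Service's address, Pods can be replaced or rescheduled without clients noticing.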

Namespace

A Namespace logically divides a physical Cluster into multiple virtual Clusters, each of which is a Namespace. Resources in different Namespaces are completely isolated. Kubernetes creates three namespaces by default: default, kube-system, and kube-public.


Origin blog.csdn.net/samz5906/article/details/106892628