[Cloud native] Kubernetes overview

1. Introduction to Kubernetes

Kubernetes is a portable, scalable, open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation. It has a large, fast-growing ecosystem. Kubernetes services, support, and tools are widely available.

The word Kubernetes is derived from Greek and means helmsman or pilot. In 2014, Google open sourced the Kubernetes project. Kubernetes builds on Google's 15 years of experience running production workloads, combined with the best ideas and practices from the community.

Using Kubernetes, we can respond to customer needs quickly and efficiently:

  • Deploy your applications quickly and predictably.
  • Scale applications on the fly.
  • Seamlessly release new features without disrupting existing business.
  • Optimize hardware resources and reduce costs.

The goal of Kubernetes is to build an ecosystem of software and tools to relieve you of the burden of running applications in public or private clouds.

1.1 Review

Let's go back in time and see why Kubernetes is so useful.

The traditional deployment era: In the early days, organizations ran applications on physical servers. There was no way to define resource boundaries for applications on a physical server, which led to resource-allocation problems. For example, if multiple applications ran on one physical server, one application could consume most of the resources and starve the others. One solution was to run each application on a different physical server, but this did not scale: resources were underutilized, and maintaining many physical servers was expensive.

The era of virtualized deployment: As a solution, virtualization was introduced. It allows you to run multiple virtual machines (VMs) on a single physical server's CPU. Virtualization isolates applications between VMs and provides a level of security, since information in one application cannot be freely accessed by another.

Virtualization allows better utilization of resources within a physical server and better scalability, because applications can be added or updated easily; it reduces hardware costs, and more. With virtualization, you can present a set of physical resources as a cluster of disposable virtual machines.

Each VM is a complete machine running all components, including its own operating system, on virtual hardware.

The era of container deployment: Containers are similar to VMs, but they have relaxed isolation properties that allow them to share the operating system (OS) among applications. Containers are therefore considered lightweight. Like a VM, a container has its own file system, share of CPU, memory, process space, and more. Because they are decoupled from the underlying infrastructure, they are portable across clouds and OS distributions.

Containers have become popular because they provide additional benefits such as:

  • Agile Application Creation and Deployment : Increases the ease and efficiency of container image creation compared to using VM images.
  • Continuous development, integration, and deployment : Provides reliable and frequent container image builds and deployments with fast and easy rollbacks.
  • Separation of Development and Operations Concerns : Create application container images at build/release time rather than deployment time, decoupling applications from infrastructure.
  • Observability : Shows not only OS-level information and metrics, but also application health and other signals.
  • Environment Consistency Across Dev, Test, and Production : Running on your laptop is exactly the same as running on the cloud.
  • Cloud and OS Release Portability : Runs on Ubuntu, RHEL, CoreOS, on-prem, Google Kubernetes Engine, and anywhere else.
  • Application-centric management : Raises the level of abstraction from running an operating system on virtual hardware to running applications on an operating system using logical resources.
  • Loosely coupled, distributed, resilient, liberated microservices : Applications are broken down into smaller independent parts that can be dynamically deployed and managed - rather than a monolithic stack running on one large single-purpose machine.
  • Resource Isolation : Predictable Application Performance.
  • Resource utilization : high efficiency and high density.

1.2 Why is Kubernetes needed, and what can it do?

Containers are a great way to bundle and run applications. In a production environment, you need to manage the containers running your application and ensure there is no downtime. For example, if one container fails, another container needs to be started. Wouldn't it be easier if this behavior was handled by a system?

Kubernetes provides you with a framework for running distributed systems resiliently. It takes care of scaling and failover for your application, provides deployment patterns, and more.

Kubernetes provides you with:

  • Service discovery and load balancing : Kubernetes can expose containers using DNS names or their own IP addresses. If traffic to containers is high, Kubernetes is able to load balance and distribute network traffic, keeping the deployment stable.
  • Storage orchestration : Kubernetes allows you to automatically mount the storage system of your choice, such as local storage, public cloud providers, and more.
  • Automatic rolling updates and rollbacks : You can describe the desired state of your deployed containers using Kubernetes, and it changes the actual state to the desired state at a controlled rate.
  • Automatic bin packing : You provide Kubernetes with a set of nodes that it can use to run containerized tasks, and tell Kubernetes how much CPU and memory (RAM) each container needs. Kubernetes fits containers onto your nodes to make the best use of your resources.
  • Self-healing : Kubernetes restarts containers that fail, replaces containers, kills containers that do not respond to your user-defined health checks, and does not advertise them to clients until they are ready to serve.
  • Secrets and configuration management : Kubernetes allows you to store and manage sensitive information such as passwords, OAuth tokens, and ssh keys. You can deploy and update secrets and application configurations without rebuilding container images or exposing secrets in stack configurations.
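The self-healing behavior described above can be made concrete with a toy sketch: a supervisor loop keeps healthy containers and replaces those that fail their health check. All names here (`reconcile_restarts`, the `-restarted` suffix) are illustrative assumptions, not Kubernetes APIs.

```python
# Toy sketch of self-healing: keep containers that pass a user-defined
# health check, and replace ones that fail it with fresh instances.
# Purely illustrative; the real kubelet/probe machinery is far richer.

def reconcile_restarts(containers, is_healthy):
    """Return the next container set: healthy ones kept, failed ones replaced."""
    next_gen = []
    for name in containers:
        if is_healthy(name):
            next_gen.append(name)
        else:
            # Replace the failed container with a fresh instance.
            next_gen.append(name + "-restarted")
    return next_gen

# Example: "web" fails its health check and is replaced; "db" is kept.
result = reconcile_restarts(["web", "db"], lambda n: n != "web")
```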

1.3. What Kubernetes is not

Kubernetes is not a traditional, all-encompassing PaaS system. Because Kubernetes operates at the container level rather than the hardware level, it provides some features common to PaaS offerings, such as deployment, scaling, load balancing, logging, and monitoring. However, Kubernetes is not monolithic: these default solutions are optional and pluggable. Kubernetes supplies the building blocks for building developer platforms, but preserves user choice and flexibility where it matters.

  • Does not limit the types of applications supported . Kubernetes aims to support an extremely diverse variety of workloads, including stateless, stateful, and data-processing workloads. If an application can run in a container, it should run on Kubernetes.
  • Does not deploy source code and does not build applications . Continuous integration, delivery, and deployment (CI/CD) workflows are determined by organizational culture and preferences as well as technical requirements.
  • Does not provide application-level services as built-in services , such as middleware (for example, message buses), data-processing frameworks (for example, Spark), databases (for example, MySQL), caches, or cluster storage systems (for example, Ceph). Such components can run on Kubernetes and can be accessed by applications running on Kubernetes through portable mechanisms (for example, the Open Service Broker).
  • Does not dictate logging, monitoring, or alerting solutions ; Kubernetes lets users choose their own.
  • Does not provide or mandate a configuration language/system (for example, jsonnet). It provides a declarative API that may be targeted by arbitrary forms of declarative specifications.
  • Does not provide or adopt any comprehensive machine configuration, maintenance, management, or self-healing systems .
  • Additionally, Kubernetes is not a mere orchestration system . In fact, it eliminates the need for orchestration. The technical definition of orchestration is the execution of a defined workflow: first do A, then B, then C. In contrast, Kubernetes comprises a set of independent, composable control processes that continuously drive the current state toward the provided desired state. It should not matter how you get from A to C; you only declare the state of C.
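The control-process idea in the last point can be sketched as a loop that, instead of executing a fixed workflow, repeatedly compares observed state with desired state and takes one corrective step at a time. The function and the integer "replica count" state below are purely illustrative assumptions.

```python
# Minimal sketch of a declarative control loop: rather than running a
# workflow (A, then B, then C), it converges observed state toward the
# declared desired state one step at a time. Illustrative only.

def control_loop(desired, observed):
    """Drive an observed replica count toward the desired count."""
    steps = []
    while observed != desired:
        if observed < desired:
            observed += 1            # e.g. start one more replica
            steps.append("scale-up")
        else:
            observed -= 1            # e.g. stop one replica
            steps.append("scale-down")
    return observed, steps

# Starting from 1 replica with 3 desired, the loop scales up twice.
state, actions = control_loop(desired=3, observed=1)
```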

2. Kubernetes components

When you deploy Kubernetes, you get a cluster.

A cluster is a group of machines, called nodes, that run containerized applications managed by Kubernetes. A cluster has at least one worker node and one master node.

Worker nodes host the components of the application workload. The master node manages the worker nodes and the pods in the cluster. In production, multiple master nodes are used to give the cluster failover and high availability.

The following is a relationship diagram of a Kubernetes cluster:

insert image description here

2.1 Master components

The master components provide the cluster's control plane . They make global decisions about the cluster (for example, scheduling) and detect and respond to cluster events (for example, starting a new pod when a Deployment's replicas field is unsatisfied).

Master components can run on any machine in the cluster. However, for simplicity, the setup script usually starts all master components on the same machine and does not run user containers on this machine.

(1) kube-apiserver

The API server is the component of the Kubernetes control plane that exposes the Kubernetes API. It is the front end of the Kubernetes control plane.

The primary implementation of a Kubernetes API server is kube-apiserver. kube-apiserver is designed to scale horizontally, that is, it scales by deploying more instances. You can run several instances of kube-apiserver and balance traffic between them.

(2) etcd

A consistent and highly available key-value store used as Kubernetes' backing store for all cluster data.

If your Kubernetes cluster uses etcd as its backing store, make sure you have a backup plan for the data.

(3) kube-scheduler

Watches for newly created pods that have no node assigned, and selects a node for them to run on.
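As an illustration of the scheduling problem (not the actual kube-scheduler algorithm, which filters and then scores candidate nodes), a first-fit sketch that places a pod on the first node with enough free CPU and memory might look like this; all names and numbers are hypothetical:

```python
# Toy first-fit placement: pick the first node whose free CPU and memory
# can hold the pod's requests. Illustrative only; the real kube-scheduler
# uses filtering and scoring plugins.

def schedule(pod, nodes):
    """pod: {'cpu': millicores, 'mem': MiB}; nodes: {name: free resources}."""
    for name, free in nodes.items():
        if free["cpu"] >= pod["cpu"] and free["mem"] >= pod["mem"]:
            return name
    return None  # unschedulable: the pod would stay Pending

nodes = {
    "node-a": {"cpu": 100, "mem": 256},
    "node-b": {"cpu": 500, "mem": 1024},
}
# node-a lacks CPU for this pod, so it lands on node-b.
placed = schedule({"cpu": 250, "mem": 512}, nodes)
```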

(4) kube-controller-manager

The component that runs controller processes. Logically, each controller is a separate process, but to reduce complexity they are all compiled into a single binary and run in a single process.

These controllers include:

  • Node Controller: responsible for noticing and responding when nodes fail.
  • Replication Controller: responsible for maintaining the correct number of pods for every replication-controller object in the system.
  • Endpoints Controller: populates Endpoints objects (that is, joins Services and Pods).
  • Service Account & Token Controllers: create default accounts and API access tokens for new namespaces.
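As a sketch, the Replication Controller's core decision reduces to a diff between the desired replica count and the pods actually observed. The function below is illustrative only, not the controller's real implementation:

```python
# Toy version of the Replication Controller's decision: compare the desired
# replica count with observed pods and decide what to create or delete.
# Names and the tuple return shape are illustrative assumptions.

def reconcile_replicas(desired, running_pods):
    if len(running_pods) < desired:
        return ("create", desired - len(running_pods))
    if len(running_pods) > desired:
        return ("delete", len(running_pods) - desired)
    return ("noop", 0)

# One pod running but three desired: the controller must create two more.
action = reconcile_replicas(3, ["pod-1"])
```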

(5) cloud-controller-manager

The cloud controller manager runs controllers that interact with the underlying cloud providers.

2.2 Node components

Node components run on every node, maintaining running pods and providing the Kubernetes runtime environment.

(1) kubelet

kubelet is an agent that runs on each node in the cluster. It makes sure that containers are running in a pod. The kubelet manages only containers created by Kubernetes.

(2) kube-proxy

kube-proxy is a network proxy that runs on each node in the cluster, implementing part of the Kubernetes Service concept.

kube-proxy maintains network rules on nodes. These network rules allow network communication to your pods from network sessions inside or outside of the cluster.

kube-proxy uses the operating system's packet-filtering layer if one is available. Otherwise, kube-proxy forwards the traffic itself.
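The Service concept that kube-proxy helps realize can be sketched as spreading requests for one stable service name across its backing pod endpoints. The round-robin picker below is only an illustration; in practice kube-proxy typically programs iptables or IPVS rules rather than proxying each connection in user space:

```python
# Sketch of the Service idea: one stable service front end that spreads
# requests across its pod endpoints round-robin. Illustrative only.
import itertools

def make_service(endpoints):
    """Return a function that yields one backend pod IP per request."""
    cycle = itertools.cycle(endpoints)
    return lambda: next(cycle)

# Two pods back the service; successive requests alternate between them.
pick = make_service(["10.0.0.1", "10.0.0.2"])
first, second, third = pick(), pick(), pick()
```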

(3) Container Runtime

A container runtime is the software responsible for running containers.

Kubernetes supports several container runtimes: Docker, containerd, CRI-O, rktlet, and any implementation of the Kubernetes CRI (Container Runtime Interface).

2.3 Addons (plug-ins)

(1) DNS

While the other plugins are not strictly required, all Kubernetes clusters should have cluster DNS, as many examples depend on it.

(2) Web UI (Dashboard)

Dashboard is a generic, web-based user interface for Kubernetes clusters. It allows users to manage and troubleshoot applications running in the cluster and the cluster itself.

(3) Container Resource Monitoring

Container Resource Monitoring records general time-series metrics for containers in a central database and provides a UI for exploring that data.
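A minimal sketch of that idea: record timestamped samples per container in a central store and query a time range. The schema and names below are made up for illustration:

```python
# Toy time-series store for container metrics: append timestamped samples
# and query them by time range. Schema is an illustrative assumption.
from collections import defaultdict

store = defaultdict(list)  # container name -> [(timestamp, cpu_millicores)]

def record(container, t, cpu):
    store[container].append((t, cpu))

def query(container, t_from, t_to):
    """Return samples for one container within [t_from, t_to]."""
    return [(t, c) for t, c in store[container] if t_from <= t <= t_to]

# Three CPU samples for "web"; the query keeps only the last two.
record("web", 1, 120)
record("web", 2, 340)
record("web", 3, 90)
samples = query("web", 2, 3)
```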

(4) Cluster-level Logging

A cluster-level logging mechanism is responsible for persisting container logs to a central log store with a search/browse interface.

3. Summary

Summary of Kubernetes features:

  • Portability : It fully supports public cloud, private cloud, hybrid cloud or multi-cloud architecture.
  • Extensible : It is modular, pluggable, mountable, and composable, and supports various forms of expansion.
  • Self-healing : It can self-maintain application state, be self-restarting, self-replicating, and self-scaling. It provides powerful self-healing capabilities through declarative syntax.

Kubernetes is built on Google's 15 years of experience running production workloads; all of Google's applications run in containers.


https://kubernetes.io/docs/concepts/overview/
https://kubernetes.io/docs/concepts/overview/components/


Origin: blog.csdn.net/be_racle/article/details/132250692