Personal summary of cloud native technology

Cloud-native concept

Cloud native focuses on the following three aspects:

  • Application containerization
  • Microservice-oriented architecture
  • Applications that support container orchestration and scheduling

Introduction:

  • These technologies enable the construction of loosely coupled systems that are fault-tolerant, manageable, and observable. Combined with robust automation, cloud-native technologies allow engineers to make high-impact changes to a system frequently and predictably, with minimal effort.
  • Cloud-native technologies enable organizations to build and run elastically scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. Representative cloud-native technologies include containers, service meshes, microservices, immutable infrastructure, and declarative APIs.

Kubernetes

Containers, microservices, and automated management

Containers and Microservices

Container:
A container is a higher level of abstraction than a virtual machine. At this layer of abstraction, each component of an application is packaged into an independent unit; this unit is the so-called container. Code and application services are thereby decoupled from the underlying infrastructure, enabling full portability (the ability to run the application on any operating system or environment). And because the application is divided and packaged into independent services, each component can easily be replaced, upgraded, and debugged.
Docker

  • Docker is the most commonly used containerization tool and the most popular container runtime.
  • Docker was open-sourced in 2013. It is used to package and create containers and to manage container-based applications. Docker runs on all major Linux distributions as well as on Windows and macOS.
  • There are other containerization tools, such as CoreOS rkt, Mesos Containerizer, and LXC, but at present the vast majority of containerized applications run on Docker (a minimal packaging example follows this list).
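As a minimal, hypothetical sketch of this packaging idea, the docker-compose.yml below declares two components of an application as independent containers; the image names, ports, and variables are placeholders for illustration, not from the original post:

```yaml
# docker-compose.yml -- hypothetical two-component application;
# image names, ports, and variables are illustrative placeholders
services:
  web:
    image: nginx:1.25          # front-end component, packaged as its own container
    ports:
      - "8080:80"              # expose the container's port 80 on the host
    depends_on:
      - api
  api:
    image: example/api:1.0     # hypothetical back-end image, assumed to exist in a registry
    environment:
      - LOG_LEVEL=info
```

Running `docker compose up` starts both containers; because each component is an independent unit, either one can be rebuilt, upgraded, or debugged without touching the other.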

K8s

K8s:

  • Kubernetes can propagate a change to all managed containers at once

  • At the same time, Kubernetes can easily schedule the available containers, and this process is automated (a sketch follows these bullets)
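As a minimal sketch of these two points, the hypothetical Deployment below asks Kubernetes to keep three replicas scheduled; changing the image tag and re-applying the manifest propagates the change to all managed containers via a rolling update. The names and images are placeholders:

```yaml
# deployment.yaml -- hypothetical Deployment; name and image are placeholders
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 3                    # Kubernetes automatically schedules 3 pod copies
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: example/demo-app:1.0   # bumping this tag rolls the change out to every replica
```

`kubectl apply -f deployment.yaml` submits the manifest; the scheduling and rollout that follow are fully automated.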

    Kubernetes runs on nodes, where a node is a single machine in a cluster. A node may correspond to a physical machine if you run your own hardware, but is more likely a virtual machine running in the cloud. Nodes are where your applications and services are deployed, and where Kubernetes does its work. There are two types of nodes, master nodes and worker nodes, so Kubernetes has a master-worker architecture.

    A master node is a special node that controls all the other nodes. On the one hand, it is like any other node in the cluster: just another machine or virtual machine. On the other hand, it runs the software that controls the rest of the cluster: it sends messages to all the other nodes to assign work to them, and the worker nodes report back to the API Server on the master node.

    The master node itself also runs a component called the API Server. This API is the only endpoint through which nodes communicate with the control plane. The API Server is critical because it is where worker nodes and master nodes exchange information about the state of pods, deployments, and all other Kubernetes API objects.

    Worker nodes are where the real work happens in Kubernetes. When you deploy the containers or pods (defined below) of your application, you are actually deploying them to run on worker nodes. Worker nodes host and run the resources for one or more containers.

    The logical, rather than physical, unit of work in Kubernetes is called a pod. A pod is similar to a container in Docker. Recall that containers let you create isolated units of work that can run independently; but to build a complex application, such as a web server, you often need to combine multiple containers and then run and manage them together. That is what pods are designed for: a pod lets you take multiple containers and specify how they fit together to form an application. This also clarifies the relationship between Docker and Kubernetes: a Kubernetes pod usually contains one or more Docker containers, and all of the containers are managed as a single unit.
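As a minimal sketch of this idea, the hypothetical pod manifest below groups a main application container with a helper container; Kubernetes schedules, starts, and stops the two together as one unit. The images and names are illustrative:

```yaml
# pod.yaml -- hypothetical pod combining two containers into one unit of work
apiVersion: v1
kind: Pod
metadata:
  name: web-with-logger
spec:
  containers:
    - name: web
      image: nginx:1.25          # main application container
    - name: log-agent
      image: busybox:1.36        # helper container; shares the pod's network and volumes
      command: ["sh", "-c", "sleep 3600"]   # placeholder; a real agent would ship the web container's logs
```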

Service mesh

[Figure: Service Mesh architecture diagram]

  • A service mesh injects a sidecar proxy into each pod; the proxy is transparent to the application, and all traffic between applications passes through it, so application traffic can be controlled from within the mesh (a sketch of enabling sidecar injection follows this list).
  • Under a cloud-native architecture, containers make heterogeneous applications far more practical, and the horizontal scaling capability that Kubernetes adds lets users quickly assemble applications with complex environments and dependencies. Developers can focus on writing programs without worrying too much about application monitoring, scalability, service discovery, or distributed tracing, which leaves them more room for creativity.
  • The hottest project at the moment is Istio, a framework for managing, securing, and monitoring microservices, open-sourced by Google, IBM, and Lyft.
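As a small sketch of how sidecar injection is typically switched on in Istio, labeling a namespace tells Istio's webhook to inject the Envoy sidecar proxy into every pod created there; the namespace name is a placeholder:

```yaml
# namespace.yaml -- hypothetical namespace carrying Istio's standard injection label
apiVersion: v1
kind: Namespace
metadata:
  name: demo
  labels:
    istio-injection: enabled   # Istio injects the Envoy sidecar into pods created in this namespace
```

Pods created in this namespace then get the transparent sidecar proxy automatically; the application code itself is unchanged.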

Edge computing


  • To cope with the challenges facing cloud computing, relieve network pressure, and improve user experience to meet business needs, the industry has proposed migrating the cloud computing platform to the edge of the network; this is edge computing.
  • Edge computing extends computing, storage, and other capabilities from cloud data centers to the network edge, closer to the data sources, instead of processing and computing client data at centralized servers or cloud-based locations.
  • In short, edge computing brings computing resources, data storage, and enterprise applications closer to where people actually consume information. It can support running artificial-intelligence algorithms such as deep learning and reinforcement learning at the network edge, avoiding the very long network transmission delay from the edge to a remote data center and meeting the demands of highly real-time IoT applications (such as autonomous driving, drones, and augmented reality).
  • A popular framework is KubeEdge, which extends native containerized application orchestration capabilities to edge hosts. It is built on top of Kubernetes and provides infrastructure support for networking, application deployment, and metadata synchronization between cloud and edge (a deployment sketch follows this list).
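Because KubeEdge keeps the Kubernetes API, edge workloads are declared as ordinary manifests. Below is a minimal sketch, assuming the common KubeEdge convention that edge hosts joined via keadm carry the node-role.kubernetes.io/edge label; the app name and image are placeholders:

```yaml
# edge-deployment.yaml -- hypothetical edge workload pinned to KubeEdge edge nodes
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edge-inference
spec:
  replicas: 1
  selector:
    matchLabels:
      app: edge-inference
  template:
    metadata:
      labels:
        app: edge-inference
    spec:
      nodeSelector:
        node-role.kubernetes.io/edge: ""   # assumed label applied to edge nodes at join time
      containers:
        - name: inference
          image: example/edge-model:1.0    # hypothetical AI-inference image running at the edge
```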

Cloud-edge collaboration


"Cloud-edge collaboration", the terminal is responsible for overall perception, the edge is responsible for local data analysis and reasoning, and the cloud gathers all edge perception data, business data, and Internet data to complete industry and cross-industry situation awareness and analysis.
"Cloud" is the central node of traditional cloud computing and the control terminal of edge computing;
"edge" is the edge side of cloud computing, which is divided into infrastructure edge and device edge
; Various sensors, cameras, etc.

By placing network forwarding, storage, computing, intelligent data analysis, and other tasks at the edge for processing, while placing ultra-large-scale computing, storage, and non-latency-sensitive tasks in the cloud, cloud computing and edge computing cooperate to achieve cloud-edge collaboration, network-wide scheduling of computing power, and unified management and control of the entire network, truly realizing the ubiquitous cloud.
