Fundamental Concepts Review: Cloud Native Application Delivery

Original link: Fundamental Concepts Review: Cloud Native Application Delivery

Reprint source: NGINX open source community


Although cloud native application development emerged in the early 2000s, the terminology around it is still used inconsistently. This article walks you through the most common terms and questions.

Cloud native

The Cloud Native Computing Foundation (CNCF) defines “cloud native” as follows:

Cloud native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach.

These techniques enable loosely coupled systems that are resilient, manageable, and observable. Combined with robust automation, they allow engineers to make high-impact changes frequently and predictably.

Cloud load balancing

Cloud load balancing refers to distributing client requests across multiple application servers running in a cloud environment. Like other forms of load balancing, cloud load balancing maximizes application performance and reliability; its advantages over traditional load balancing of on-premises resources are (usually) lower cost and the ability to scale the application up or down easily as demand changes.
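As a concrete illustration of the basic mechanism, here is a minimal Go sketch (not a production load balancer) of round-robin request distribution; the backend addresses are hypothetical stand-ins for cloud application instances:

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"sync/atomic"
)

// Hypothetical upstream application servers; in the cloud these would be
// auto-scaled instances sitting behind the load balancer.
var backendAddrs = []string{
	"http://10.0.0.1:8080",
	"http://10.0.0.2:8080",
	"http://10.0.0.3:8080",
}

func main() {
	backends := make([]*url.URL, len(backendAddrs))
	for i, addr := range backendAddrs {
		u, err := url.Parse(addr)
		if err != nil {
			log.Fatal(err)
		}
		backends[i] = u
	}

	var next uint64
	proxy := &httputil.ReverseProxy{
		Director: func(req *http.Request) {
			// Round robin: each request goes to the next backend in turn.
			target := backends[atomic.AddUint64(&next, 1)%uint64(len(backends))]
			req.URL.Scheme = target.Scheme
			req.URL.Host = target.Host
		},
	}
	log.Fatal(http.ListenAndServe(":8080", proxy))
}
```

A real cloud load balancer layers health checks, TLS termination, and autoscaling integration on top of this simple distribution loop.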

Today, more and more businesses, especially small ones, run a variety of applications in the cloud. A business might use a cloud-based CRM (such as Salesforce.com) to store customer information, a cloud-based ERP system to track product data, a web hosting provider (such as Google) for its website, and Amazon Elastic Compute Cloud (EC2) to run a handful of custom applications.

The recommended practice is to deploy the load balancer in the same environment as the resources it is balancing. Therefore, when most of a company's computing infrastructure is hosted in the cloud, it makes sense to run the load balancer in the cloud as well.

The advantages of cloud load balancing stem mainly from the scalability and global reach of the cloud itself.

  • The ease and speed of scaling in the cloud means that an enterprise can absorb traffic spikes (such as Double Eleven shopping-festival traffic) without degrading performance, simply by placing a cloud load balancer in front of a set of application instances that scale quickly and automatically with demand.

  • The ability to host applications in cloud data centers around the world also improves reliability. For example, if a snowstorm knocks out power in the Northeastern United States, a cloud load balancer can redirect traffic from cloud resources hosted in that region to resources hosted elsewhere in the country.

Multi-cloud vs. hybrid cloud

"Multi-cloud" and "hybrid cloud" are often used synonymously, but there is actually a difference between the two.

Multi-cloud infrastructure spans multiple public cloud environments from different providers. In a multi-cloud infrastructure, different public clouds often perform different tasks (for example, one for program logic, a second for databases, a third for machine learning), and the distribution across clouds can differ from application to application. Enterprises choose a multi-cloud strategy to take advantage of the flexibility and features of each cloud.

Hybrid cloud infrastructure consists of two or more different types of cloud environments (on-premises, private cloud, and public cloud). In a hybrid cloud infrastructure, the role of the public cloud is to extend the capabilities of the private cloud or on-premises environment. This approach is often taken by enterprises that are migrating applications to the cloud or that carry too much technical debt to become fully cloud native. Because hybrid cloud infrastructure typically includes more than one public cloud, it usually combines hybrid cloud with multi-cloud.

Containers

Containers are a virtualization technology designed to create and support application portability, that is, to make it easy to deploy an application on a variety of platforms. A container packages everything an application needs (the application code itself, its dependencies such as the libraries it requires, and the runtime environment for both) into a unit that can be moved across platforms and run independently. A container abstracts an application away from the operating system environment it would normally run in.

Docker is the best-known container format, but other container technologies exist (such as rkt from CoreOS, containerd, and Hyper-V containers), as do lower-level technologies such as cgroups and namespaces, which isolate applications much as a container engine does but do not provide the portability of containers. You can manage containers directly with platform tools such as Docker or rkt, but most deployments use an orchestration tool such as Kubernetes, which has gradually become the default, standard tool for production-grade container deployment.
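To make those lower-level building blocks concrete, here is a minimal, Linux-only Go sketch (it assumes root privileges, and the flag choice is illustrative rather than what any particular engine uses) that starts a shell in new UTS, PID, and mount namespaces, the kernel primitives container engines build on:

```go
package main

import (
	"os"
	"os/exec"
	"syscall"
)

// Linux-only sketch: run a shell inside fresh UTS, PID, and mount
// namespaces. Container engines combine these primitives (plus cgroups,
// an isolated filesystem image, and more) to isolate applications.
func main() {
	cmd := exec.Command("/bin/sh")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	cmd.SysProcAttr = &syscall.SysProcAttr{
		// A hypothetical minimal flag set, for illustration only.
		Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
	}
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```

The shell sees its own process tree and hostname, but unlike a full container it still shares the host's filesystem, which is why packaging (the image format) matters as much as isolation.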

For more about containers and related topics, visit the NGINX open source community.

Microservices

Microservices are an architectural approach to building large, complex software applications from multiple small components that each perform a single function, such as authentication, notifications, or payment processing; the term also refers to the small components themselves. Each microservice is an independent unit within the software development project, with its own code base, infrastructure, and database. Microservices work together, communicating through web APIs or message queues to respond to incoming events.

In the book Building Microservices, Sam Newman succinctly defines microservices as “small autonomous services that work together”, a definition that captures the three defining elements of microservices:

  • A microservice's code base is "small" because it focuses on a single function; "small" also means that a single developer or a small team can create and maintain the code.
  • "Autonomous" means that a microservice can be deployed and scaled on demand, and that its team need not consult the teams responsible for other microservices when changing its internals.
  • This is possible because when microservices "work together," they communicate through well-defined APIs or similar mechanisms that do not expose their inner workings.
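As a sketch of how small a single microservice can be, the following Go service does exactly one job, issuing a demo token over a well-defined HTTP API, while hiding its inner workings from callers; the endpoint, port, and token format are invented for illustration:

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// Request and response types form the service's public contract;
// everything else stays internal to this code base.
type tokenRequest struct {
	User string `json:"user"`
}

type tokenResponse struct {
	Token string `json:"token"`
}

func main() {
	http.HandleFunc("/token", func(w http.ResponseWriter, r *http.Request) {
		var req tokenRequest
		if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
			http.Error(w, "bad request", http.StatusBadRequest)
			return
		}
		// Issue a dummy token; a real authentication service would
		// sign it and give it an expiry.
		json.NewEncoder(w).Encode(tokenResponse{Token: "demo-" + req.User})
	})
	log.Fatal(http.ListenAndServe(":8081", nil))
}
```

Because callers depend only on the /token contract, the team that owns this service can rewrite its internals or scale it independently without coordinating with other teams.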
For more on the basic concepts of microservices, read our article "Understanding Microservices in One Article".

Ingress Controller

An Ingress controller is a specialized load balancer for Kubernetes (and other containerized) environments. Kubernetes is the de facto standard for managing containerized applications.

For many enterprises, moving production workloads into Kubernetes adds challenges and complexity to application traffic management. An Ingress controller abstracts away the complexity of routing application traffic in Kubernetes and acts as a bridge between Kubernetes services and external services.

A Kubernetes Ingress controller performs the following functions:

  • Accepts traffic from outside the Kubernetes platform and load balances it to pods (containers) running inside the platform
  • Manages egress traffic within a cluster for services that need to communicate with other services outside the cluster
  • Is configured using the Kubernetes API, by deploying an object called an Ingress resource
  • Monitors the pods running in Kubernetes and automatically updates the load-balancing rules when pods are added to or removed from a service (a minimal sketch of this watch loop follows the list)
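That last function is essentially a watch loop against the Kubernetes API. The following sketch uses the client-go library to watch Endpoints in a hypothetical "demo" namespace and signal a rules reload on each change; a real Ingress controller adds informers, caching, and configuration templating on top of this:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Assumes the program runs inside a Kubernetes cluster.
	config, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Watch Endpoints in the (hypothetical) "demo" namespace; each
	// event means pods were added to or removed from a service.
	watcher, err := clientset.CoreV1().Endpoints("demo").Watch(
		context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for event := range watcher.ResultChan() {
		// A real controller would regenerate and reload its
		// load-balancing configuration here.
		fmt.Printf("endpoints %s: reload load-balancing rules\n", event.Type)
	}
}
```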

Service Mesh

According to The New Stack's definition, a service mesh is a technology designed to "improve the security, observability, and traffic control of distributed systems." More specifically, a service mesh is a component of orchestration tools for containerized environments such as Kubernetes.

It is typically responsible for a range of functions, including routing traffic between containerized applications, serving as the interface for defining and enforcing automatic service-to-service mutual TLS (mTLS) policies, and providing visibility into application availability and security. Like Kubernetes as a whole, a service mesh consists of a control plane, a management plane, and a data plane.

A service mesh typically handles traffic management and security in a way that is transparent to the containerized applications. With features such as SSL/TLS offloading and load balancing, a service mesh frees developers from implementing security or service availability separately in each application. An enterprise-grade service mesh provides solutions for a range of needs:

  • Securing traffic with end-to-end encryption and mutual TLS (mTLS); a minimal mTLS sketch follows this list
  • Orchestration with injection management, sidecar management, and Kubernetes API integration
  • Managing service traffic, including load balancing, traffic control (rate limiting and circuit breaking), and traffic shaping (canary deployments, A/B testing, blue-green deployments)
  • Enhancing monitoring and visualization of service-to-service traffic with popular tools such as Prometheus and Grafana
  • Simplifying management of Kubernetes ingress and egress traffic through a natively integrated Ingress controller
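To illustrate the mTLS item above, here is a minimal Go sketch of the server side of mutual TLS, the kind of policy a mesh sidecar typically enforces on a service's behalf; the certificate file names are hypothetical:

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"log"
	"net/http"
	"os"
)

func main() {
	// Trust only client certificates signed by our (hypothetical) CA.
	caPEM, err := os.ReadFile("ca.pem")
	if err != nil {
		log.Fatal(err)
	}
	caPool := x509.NewCertPool()
	caPool.AppendCertsFromPEM(caPEM)

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("hello from an mTLS-protected service\n"))
	})

	server := &http.Server{
		Addr: ":8443",
		TLSConfig: &tls.Config{
			// Requiring and verifying the client certificate is what
			// makes the TLS handshake "mutual".
			ClientAuth: tls.RequireAndVerifyClientCert,
			ClientCAs:  caPool,
		},
	}
	log.Fatal(server.ListenAndServeTLS("server.pem", "server-key.pem"))
}
```

In a mesh, this handshake happens in the sidecar proxies on both sides of a connection, so the application code itself stays unchanged.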

A service mesh can be small and focus on a specific function; it can be large and include a comprehensive set of network and cluster management tools (such as Istio); or it can be anything in between. The larger and more complex the service mesh, the more useful a separate management plane becomes.


The only official Chinese community of NGINX, all at nginx.org.cn

More NGINX technical content, interactive Q&A, course series, and event resources: Open Source Community Official Website | WeChat Official Account

 


Origin: my.oschina.net/u/5246775/blog/10112278