Java Architecture Patterns and Design Patterns (9): Understanding Cloud Native in One Article

Table of Contents

Preface

The history of back-end architecture

Centralized architecture

Distributed system architecture

The new era of container technology Docker

Microservice architecture

Kubernetes

Service Mesh

Summary

Cloud Native

What is Cloud

What is native

Cloud native is the Tao, Service Mesh is the technique

Service Mesh

2017: Disputes

2018: A hundred schools of thought

2019: Continued development

Istio

Connection

Protection

Control

Observation

Summary


Preface

Since container (virtualization) technology matured with Docker in 2013, the back-end architecture has entered a stage of rapid iteration, and many new concepts have emerged:

  • Microservices

  • Kubernetes (k8s)

  • Serverless

  • IaaS: Infrastructure-as-a-Service

  • PaaS: Platform-as-a-Service

  • SaaS: Software-as-a-Service

  • Cloud Native

  • Service Mesh

The changes in back-end architecture are closely tied to the development of cloud computing; the architecture has in fact been continuously adapting to the cloud, especially to cloud native, which is hailed as the architecture of the future. In 2019, Service Mesh, the cloud native implementation, blossomed everywhere at home and abroad, and it felt as if the future had arrived.

Next, we will:

  • Sort out the evolution of the back-end architecture and review its development history;

  • Review the development of cloud services and discuss the concept of cloud native;

  • Sort out the development history of Service Mesh, the cloud native implementation;

  • Introduce the features of Istio, the representative Service Mesh implementation.

The history of back-end architecture

Centralized architecture

The centralized architecture, also called the monolithic architecture, was very popular before the Web 2.0 model took off on a large scale. Later, the B/S (Browser/Server) architecture based on web applications gradually replaced the C/S (Client/Server) architecture based on desktop applications. Most back-end systems under the B/S architecture adopted a centralized architecture, which at the time unified server-side development with an elegant layered design.

A centralized application is divided into a standard three-layer architecture: the data access layer, the service layer, and the logic control layer. Domain model objects can be shared between the layers, or each layer can be split further.
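As a concrete illustration, the three layers of a monolith can be sketched in a few lines of Java. The class and method names here are invented for illustration, and a real application would use a database rather than an in-memory map:

```java
// Illustrative sketch of the classic three-layer split in a monolith:
// all layers compile into, and deploy as, a single application.
import java.util.HashMap;
import java.util.Map;

public class LayeredApp {
    // Data access layer: hides storage details (an in-memory map here).
    static class UserDao {
        private final Map<Integer, String> table = new HashMap<>(Map.of(1, "alice"));
        String findName(int id) { return table.get(id); }
    }

    // Service layer: business rules, built on top of the data access layer.
    static class UserService {
        private final UserDao dao = new UserDao();
        String greeting(int id) {
            String name = dao.findName(id);
            return name == null ? "unknown user" : "Hello, " + name;
        }
    }

    // Logic control layer: translates incoming requests into service calls.
    static class UserController {
        private final UserService service = new UserService();
        String handle(int id) { return service.greeting(id); }
    }

    public static void main(String[] args) {
        System.out.println(new UserController().handle(1)); // prints Hello, alice
    }
}
```

Because everything lives in one process, sharing domain objects across layers is trivial; the price, as listed below, is paid at build, test, and upgrade time.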

Its disadvantages are:

  • Compilation takes too long;

  • The regression test cycle is too long;

  • Development efficiency drops;

  • It is hard to upgrade the technical framework.

Distributed system architecture

As Internet applications grew rapidly, the centralized architecture could not increase system throughput without limit, while the distributed architecture in theory offers unlimited room for throughput to grow. Servers used to build Internet applications therefore gradually abandoned expensive minicomputers in favor of large numbers of cheap PC servers.

The new era of container technology Docker

The concept of a distributed architecture appeared very early; the biggest obstacle to adopting it was immature container technology. To run an application on a cloud platform, the corresponding runtime environment still had to be installed for each development language. Although automated operations tools can reduce the complexity of setting up environments, they cannot fundamentally solve the environment problem.

The emergence of Docker became a new watershed in the software industry, and the maturity of container technology marks the beginning of a new technological era. Docker lets developers package their applications and dependencies into a portable container. Just as the smartphone changed the rules of the entire mobile phone industry, Docker swept through the software industry and changed its rules of the game. With containerized packaging, development and operations release software in a standardized way, and heterogeneous languages are no longer a shackle for the team.

After Docker, microservices became popular.

Microservice architecture

The microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating via lightweight mechanisms (often an HTTP resource API). These services are built around business capabilities and can be deployed independently by fully automated deployment machinery. There is a bare minimum of centralized management of these services, which may be written in different programming languages and use different data storage technologies.
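A minimal sketch of the "small service in its own process, lightweight HTTP API" idea, using only the JDK's built-in com.sun.net.httpserver and java.net.http (Java 11+). The service name, endpoint, and response body are invented for illustration:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class OrderService {
    // Body served by the hypothetical /health endpoint.
    static String healthBody() {
        return "{\"status\":\"UP\"}";
    }

    public static void main(String[] args) throws Exception {
        // The service runs in its own process and exposes a lightweight
        // HTTP resource API; here, a single /health resource on an ephemeral port.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/health", exchange -> {
            byte[] body = healthBody().getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        int port = server.getAddress().getPort();

        // Another service would talk to it over plain HTTP; here we call ourselves.
        HttpResponse<String> resp = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI.create("http://localhost:" + port + "/health")).build(),
                HttpResponse.BodyHandlers.ofString());
        System.out.println(resp.body()); // prints {"status":"UP"}
        server.stop(0);
    }
}
```

The point is the communication style, not the framework: any language that can serve and speak HTTP can join the system on equal terms.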

Microservice advantages

  • Scalable

  • Upgradeable

  • Easy to maintain

  • Isolation of faults and resources

The problem with microservices

However, nothing in the world is perfect, and microservices are no exception. The well-known software architect Chris Richardson pointed out: "Microservice applications are distributed systems, which brings inherent complexity. Developers need to choose between RPC and messaging as the inter-process communication mechanism. In addition, they must write code to handle partial failures, such as slow or unavailable message delivery."

In a microservice architecture, the following kinds of problems generally need to be handled:

  • Service governance: elastic scaling, fault isolation

  • Traffic control: routing, circuit breaking, rate limiting

  • Application observability: metrics, distributed tracing

Solution: Spring Cloud (Netflix OSS)

This is a typical microservice architecture diagram

The Spring Cloud ecosystem provides core distributed-system features such as service discovery, load balancing, failover, dynamic scaling, data sharding, and call-chain monitoring, and it once became the best practice for microservices.

Problems with Spring Cloud

If you start building microservices, you will certainly be drawn to the Netflix OSS/Java/Spring/Spring Cloud stack. But know that you are not Netflix, and you do not need to program directly against AWS EC2, which makes the application very complicated. Nowadays it is wiser to use Docker and Mesos/Kubernetes, which already come with a large number of distributed-system features. The Netflix stack layers these concerns at the application level because of problems Netflix faced five years ago and had no choice about (it is fair to say that if Kubernetes had been available then, the Netflix OSS stack would look very different).

Therefore, it is recommended to choose carefully and choose according to needs to avoid unnecessary complexity to the application.

It is true that the Spring Cloud solution looks beautiful, but it is highly intrusive: application code ends up containing a large number of Spring Cloud modules, and it is unfriendly to other programming languages.

Kubernetes

Kubernetes appeared to solve the problems of Spring Cloud: instead of invading the application layer, it solves them at the container layer.

The origin of Kubernetes

Kubernetes originated from Borg, Google's internal system, and provides an application-oriented container cluster deployment and management system.

The goal of Kubernetes is to remove the burden of orchestrating physical/virtual computing, network and storage infrastructure, and to enable application operators and developers to fully focus on container-centric primitives for self-service operations.

Kubernetes also provides a stable, compatible foundation (platform) for building customized workflows and higher-level automation. It has complete cluster management capabilities, including multi-level security protection and admission mechanisms, multi-tenancy support, transparent service registration and discovery, a built-in load balancer, fault detection and self-healing, rolling upgrades and online scaling, an extensible automatic resource scheduler, and multi-granularity resource quota management.

Kubernetes also provides comprehensive management tools, covering development, deployment testing, operation and maintenance monitoring and other links.

Service Mesh

Service Mesh is an enhancement of Kubernetes and provides more capabilities.

On September 1, 2018, Bilgin Ibryam published the article Microservices in a Post-Kubernetes Era on InfoQ; a Chinese translation, "Microservices in the post-Kubernetes era", is available (the translation has some errors and is for reference only).

The author's point in the article: in the post-Kubernetes era, Service Mesh technology will completely replace the software libraries used to implement network resilience concerns (such as the Hystrix circuit breaker).

If Kubernetes fired the first shot on Spring Cloud, then Service Mesh is the terminator of Spring Cloud.
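The library-based resilience logic that, per the article, a Service Mesh sidecar takes over can be sketched as a hand-rolled in-process circuit breaker. This is a simplified stand-in for what Hystrix does, not Hystrix's actual API, and the threshold is illustrative:

```java
import java.util.function.Supplier;

// A minimal in-process circuit breaker: the library-based approach that,
// per the article, a Service Mesh sidecar replaces. Thresholds are illustrative.
public class CircuitBreaker {
    enum State { CLOSED, OPEN }

    private final int failureThreshold;
    private int consecutiveFailures = 0;
    private State state = State.CLOSED;

    public CircuitBreaker(int failureThreshold) {
        this.failureThreshold = failureThreshold;
    }

    public <T> T call(Supplier<T> remoteCall, T fallback) {
        if (state == State.OPEN) {
            return fallback; // fail fast instead of hammering a broken service
        }
        try {
            T result = remoteCall.get();
            consecutiveFailures = 0; // any success resets the counter
            return result;
        } catch (RuntimeException e) {
            if (++consecutiveFailures >= failureThreshold) {
                state = State.OPEN; // trip after too many consecutive failures
            }
            return fallback;
        }
    }

    public State state() { return state; }
}
```

Every service written this way must embed and configure this logic itself, in every language it uses; a mesh moves the same behavior into the sidecar proxy, outside the application's code.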

Summary

Finally, we use a flowchart to describe the development process of the back-end architecture

Approximate timeline of the key milestones:

Architecture/Technology      Year
Centralized architecture     ~
Distributed architecture     ~
Docker                       2013
Microservices                2014
Spring Cloud                 2014
Kubernetes matures           2017
Service Mesh                 2017

It can be seen that within the microservice ecosystem, the road represented by Spring Cloud has no successor, and the future belongs to Service Mesh.
After two years of development, Service Mesh is now mature enough, and production cases already exist. Let's take a look at Service Mesh; but before that, we must first understand a concept: cloud native.

Cloud Native

How should we understand "cloud native"? The reason for putting this topic first is that it is the most fundamental understanding of the cloud native concept, and it directly shapes all the cognition that follows.

Note: the cloud native content below quotes Ao Xiaojian's article "Talking about Cloud Native (Part 1): What should a cloud native application look like?"; its illustrations are excellent.

The definition of cloud native has kept evolving, and everyone may understand it differently; as the saying goes, there are a thousand Hamlets in a thousand readers' eyes.

In 2018, CNCF (Cloud Native Computing Foundation) updated the definition of cloud native.


These are the representative technologies described in the new definition. Containers and microservices have appeared in the definitions of different periods, while service mesh, a hot new technology that only began to be accepted by the community in 2017, is listed prominently, side by side with microservices, rather than as what we usually take it to be: merely a new way of implementing microservices.

So how should we understand cloud native? Let's give it a try: take the term Cloud Native apart, and first look at what Cloud is.

What is Cloud

A quick review of the history of cloud computing will help us have a more perceptual understanding of the cloud.

The emergence of cloud computing is closely related to the development and maturity of virtualization technology. After the x86 virtual machine technology matured around 2000, cloud computing gradually developed.

Based on virtual machine technology, forms such as IaaS/PaaS/FaaS and their open source versions have appeared one after another.

In 2013 Docker appeared and container technology matured; a battle then unfolded around containers, and by the end of 2017 Kubernetes had won. CNCF, founded in 2015, has in recent years built up the cloud native ecosystem.

Throughout this process, the shape of the cloud has kept changing: more and more functions are provided by the vendor, and fewer and fewer are left for the customer or the application to manage.


The architecture has also been adapting to changes in cloud computing

What is native

After reviewing the history of cloud computing, we have a deeper understanding of Cloud, and then continue to look at: What is Native?
The dictionary's explanation is: innate.
So how do we understand Cloud and native together?

Here is our own understanding: cloud native means natively designed for the cloud. Spelled out: the application is designed from the start to run on the cloud in the best possible way, making full use of the cloud's strengths.

This understanding is a bit vague, but considering how the definition and characteristics of cloud native have kept changing over the years, and will inevitably keep changing, I feel the understanding of cloud native can only return to its starting point, being native to the cloud, rather than to any particular implementation.

Cloud native is the Tao, Service Mesh is the technique

In the context of such a cloud-native understanding, let me introduce my vision for cloud-native applications, that is, what I think cloud-native applications should look like.

Before cloud native, the underlying platform was responsible for providing basic runtime resources upward, while the application had to satisfy both business and non-business requirements. For better code reuse, the implementations of common non-business requirements were usually provided as class libraries and development frameworks. In the SOA/microservice era, some of these functions also came to exist as back-end services, so that in the application they were simplified to client-side calls.

The application then packages these functions together with its own business implementation code.

The arrival of the cloud provides not only all kinds of resources but also all kinds of capabilities, so that applications can focus on implementing business requirements.
Functions related to non-business requirements are moved into the cloud, that is, the infrastructure, with middleware likewise sinking into the infrastructure.

Take communication between services as an example: the various functions listed above all have to be implemented somewhere.

The SDK approach: add a thick client in the application layer and implement the various functions inside that client.

The Service Mesh approach: strip those functions out of the SDK client and put them in a sidecar, sinking even more into the infrastructure.
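One such "thick client" concern, retries, can be sketched in a few lines of plain Java (an illustrative sketch, not any particular SDK's API). With a sidecar, this loop runs in the proxy instead of inside the application:

```java
import java.util.function.Supplier;

// Sketch of one "thick client" concern (retries) that the SDK approach
// compiles into every application, and that a sidecar takes over instead.
public class RetryingClient {
    public static <T> T callWithRetry(Supplier<T> remoteCall, int maxAttempts) {
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return remoteCall.get();
            } catch (RuntimeException e) {
                last = e; // transient failure: try again
            }
        }
        throw last; // all attempts exhausted; propagate the last failure
    }
}
```

With a mesh, the application issues one plain call; the sidecar decides, based on configuration rather than code, how many times to retry it.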

From the user's point of view, the application looks like this:

Cloud native is our goal, and Service Mesh has handed in its own answer. With that, we can return to Service Mesh.

Service Mesh

The term is commonly translated (into Chinese) as "service grid" (服务网格). It was first used internally by Buoyant, the company that developed Linkerd.

Definition

The basic structure of a service mesh

2017: Disputes

At the end of 2017, when the non-intrusive Service Mesh technology finally matured and Istio/Conduit were born, people were shocked to realize: microservices are not only an intrusive game, and certainly not Spring Cloud's one-man show!

For details, see the article "Interpretation of Service Mesh in 2017: the contenders rise".

The article summarizes:

  • Startup Buoyant's product Linkerd got off to a head start;

  • Envoy worked away quietly;

  • Google and IBM jointly launched Istio, and Linkerd's fortunes took a sharp turn;

  • Buoyant launched Conduit at the end of 2017;

  • nginMesh and Kong participated in a low-key way.

2018: A hundred schools of thought

What happened to Service Mesh in 2018? In 2018, Service Mesh became very popular in China: many companies launched their own Service Mesh products and solutions, and the scene grew even livelier.
For details, see "The next generation of microservices! Service Mesh 2018 annual summary".

The article summarizes:

  • Service Mesh was very popular in China, and many companies joined the battlefield;

  • Istio released 1.0, becoming the most popular Service Mesh project and gaining support from many parties;

  • Envoy kept advancing steadily; it was adopted directly by Istio as the data plane and is expected to become the data-plane standard;

  • Linkerd 1.x ran into trouble, and Conduit iterated quickly but got a lukewarm response, so Buoyant decided to merge the product lines: Linkerd 1.x + Conduit = Linkerd 2.0;

  • More companies joined Service Mesh: abroad, Nginx, Consul, Kong, AWS, and others; in China, Ant Financial, Sina Weibo, Huawei, Alibaba (Dubbo), Tencent, and others.

2019: Continued development

In 2019 we will hear even more from Service Mesh; follow the Service Mesh Chinese community.

Istio

As mentioned earlier, Istio is the most popular Service Mesh framework. Defined in one sentence, Istio is an open platform to connect, manage, and secure microservices.
What functions can it provide for our microservices?

Connection

  • Dynamic routing

  • Timeouts and retries

  • Circuit breaking

  • Fault injection

See the official website for details

Protection

Security must be done right from the start, and implementing secure communication with Istio is very convenient.

Istio supports mutual TLS encryption

See official documentation

Control

  • Rate limiting

  • Black and white lists

See official documentation

Observation

  • Metrics: requests per second and the like, collected with Prometheus and visualized in Grafana to observe traffic

  • Distributed tracing: Jaeger or Zipkin, for quickly inspecting call chains

  • Logs: non-application logs

  • Mesh visualization: quickly clarifies the relationships between services

Summary

Virtualization technology drove the transformation of cloud computing and, along the way, shaped the evolution of the back-end architecture. We are now in the cloud era, and more and more cloud native applications will appear. Istio, as one of the best of them, is worth your time; take a look at it.

Origin blog.csdn.net/lsx2017/article/details/114005289