Service Mesh Trends: A Cloud Native Mainstay

Foreword

This article is based on a keynote delivered at the Kubernetes & Cloud Native Meetup Shanghai on May 25. It introduces the latest Service Mesh product developments and analyzes the trends and future direction of the technology; drawing on Ant Financial's cloud native practice, it then explains the core value of Service Mesh in the cloud native context and the key role it plays in landing cloud native.

The talk has three parts:

  1. Service Mesh product news: Service Mesh product developments over the past six months, covering both open source projects and cloud services launched by cloud vendors
  2. Service Mesh trends: trends summarized from the latest product developments, and the future direction they suggest
  3. Service Mesh and cloud native: looking at Service Mesh through the lens of cloud native, to better understand its value and role

Service Mesh Product News

Istio 1.1 released

Istio is the most closely watched open source project in the Service Mesh community. The long-awaited Istio 1.1 was released in March of this year. Let's review Istio's recent release history:

  • June 1, 2018: Istio 0.8, the first LTS version in Istio's history and the release with the biggest changes up to that point;
  • July 31, 2018: Istio 1.0, billed as "Product Ready";
  • Then came a long wait: the 1.0 series advanced through monthly point releases from 1.0.1 to 1.0.6, followed by 1.1.0 snapshots 1 through 6 and then 1.1.0-rc 1 through 6, until Istio 1.1 finally shipped on March 20, 2019, billed as "Enterprise Ready".

From Istio 1.0 to Istio 1.1 the gap was a full nine months! Let's see what this long development cycle brought us in Istio 1.1:

istio1.1-new-feature.png

The parts highlighted in red relate to adjustments to Istio's architecture; the architectural changes in Istio 1.1 are described in detail below.

Istio 1.1 architecture changes

The figure below compares the architecture of Istio 1.0 and Istio 1.1:

istio-constructure.png

The first architectural change in Istio 1.1 comes from Galley: the Istio 1.1 architecture diagram adds the Galley component. In fact Galley already existed in Istio 1.0, but its function was very limited, only validating configuration after updates, so it did not appear in the Istio 1.0 architecture diagram. In Istio 1.1 Galley's positioning has changed dramatically: Galley has begun to take over responsibilities from Pilot and Mixer.

Before Istio 1.1, three Istio components (Pilot, Mixer and Citadel) each needed to access the Kubernetes API Server to obtain service registration and configuration information, including native Kubernetes resources such as Service, Deployment and Pod, as well as Istio's custom resources (more than 50 CRDs). This design tightly coupled each Istio component to the Kubernetes API Server, producing not only a lot of redundant code but also making Pilot and Mixer hard to test, because testing required interaction with a Kubernetes API Server.

To solve this, starting with Istio 1.1 the work of accessing the Kubernetes API Server is gradually being handed over to the Galley component, while the other components, such as Pilot and Mixer, are decoupled from Kubernetes.

galley.png

This work is still in progress: currently Istio's CRDs have been switched over to be read by Galley, while native Kubernetes resources (Service, Deployment, Pod, etc.) are, for now, still read by Pilot.

To synchronize data between the components, Istio introduced the MCP (Mesh Configuration Protocol). In Istio 1.1, Pilot obtains data from Galley over MCP. MCP is a new protocol inspired by the xDS v2 protocol (ADS, to be precise), developed for synchronizing data between Istio's modules.

The second architectural change in Istio 1.1 comes from Mixer: Istio 1.1 recommends the Out-of-Process Adapter. A future Istio release is expected to deprecate the In-Process Adapter, and all current adapters will be converted to Out-of-Process Adapters.

What is an In-Process Adapter? The figure below shows Mixer's architecture. In Istio's design, Mixer is an independent process, and the proxy interacts with Mixer through remote calls. Mixer implements the Adapter pattern: it defines an Adapter API and ships a very large number of built-in adapters. The code of these adapters lives in Mixer's codebase and runs inside the Mixer process, hence the name In-Process Adapter.

in-process-adapter.png

The problem with In-Process Adapters is that all adapter implementations are directly bound to Mixer, in both code and runtime. Updating an adapter therefore requires updating the whole of Mixer, a defect in any single adapter implementation affects all of Mixer, and the large number of adapters also brings a large number of CRDs. To address this, Istio 1.1 introduces the Out-of-Process Adapter.


out-of-process-adapter.png

An Out-of-Process Adapter runs in an independent process outside Mixer, so its development, deployment and configuration are all independent of Mixer, freeing Mixer from adapter implementation details.

However, the Out-of-Process Adapter introduces a new performance problem: what used to be a method call between Mixer and an In-Process Adapter now becomes a remote call to the Out-of-Process Adapter. Mixer has always been the most controversial part of Istio's architecture; the remote call between the proxy and Mixer was already a major performance bottleneck, and with Out-of-Process Adapters each request can trigger additional remote calls (every adapter configuration that takes effect means one more remote call), making performance even worse.

To sum up the Out-of-Process Adapter: the architecture becomes more elegant, but the performance gets worse.

While Istio 1.1 favored architecture over performance, there are other voices inside Istio, namely the Mixer v2 plan. The most important decision in this plan is to abandon the idea of a standalone Mixer and instead merge Mixer's functionality into Envoy, avoiding the overhead of remote calls between Envoy and Mixer. Seeing the community, one year on, reach a clear understanding of Mixer's performance problems, form a plan to merge Mixer, and finally begin to address this flaw in Istio's architectural design was gratifying for us at Ant Financial: it returns to the direction we chose when we launched the SOFAMesh project in June of last year.

Interested readers can find more background in the following articles (published a year ago, but still valid):

Mixer v2 is currently still at the review stage of planning, and the implementation approach has not been finalized. If Mixer is to be merged, then since the current Mixer is written in Golang while Envoy is written in C++, all adapters would have to be rewritten in C++, a huge amount of work that probably cannot be completed in the short term. There is, however, another novel (even wild) idea: introduce WebAssembly (WASM). Envoy is currently experimenting with WebAssembly support, and if that succeeds, supporting Mixer adapters via WebAssembly would be a good option.

Other community product news

Recently, a working group is being formed in CNCF to prepare the Universal Data Plane API (UDPA), a standard API for the data plane, intended to provide a de facto standard for L4/L7 data-plane configuration. The idea for the Universal Data Plane API comes from Envoy's xDS API implementation. The xDS v2 API is already the de facto standard data-plane API, and UDPA will be based on it. The working group's initial members come from the Envoy and gRPC projects, and Ant Financial is also actively participating. The UDPA working group is still at a very early, preparatory stage.

On April 17, 2019, Linkerd2 released its latest stable version, Linkerd 2.3. Linkerd2 is currently the only open source product positioned against Istio, though it keeps a low profile in China and has few users there. More interestingly, Buoyant, the startup behind Linkerd2, recently closed a Series B round led by Google's investment arm.

Cloud vendor news

As Service Mesh technology develops and all parties grow optimistic about its future, the major mainstream cloud vendors have begun investing in Service Mesh technology.

Let's start with AWS. In April 2019, AWS announced that App Mesh was GA. App Mesh is AWS's native service mesh, fully integrated with AWS, including:

  • Networking (AWS Cloud Map)
  • Compute (Amazon EC2 and AWS Fargate)
  • Orchestration (AWS EKS, Amazon ECS and customer-managed Kubernetes)

appmesh.png

App Mesh uses Envoy as its data plane and, very creatively, supports both VMs and containers across a variety of product forms, as shown in the figure above.

For more details on AWS App Mesh, see the article "Redefining application communication with AWS App Mesh".

Google's approach centers on Istio. It first launched Istio on GKE at the end of 2018, i.e. "one-click Istio integration", providing telemetry, logging, load balancing, routing and mTLS security capabilities. Google then introduced Google Cloud Service Mesh, a fully managed version of Istio that not only provides the complete feature set of open source Istio but also integrates with Stackdriver, an important Google Cloud product.

Recently, Google launched a beta of Traffic Director. Traffic Director is a fully managed service mesh control plane. It supports global load balancing for virtual machines and containers, provides hybrid cloud and multi-cloud support, centralized traffic control and health checks, and has one very special feature: auto-scaling based on traffic.

google-traffic-director.png

For details on Google Traffic Director, see my earlier blog post "Google Traffic Director in detail".

Microsoft has launched Service Fabric Mesh. Azure Service Fabric is Microsoft's microservices framework, designed for public cloud, on-premises, and hybrid and multi-cloud architectures. Azure Service Fabric Mesh is a fully managed product, with a preview version launched in August 2018.

service-fabric-mesh.png

Last week (May 21), Microsoft announced the Service Mesh Interface (SMI) at KubeCon. SMI is a specification for service meshes running on Kubernetes. It defines a common standard that can be implemented by a variety of vendors, giving end users the best of both standardization and vendor innovation. SMI is expected to bring flexibility and interoperability to Service Mesh.

SMI is an open project, launched jointly by Microsoft, Linkerd, HashiCorp, Solo, Kinvolk and Weaveworks, with support from Aspen Mesh, Canonical, Docker, Pivotal, Rancher, Red Hat and VMware.

smi.png

Service Mesh trends

Having covered the Service Mesh product news of the past six months, let's analyze the development trends of Service Mesh.

Trend 1: Cloud + hosting

Over these years of microservices and container development, we can observe a very interesting (even somewhat dumbfounding) phenomenon:

trend1.png

  • To address the complexity of the monolith, we introduced the microservice architecture;
  • To solve the problem of deploying the large number of applications a microservice architecture produces, we introduced containers;
  • To address container management and scheduling, we introduced Kubernetes;
  • To solve the invasiveness of microservice frameworks, we introduced Service Mesh;
  • And to give Service Mesh better underlying support, we run Service Mesh on Kubernetes.

In this process, from the standpoint of a single application (or microservice), complexity is indeed reduced: with the underlying systems in place, deployment, maintenance, management, control and monitoring are all greatly simplified. But from the perspective of the system as a whole, the complexity has not disappeared: it has been decomposed from the monolith into microservices and sunk from the application into the Service Mesh. The total amount of complexity has not decreased; it has increased significantly.

The best way to solve this problem is to go to the cloud and use hosted versions of Kubernetes and Service Mesh, leaving the complexity of the underlying systems to the cloud vendor, while customers simply enjoy the ease of use and powerful features that Service Mesh brings on top of cloud technology.

From the product updates shared earlier, you can see that the three giants Google, AWS and Microsoft have all launched their own hosted Service Mesh products, and in China, Alibaba Cloud and Huawei Cloud have similar offerings. We at Ant Financial will also launch a hosted version of SOFAMesh on our financial cloud later. To sum it up in one sentence:

Almost all major public cloud vendors are providing (or preparing to provide) hosted Service Mesh offerings.

Trend 2: Mixing VMs and containers

The second trend is mixing VMs and containers: the runtime environments supported by Service Mesh include not only containers (especially Kubernetes) but also virtual machines, with support for mutual access between services running in both environments, and even for hiding the differences between the two directly at the product level.

For example, Google's Traffic Director:

google-traffic-director.png

And AWS's App Mesh:

appmesh.png

Both provide direct product-level support for mixing VMs and containers: applications are supported whether they run on VMs or in containers, and can easily be migrated between the two.

Trend 3: Multi-cloud and hybrid cloud support

Multi-cloud and hybrid cloud support has recently become a hot technology and business direction; Google Cloud has even raised the slogan "All in Hybrid Cloud"!

Google Traffic Director unequivocally expresses the importance of hybrid cloud to Google Cloud:

google-traffic-director-hybird.png

Below is an example from Google Traffic Director of transforming an application: from a monolithic architecture to microservices, and from a private cloud to a hybrid cloud model combining private and public cloud.

google-traffic-director-hybird2.png

An ideal Service Mesh solution would undoubtedly enable the transformation shown above and provide multi-cloud and hybrid cloud support.

Trend 4: Combining with Serverless

Service Mesh and Serverless are technologies that operate in two different dimensions:

  • Service Mesh focuses on inter-service communication. Its goal is to relieve applications of the client SDK, providing capabilities including security, routing, policy enforcement and traffic management.
  • Serverless focuses on service operations. Its goal is to free customers from having to care about service operations, providing auto-scaling of service instances and pay-per-use billing.

In principle, Service Mesh and Serverless do not conflict and can be combined. In practice the industry has indeed started down this path, with the two merging in both directions:

  1. Introducing Service Mesh into Serverless: typical projects are Knative and Google Cloud Run, Google Cloud's hosted version of Knative. By introducing container support and building on Istio, Knative extends Serverless support beyond Functions, greatly broadening the scope of Serverless while also bringing inter-service communication capabilities into Serverless.
  2. Introducing Serverless into Service Mesh: typical products such as Google Traffic Director provide Service Mesh capabilities while supporting auto-scaling of service instances based on traffic, thus incorporating some Serverless characteristics.

As for combining Serverless and Service Mesh, our vision of the future is this: a new service model should emerge that combines the two. Simply orchestrate your services, and you automatically get Service Mesh's inter-service communication capabilities and Serverless's server-free operations. At Ant Financial, we see this end state as one form of the future cloud native application, and we are actively exploring practical ways to realize it.

servicemesh-serverless.png

Trend 5: Extending the Mesh pattern

Recall the core of the Service Mesh pattern: its basic principle is to relieve the application of the client SDK and run a proxy as an independent process; the goal is to sink the capabilities that used to live in the SDK, lighten the application's load, and help the application become cloud native.

Following this idea, the applicable scenarios of Service Mesh can be generalized: not limited to synchronous communication between services, the pattern can extend to any scenario characterized by network access made through a client SDK.

In our practice at Ant Financial, we found that the Mesh pattern applies not only to synchronous inter-service communication but can also extend to the following scenarios:

  • Database Mesh: database access
  • Message Mesh: messaging
  • Cache Mesh: caching

Ant Financial is exploring products for all of the patterns above, with related products under development and being trialed in production. The community also has related products; for example, the Apache ShardingSphere project, which Zhang Liang is driving, is pushing in the Database Mesh direction.

With more Mesh patterns, we can cover more scenarios and lighten applications in every respect, not just in the inter-service communication that Service Mesh addresses, thereby laying the groundwork for making applications cloud native.

Trend 6: Standardization, no lock-in

An important tenet of cloud native is to give users a consistent experience on the cloud, promote standardization, and avoid vendor lock-in.

From the product news shared earlier, you can see that the Service Mesh market now has a large number of vendors and products: open source and closed source, from big companies and small. This market boom has also brought fragmentation. All the work surrounding a business application, such as controlling traffic through the Service Mesh and configuring various security / monitoring / behavior policies, along with the tools and ecosystems built on these needs, ends up firmly tied to a specific Service Mesh implementation: so-called "vendor lock-in". The root cause is that implementations differ and there is no unified standard. To solve these problems, we must go to the root: standardize Service Mesh.

Just this month, there were two big events in the community's push to standardize Service Mesh:

  1. CNCF set up the Universal Data Plane API working group, planning to develop a standard data-plane API based on the xDS v2 API. The working group's initial members come from the Envoy and gRPC projects (which can be read as Google-led);
  2. Microsoft announced the Service Mesh Interface at KubeCon, preparing to define a specification for service meshes running on Kubernetes and to bring flexibility and interoperability to Service Mesh. SMI is led by Microsoft, together with Linkerd, HashiCorp, Solo, Kinvolk and Weaveworks.

To help understand these two standards, I prepared a diagram:

trend6.png

The Universal Data Plane API is a data-plane standard: the control plane controls the data plane's behavior through this API. The Service Mesh Interface is a control-plane standard: through SMI, upper-level applications, tools and ecosystems can work across different Service Mesh implementations and deliver a consistent end-user experience.

Of course, both of these standardization efforts are just getting started, and standardization work usually involves more than technical issues, with complex interests at play; the specific direction they will take is hard to call now, and we can only watch closely.

Development Trend Analysis

To summarize, the six recent Service Mesh trends listed above are:

trend-analysis.png

These trends are interrelated, and their common core is letting the cloud provide the capabilities, including:

  • Letting the cloud take on more responsibility
  • Providing higher-level abstractions
  • Covering more scenarios
  • Reducing the application's burden: making applications lightweight

And ultimately letting business applications focus purely on the business itself.

As for the future direction of Service Mesh technology, my view is this: Service Mesh must not develop in isolation, but within the larger cloud native environment, influencing and being influenced by other cloud native technologies, exchanging ideas and best practices, and promoting, supporting and developing alongside them. Cloud native is a huge technology system; Service Mesh needs the support and cooperation of this system to maximize its own advantages.

Service Mesh and cloud native

In this last section, let's talk about the relationship between Service Mesh and cloud native, which is what the title of this talk refers to: a cloud native mainstay.

Why?

What is Cloud native?

Before explaining, let me first ask a question: what is cloud native? I believe many of you have asked, or been asked, this question, and everyone may have their own understanding and wording. At the beginning of this year I asked myself this question, and then tried to give an answer:

Cloud native means "designed for the cloud from the start". Specifically: applications are natively designed to run in the cloud in the optimal way, making full use of the cloud's advantages.

cloud-native.png

I won't elaborate further on this understanding of cloud native here; interested readers can look at my earlier talks, which go into more depth. Allow me the audacity to recommend them:

  • Talking about cloud native (part 1): how should we understand cloud native? What should a cloud native application look like? How should middleware evolve toward cloud native?
  • Talking about cloud native (part 2): how will the cloud and applications converge? How can products be made more cloud native?

The core value of Service Mesh

As I understand it, the core value of Service Mesh is not the dazzling array of functions and features it provides, but this:

The separation of business logic from non-business logic

Stripping the implementation of non-business functions out of the client SDK into an independent proxy process is the first step Service Mesh takes, and an essential one: this step achieves the separation of business logic from non-business logic, and it is the most thorough separation possible, a physical one, even at the price of a remote call.

And once this step is taken, a brighter future opens up ahead:

  • After non-business logic is separated from business logic, the non-business logic can continue to sink
  • It can sink into the infrastructure, which can be based on VMs, on containers and Kubernetes, or a mix of VMs and containers
  • The infrastructure can also be provided in the form of a cloud: public cloud, private cloud, hybrid cloud or multi-cloud;
  • You can choose cloud hosting, fully managed or partially managed, with very flexible product forms

To conclude, the separation of business logic from non-business logic:

  • Makes it possible to sink non-business logic into the infrastructure
  • Makes it possible to provide it on the cloud
  • Makes it possible to make applications lightweight

Note: the cloud here refers to a cloud native (Cloud Native) cloud, not a cloud-ready (Cloud Ready) cloud.

Mesh is a key step in landing cloud native

Over the past year, Ant Financial has been exploring ways to land cloud native, and in the process we have gained some insights. A very important one: Mesh technology is a key step in landing cloud native.

cloud-native-important-step.png

As shown in the figure:

  • The bottom layer is the cloud, built on Kubernetes and containers, offering various basic capabilities, some of which have sunk down from traditional middleware
  • On top of the cloud is the Mesh layer, comprising Service Mesh and the various extensions of the Mesh pattern mentioned earlier, standardizing communication
  • With Mesh stripping out and sinking non-business functions, applications become lightweight; both traditional applications and emerging microservices benefit from this
  • Further, once business applications are lightweight, their workloads become quite clean, with only the basic business logic left; these workloads, whether traditional apps, containerized services or the new Function form, then become much easier to convert to Serverless in many cases

Combined with the latest technology trends and product developments in the Serverless field (represented by projects like Knative, Serverless is no longer just about Functions; it also supports BaaS and even some traditional workloads), Mesh-ification provides existing applications with the impetus for transformation to the Serverless model.

Here I'll also share Ant Financial's view of the future of middleware products: we believe the middleware of the future will be Mesh-ified and merged into the infrastructure, as shown below:

middleware-future.png

On the left is the traditional form of middleware. In the cloud native era, we want to strip the non-business functions out of the traditional rich clients, then sink these middleware capabilities, and the backends behind them, into the infrastructure and into the cloud. Middleware products will merge into the infrastructure, as shown on the right. In the future, middleware will become part of the infrastructure and the cloud, with Mesh as the bridge connecting applications to the infrastructure and to other middleware products.

More importantly, business applications are thereby slimmed down: with the various non-business functions stripped away, business applications can focus solely on business logic, achieving the transition from traditional applications to cloud native applications.

To summarize: Service Mesh technology achieves the separation of business logic from non-business logic, making it possible to lighten applications and make them cloud native; and by sinking the various non-business functions into the infrastructure and the cloud, it greatly enhances the capabilities of the cloud and the infrastructure, providing a strong boost for landing cloud native.

We therefore believe: Service Mesh technology will play an extremely important and indispensable role in landing cloud native.

Service Mesh Prospects

Finally, let's look once more at the future of Service Mesh.

On the left are the two most important driving forces of Service Mesh's early development: multi-language support and library upgrades. Almost everyone who has promoted the Service Mesh concept and its basic products over the past two years has mentioned these two points. But today I want to point out: this is the starting point of Service Mesh, not its destination.

The future of Service Mesh will not stop at the simple, direct goals of multi-language support and library upgrades; it will follow the cloud native trend, solve real needs, and at the same time try to stay out of the way of the business applications above it.

At the end of this talk, I want to leave you some homework for the determined: if you want a better understanding of the value of Service Mesh and a clearer picture of its future direction, and especially if you want to understand Service Mesh through your own thinking and insight rather than by simply being told (including by me), try the following independent thinking exercise:

  1. Set aside the two points on the left; don't limit your thinking to them;
  2. Think in the context of cloud native, combining your own understanding of cloud native and your expectations of the cloud;
  3. For the six trends on the right, forget my earlier descriptions and consider only the real scenarios and customer needs behind them, along with the business value those scenarios bring; then carefully compare the two solutions, with Service Mesh and without;
  4. Throughout, look at the problem from the perspective of the business application: assuming you are an application on the cloud (remember the little baby in the earlier diagram?), how would you want to be treated?

I hope this thinking exercise brings you something of value.

Epilogue: welcome to the SOFAStack Cloud Native Workshop

Finally, an advertisement: you are welcome to attend the SOFAStack Cloud Native Workshop, where we will introduce, and let you experience hands-on, the fully managed financial-cloud version of SOFAMesh from Ant Financial. I'll be waiting for you at the workshop at KubeCon Shanghai.

workshop.png

Tip: scan the QR code above with DingTalk, or go directly to the KubeCon event page (click "read the original") to see the workshop details.

WeChat official account: Financial-grade distributed architecture (Antfin_SOFA)



Origin juejin.im/post/5cede31e5188253cfb7a3c3a