Is the era of microservice 2.0 coming? Where should programmers go?

It has been nearly three years since microservice architecture began to emerge. The early Spring Cloud Netflix stack has matured, and Spring Cloud has complemented it with new solutions to common problems in the cloud, such as Sleuth, Zipkin and Contract.

But the architecture is now evolving in a different direction. In this article we analyze the path microservice architecture has taken so far, and the tools and technologies that will accompany us in the future.

The birth of microservices

To go back to the origins, we must return to early 2015, when the concept of "microservices" began to gain traction in Spain. The first microservice development stack, the now-popular Netflix stack, was released in March 2015.

Today it is still the most watched and most popular of all the cloud solutions that Spring Cloud integrates.

The other two solutions (Consul and Zookeeper) use components different from those of the Netflix stack, which include Zuul, Ribbon and Hystrix. Initially, the architecture consisted of the following parts:

  • Config Server: externalizes configuration, allowing us to centralize all the configuration of the ecosystem. It is not part of the Netflix stack (Netflix uses Archaius); it was developed by Spring.
  • Eureka: a server that registers microservices and metadata about them.
  • Ribbon: a library that balances requests on the client side. It communicates with Eureka to obtain the registry of available instances of each microservice.
  • Hystrix: a library for managing cascading failures using the circuit-breaker pattern.
  • Zuul: serves as API gateway/edge server, the entry point to the microservice ecosystem.
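The circuit-breaker pattern that Hystrix implements can be illustrated with a minimal, framework-free sketch (real Hystrix adds thread isolation, rolling metrics windows and half-open probes; the class name and threshold here are illustrative, not Hystrix's actual API):

```java
import java.util.function.Supplier;

// Minimal sketch of the circuit-breaker pattern popularized by Hystrix.
public class CircuitBreakerSketch {
    private final int threshold;        // consecutive failures before opening
    private int consecutiveFailures = 0;

    public CircuitBreakerSketch(int threshold) {
        this.threshold = threshold;
    }

    public boolean isOpen() {
        return consecutiveFailures >= threshold;
    }

    // Run the remote call if the circuit is closed; otherwise (or on failure)
    // return the fallback immediately instead of letting errors cascade.
    public <T> T call(Supplier<T> remote, Supplier<T> fallback) {
        if (isOpen()) {
            return fallback.get();      // short-circuit: do not touch the remote
        }
        try {
            T result = remote.get();
            consecutiveFailures = 0;    // a success closes the circuit again
            return result;
        } catch (RuntimeException e) {
            consecutiveFailures++;      // failures accumulate toward the threshold
            return fallback.get();
        }
    }

    public static void main(String[] args) {
        CircuitBreakerSketch breaker = new CircuitBreakerSketch(2);
        Supplier<String> failing = () -> { throw new RuntimeException("timeout"); };
        Supplier<String> fallback = () -> "cached-response";

        breaker.call(failing, fallback);      // 1st failure
        breaker.call(failing, fallback);      // 2nd failure: the circuit opens
        System.out.println(breaker.isOpen()); // prints "true"
    }
}
```

Once open, the breaker answers from the fallback without touching the failing dependency, which is exactly what stops one slow service from dragging down its callers.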

For those of us used to monolithic architectures, this set of components may seem oversized, but it solves the main requirements of a distributed architecture: registration, centralized configuration, load balancing, resilience to failure...

On the deployment side, microservices are naturally associated with containers. Here we used the solution everyone knows and the most popular on the market: Docker.

Another issue was container orchestration. We were among the early adopters of OpenShift 3, Red Hat's solution based on Kubernetes, launched in June 2015.

But the reality is that various container orchestration solutions already existed; of course, none of them was very mature, and none had much market share.

The establishment of microservices

Since its emergence in 2015, microservice architecture quickly gained importance and has kept growing over time, driven by the success of cloud solutions that adopted it as their main architectural style; the two have fed off each other.

As with any successful architecture or tool, a series of applications and libraries appeared to cover functionality that was not initially considered. One example is request traceability, a common requirement in distributed systems that at first had no solution beyond manual ones.

These and other needs are reflected in new libraries that complete our ecosystem, some of which are:

  • Sleuth: a library that lets us trace distributed requests across different applications/microservices by propagating a set of headers.
  • Zipkin: a server that stores the timing data of distributed requests for correlation and latency analysis.
  • Contract: a library that lets us apply consumer-driven contracts, increasing confidence that our changes will not break any API consumer.
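As an illustration, wiring Sleuth and Zipkin into a Spring Boot service is mostly configuration. Here is a sketch of an `application.yml` (property names follow Spring Cloud Sleuth 2.x and may differ between versions; the service name and Zipkin host are illustrative):

```yaml
spring:
  application:
    name: orders-service             # service name attached to every trace
  zipkin:
    base-url: http://zipkin:9411     # where spans are reported
  sleuth:
    sampler:
      probability: 1.0               # sample 100% of requests (lower this in production)
```

With these properties on the classpath alongside the Sleuth and Zipkin starters, trace and span IDs are added to logs and outgoing headers automatically.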
The evolution did not stop there: standard stacks also began to be defined for other functions, and components essential for logging and monitoring started to emerge.

At this point, logging and monitoring stacks such as Elasticsearch-Logstash-Kibana (or its FluentD variant) became an indispensable part of these new architectures, further increasing their popularity.

With all these new tools and libraries we enjoy a more complete, though also more complex, ecosystem that covers practically all of our needs.

On the other hand, microservice designs also created a need for non-blocking communication. At the time there was no fully integrated solution and Vert.x was used; later, Spring 5 provided support through its reactive stack (WebFlux).

The rise of Kubernetes

As we commented earlier, when these new architectures appeared there were really not many container orchestration solutions on the market.

Kubernetes, OpenShift and Docker Swarm all reached version 1.0.0 in 2015, and Mesos in 2016... There was no leading solution in the market.

Over time, an obvious dominator has emerged: Kubernetes, along with Kubernetes-based solutions such as OpenShift.

Because of this, we can already find managed Kubernetes offerings on different platforms: Google Kubernetes Engine, Amazon's AWS EKS, etc.

Similarly, some of the functions discussed at the beginning of this post, such as the load balancing, registration and centralized configuration performed by Ribbon, Eureka and Config Server, can also be provided by the PaaS. So why use the versions of these features provided by Spring Cloud Netflix?

This is a question several customers ask frequently. The answer is simple: when these architectures started, there was no orchestration solution in the market.

Including these pieces (Eureka, Ribbon...) in the software architecture makes it more portable: because these services are contained in the artifacts themselves, applications can be moved between different cloud solutions without worrying about the availability of these horizontal services.

Likewise, the Spring Cloud Netflix solutions are more powerful than those usually provided by cloud platforms. These are some of the additional features they provide:

  • Ribbon, besides letting us implement our own balancing algorithm, ships with several different algorithms, offering more flexibility than the typical round-robin or random balancing included in a PaaS.
  • Eureka allows us to store and query additional information about each instance in the registry: URL, metadata... In PaaS solutions we usually cannot choose what information goes into the registry.
  • Config Server provides a hierarchical property system and lets us query the different branches and/or tags of the git repository.
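The difference between these balancing algorithms is easy to see in a toy, framework-free sketch of client-side balancing (the `Rule` interface mimics the spirit of Ribbon's pluggable `IRule`; all names here are illustrative, not Ribbon's actual API):

```java
import java.util.List;
import java.util.Random;
import java.util.concurrent.atomic.AtomicInteger;

// Toy sketch of client-side load-balancing strategies.
public class BalancerSketch {

    public interface Rule {
        String choose(List<String> instances);
    }

    // Round-robin: walk the instance list in order, wrapping around.
    public static class RoundRobinRule implements Rule {
        private final AtomicInteger position = new AtomicInteger(0);

        public String choose(List<String> instances) {
            int index = Math.floorMod(position.getAndIncrement(), instances.size());
            return instances.get(index);
        }
    }

    // Random: pick any instance uniformly at random.
    public static class RandomRule implements Rule {
        private final Random random = new Random();

        public String choose(List<String> instances) {
            return instances.get(random.nextInt(instances.size()));
        }
    }

    public static void main(String[] args) {
        List<String> instances =
            List.of("10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080");
        Rule roundRobin = new RoundRobinRule();
        // Successive calls cycle deterministically over the registered instances.
        System.out.println(roundRobin.choose(instances));   // 10.0.0.1:8080
        System.out.println(roundRobin.choose(instances));   // 10.0.0.2:8080
        System.out.println(roundRobin.choose(instances));   // 10.0.0.3:8080
        System.out.println(roundRobin.choose(instances));   // 10.0.0.1:8080
    }
}
```

In the real library the instance list would come from Eureka rather than being hard-coded, and custom rules can weigh in zone affinity, response times, and so on.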
We had an architecture with all these possibilities, but we did not take full advantage of them. This is what happens with most clients: they do not need such advanced features, and the equivalents provided by the PaaS are enough.

Today, Kubernetes-based cloud solutions are the dominant PaaS in the market. If we think about the concept of a PaaS, its purpose is clear: abstract away lower-level functions and resources so that applications can focus on business logic. All of these functions clearly fall outside the business scope.

This allows us to slim our applications down to just the business logic, making the separation between the layers of the system much clearer.

These are the features of Spring Cloud Netflix that Kubernetes can absorb:

1. Registration, load balancing and health checks (Eureka and Ribbon)

When a new pod running a microservice appears in Kubernetes, it joins the system; but unlike the Eureka-Ribbon combination, load balancing is not done on the client side, so an application in Kubernetes does not have to know all the existing instances of a service (with Eureka, the client does, via the Eureka client).

What the application in the pod knows is the Kubernetes Service layer, an abstraction that groups a service's instances. The client calls this Service, which keeps a constant address and balances each request to a specific target instance.

Kubernetes is also responsible for performing periodic health checks on the instances. With Eureka, by contrast, it is the instance itself that informs the server of its availability.
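In Kubernetes terms, the registration, balancing and health checking described above boil down to a Service plus probes on the pods. A sketch (names, image and paths are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: orders              # stable name/address that clients call; Kubernetes balances
spec:
  selector:
    app: orders             # "registration": pods with this label become endpoints
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders
spec:
  replicas: 3
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
        - name: orders
          image: example/orders:1.0
          ports:
            - containerPort: 8080
          readinessProbe:   # Kubernetes checks health; the instance does not self-report
            httpGet:
              path: /health
              port: 8080
```

Clients simply call `http://orders`; pods that fail the readiness probe are dropped from the Service's endpoints until they recover.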

2. Centralized configuration (Config Server)

Recent versions of Kubernetes provide ConfigMaps. These allow us to store properties separately and expose them either as environment variables or as property files (local or remote).
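A minimal sketch of a ConfigMap consumed as environment variables (all names and values are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: orders-config
data:
  DATABASE_URL: jdbc:postgresql://db:5432/orders
  LOG_LEVEL: INFO
---
# In the pod spec, the ConfigMap's entries are injected as environment variables:
apiVersion: v1
kind: Pod
metadata:
  name: orders
spec:
  containers:
    - name: orders
      image: example/orders:1.0
      envFrom:
        - configMapRef:
            name: orders-config
```

The same ConfigMap could instead be mounted as a volume so the application reads it as a properties file.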

However, Kubernetes still cannot cover some of the functions that Spring Cloud Netflix does, so it does not let us decouple completely: cascading failure management, the API gateway, request traceability... And this is where the next big step in microservice architecture comes in.

The birth of a new favorite

If we think about which part of microservice architecture causes us the most problems, most people would agree that it is the network: everything to do with latency, management of remote call failures, balancing, request traceability, calls to instances that no longer exist or have dropped...

Responsibility for these situations is spread across different layers. For example, the PaaS (or the registration service) is responsible for giving us a list of healthy instances; Hystrix is responsible for managing external calls, controlling timeouts and handling failure situations...

It is in this gray area, nestled between the application layer and the PaaS, where most problems arise, and it is here that we find the new revolution in microservice architecture.

Istio

Istio is a service mesh solution based on Google's experience and good practices implementing large-scale services. It was developed jointly with IBM and Lyft and released as open source in May 2017, with plans to release a new version every month.

For those unfamiliar with the service mesh concept, this definition seems the best:

"A service mesh is a dedicated infrastructure layer for making service-to-service communication safe, fast, and reliable. If you're building a cloud native application, you need a service mesh" — Buoyant's blog: What's a service mesh? And why do I need one?

Service mesh is a concept that began to take off strongly last year. Evidence of this is that companies with huge traffic volumes such as PayPal or Ticketmaster are already using one, and that Envoy and Linkerd are already part of the Cloud Native Computing Foundation.

Before discussing why this is such a big change in the microservices world, let's see how it is implemented.

Istio is a tool that takes the functions we used to place in the layer below (the PaaS) and the layer immediately above (the application) and manages everything related to network communication.

In fact, Istio does not introduce new functions; it moves the existing functions into an intermediate layer.

To do this, it places a proxy next to each of our applications that intercepts all their network communication and manages it to provide reliability, resilience and security.

This pattern of placing a proxy next to the application is known as sidecar proxy. In Kubernetes, an additional container with this proxy is deployed inside the same pod as our application's container.
Istio uses Envoy as its default sidecar proxy, which accompanies every one of our microservices. Linkerd can also be used for the data plane.
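Conceptually, the resulting pod holds two containers: the application and the Envoy proxy. In practice Istio injects the sidecar automatically; this hand-written sketch (with illustrative names and a simplified proxy image reference) just shows the shape:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: orders
  labels:
    app: orders
spec:
  containers:
    - name: orders            # the application container, unchanged
      image: example/orders:1.0
    - name: istio-proxy       # the Envoy sidecar: all traffic in and out goes through it
      image: istio/proxyv2    # illustrative image reference
```

Because the proxy owns the pod's network traffic, the application code needs no mesh-specific libraries at all.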

The fact that Istio runs in a container separate from our application produces a greater separation between the service mesh itself and the application.

In addition, by absorbing the functions of libraries such as Ribbon and Hystrix, it completely frees the application from managing this architectural complexity.

By handling everything related to network communication, Istio provides us with many functions, including:

  • Request routing: we can route requests by different criteria such as source application, destination, application version, request headers... We can also split traffic by percentages or mirror it, which enables canary deployments and A/B testing.
  • Health checks and load balancing: it keeps track of healthy instances and balances across them using the different available algorithms.
  • Timeout and circuit-breaker management: we can configure the timeouts, circuit breakers and retries of the different services...
  • Fault injection: to test the resilience of our applications we can inject two types of fault: delays and aborted requests.
  • Quota management: allows call limits to be set.
  • Security: secure communication between services, role-based access with authentication of both ends of the communication, whitelists and blacklists...
  • Monitoring and logging: logging, capture of service mesh metrics, distributed tracing...

It can be deployed on different infrastructures: Kubernetes, environments with Eureka- or Consul-based registries, and soon also CloudFoundry and Mesos.
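Several of these features come together in Istio's routing rules. A sketch of a canary traffic split with a timeout and retries (a `v1alpha3`-style VirtualService; host and subset names are illustrative):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: orders
spec:
  hosts:
    - orders
  http:
    - route:
        - destination:
            host: orders
            subset: v1
          weight: 90          # 90% of traffic stays on the stable version
        - destination:
            host: orders
            subset: v2
          weight: 10          # 10% goes to the canary
      timeout: 2s             # fail the call if it exceeds 2 seconds overall
      retries:
        attempts: 3
        perTryTimeout: 500ms
```

Shifting the weights gradually from 90/10 to 0/100 completes the canary rollout without touching application code.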

If we look carefully at its functions, we see that it takes over many responsibilities of the Netflix suite: Hystrix's circuit breaking and timeout management, Ribbon's zone-aware request balancing...

In addition, Istio integrates with some of the solutions already used by Spring Cloud, such as Zipkin, and it can work in environments that use Eureka as a registry.

It also integrates with other existing solutions on the market for metric storage, logging, quota management... e.g. Prometheus, FluentD, Redis...

Concluding remarks

Finally, thank you all for reading. The above is a personal opinion; if it helped you, remember to like, share and follow!

Origin blog.csdn.net/Lubanjava/article/details/103479543