Docker Microservice Architecture

What the hell is all that noise about microservices?

As more and more products were built from reusable REST-based services, teams quickly discovered that splitting business functionality into reusable services was very effective, but that it also came with a risk: every time you update a service, you have to retest every service deployed alongside it. And even if you are confident a code change won't affect other services, you can never really be sure, because services inevitably end up sharing code, data and other components.

With the rise of containerization, code can run very efficiently in a completely isolated environment. Combining the two allows product architectures to be optimized for fine-grained scalability and versioning, at the price of increased complexity and some duplication of code.

Is containerization just the new virtualization?

Not quite. Containers and virtual machines share some similarities in that both are isolated environments managed by a controlling process (a container manager and a hypervisor, respectively), but the main difference is what each one runs: a virtual machine runs a complete stack of components, from the operating system up to the application server, on emulated virtual hardware including network interfaces, CPU and memory.

Containers, by contrast, run as isolated sandboxes that share the kernel and resources of the underlying operating system, so only a minimal operating system footprint appears inside each container. The biggest advantage of containerization is exactly that smaller footprint: you can run far more container instances than virtual machines on the same hardware. Containers also have key limitations, the biggest being that they can only run on Linux-based operating systems (kernel-level isolation is a Linux-specific technology).

Related to this limitation, Docker (currently the most popular container platform) cannot run natively on Mac or Windows systems, because they are not Linux. The workaround is to start a Linux virtual machine with VirtualBox and then run Docker inside that virtual machine. Fortunately, this is mostly managed for you by Docker Toolbox (formerly Boot2Docker).

Docker has gained so much support that Docker Hub, the public repository for container images, hosts over 136,000 public images. Many of these are created by individuals, some extending "official" images and customizing them to their needs, while others are full platform configurations built up from "base" images. We will use these "base" and "official" images to start our journey.

So we've talked about microservices and containerization, but where does Spring Boot come into play?

I chose to build my microservices in Java, specifically with the Spring Boot framework. I chose it mainly because I am familiar with Spring, because it makes it easy to develop REST controllers, business services and data repositories, and because it can readily accommodate Scala's Akka/Play programming model. One of the best-known advantages of a microservices architecture is that services are completely independent, so nothing outside a service needs to (or should) care what language or platform it is built on.

Personally, I think the maintenance cost of being multilingual outweighs the flexibility it buys, but there are use cases where it applies, such as when a department within a large organization has standardized on a different tech stack. Another scenario is deciding to switch from one language or platform to another: you can migrate one microservice at a time while keeping the same web service endpoints. The goals of this series are as follows:
• A guide to setting up microservices and Docker from beginning to end.
• An understanding of the trade-offs behind the many decisions in a microservices architecture, from source control to service versioning and everything in between.
• An analysis of "pure" microservice doctrine and how it holds up in a real-world scenario.
• A look at how Docker stands up to the hype and what it takes to run Docker for professional development.
• A complete solution built from a collection of microservices, each in its own container, with a persistence layer hosted in its own container, organized as a container cluster.
• Other valuable content along the way.


The business scenario I will simulate is an employee task-assignment and recognition system for a software development company, which includes the following flows:
• An employee logs into the system
• The employee sees a list of available tasks, such as writing a blog post about an emerging technology, attending meetups, or holding a code review
• The employee submits completed tasks to their manager for approval
• The employee earns points for completed tasks
• The employee redeems points for rewards, such as company gifts, a one-on-one lunch with the CEO, and so on


This is a good place to end the first article: we have started to understand microservices, containerization, and the business scenario we will build against. In part two, we will set up the relevant tools, dive into how Docker works, and build our first container.




The goal of building a PaaS cloud platform on a microservice architecture and Docker container technology is to give our developers a process for rapid service development, deployment, operations management, and continuous delivery and continuous integration. The platform provides infrastructure, middleware, data services, cloud servers, and other resources; developers only need to write business code, submit it to the platform's code repository, and make a few necessary configurations, and the system automatically builds and deploys it, enabling agile development and fast iteration. Architecturally, the PaaS cloud platform consists of three parts: the microservice architecture, Docker container technology, and DevOps. This article focuses on the implementation of the microservice architecture.

Implementing microservices from scratch requires a great deal of infrastructure work, which is obviously unrealistic for many companies. Fortunately, the industry already has very good open-source frameworks to draw on. The relatively mature microservice frameworks today include Netflix's stack, Spring Cloud, and Alibaba's Dubbo. Spring Cloud is a set of frameworks for implementing microservices on top of Spring Boot; it provides the components microservice development requires, and used together with Spring Boot it makes building cloud services on a microservice architecture very convenient. Spring Cloud contains many sub-projects, of which Spring Cloud Netflix is one, and our architecture uses many of its components. When I studied the framework, the Spring Cloud Netflix project had not been around long and there was very little documentation, so I worked through a great deal of English documentation, which was painful. Newcomers who want to build a microservice architecture with it may not know where to start, so below I walk through how we built our architecture and which frameworks and components support it.

To show the composition and principles of the microservice architecture clearly, I drew a system architecture diagram:

[Figure: system architecture diagram]

As the figure shows, the general request path is: external request → load balancing → service gateway → microservices → data services/message services. Both the service gateway and the microservices use service registration and discovery to call the other services they depend on, and every service cluster can obtain configuration from the configuration center service.

Service Gateway

The gateway is the gate between external clients (browsers, mobile devices, etc.) and the enterprise's internal systems: all client requests reach the backend services through the gateway. To cope with highly concurrent access, the service gateway is deployed as a cluster, which means it needs load balancing in front of it. We use Amazon EC2 as virtual cloud servers and ELB (Elastic Load Balancing) for load balancing. EC2 supports automatic capacity scaling: when user traffic peaks, it can automatically add capacity to maintain the performance of the virtual hosts. ELB automatically distributes incoming application traffic across multiple instances. To ensure security, client requests are protected with HTTPS, which requires SSL offloading; we use Nginx to terminate the encrypted connections. After ELB load balancing, an external request is routed to one gateway instance in the gateway cluster, which forwards it to a microservice. As the boundary of the internal system, the service gateway provides the following basic capabilities:

1. Dynamic routing: requests are dynamically routed to the appropriate backend service cluster. Although the inside is a complex mesh of distributed microservices, from the outside the gateway makes the system look like a single service, shielding callers from the complexity of the backend.

2. Rate limiting and fault tolerance: capacity is allocated for each type of request, and external requests beyond the threshold are discarded, limiting traffic and protecting backend services from being overwhelmed. When an internal service fails, the gateway produces a response directly at the boundary and focuses on fault-tolerant handling instead of forwarding the request into the cluster, preserving a good user experience.

3. Authentication and security control: each external request is authenticated, requests that fail authentication are rejected, and access-pattern analysis is used to implement anti-scraping protection.

4. Monitoring: the gateway collects meaningful data and statistics that provide support for optimizing the backend services.

5. Access logging: the gateway records access information, such as which service was accessed, how the request was processed (including any exception), what the result was, and how long it took. Analyzing these logs helps optimize the backend system further.

We use Zuul, an open-source component of the Spring Cloud Netflix stack, to implement the gateway service. Zuul works through a chain of filters of different types, and by writing our own filters we can flexibly implement the gateway's various functions.
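The filter idea can be sketched without the Spring Cloud classpath. The interface below is a simplified, framework-free stand-in for Zuul's `ZuulFilter` (which distinguishes "pre", "route" and "post" filters via `filterType()` and orders them via `filterOrder()`); the authentication check is a hypothetical example:

```java
import java.util.Map;

// Framework-free sketch of a Zuul-style "pre" filter: inspect the request
// before it is routed and short-circuit it when authentication is missing.
// Real Zuul filters extend com.netflix.zuul.ZuulFilter and implement
// filterType(), filterOrder(), shouldFilter() and run(); the interface and
// header map here are simplified stand-ins.
interface GatewayFilter {
    boolean shouldFilter(Map<String, String> headers);
    String run(Map<String, String> headers); // non-null = short-circuit response
}

class AuthPreFilter implements GatewayFilter {
    @Override
    public boolean shouldFilter(Map<String, String> headers) {
        return true; // apply to every request
    }

    @Override
    public String run(Map<String, String> headers) {
        if (!headers.containsKey("Authorization")) {
            return "401 Unauthorized"; // reject at the boundary
        }
        return null; // null = hand the request on to dynamic routing
    }
}
```

In the real gateway, a filter like this runs before routing, so unauthenticated traffic never reaches the internal cluster.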

Service registration and discovery

Since a microservice architecture is a mesh of fine-grained, single-responsibility services that communicate over lightweight mechanisms, it introduces the problem of service registration and discovery: a service provider must register its address, and a service caller must be able to discover its target service. Our architecture uses the Eureka component for service registration and discovery. All microservices register with the Eureka server (by configuring the Eureka service information) and send periodic heartbeats as health checks. By default, a client sends a heartbeat every 30 seconds to signal that the service is still alive; the interval can be changed through Eureka's configuration parameters. After receiving an instance's last heartbeat, the Eureka server waits 90 seconds by default (also configurable) before declaring the service dead, that is, after three consecutive missed heartbeats; if self-preservation mode is off, the service's registration is then removed. Self-preservation mode means that when a network partition causes Eureka to lose too many services in a short period, it stops evicting instances: a service that has not sent a heartbeat for a long time will not be deleted. Self-preservation is on by default and can be switched off via configuration.
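The heartbeat and eviction timings above map onto standard Eureka properties. A sketch of the relevant configuration, assuming Spring Cloud Netflix property names; the server address is a placeholder:

```yaml
eureka:
  instance:
    lease-renewal-interval-in-seconds: 30    # heartbeat every 30s (the default)
    lease-expiration-duration-in-seconds: 90 # evict after 90s of silence (the default)
  client:
    service-url:
      defaultZone: http://eureka-server:8761/eureka/  # placeholder address
  # On the Eureka server side, self-preservation is on by default:
  # server:
  #   enable-self-preservation: false
```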

The Eureka service itself is deployed as a cluster (the deployment is described in detail in another of my articles). All Eureka nodes in the cluster automatically and periodically synchronize the microservice registrations, so every node holds the same registration information. How, then, does a Eureka node discover the other nodes in the cluster? We use a DNS server to associate all Eureka nodes, so besides deploying the Eureka cluster we also need to set up a DNS server.

When the gateway forwards an external request, or backend microservices call each other, the caller looks up the target service's registration in the Eureka server, discovers the target service and calls it; together this forms the full cycle of service registration and discovery. Eureka has a large number of configuration parameters, numbering in the hundreds, which I explain in detail in another article.

Microservice deployment

Microservices are a set of single-responsibility, fine-grained services that split our business into independent units, giving good scalability and low coupling; different microservices can be developed in different languages, and each service handles a single piece of business. Microservices can be divided into front-end services (also called edge services) and back-end services (also called middle services). Front-end services aggregate and tailor back-end services as needed and expose them to different external devices (PC, phone, etc.). All services register with the Eureka server at startup, and there are intricate dependencies between them. When the gateway forwards an external request to a front-end service, it finds the target service by querying the service registry; front-end services calling back-end services do the same, and a single request may involve calls among several services. Since every microservice is deployed as a cluster, service-to-service calls need load balancing, so each service carries an LB component for client-side load balancing.
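The client-side load balancing mentioned here (Ribbon, in our stack) reduces to picking an instance from the registry on each call. A minimal round-robin sketch, with hypothetical instance addresses; the real Ribbon rule additionally filters out unhealthy instances:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal client-side round-robin chooser in the spirit of Ribbon's default
// rule: each call picks the next instance in the registered list.
class RoundRobinBalancer {
    private final List<String> instances;
    private final AtomicInteger next = new AtomicInteger(0);

    RoundRobinBalancer(List<String> instances) {
        this.instances = instances;
    }

    // Pick the next instance, wrapping around the list.
    String choose() {
        int i = Math.floorMod(next.getAndIncrement(), instances.size());
        return instances.get(i);
    }
}
```

Each service keeps a balancer per dependency, refreshed from Eureka, so calls spread evenly across the target cluster.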

Microservices run as images inside Docker containers, and Docker container technology makes service deployment simple and efficient. Traditional deployment requires installing a runtime environment on every server: with a large number of servers that is an extremely heavy task, and whenever the environment changes everything must be reinstalled, which is simply catastrophic. With Docker, we only need to build a new image from the required base image (a JDK, etc.) plus the microservice, and deploy that final image to run in a container. This approach is simple, efficient, and allows services to be deployed quickly. Each microservice runs in its own Docker container, the containers are deployed in clusters, and Docker Swarm is used to manage them. We maintain an image repository that stores all the base images and the generated delivery images, and manage all images there.
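The build-an-image-per-service flow described above can be as small as a few Dockerfile lines. A minimal sketch, assuming a Spring Boot fat jar; the base image, jar name and port are placeholders:

```dockerfile
# Layer the service jar on a base JDK image; jar name and port are placeholders.
FROM openjdk:8-jre-alpine
COPY target/employee-service-0.1.0.jar /app/app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/app/app.jar"]
```

The image is then built with `docker build -t employee-service .`, pushed to the image repository, and run with `docker run -p 8080:8080 employee-service`.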

Service fault tolerance

There are intricate dependencies between microservices, and a single request may depend on multiple backend services. In real production those services may fail or respond slowly, and in a high-traffic system a single delayed service can exhaust system resources in a short time and bring down the whole system; a service that cannot isolate and tolerate failures is therefore catastrophic in itself. Our microservices architecture uses the Hystrix component for fault tolerance. Hystrix is an open-source Netflix component that provides elastic fault-tolerance protection for services through mechanisms such as the circuit-breaker pattern, isolation, fallbacks and rate limiting, keeping the system stable.

1. Circuit-breaker pattern: the principle resembles an electrical fuse: when a short circuit occurs, the fuse blows to protect the circuit from catastrophic damage. When a service throws errors or shows heavy latency, the caller trips the breaker once the tripping conditions are met, executes the fallback logic and returns immediately, rather than continuing to call the service and dragging the system down further. By default the breaker trips when the error rate of service calls exceeds 50%. After the service has been isolated for a period, the breaker enters a half-open state in which a small number of trial requests are allowed through: if they still fail, the breaker returns to the open state; if they succeed, the breaker closes.
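The closed/open/half-open cycle just described fits in a few dozen lines. A self-contained sketch of the state machine (not the real Hystrix implementation, which also tracks rolling statistical windows and timeouts); the thresholds are illustrative:

```java
// Circuit-breaker state machine: CLOSED -> OPEN when the error rate crosses
// a threshold (Hystrix defaults to 50%), OPEN -> HALF_OPEN after a sleep
// window, then one trial call decides between CLOSED and OPEN again.
class CircuitBreaker {
    enum State { CLOSED, OPEN, HALF_OPEN }

    private State state = State.CLOSED;
    private int calls = 0, failures = 0;
    private final int minCalls;          // don't trip on tiny samples
    private final double errorThreshold; // e.g. 0.5 = 50%
    private final long sleepWindowMs;    // how long to stay OPEN
    private long openedAt = 0;

    CircuitBreaker(int minCalls, double errorThreshold, long sleepWindowMs) {
        this.minCalls = minCalls;
        this.errorThreshold = errorThreshold;
        this.sleepWindowMs = sleepWindowMs;
    }

    synchronized boolean allowRequest(long now) {
        if (state == State.OPEN && now - openedAt >= sleepWindowMs) {
            state = State.HALF_OPEN; // let one trial request through
        }
        return state != State.OPEN;
    }

    synchronized void recordSuccess() {
        if (state == State.HALF_OPEN) { state = State.CLOSED; calls = failures = 0; }
        else calls++;
    }

    synchronized void recordFailure(long now) {
        if (state == State.HALF_OPEN) { state = State.OPEN; openedAt = now; return; }
        calls++; failures++;
        if (calls >= minCalls && (double) failures / calls >= errorThreshold) {
            state = State.OPEN; // trip: subsequent calls go straight to fallback
            openedAt = now;
        }
    }
}
```

A caller checks `allowRequest` before each remote call and runs its fallback whenever the breaker refuses the request.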

2. Isolation: Hystrix uses thread isolation by default: different services use different thread pools and do not affect each other, so when one service fails and exhausts its thread pool resources, the normal operation of other services is unaffected, achieving isolation. For example, we can configure a service to use a thread pool named TestThreadPool via andThreadPoolKey, isolating it from pools with other names.

3. Fallback: the fallback mechanism is the fault-tolerant path taken when a service call fails; the principle is similar to exception handling in Java. You inherit from HystrixCommand, override the getFallback() method and write the handling logic there, such as throwing an exception directly (fail fast), returning null or a default value, or returning backup data. When the service call fails, execution diverts to getFallback(). The following situations trigger the fallback:

1) The program throws an exception other than HystrixBadRequestException (a HystrixBadRequestException is propagated so the caller can catch it, and does not trigger the fallback; any other exception does);

2) The call times out;

3) The circuit breaker is open;

4) The thread pool is full.

4. Rate limiting: limiting concurrent access to a service by setting a cap on concurrent calls per unit of time; requests beyond the limit are rejected and fall back, preventing the backend service from being overwhelmed.
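A minimal illustration of such a cap is a fixed-window counter: requests inside the window increment a counter, and anything over the limit is rejected so the caller can fall back. This is a simplified stand-in, not Hystrix's actual semaphore or thread-pool limiting:

```java
// Fixed-window rate limiter sketch: at most `limit` acquisitions per window.
class WindowLimiter {
    private final int limit;
    private final long windowMs;
    private long windowStart = 0;
    private int count = 0;

    WindowLimiter(int limit, long windowMs) {
        this.limit = limit;
        this.windowMs = windowMs;
    }

    synchronized boolean tryAcquire(long now) {
        if (now - windowStart >= windowMs) { // new window: reset the counter
            windowStart = now;
            count = 0;
        }
        if (count < limit) {
            count++;
            return true;
        }
        return false; // over the cap: caller falls back instead of calling on
    }
}
```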

Hystrix uses the command pattern (HystrixCommand) to wrap dependent call logic, so that each call automatically runs under Hystrix's elastic fault-tolerance protection. The calling program inherits from HystrixCommand, writes the call logic in run(), and triggers execution with execute() (synchronous, blocking) or queue() (asynchronous, non-blocking).
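The command shape just described (subclass, put the remote call in run(), the fallback in getFallback(), trigger with execute()) can be sketched without the Hystrix dependency. The classes below are simplified stand-ins, not the real com.netflix.hystrix types, and the user-service call is simulated:

```java
// Stripped-down stand-in for the HystrixCommand pattern: the real Hystrix
// wraps this same shape with circuit breaking, isolation and timeouts.
abstract class Command<T> {
    protected abstract T run() throws Exception;  // the protected call
    protected abstract T getFallback();           // what to return on failure

    public T execute() {
        try {
            return run();
        } catch (Exception e) {
            return getFallback(); // any failure routes to the fallback
        }
    }
}

// Hypothetical example: fetch a user's name, fall back to a default value.
class UserNameCommand extends Command<String> {
    private final boolean serviceUp; // stands in for a real remote call

    UserNameCommand(boolean serviceUp) { this.serviceUp = serviceUp; }

    @Override
    protected String run() throws Exception {
        if (!serviceUp) throw new RuntimeException("user-service down");
        return "alice";
    }

    @Override
    protected String getFallback() {
        return "anonymous"; // default value, as in fallback case 3 above
    }
}
```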

Dynamic configuration center

Microservices have many configuration dependencies, and some parameters may need to change dynamically while a service is running, such as adjusting the circuit-breaker threshold to match traffic. The traditional approach of putting configuration in xml or yml files packaged with the application means every change requires resubmitting the code, rebuilding and repackaging, generating a new image and restarting the service, which is far too inefficient and clearly unreasonable. So we need a dynamic configuration center service to support dynamic configuration of the microservices. We use Spring Cloud's Config Server to build it. The microservice code we develop lives in a private repository on a git server, and all configuration files that need dynamic configuration are managed on the git server by the config server (the configuration center, itself a microservice); microservices deployed in Docker containers read their configuration from the git server at startup. When a local git repository pushes changes to the git server, a git hook (post-receive, invoked automatically after the server finishes updating the code) checks whether any configuration files changed; if so, the git server sends a message through the message queue to the configuration center (the configserver, a microservice deployed in a container), notifying it to refresh the corresponding configuration files. In this way the microservices always obtain the latest configuration information, achieving dynamic configuration.
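On the client side, pointing a service at the config server is a small bootstrap configuration. A sketch of a bootstrap.yml, assuming Spring Cloud Config's standard properties; the application name and server address are placeholders:

```yaml
spring:
  application:
    name: employee-service           # maps to employee-service.yml in the config repo
  cloud:
    config:
      uri: http://config-server:8888 # placeholder config-server address
```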

The frameworks and components above are the core of our microservice architecture implementation. In real production we also use many other components, such as logging and messaging components, chosen according to business needs. Our implementation draws on many open-source components of the Spring Cloud Netflix stack, including Zuul (service gateway), Eureka (service registration and discovery), Hystrix (service fault tolerance), and Ribbon (client-side load balancing). These excellent open-source components gave us a shortcut to implementing a microservice architecture.

The sections above introduced the basic principles of a microservice architecture; I hope they serve as a useful reference.
