Microservices full stack: in-depth core components and development techniques


Microservices architecture, simply put, is a design approach in which an application is organized as a set of small, autonomous services, typically built around business capabilities. These services run independently of one another and communicate through well-defined APIs. Compared with a monolithic application, microservice architecture provides greater flexibility and scalability, allowing teams to develop, deploy, and scale each service independently.


1. Service registration and discovery

In the world of microservices, service registration and discovery are key mechanisms to ensure that each independent service can find and interact with other services. As applications increase in size and complexity, it becomes critical to clearly understand and manage the interactions between these services.

1.1. Client registration (ZooKeeper)

Apache ZooKeeper, a cornerstone of distributed systems, has earned broad industry respect. Many distributed systems, including various microservice frameworks, rely on ZooKeeper for critical services such as naming, configuration management, group membership, and distributed synchronization. Here, however, we will focus on its use as a client-side registry in a microservice architecture.

ZooKeeper Introduction
ZooKeeper was originally created at Yahoo and later became a top-level Apache project. It is designed for distributed applications and provides a set of services through which those applications can keep working in the face of partial failures. It achieves this with a core architecture that links modest server nodes into a powerful distributed ensemble.

ZooKeeper’s data model

ZooKeeper's data structure is much like a distributed file system, consisting of directories and files. But in ZooKeeper, each node is called a "znode". Each znode can store data and can have child nodes.

When a microservice wants to register itself, it creates a znode for itself in ZooKeeper. Typically, this znode will store key information about the service, such as its IP address, port, and any other metadata.

Service registration process

  1. Startup and connection : When a microservice starts, it initializes a connection to the ZooKeeper cluster.
  2. Create znode : A microservice creates a znode at a specified path, usually based on the name of the service.
  3. Store data : The service will store its metadata in this znode. This metadata may include IP address, port, version number, startup time, etc.
  4. Periodic heartbeat : To let ZooKeeper know the service is still alive, the client maintains its session with periodic heartbeats. If the service registered with an ephemeral znode, the znode is removed automatically when the session expires.
  5. Logout : When a service shuts down cleanly, it deletes its znode from ZooKeeper.

[ZooKeeper client registration]

import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;

// Initialize a ZooKeeper client and register a service
public class ServiceRegistry {

    private static final String ZK_ADDRESS = "localhost:2181";
    private ZooKeeper zooKeeper;

    public ServiceRegistry() throws Exception {
        // Connect to ZooKeeper (5-second session timeout, no-op watcher)
        this.zooKeeper = new ZooKeeper(ZK_ADDRESS, 5000, watchedEvent -> {});
    }

    // Register a service under /services/<serviceName>
    // (assumes the parent node /services already exists)
    public void registerService(String serviceName, String serviceInfo) throws Exception {
        String path = "/services/" + serviceName;
        if (zooKeeper.exists(path, false) == null) {
            // EPHEMERAL: the znode disappears automatically when the
            // service's session expires, so crashed instances drop out
            zooKeeper.create(path, serviceInfo.getBytes(),
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
        }
    }
}

// Usage:
ServiceRegistry registry = new ServiceRegistry();
registry.registerService("myService", "serviceInstanceInfo");

ZooKeeper’s consistency model

ZooKeeper uses a protocol called "Zab" to ensure the consistency of its data. The Zab protocol ensures that all write operations are ordered, meaning that all operations on multiple nodes are performed in the same order.

Security

ZooKeeper provides an ACL-based security model that allows administrators to control which clients can perform which operations. This is useful to prevent malicious or misconfigured clients from causing harm to the system.


Summary

ZooKeeper, as a key component of distributed systems, provides a reliable and highly available service registration platform for microservices. By understanding its inner workings, we can better leverage it to power our microservices architecture.

1.2. Third-party registration (independent service Registrar)

With the increasing popularity of microservices architecture, registering each service directly can sometimes become complex and time-consuming. Therefore, it is necessary to introduce a third-party service registration mechanism, that is, an independent service Registrar, to help manage these services.

What is a third-party service Registrar?

The third-party service Registrar is an intermediate layer between microservices and service registration centers. It can automatically detect, register and unregister microservices. Rather than relying directly on each microservice to register itself, this approach provides a centralized location for management and monitoring.

Why is third-party registration required?

  1. Automated management : As microservices increase, manually registering, updating, and deregistering service instances can become cumbersome. Third-party registration can handle these tasks automatically.
  2. Centralized monitoring : Using a third-party registry, developers and operations teams can monitor the status and health of all services in one place.
  3. Better security : Since all registration and deregistration operations go through a central point, you can better control which services are allowed to register, preventing the registration of malicious services.

How third-party registration works

  1. Service detection : Registrar will periodically scan the network or specific endpoints to find new service instances.
  2. Service registration : Once a new service instance is discovered, Registrar will automatically register it to the service registration center.
  3. Health Check : Registrar regularly checks the health of each service instance. If a service instance is found to be no longer healthy or unreachable, it will unregister the instance from the service registry.
  4. Metadata management : For services that require additional configuration, Registrar can store and manage these metadata to ensure that each service runs as expected.
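The detect/register/health-check cycle above can be sketched in miniature. The class below is a hypothetical in-memory registrar, not the API of any particular product: `register` stands in for what happens after service detection, and `healthCheck` for the periodic pass that deregisters unhealthy instances.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Hypothetical third-party registrar: maintains the registry on behalf
// of the services instead of each service registering itself.
public class Registrar {

    // The "service registry" the registrar maintains: instance id -> address
    private final Map<String, String> registry = new HashMap<>();

    // Called when a scan discovers a new service instance
    public void register(String instanceId, String address) {
        registry.put(instanceId, address);
    }

    // Called on each health-check pass; unhealthy instances are deregistered
    public void healthCheck(String instanceId, boolean healthy) {
        if (!healthy) {
            registry.remove(instanceId);
        }
    }

    public Set<String> registeredInstances() {
        return registry.keySet();
    }
}
```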

Use cases

The following are several scenarios where a third-party service Registrar may be needed:

  • Large deployments : With hundreds or thousands of microservice instances, it is impractical to manually manage each instance.
  • Dynamic environment : In a cloud environment, service instances may be started and shut down frequently. Third-party registration ensures that the service registry is always up to date.
  • High security requirements : In high security environments, it may be necessary to ensure that only trusted services can register.

Challenges and considerations

  1. Network overhead : Due to the need to frequently check the health of service instances, a large amount of network traffic may be generated.
  2. Single point of failure : If the Registrar itself fails, it may affect all service registration and deregistration operations.


Using the third-party service Registrar can greatly simplify the management and monitoring of microservices. However, selecting and deploying an appropriate Registrar solution requires careful planning and testing to ensure it meets the needs of your specific environment.

1.3. Client Discovery

In the world of microservices, service discovery is one of the core components. When a service needs to interact with another service, it first needs to know the location of the other service. This is the purpose of service discovery. In client discovery mode, the calling service is responsible for knowing which service instance it should interact with.

What is client discovery?

Client discovery is a pattern of service discovery in which a client or consumer service is responsible for determining the available service instances in the network and then communicating directly with an instance. This is in contrast to the server-side discovery model, where the API gateway or load balancer decides which service instance should be talked to.

How client discovery works

  1. Registration : Whenever a service instance starts and becomes available, it registers its address with the service registry.
  2. Query : When the client needs to communicate with the service, it first queries the service registration center to obtain a list of all available service instances.
  3. Selection : The client selects one from the obtained service list for communication. This usually involves some form of load balancing, such as round robin or random selection.
  4. Communication : The client communicates directly with the selected service instance.
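Steps 2–4 can be sketched from the client's side. The instance list below stands in for the result of a registry query, and round robin is one of the load-balancing strategies mentioned above; the addresses are placeholders.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal client-side load balancer: the client holds the instance list
// (fetched from the service registry) and picks one instance per call.
public class RoundRobinSelector {

    private final List<String> instances;
    private final AtomicInteger counter = new AtomicInteger(0);

    public RoundRobinSelector(List<String> instances) {
        this.instances = instances;
    }

    // Cycle through the instances in order, wrapping around at the end
    public String next() {
        int index = Math.floorMod(counter.getAndIncrement(), instances.size());
        return instances.get(index);
    }
}
```

The selection strategy is exactly the part client discovery leaves to each client: swapping `next()` for a random or latency-aware pick requires no registry changes.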

Advantages

  1. Flexibility : Clients can implement their own load balancing strategies as needed.
  2. Reduced latency : There are no intermediary components (such as API gateways or load balancers) to handle requests, thus reducing communication latency.

Disadvantages

  1. Client complexity : Each client must implement service discovery and load balancing logic.
  2. Consistency challenge : All clients must update their service discovery logic and policies consistently.

Tools and techniques for client discovery

Many service discovery tools, such as Eureka, Consul, and Zookeeper, support client discovery mode.

  1. Eureka : Eureka, created by Netflix, is one of the most popular service discovery tools in microservices architecture. The Eureka client provides built-in load balancing strategies and can be easily integrated with Spring Cloud.
  2. Consul : Developed by HashiCorp, Consul provides a versatile service discovery solution that supports health checks, KV storage, and multiple data centers.
  3. Zookeeper : As mentioned earlier, Zookeeper is a distributed coordination service, which is also often used for service discovery.


Client discovery provides a flexible, low-latency way for microservices to find and communicate with other services. However, it also increases client complexity and requires logical and policy consistency across all clients. Choosing whether to use client-side discovery depends on your specific needs and constraints.

1.4. Server-side discovery

Server-side discovery is a common service discovery pattern in microservice architecture. As opposed to client-side discovery, server-side discovery moves the responsibility of finding services from the client to the server.

What is server-side discovery?

In server-side discovery, the client application first requests a central load balancer or API gateway to know the location of the service. This central component queries the service registry, determines the location of the service instance, and then routes the request to that service instance.

How server-side discovery works

  1. Registration : As with client discovery, service instances register their location with the service registry when they start.
  2. Routed requests : Clients send their requests to a central load balancer or API gateway rather than directly to a service instance.
  3. Select a service instance : The load balancer queries the service registry, finds available service instances and decides which instance to route the request to.
  4. Request forwarding : The load balancer forwards the client's request to the selected service instance.

[Discover services from ZooKeeper]

// Discover service instances from ZooKeeper
// (assumes the same zooKeeper client field as in ServiceRegistry above)
public List<String> discoverService(String serviceName) throws Exception {
    String path = "/services/" + serviceName;
    return zooKeeper.getChildren(path, false);
}

// Usage:
List<String> serviceInstances = discoverService("myService");

Advantages

  1. Simplified client : The client logic is simpler and only needs to know the location of the central load balancer.
  2. Centralized traffic management : Traffic shapes, routing and load balancing policies can be managed from a central location.

Disadvantages

  1. Increased latency : Requests pass through an extra network hop, which can add slight delay.
  2. Single point of failure risk : If there is a problem with the central load balancer or API gateway, all requests may be affected.


Use cases

Server-side discovery is particularly suitable for environments with high client diversity, such as mobile applications, third-party developers, or multiple front-end interfaces.

1.5. Consul

Consul is a service discovery and configuration distribution tool developed by HashiCorp. It is designed to provide high availability and support across data centers.


Main features of Consul

  1. Service discovery : Consul enables applications to provide and discover other services, and provides health checks to determine the health status of service instances.
  2. Key/Value Store : A distributed key/value store for configuration and dynamic service configuration.
  3. Multiple data centers : Consul supports multiple data centers, making it ideal for large-scale applications.

How to use Consul

  1. Installation and running : Consul is a single binary file that can be downloaded from its official website. It runs in agent mode, with a Consul agent on each node.
  2. Service registration : Services can be registered by writing a service definition file and loading it with the consul agent, or by calling Consul's HTTP registration API.
  3. Health check : Consul can regularly check the health status of service instances through various methods (such as HTTP, TCP and executing specified scripts).
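As a sketch, the JSON body for Consul's HTTP registration endpoint (PUT /v1/agent/service/register) can be assembled as below. The string concatenation is only to keep the example dependency-free; in practice you would use a JSON library or an official client, and the service name, address, and health URL here are placeholders.

```java
// Builds the registration body for Consul's agent API. The Check block
// tells Consul to poll the given health URL every 10 seconds.
public class ConsulPayload {

    public static String registrationJson(String name, String id,
                                          String address, int port,
                                          String healthUrl) {
        return "{"
                + "\"Name\":\"" + name + "\","
                + "\"ID\":\"" + id + "\","
                + "\"Address\":\"" + address + "\","
                + "\"Port\":" + port + ","
                + "\"Check\":{\"HTTP\":\"" + healthUrl + "\",\"Interval\":\"10s\"}"
                + "}";
    }
}
```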

Consul vs. other service discovery tools

While Eureka, Zookeeper, and other tools also provide functionality for service discovery, Consul offers some unique features such as multi-datacenter support and key/value storage.

1.6. Eureka

Eureka is a service discovery tool open sourced by Netflix, particularly well suited to large distributed systems in cloud environments. Its name comes from the Greek exclamation meaning "I have found it!"

Eureka’s core components

  1. Eureka server : Provides the service registry. Every client application that offers a service registers with Eureka and supplies metadata about itself.
  2. Eureka client : A Java client that simplifies interaction with the Eureka server and includes a built-in load balancer.

How Eureka works

  1. Service registration : When the Eureka client starts, it registers its own information with the Eureka server and periodically sends heartbeats to renew the contract.
  2. Service consumption : The service consumer obtains the registry information from the Eureka server and caches it locally. Consumers will use this information to find service providers.
  3. Service goes offline : When the client shuts down, it sends a request to the Eureka server asking it to delete the instance in the registry.
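The heartbeat-and-renewal idea behind step 1 can be sketched with an in-memory lease table. This is not Eureka's API, just the lease model it implements: instances that stop renewing are evicted once their lease window passes. The lease length and timestamps below are illustrative.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Each registration carries a lease that must be renewed by heartbeats;
// instances whose lease has expired are evicted from the registry.
public class LeaseRegistry {

    private final long leaseMillis;
    private final Map<String, Long> lastHeartbeat = new HashMap<>();

    public LeaseRegistry(long leaseMillis) {
        this.leaseMillis = leaseMillis;
    }

    // A heartbeat renews the instance's lease
    public void renew(String instanceId, long nowMillis) {
        lastHeartbeat.put(instanceId, nowMillis);
    }

    // Evict instances that missed their renewal window, return the live ones
    public Set<String> liveInstances(long nowMillis) {
        lastHeartbeat.entrySet().removeIf(e -> nowMillis - e.getValue() > leaseMillis);
        return lastHeartbeat.keySet();
    }
}
```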

Features of Eureka

  1. Availability : Eureka handles partial failures caused by network issues well. If a client cannot reach the Eureka server because of a network partition, it falls back to its locally cached copy of the registry to resolve services.
  2. Load balancing : The Eureka client includes a load balancer that can provide load balancing for requests to service instances.
  3. Integration with Spring Cloud : Eureka can be seamlessly integrated with Spring Cloud, making it ideal for Spring Boot applications.

1.7. SmartStack

SmartStack is a service discovery tool developed by Airbnb and is based on two main components: Nerve and Synapse.

Nerve

Nerve is a daemon designed to run on each service instance. It is responsible for registering the service with Zookeeper. If a service instance becomes unhealthy, Nerve will be responsible for deregistering it from Zookeeper.

Synapse

Synapse is another daemon designed to run on every machine that needs to discover services. It periodically pulls service registration information from Zookeeper and updates the configuration of its local load balancer (such as HAProxy).

SmartStack Features

  1. Automatic health checks : Nerve and Synapse work together to ensure that only healthy service instances are routed.
  2. Resilience and reliability : SmartStack ensures high availability of services, even in the face of network partitions or other failures.
  3. Integration with existing technology : Using Zookeeper as the central storage and HAProxy as the load balancer, SmartStack can be easily integrated with existing technology stacks.

1.8. Etcd

Etcd is an open source, highly available distributed key-value store, which is mainly used for shared configuration and service discovery. Developed by CoreOS, etcd is designed for large clusters, specifically to provide reliable data storage for Kubernetes.

The core features of etcd

  1. Consistency and high availability : Etcd is based on the Raft algorithm to ensure data consistency in distributed systems.
  2. Distributed locks : Use etcd to implement locking mechanisms for distributed systems.
  3. Monitoring and alerting : Key-value pairs can be monitored for changes, such as configuration changes or service registration/unregistration.
  4. Simple API : etcd provides a simple RESTful API, making it easy to integrate with various applications.

How to use Etcd

  1. Installation and startup : Its binaries can be downloaded from etcd’s GitHub repository. After etcd is started, it will start listening for client requests.
  2. Key-value operations : Using the HTTP API or the provided command line client etcdctl, users can set, get, delete and monitor key-value pairs.
  3. Service discovery : In etcd, service instances store their addresses and other metadata as key-value pairs when they are started. Clients requiring these services can query etcd to discover them.
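The key layout behind step 3 can be illustrated with a sorted map standing in for etcd's range query: each instance writes a key like /services/&lt;name&gt;/&lt;instance&gt;, and clients query by prefix. Real code would use an etcd client such as jetcd; the key naming convention here is assumed for the example.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.TreeMap;

// In-memory stand-in for etcd's prefix (range) query over sorted keys.
public class PrefixStore {

    private final TreeMap<String, String> store = new TreeMap<>();

    public void put(String key, String value) {
        store.put(key, value);
    }

    // All key-value pairs whose key starts with the given prefix
    public Map<String, String> getPrefix(String prefix) {
        Map<String, String> result = new LinkedHashMap<>();
        for (Map.Entry<String, String> e : store.tailMap(prefix).entrySet()) {
            if (!e.getKey().startsWith(prefix)) break;  // past the prefix range
            result.put(e.getKey(), e.getValue());
        }
        return result;
    }
}
```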

Comparison of etcd with other service discovery tools

Compared with tools such as Zookeeper and Consul, etcd provides a simpler and more direct API. It is designed to meet the needs of modern container clusters such as Kubernetes, so it is ideally suited for use in this environment.

2. API Gateway

In a microservice architecture, the API gateway is a server that acts as the system's entry point, responsible for request routing, API composition, load balancing, authentication, authorization, security, and so on.

Why do you need an API gateway?

The API gateway is a single server, effectively the only node through which requests enter the system. This is very similar to the Facade pattern in object-oriented design. The API gateway encapsulates the internal system architecture and exposes APIs to the various clients. It may also take on other responsibilities such as authorization, monitoring, load balancing, caching, request sharding and management, and static response handling.

  1. Single entrance : Provide a unified API entrance for external consumers, hiding the internal structure of the system.
  2. API composition : Combine the operations of multiple microservices into a single composite operation, thereby reducing the number of requests and responses between the client and the server.
  3. Load balancing : Distribute incoming requests to multiple instances to improve system scalability and availability.
  4. Security : Centralized security measures such as authentication, authorization and SSL handling.

Common Features of API Gateways

  1. Request routing : Forwarding API requests to the appropriate microservice.
  2. Request/response transformation : Modify the request and response format between the client and the service.
  3. API aggregation : Combining data and functionality from multiple services into a single, consistent API.
  4. Security : Includes rate limiting, authentication, and authorization.
  5. Caching : Cache responses to common requests, reducing response times and the load on the services behind the gateway.

[API gateway function example]

import org.springframework.cloud.gateway.route.RouteLocator;
import org.springframework.cloud.gateway.route.builder.RouteLocatorBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// A simple Spring Cloud Gateway configuration example
@Configuration
public class ApiGatewayConfiguration {

    @Bean
    public RouteLocator gatewayRoutes(RouteLocatorBuilder builder) {
        // Route any request matching /service-api/** to the downstream service
        return builder.routes()
            .route(r -> r.path("/service-api/**")
                .uri("http://localhost:8080/"))
            .build();
    }
}

The API gateway is responsible for request forwarding, composition, and protocol conversion. All client requests go through the API gateway first, which then routes them to the appropriate microservices. The gateway often handles a single request by calling multiple microservices and aggregating their results. It can also translate between web protocols such as HTTP and WebSocket and the non-web-friendly protocols used internally. The figure below shows an API Gateway adapted to the current architecture.

2.1. Request forwarding

In the microservice architecture, request forwarding is one of the core functions of the API gateway. When a client makes a request, it is the API Gateway's responsibility to determine which service should handle the request and forward it to the appropriate service instance.

How it works

  1. Dynamic routing : Instead of hard-coding the address of a specific service, the gateway determines the route dynamically. This is usually based on a service discovery mechanism such as Eureka or etcd discussed earlier.
  2. Load balancing : Requests are not just forwarded to any service instance, but the load and health of each instance are taken into account.
  3. Filter chain : Before and after forwarding the request, the gateway can apply a series of filters, such as security filters, response transformation filters, etc.

Forwarding strategy

  1. Round Robin : Select each service instance in turn.
  2. Least Connections : Forward requests to the instance with the fewest connections.
  3. Latency-aware : Consider the latency of each instance to decide on forwarding.
  4. Geolocation : Forward based on the geographical location of the request source.
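As an illustration of strategy 2, a least-connections selector can be sketched as follows. It is a simplified stand-in: a real gateway tracks connection counts concurrently and per backend, while this version keeps a simple in-memory tally.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// "Least connections" forwarding: route each new request to the instance
// that currently has the fewest active connections.
public class LeastConnections {

    private final Map<String, Integer> activeConnections = new HashMap<>();

    public void addInstance(String instance) {
        activeConnections.putIfAbsent(instance, 0);
    }

    // Pick the least busy instance and count the new connection against it
    public String acquire() {
        String chosen = Collections.min(activeConnections.entrySet(),
                Map.Entry.comparingByValue()).getKey();
        activeConnections.merge(chosen, 1, Integer::sum);
        return chosen;
    }

    // Release the connection when the request completes
    public void release(String instance) {
        activeConnections.merge(instance, -1, Integer::sum);
    }
}
```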

2.2. Response merging

In a microservices environment, a client request may require multiple services to work together to produce the final response. API gateway can aggregate responses from multiple services to provide a unified and consistent response to the client.

Use cases

  1. Combined views : For example, a user's profile view might need to get data from the user service, order service, and review service.
  2. Analytics and reporting : Aggregate data from multiple services to generate complex reports.

Implementation

  1. Parallel requests : API Gateway can send requests to multiple services in parallel, thereby reducing overall response time.
  2. Data transformation : Convert and standardize data formats from different services.
  3. Error handling : Decide what to do when one of the services returns an error or times out.
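The parallel fan-out in point 1 can be sketched with CompletableFuture. The suppliers below stand in for real HTTP calls to the individual services (user, orders, and so on); a production gateway would also apply the timeout and error handling from point 3.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.function.Supplier;

// Fan out to several backend calls in parallel and merge the partial
// results into one combined response for the client.
public class ResponseMerger {

    public static Map<String, Object> merge(Map<String, Supplier<Object>> calls) {
        Map<String, CompletableFuture<Object>> futures = new LinkedHashMap<>();
        // Fire all calls in parallel
        for (Map.Entry<String, Supplier<Object>> e : calls.entrySet()) {
            futures.put(e.getKey(), CompletableFuture.supplyAsync(e.getValue()));
        }
        // Collect each partial result into the combined response
        Map<String, Object> merged = new LinkedHashMap<>();
        futures.forEach((name, future) -> merged.put(name, future.join()));
        return merged;
    }
}
```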

2.3. Protocol conversion

As technology develops, different services may use different communication protocols. An API gateway can act as a protocol converter, converting client requests from one protocol to another.

example

  1. HTTP to gRPC : The client may use HTTP/REST, while the internal service uses gRPC. API Gateway can convert these two types of communication.
  2. Version conversion : Older clients may use outdated API versions. The gateway can convert these requests into new version requests.

2.4. Data conversion

In a microservices architecture, different services may use different data formats due to historical reasons, technology choices, or team preferences. API gateway acts as an intermediary between microservices and clients, and sometimes needs to convert data formats.

Use cases

  1. Version compatibility : When a service upgrades and changes its data format, to ensure that older clients can still work, the gateway can convert the data in the old format to the new format.
  2. Format standardization : Convert XML to JSON, or vendor-specific formats to standard formats.

Data transformation strategy

  1. XSLT transformation : For XML data, you can use XSLT to transform the data.
  2. JSON conversion : Use libraries such as Jackson or Gson to convert JSON data.
  3. Data Mapping : Defines the mapping between source and target data structures.
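Strategy 3 (data mapping) can be sketched as a declarative mapping from source field names to target field names, applied one record at a time. The field names here are invented for the example; a record is represented as a plain map rather than a parsed JSON or XML document.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Apply a source-field -> target-field mapping to one record.
public class DataMapper {

    public static Map<String, Object> map(Map<String, Object> source,
                                          Map<String, String> fieldMapping) {
        Map<String, Object> target = new LinkedHashMap<>();
        for (Map.Entry<String, String> m : fieldMapping.entrySet()) {
            if (source.containsKey(m.getKey())) {
                // Copy the value under its new name
                target.put(m.getValue(), source.get(m.getKey()));
            }
        }
        return target;
    }
}
```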

2.5. Security and authentication

API gateways often bear responsibility for application security because they are the first point of contact for all inbound requests.

Main security features

  1. Authentication : Determine who the requester is. Common methods include token-based authentication such as JWT.
  2. Authorization : Determines what the requestor can do. For example, some users may only have read access, while others have write permissions.
  3. Rate Limiting : Limit the rate of requests based on user or IP address to prevent abuse or attacks.
  4. Firewall functionality : Block requests from malicious sources, or block certain types of requests.

Implementation Strategy

  1. API Key : Each request must include an API key, which is used by the gateway to identify and authenticate the requester.
  2. OAuth : A standard authorization framework that allows third-party applications limited access to user accounts.
  3. JWT (JSON Web Tokens) : A compact, self-contained way to represent claims transferred between parties.
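The structure behind a JWT is easy to see by decoding one: a token is three Base64URL-encoded parts joined by dots (header.payload.signature), and the middle part carries the claims. This sketch decodes only the payload and deliberately skips signature verification, which a real gateway must perform before trusting any claim.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Decode the claims (payload) part of a JWT. Does NOT verify the signature.
public class JwtPayload {

    public static String decodePayload(String jwt) {
        String[] parts = jwt.split("\\.");
        if (parts.length < 2) {
            throw new IllegalArgumentException("not a JWT");
        }
        // The payload is the second Base64URL-encoded segment
        byte[] decoded = Base64.getUrlDecoder().decode(parts[1]);
        return new String(decoded, StandardCharsets.UTF_8);
    }
}
```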

3. Configuration Center

In the microservice architecture, the configuration center is a service that stores external configuration. External configuration is configuration separate from the application and can be changed without restarting the application.

Why do you need a configuration center?

  1. Dynamic changes : Dynamically change configurations at runtime without restarting the service.
  2. Centralized management : For large systems with a large number of microservices, centralized management of configurations is necessary.
  3. Version control : Save historical versions of configurations and be able to roll back to previous versions.

3.1. Zookeeper Configuration Center

Apache ZooKeeper is a high-performance, distributed, open source coordination service for distributed applications. Although it is not specifically designed for configuration management, it is often used in this scenario.


Advantages of ZooKeeper Configuration Center

  1. High availability : Due to its distributed nature, ZooKeeper is able to provide high availability and fault tolerance.
  2. Real-time : When configuration changes, related service instances can be notified in real time.
  3. Distributed locks : ZooKeeper supports distributed locks, which are useful for synchronizing configurations across multiple services.

How to use ZooKeeper as a configuration center

  1. Create nodes : In ZooKeeper, you can create persistent nodes or temporary nodes to store configuration information. Temporary nodes disappear when the client disconnects.
  2. Listening for configuration changes : A service can listen for changes in its configuration nodes. When other services or administrators change the configuration, the service is notified and can reload the configuration.
  3. Versioning : ZooKeeper provides a version number for each znode (data node in ZooKeeper), which helps avoid problems with concurrent changes.

[Get configuration from ZooKeeper]

// Fetch a configuration value from ZooKeeper
public String getConfig(String configKey) throws Exception {
    String path = "/config/" + configKey;
    if (zooKeeper.exists(path, false) != null) {
        return new String(zooKeeper.getData(path, false, null));
    }
    return null;
}

// Usage:
String myConfigValue = getConfig("myConfigKey");

3.2. Configuration center data classification

In a large microservices environment, configuration data can be huge and needs to be managed and classified effectively.

Classified by environment

  1. Development environment : The configuration used locally by developers.
  2. Test environment : An environment used for QA and automated testing.
  3. Production environment : The environment used by actual users.

Classified by service

For each microservice, there is its own configuration.

Classified by function

For example, database configuration, message queue configuration, third-party service configuration, etc.

Permissions and access control

Not every service or person should have access to all configurations. The configuration center should support role-based access control to ensure that only authorized services or personnel can read or modify configurations.


4. Event Scheduling (Kafka)

Apache Kafka is a distributed stream processing platform used to build real-time, fault-tolerant, high-throughput data stream pipelines. In microservices architecture, Kafka is often used as a core component of event-driven architecture.

Advantages of Kafka

  1. High throughput : Kafka is designed to handle millions of events or messages per second.
  2. Durability : Messages are saved even if the consumer is temporarily unavailable or crashes.
  3. Distributed : Kafka clusters can be distributed across multiple machines to provide fault tolerance and high availability.

Application of Kafka in microservices

  1. Event sourcing : Record every event that occurs, for transaction, auditing, or recovery purposes.
  2. Data integration : Integrate data from multiple microservices into a large data warehouse or data lake.
  3. Asynchronous processing : Decoupling producers and consumers through Kafka allows asynchronous processing.

[Publish events using Kafka]

import org.apache.kafka.clients.producer.*;
import java.util.Properties;

// Kafka event publishing service
public class KafkaProducerService {

    private final Producer<String, String> producer;
    private static final String TOPIC = "event-topic";

    public KafkaProducerService(Properties properties) {
        this.producer = new KafkaProducer<>(properties);
    }

    public void sendEvent(String key, String value) {
        producer.send(new ProducerRecord<>(TOPIC, key, value));
    }

    // Close the producer once on shutdown, not after every send
    public void close() {
        producer.close();
    }
}

// Usage:
Properties properties = new Properties();
properties.put("bootstrap.servers", "localhost:9092");
properties.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
properties.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

KafkaProducerService kafkaService = new KafkaProducerService(properties);
kafkaService.sendEvent("eventKey", "eventValue");
kafkaService.close();

5. Service tracking (starter-sleuth)

In a complex microservices environment, it becomes critical to understand how requests propagate through various services. This helps diagnose performance issues, track errors, and optimize the overall behavior of the system. That's what service tracking is for.

Spring Cloud Sleuth is a component of the Spring Cloud family that provides a simple and effective way to add tracing to Spring Boot applications.

How Spring Cloud Sleuth works

  1. Request ID : Sleuth automatically generates a unique ID for each request entering the system, called a "trace id". This ID is propagated throughout the system with requests.
  2. Span ID : Whenever a request arrives at a new service or a new activity starts, Sleuth generates a new "span id". This helps distinguish different parts of the same request in different services.

[Spring Cloud Sleuth configuration]

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import zipkin2.Span;
import zipkin2.reporter.Reporter;

@Configuration
public class SleuthConfig {

    @Bean
    public Reporter<Span> spanReporter() {
        // Log each span locally instead of sending it to a Zipkin server
        return span -> System.out.println(
            String.format("Reporting span [%s] to Zipkin", span));
    }
}

This code shows how to configure Spring Cloud Sleuth to integrate with Zipkin and report tracing data to it.

Integrate with other tools

Spring Cloud Sleuth can be integrated with tools such as Zipkin, Elasticsearch, Logstash, Kibana (ELK stack), etc. to visualize and analyze trace data.

6. Service circuit breaker (Hystrix)

In a microservices architecture, when one service fails, it can trigger a chain reaction that causes the entire system to crash. A service circuit breaker acts like a fuse in an electrical circuit: when an abnormal condition is detected, it "trips" to prevent further damage.

Netflix Hystrix is one of the most well-known service circuit breaker implementations.

How Hystrix works

  1. Command pattern : Using Hystrix, you encapsulate the code that calls the remote service in a HystrixCommand object.
  2. Isolation : Hystrix provides isolation for each service call through a thread pool or semaphore, ensuring that the failure of one service will not affect other services.
  3. Circuit Breaker : If a remote service fails continuously up to a threshold, Hystrix will "trip" and automatically stop all calls to the service.

[Hystrix circuit breaker example]

import com.netflix.hystrix.HystrixCommand;
import com.netflix.hystrix.HystrixCommandGroupKey;

public class SimpleHystrixCommand extends HystrixCommand<String> {

    private final String name;

    public SimpleHystrixCommand(String name) {
        super(HystrixCommandGroupKey.Factory.asKey("ExampleGroup"));
        this.name = name;
    }

    @Override
    protected String run() throws Exception {
        // Put the code that may fail here
        return "Hello, " + name + "!";
    }

    @Override
    protected String getFallback() {
        return "Fallback for: " + name;
    }
}

// Usage:
String response = new SimpleHystrixCommand("Test").execute();

6.1. Hystrix circuit breaker mechanism

Circuit breakers are at the heart of Hystrix. Its working principle is similar to that of a real-life circuit fuse:

  1. Closed state : This is a normal state and all requests will be processed normally. If the failure rate exceeds a predetermined threshold, the circuit breaker goes to the "open" state.
  2. Open state : In this state, to prevent further harm, all requests automatically fail without attempting to call the remote service.
  3. Half-open state : After a period of time, the circuit breaker will move to a half-open state, allowing some requests to pass through. If these requests are successful, the circuit breaker will return to the closed state; otherwise, it will open again.

These three states ensure that the system can recover quickly in the face of failure, while also providing a buffer for remote services to have time to recover.
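
The three-state machine above can be sketched in a few lines of plain Java. This is a deliberately minimal model (my own class, not Hystrix's API); a real circuit breaker like Hystrix additionally tracks rolling failure-rate windows and timers for the cool-down period:

```java
// Minimal sketch of the CLOSED / OPEN / HALF_OPEN circuit breaker state machine
public class SimpleCircuitBreaker {
    public enum State { CLOSED, OPEN, HALF_OPEN }

    private State state = State.CLOSED;
    private int failures = 0;
    private final int failureThreshold;

    public SimpleCircuitBreaker(int failureThreshold) {
        this.failureThreshold = failureThreshold;
    }

    public State state() { return state; }

    // In the OPEN state, calls fail fast without touching the remote service
    public boolean allowRequest() {
        return state != State.OPEN;
    }

    public void recordFailure() {
        failures++;
        if (failures >= failureThreshold) {
            state = State.OPEN;   // trip: too many consecutive failures
        }
    }

    public void recordSuccess() {
        failures = 0;
        state = State.CLOSED;     // a successful probe closes the breaker again
    }

    // After a cool-down period, let a few probe requests through
    public void attemptReset() {
        if (state == State.OPEN) {
            state = State.HALF_OPEN;
        }
    }
}
```

In production you would not hand-roll this; the sketch only makes the state transitions in the list above concrete.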

image-20230915112343517

7. API management

With the widespread application of microservices, the number, variety, and complexity of APIs have increased dramatically. Effective API management aims to simplify the design, deployment, maintenance and monitoring of APIs while ensuring their security, reliability and availability.

Core components of API management

  1. API gateway : As the entry point of the API, it is responsible for request routing, combination, conversion, verification, rate limiting, etc.
  2. API design and documentation : Provide a set of standardized API design guidelines and continuously maintain API documentation.
  3. API Monitoring and Analytics : Monitor API usage, performance, and errors and provide data-driven insights.

API management challenges

  1. Version Control : As business needs change, the API may change. Managing API versions without breaking existing clients is an important consideration.
  2. Rate Limits and Quotas : To prevent abuse and ensure fair usage, usage limits need to be set for APIs.
  3. Security : including authentication, authorization, preventing malicious attacks, etc.
  4. Compatibility : New API versions should be backwards compatible so as not to impact existing users.

Best Practices for API Management

  1. Open API Specification (OAS) : Use a standard API description format, such as OpenAPI, to ensure consistency.
  2. API testing : Similar to software testing, but more focused on the contract, performance and security of the API.
  3. API life cycle management : Define the complete life cycle of the API from design to deprecation, and manage the API according to this life cycle.

In microservice architecture, API management has become a key component. When the number of services increases, not having an effective API management strategy can quickly lead to chaos. Through the above methods and tools, organizations can ensure the health, safety and efficiency of their APIs.

[API flow control example]

// API rate limiting with the Bucket4j library
import io.github.bucket4j.Bandwidth;
import io.github.bucket4j.Bucket;
import io.github.bucket4j.Refill;

import java.time.Duration;

public class RateLimiterService {

    public Bucket createNewBucket() {
        // Refill 10 tokens per minute; bucket capacity 10; start with 1 token
        Refill refill = Refill.greedy(10, Duration.ofMinutes(1));
        Bandwidth limit = Bandwidth.classic(10, refill).withInitialTokens(1);
        return Bucket.builder().addLimit(limit).build();
    }

    public boolean tryConsumeToken(Bucket bucket) {
        return bucket.tryConsume(1);
    }
}

// Usage:
RateLimiterService rateLimiter = new RateLimiterService();
Bucket bucket = rateLimiter.createNewBucket();
boolean canProcessRequest = rateLimiter.tryConsumeToken(bucket);
if (canProcessRequest) {
    // handle the API request
} else {
    // over the limit: reject the request or wait
}

The above code shows how to implement rate limiting of an API in a Spring Boot application using the Bucket4j library.

Security of microservices is another important area. The main concerns include communication security (such as using TLS encryption), API authentication and authorization, and data security.

[API security authentication example]

// API authentication with Spring Security
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

@EnableWebSecurity
public class APISecurityConfig extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http
            .authorizeRequests()
                .antMatchers("/public/**").permitAll()
                .antMatchers("/private/**").authenticated()
                .and()
            .httpBasic();
    }
}

The above code snippet shows how to set up basic authentication for API paths using Spring Security. APIs under /public/ are open to everyone, while APIs under /private/ require authentication.

Origin blog.csdn.net/weixin_46703995/article/details/132899828