Types of Application Architecture: Microservice Architecture

1. What

The microservice architecture means that a whole system is composed of many different subsystems, which exist independently and call each other. A typical microservice system has the following components:

  1. Registry
  2. Configuration Center
  3. Gateway
  4. Individual sub-modules
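As an illustration of the first component, a minimal in-memory registry might look like the following sketch. The class, method names, and TTL are hypothetical; a real system would use something like ZooKeeper, Eureka, or Nacos.

```python
import time

class Registry:
    """Minimal in-memory service registry: service instances register
    their network addresses, and callers look them up by service name."""

    def __init__(self, ttl_seconds=30):
        self.ttl = ttl_seconds
        self.entries = {}  # service name -> {address: last heartbeat time}

    def register(self, service, address):
        # Called by a service instance on startup and on every heartbeat.
        self.entries.setdefault(service, {})[address] = time.time()

    def lookup(self, service):
        # Return only addresses whose heartbeat is within the TTL,
        # so crashed instances drop out of the result automatically.
        now = time.time()
        addrs = self.entries.get(service, {})
        return [a for a, t in addrs.items() if now - t <= self.ttl]

registry = Registry()
registry.register("order-service", "10.0.0.1:8080")
registry.register("order-service", "10.0.0.2:8080")
print(registry.lookup("order-service"))
```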

2. Usage scenarios

Data volume in the tens of millions of records, and traffic in the tens of millions of visits

3. Advantages

  1. Reusability: eliminates code copying
  2. Focus: prevents the spread of complexity
  3. Decoupling: eliminates shared-library coupling
  4. High quality: SQL stability is guaranteed
  5. Easy to scale: eliminates database coupling
  6. High efficiency: improves the R&D efficiency of callers

4. Granularity

  1. Unified service layer
  2. One sub-business, one service
  3. One database, one service
  4. One interface, one service

The best practice is: one service per sub-business

5. High availability

  1. How to know whether a system is highly available: shut down any machine in production, and the online service should stay up
  2. Methodology: clustering (redundancy) + automatic failover
  3. Specific steps:
    a. Reverse proxy layer: redundant reverse proxies, VIP + Keepalived
    b. Site application layer: redundant application servers, with nginx automatically detecting their liveness
    c. Service layer: redundant services, with the service connection pool automatically detecting service-layer liveness
    d. Cache layer: redundant caches, with the cache client + the cache layer's sentinel discovery mechanism
    e. Database reads: redundant database Slave nodes, with the database connection pool automatically discovering whether a database is available
    f. Database writes: redundant database Master nodes, VIP + Keepalived
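The "clustering (redundancy) + automatic failover" methodology above can be sketched as a client that walks a list of redundant nodes and fails over when one is down. The node addresses and the `send` function here are illustrative stand-ins, not part of any real API.

```python
def call_with_failover(nodes, request, send):
    """Try each redundant node in turn; fail over to the next node
    when a call fails. This is the client-side half of
    'clustering (redundancy) + automatic failover'."""
    last_error = None
    for node in nodes:
        try:
            return send(node, request)
        except ConnectionError as e:
            last_error = e  # this node is down: fail over to the next one
    raise last_error  # every redundant node failed

# Simulated cluster: the first node is down, the second is healthy.
def send(node, request):
    if node == "10.0.0.1":
        raise ConnectionError("node down")
    return f"{node} handled {request}"

print(call_with_failover(["10.0.0.1", "10.0.0.2"], "ping", send))
# prints "10.0.0.2 handled ping"
```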

6. High concurrency

  1. What: designing the system so that it can process many requests in parallel at the same time. Related concepts include response time, QPS, TPS, the number of concurrent users, etc.
  2. How: scale up, scale out
  3. Specific steps:
    a. Reverse proxy layer: DNS polling
    b. Site application layer: nginx reverse proxy
    c. Service layer: service-layer connection pool
    d. Data layer: horizontal data segmentation (by data range or by data hash)
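The two segmentation schemes in step d can be sketched as two routing functions, one by range and one by hash. The shard boundaries and shard count are illustrative assumptions.

```python
import hashlib

# Illustrative shard layout: three range shards, four hash shards.
RANGE_SHARDS = [(0, 10_000_000), (10_000_000, 20_000_000), (20_000_000, 30_000_000)]
HASH_SHARD_COUNT = 4

def shard_by_range(record_id):
    """Route by data range: each shard owns a contiguous id interval."""
    for shard, (lo, hi) in enumerate(RANGE_SHARDS):
        if lo <= record_id < hi:
            return shard
    raise ValueError("id out of range")

def shard_by_hash(key):
    """Route by data hash: a stable hash of the key modulo the shard
    count, so the same key always lands on the same shard."""
    digest = hashlib.md5(str(key).encode()).hexdigest()
    return int(digest, 16) % HASH_SHARD_COUNT

print(shard_by_range(12_345_678))  # 1: falls in the second range
print(shard_by_hash("user:42"))
```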

7. Load balancing

  1. What: it usually refers to the even allocation of requests/data across multiple operating units

  2. How:
    In a homogeneous environment, the focus is on "uniform distribution";
    in a heterogeneous environment, the focus is on "matching load to capacity"

  3. In a homogeneous environment, the specific steps of load balancing:
    In a homogeneous environment, load balancing requires essentially no additional support; it falls out of the high-concurrency and high-availability infrastructure:
    a. Client to reverse proxy: completed by DNS polling;
    b. Reverse proxy to site: completed by nginx;
    c. Site to service: completed by the connection pool;
    d. Service to data layer: completed by the connection pool provided by the data-layer framework's client.
    Because the environment is homogeneous, the balancing strategy is simple: both polling and random selection work.
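The two simple strategies for a homogeneous cluster, polling (round robin) and random, can be sketched in a few lines; the node addresses are illustrative.

```python
import itertools
import random

nodes = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

# Polling (round robin): cycle through the nodes in a fixed order,
# so each node receives exactly every third request.
rr = itertools.cycle(nodes)

def pick_round_robin():
    return next(rr)

def pick_random():
    # Random: every node is equally likely, which averages out to a
    # uniform distribution over many requests.
    return random.choice(nodes)

print([pick_round_robin() for _ in range(4)])
# prints ['10.0.0.1', '10.0.0.2', '10.0.0.3', '10.0.0.1']
```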

  4. In a heterogeneous environment, the specific steps of load balancing:
    Static weight:
    What: static weighting is almost the same as the homogeneous strategy. Given three machines, if the configured weights are 1:1:1 the result is uniform load balancing, so uniform balancing can be seen as a special case of static weighting;
    Advantages: fast and simple;
    Disadvantages: static, cannot change in real time, and cannot implement overload protection.

    Dynamic weight:
    What: the weight changes dynamically according to the service's processing capability; the size of the weight reflects the probability of load being routed to that machine;
    How:
    a. Identify the service's processing capability: a successful return indicates the node is coping, while a timeout indicates it cannot withstand the current traffic;
    b. Design the dynamic weights: add a small amount on success, subtract a larger amount on failure;
    Advantages: dynamically reflects the processing capabilities of services in a heterogeneous environment and allocates load that matches capacity.
    Disadvantages: requires additional development.
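Weighted routing can be sketched as follows. Picking with fixed weights is the static-weight case; the `on_success`/`on_failure` adjustments make it dynamic. The step sizes, cap, and floor are illustrative assumptions.

```python
import random

class DynamicWeights:
    """Weights track observed service health: a success nudges the
    weight up slightly, a failure (e.g. a timeout) cuts it sharply.
    The weight drives the probability of routing to that node."""

    def __init__(self, nodes, initial=100, max_weight=100):
        self.weights = {n: initial for n in nodes}
        self.max_weight = max_weight

    def on_success(self, node):
        # Add a small amount on success, up to a cap.
        self.weights[node] = min(self.max_weight, self.weights[node] + 1)

    def on_failure(self, node):
        # Subtract a larger amount on failure; keep a floor of 1 so the
        # node still gets occasional probe traffic and can recover.
        self.weights[node] = max(1, self.weights[node] - 30)

    def pick(self):
        # With fixed weights this is exactly static weighting.
        nodes = list(self.weights)
        return random.choices(nodes, weights=[self.weights[n] for n in nodes])[0]

dw = DynamicWeights(["10.0.0.1", "10.0.0.2"])
dw.on_failure("10.0.0.1")  # one timeout: weight drops from 100 to 70
dw.on_success("10.0.0.2")  # already at the cap, stays at 100
print(dw.weights)
```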

  5. Overload protection:
    What: when a service is overloaded, it is likely to trigger what we call an "avalanche": traffic reaches the peak of the service's processing capacity, and as traffic keeps increasing, the number of successfully processed requests plummets and the service becomes unavailable.
    How:
    a. Static method:
    Static overload protection sets a traffic threshold; requests beyond the threshold are rejected.

    b. Dynamic method:
    Dynamic overload protection is similar to the dynamic load-balancing strategy:

    1. A connection represents the service; a score represents its processing capability
    2. Add a small amount to the score on success, and subtract a large amount on failure
    3. At the critical boundary (e.g. 1 consecutive failure), the node enters a "gasping" state: reduced traffic for a short duration
    4. In the dead state (e.g. 2 or more consecutive failures), the node enters a "dormant" state: no traffic for a long duration
      If overload protection triggers on a single node, the requests that node would have handled are forwarded to other nodes; if all nodes (i.e. the entire cluster) are overloaded, requests are discarded. The cluster-level strategy therefore differs from the node-level one: the cluster discards, while a node forwards.
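The per-node state machine described in steps 3 and 4 can be sketched as follows. The thresholds, durations, and the reduced-traffic share are illustrative assumptions; `now` is passed in explicitly to keep the sketch deterministic.

```python
class NodeGuard:
    """Per-node overload protection: after 1 consecutive failure the
    node 'gasps' (reduced traffic, short duration); after 2 or more
    consecutive failures it goes 'dormant' (no traffic, long duration)."""

    GASP_SECONDS = 1.0      # short pause: let the node catch its breath
    DORMANT_SECONDS = 10.0  # long pause: treat the node as dead for now

    def __init__(self):
        self.failures = 0
        self.state = "healthy"
        self.state_until = 0.0

    def on_success(self):
        self.failures = 0
        self.state = "healthy"

    def on_failure(self, now):
        self.failures += 1
        if self.failures >= 2:                           # dead state
            self.state = "dormant"
            self.state_until = now + self.DORMANT_SECONDS
        else:                                            # critical boundary
            self.state = "gasping"
            self.state_until = now + self.GASP_SECONDS

    def traffic_share(self, now):
        """Fraction of normal traffic this node may receive."""
        if self.state != "healthy" and now >= self.state_until:
            self.state = "healthy"  # penalty served: resume full traffic
            self.failures = 0
        return {"healthy": 1.0, "gasping": 0.2, "dormant": 0.0}[self.state]

guard = NodeGuard()
guard.on_failure(now=0.0)             # 1 consecutive failure -> gasping
print(guard.traffic_share(now=0.5))   # 0.2: reduced flow, short duration
guard.on_failure(now=0.5)             # 2 consecutive failures -> dormant
print(guard.traffic_share(now=1.0))   # 0.0: no flow, long duration
print(guard.traffic_share(now=11.0))  # 1.0: penalty expired
```

Traffic a gasping or dormant node turns away would be forwarded to other nodes by the caller, matching the forward-at-node, discard-at-cluster rule above.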

8. Connection pool

  1. What: in contrast to short connections (closed after each use), a connection pool maintains a set of long-lived connections: a connection is taken from the pool when needed and put back after use, so the connections in the pool can be reused.
  2. Why: if every request establishes a connection, uses it to send and receive, and then closes it, then under high concurrency establishing and closing connections becomes the bottleneck.
  3. How:
    a. Core interface: initialize; take a connection; put back a connection
/* The data structure is simple: two arrays in total. One holds all the
   real connections; the other is a lock array whose entry marks whether
   the connection at the same index is currently in use. */
init() {
	for i = 0 to N-1 {
		connections[i] = new DBClientConnection()
		connections[i].connect()
		lock[i] = 0
	}
}

/* Taking a connection is also very simple: scan the lock array, and for
   the first entry that is 0, set the lock to 1 and return that connection. */
GetConnection() {
	for i = 0 to N-1 {
		if (lock[i] == 0) {
			lock[i] = 1
			return connections[i]
		}
	}
}

/* Putting a connection back only requires resetting its lock to 0. */
FreeConnection(c) {
	for i = 0 to N-1 {
		if (connections[i] == c) {
			lock[i] = 0
		}
	}
}

b. Other considerations:
1) The take/put complexity can be optimized to O(1)
2) Connection availability detection: if a connection fails, reconnect and replace the original connection
3) If a downstream server fails, its invalid connections should be removed from the pool to achieve automatic failover, and thereby high availability
4) If new nodes are added downstream, their connections should be added to the pool dynamically, achieving automatic service discovery and thus scalability. This can be done by monitoring the configuration file (e.g. via its MD5) and reloading on change, or via callbacks from the configuration center.
5) Controlling the probability with which connections are selected implements load balancing. Pool-based load balancing can be realized through polling, random selection, static weights, dynamic weights, etc.
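A runnable version of the pool, with the O(1) take/put from consideration 1): instead of scanning a lock array, the free connections sit in a queue, so taking pops in O(1) and putting back pushes in O(1). `queue.Queue` is thread-safe, so no extra locking is needed. `DummyConnection` is a hypothetical stand-in for a real database client.

```python
import queue

class DummyConnection:
    """Stand-in for a real database client connection."""
    def connect(self):
        self.open = True

class ConnectionPool:
    """Connection pool with O(1) take and put via a queue of free
    connections (versus the O(N) lock-array scan in the pseudocode)."""

    def __init__(self, size, make_connection=DummyConnection):
        # init: create and connect N connections, all initially free.
        self.free = queue.Queue(maxsize=size)
        for _ in range(size):
            conn = make_connection()
            conn.connect()
            self.free.put(conn)

    def get_connection(self, timeout=None):
        # Take a connection in O(1); blocks until one is free, or
        # raises queue.Empty if the timeout expires first.
        return self.free.get(timeout=timeout)

    def free_connection(self, conn):
        # Put the connection back in O(1); it can now be reused.
        self.free.put(conn)

pool = ConnectionPool(size=2)
c1 = pool.get_connection()
c2 = pool.get_connection()
pool.free_connection(c1)  # c1 is back in the pool for the next caller
```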

References

  1. High availability: https://www.jianshu.com/p/dcb73d907342
  2. High concurrency: https://www.jianshu.com/p/be66a52d2b9b
  3. Load balancing 1: https://www.jianshu.com/p/41f437542ffc
  4. Load balancing 2: https://blog.csdn.net/Sunsscode/article/details/107693303

Origin: blog.csdn.net/hudmhacker/article/details/108199056