Thoughts on the "flash sale" in a real Spring Cloud microservices architecture (with source code)

Abstract

  • Rethinking the flash sale: a new implementation on top of the original architecture

  • 1. Architecture overview

  • 2. Analysis of flash-sale scenario characteristics

When I first set about designing the flash-sale system, I kept thinking about how to design it so that it would work as well as possible within my existing technology stack and range of knowledge, while also making full use of the company's existing middleware to implement it.

Everyone knows that for a typical flash-sale web system, front-end handling matters just as much as back-end handling. The front end usually sits behind a CDN, while the back end typically gets distributed deployment, rate limiting, and a series of other performance optimizations, plus network-level tuning such as multi-carrier IDC access (China Telecom, China Unicom, China Mobile) and bandwidth upgrades. Since the current front end is a WeChat mini-program, front-end optimization is done as much as possible in code, and the CDN step can be skipped.

Rethinking the flash sale: a new implementation on top of the original architecture

The original plan:

Use a distributed lock to ensure the final inventory is never oversold, let only one request at a time proceed to place an order, and feed the orders into a queue to be consumed slowly.
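A minimal sketch of the locking idea in the original plan. The class and method names here are illustrative, and a ConcurrentHashMap stands in for Redis; in the real system this would be a Redis SET with NX and an expiry, released only by its owner.

```java
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the "original plan": serialize order placement behind a lock so
// inventory cannot be oversold. The map stands in for Redis key/value state.
public class SeckillLock {
    private static final ConcurrentHashMap<String, String> redis = new ConcurrentHashMap<>();

    // Acquire the lock for one activity; only the first caller wins
    // (the Redis equivalent is SET lock:<activityId> <owner> NX PX <ttl>).
    public static boolean tryLock(String activityId, String owner) {
        return redis.putIfAbsent("lock:" + activityId, owner) == null;
    }

    // Release only if we still hold the lock, so one request cannot
    // delete a lock that a later request has since acquired.
    public static boolean unlock(String activityId, String owner) {
        return redis.remove("lock:" + activityId, owner);
    }
}
```

The owner check on release matters: with a TTL, a slow request's lock may expire and be re-acquired by someone else, and an unconditional delete would then free the wrong holder's lock.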

The new plan:

After a request comes in, first run the activity check and the repeat-purchase check, then put the request into the message queue; the stock check and all remaining work are done on the consumer side. Routing requests through the message queue achieves peak clipping.

In fact I think both solutions are workable; which one to use depends on the concrete scenario. The original plan suits platforms with relatively small traffic, and the whole flow is much simpler; the new plan suits large platforms, using the message queue to achieve peak clipping. Both plans add an admission limit in front: a Redis atomic increment records the number of incoming requests, and once the count reaches n times the stock, any later request is immediately answered with an "activity too hot" message.
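The shared admission check can be sketched as follows. This is illustrative: an AtomicLong stands in for a Redis INCR on a per-activity key, and the class name is mine.

```java
import java.util.concurrent.atomic.AtomicLong;

// Admission check used by both plans: count every incoming request with an
// atomic increment and reject once the count reaches n times the stock.
public class AdmissionGate {
    private final AtomicLong requestCount = new AtomicLong(); // Redis INCR stand-in
    private final long limit;                                 // n * stock

    public AdmissionGate(long stock, long n) {
        this.limit = stock * n;
    }

    // true  -> let the request proceed into the flash-sale flow
    // false -> respond immediately with "activity too hot"
    public boolean admit() {
        return requestCount.incrementAndGet() <= limit;
    }
}
```

Because the counter only ever increments, the check is a single atomic operation per request, which is why it can sit in front of everything else.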

1. Architecture overview

The back-end service project is built on a microservices stack of Spring Cloud + Spring Boot.

The front end is a WeChat mini-program store.

Core supporting components:

  • Service gateway: Zuul

  • Service registration and discovery: Eureka + Ribbon

  • Service framework: Spring MVC / Boot

  • Service fault tolerance: Hystrix

  • Distributed lock: Redis

  • Service calls: Feign

  • Message queue: Kafka

  • File service: private cloud storage

  • Rich-text component: UEditor

  • Scheduled tasks: xxl-job

  • Configuration center: Apollo

2. Analysis of flash-sale scenario characteristics

Characteristics of a flash-sale scenario:

  • A large number of users rush to buy at the same moment, and site traffic surges instantaneously;

  • The number of requests is usually far greater than the stock, so only a few users succeed;

  • The business flow of a flash sale is relatively simple: essentially, place an order.

Flash-sale architecture design:

  • Rate limiting: since only a small number of users can succeed, restrict most of the traffic and allow only a small portion through to the back-end services;

  • Peak clipping: a flash sale sees a huge instantaneous influx of users, so there is a high peak right at the start. Common implementations flatten it with a buffer or with message-middleware technology;

  • Asynchronous processing: handling work asynchronously greatly improves a highly concurrent system's throughput; asynchronous processing is one way to implement peak clipping;

  • In-memory caching: the ultimate bottleneck of a flash-sale system is usually database reads and writes, mainly disk I/O, whose performance is low; moving most of the business logic into a cache improves efficiency dramatically;

  • Scalability: to support more concurrent users, design the system to scale elastically, so that when the traffic arrives you can simply add machines.
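The peak-clipping and asynchronous-processing points above can be sketched with a bounded buffer. This is illustrative only: a BlockingQueue stands in for Kafka, and the class name is mine; in the real system the producer is the gateway-facing service and the consumer is the order service.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Peak clipping with a bounded buffer: the instantaneous burst is absorbed
// by a queue of fixed capacity, overflow is rejected immediately, and a
// consumer drains order requests at its own pace.
public class PeakClipping {
    private final BlockingQueue<String> queue;

    public PeakClipping(int capacity) {
        this.queue = new ArrayBlockingQueue<>(capacity);
    }

    // Non-blocking enqueue: false means the buffer is full, so the
    // request is turned away instead of piling up on the back end.
    public boolean submit(String orderRequest) {
        return queue.offer(orderRequest);
    }

    // Consumer side: take the next pending request, or null if none.
    public String next() {
        return queue.poll();
    }
}
```

The key property is that the back end never sees more in-flight work than the buffer's capacity, regardless of how sharp the incoming spike is.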


Flash-sale design approach

  • Since the front end is a mini-program, there is basically no load on the front-end side, so front-end access pressure is not a concern;

  • 1. All queries on pages tied to flash-sale activities can be cached; put them all in the Redis cache;

  • 2. Activity state such as real stock, lock stock, purchase records, and order processing status all lives in Redis;

  • 3. When a request comes in, first record it with a Redis atomic increment; once the count exceeds a certain multiple of the stock, say 10x, respond directly with "activity too hot". A request that does get through acquires a distributed lock whose granularity is the activity ID, then runs the first step, a repeat-purchase check on the user; if it passes, continue, otherwise respond "already ordered";

  • 4. Second step: check that the current lockable stock covers the purchase quantity; if so, continue, otherwise respond "sold out";

  • 5. Third step: deduct this request's quantity from the lock stock, and push the order request into the Kafka message queue;

  • 6. Fourth step: set a polling marker key in Redis (the front end polls an interface against it to learn whether the order succeeded); the Kafka consumer must delete this key after creating the order. Also keep an activity-ID + user-ID key to prevent repeat purchases;

  • 7. Fifth step: the message-queue consumer creates the order; on success it deducts the real stock in Redis and deletes the polling key. If anything goes wrong while placing the order, delete the purchase key, return the lock stock, and tell the user the order failed;

  • 8. Sixth step: after the purchase request completes, the front end calls a polling interface to confirm whether the order ultimately succeeded; the polling state is determined mainly by a key in Redis;

  • 9. The whole flow intercepts all requests at the Redis cache layer; apart from the final order write and a limited amount of stock reconciliation that touch the database, there is essentially no other database interaction, so database I/O pressure is kept to a minimum.
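The steps above can be condensed into a sketch. Everything here is illustrative: ConcurrentHashMaps stand in for the Redis keys (the activityId:userId repeat-purchase marker and the polling marker), an AtomicInteger stands in for the lock-stock counter, and the Kafka hop and the order table are omitted.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Condensed flash-sale flow: repeat-purchase check, lock-stock deduction,
// polling marker for the front end, and consumer-side completion.
public class SeckillFlow {
    private final ConcurrentHashMap<String, Boolean> bought = new ConcurrentHashMap<>();
    private final ConcurrentHashMap<String, Boolean> polling = new ConcurrentHashMap<>();
    private final AtomicInteger lockStock;

    public SeckillFlow(int stock) {
        this.lockStock = new AtomicInteger(stock);
    }

    // Request side: dedup check, lock-stock deduction, then a "pending"
    // marker for the polling interface. Returns a user-facing status.
    public String buy(String activityId, String userId) {
        String key = activityId + ":" + userId;
        if (bought.putIfAbsent(key, true) != null) return "already ordered";
        if (lockStock.decrementAndGet() < 0) {   // step 2: lock-stock check
            lockStock.incrementAndGet();         // return the lock stock
            bought.remove(key);
            return "sold out";
        }
        polling.put(key, true);                  // step 4: polling marker
        return "pending";                        // request now goes to the queue
    }

    // Consumer side: the order was created, so clear the polling marker.
    public void onOrderCreated(String activityId, String userId) {
        polling.remove(activityId + ":" + userId);
    }

    // Polling interface: the order is done once the marker is gone.
    public boolean orderDone(String activityId, String userId) {
        String key = activityId + ":" + userId;
        return bought.containsKey(key) && !polling.containsKey(key);
    }
}
```

Note how the repeat-purchase key doubles as a rollback handle: on any failure after step 1, removing it lets the user try again, exactly as step 7 describes.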

About rate limiting

Spring Cloud Zuul can be given a solid rate-limiting policy that blocks malicious repeated requests from the same user:

zuul:
    ratelimit:
        key-prefix: your-prefix  # prefix for the keys that identify requests
        enabled: true
        repository: REDIS  # storage type (used to hold the counters)
        behind-proxy: true  # the gateway sits behind a proxy
        default-policy:  # optional - applies to all routes unless overridden in policies
            limit: 10  # optional - max number of requests per refresh window
            quota: 1000  # optional - max total request time per refresh window (seconds)
            refresh-interval: 60  # length of the refresh window, default value (seconds)
            type:  # optional - rate-limit dimensions
                - user
                - origin
                - url
        policies:
            myServiceId:  # a specific route
                limit: 10  # optional - max number of requests per refresh window
                quota: 1000  # optional - max total request time per refresh window (seconds)
                refresh-interval: 60  # length of the refresh window, default value (seconds)
                type:  # optional - rate-limit dimensions
                    - user
                    - origin
                    - url

About traffic distribution

When an event's traffic is especially large, even if the domain resolves to a highly available Nginx setup, you effectively still have a single entry point online, and under heavy load it cannot keep up. Consider mapping one domain to multiple public IPs, each mapped to its own highly available Nginx group in the DMZ, with each Nginx cluster configured against the application services, to relieve the pressure.

As for Redis, a distributed Redis cluster can be used; and Spring Cloud's Hystrix also provides service fault tolerance.

Tuning the parameters of Nginx, Spring Boot's embedded Tomcat, Zuul, and so on is also crucial to improving access performance.

One more thing: even though the front end is a mini-program, activity-related image resources live on our own cloud-storage file service, so pushing those image resources to a CDN before the event is crucial; otherwise, even 1 Gbps of IDC bandwidth will be eaten up within minutes.

Code: https://github.com/coderliguoqing/distributed-seckill


Origin blog.51cto.com/14230003/2464697