Classic interview question: how do you build a flash-sale (seckill) system?

Reasonable business-side workarounds

  1. Directly return "sold out" to certain users according to preset rules. For example, users already identified as malicious, spam, or zombie accounts can simply be told that everything has been grabbed.
  2. Stagger the activity's opening time across different clients to disperse the entry traffic. For example, spread the traffic of the first second over 10 seconds; a minimal sketch of one way to do this follows this list.
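As an illustration of point 2, here is a minimal sketch of a hypothetical helper (not from the original post) that derives a deterministic 0–9 second entry delay from the user ID, so that the opening burst is spread over roughly ten seconds:

```java
// Sketch: spread the opening burst over roughly ten seconds by deriving a
// deterministic per-user delay from the user ID (hypothetical helper).
public final class EntryDelay {

    // Returns a delay in seconds in the range [0, 9] for this user.
    public static int delaySeconds(long userId) {
        return (Long.hashCode(userId) & 0x7fffffff) % 10;
    }

    public static void main(String[] args) {
        System.out.println("User 42 sees the entry after " + delaySeconds(42L) + "s");
    }
}
```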

Hard-core techniques for handling the load

  1. Rate limiting. For example, measure the system's maximum QPS in a stress test, then directly answer "already sold out" to any requests beyond that limit. This can be done in an Nginx Lua script that reads QPS data from Redis and adjusts the threshold dynamically; a minimal sketch of the counter idea appears after this list.
  2. Asynchronous peak shaving. Pre-deduct the red-envelope count in Redis and immediately return "grabbed successfully, please wait" to the user, then send a message to a message queue for a second round of peak shaving, so that the backend services can process the orders at their own pace.
  3. Business logic. For example, use a database transaction to create the order record and decrement inventory, and put the order creation before the inventory decrement so that the row lock taken by the inventory UPDATE is held for as short a time as possible.
  4. Machine configuration. The server configuration should of course be as high as practical, and the database machines the beefier the better. A high-concurrency red-envelope grab mainly stresses the CPU and network IO, so machines with strong CPU and network IO performance are preferred.
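A minimal sketch of the rate-limiting idea in item 1, assuming the Jedis client and a per-second counter key; the threshold would come from the stress-test QPS rather than the hard-coded value used here:

```java
import redis.clients.jedis.Jedis;

// Sketch: per-second counter in Redis; requests above the measured QPS limit
// are answered with "sold out" without ever touching the backend.
public class QpsLimiter {
    private static final long MAX_QPS = 5000; // assumed value from stress testing

    private final Jedis jedis;

    public QpsLimiter(Jedis jedis) {
        this.jedis = jedis;
    }

    // Returns true if the request may proceed, false if it should be rejected.
    public boolean tryAcquire() {
        String key = "seckill:qps:" + (System.currentTimeMillis() / 1000);
        long count = jedis.incr(key);   // atomic increment for the current second
        if (count == 1) {
            jedis.expire(key, 2);       // let stale per-second counters expire
        }
        return count <= MAX_QPS;
    }
}
```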

Architecture and implementation details

  • Front-end module (static pages, CDN, client-side caching)
  • Queuing module (Redis, asynchronous order queue)
  • Service module (transactional business logic, avoiding concurrency issues)
  • Anti-abuse module (verification codes, limiting user access frequency)

Module breakdown

Front-end module

  1. Static pages: replace server-side template rendering with static HTML files plus asynchronous AJAX requests, reducing rendering overhead on the server, and push the flash-sale page to the CDN in advance.
  2. Client-side caching: configure Cache-Control so the client caches the page for a while, improving the user experience; a minimal sketch follows this list.
  3. Static resource optimization: compress CSS / JS / images to improve the user experience.
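A minimal sketch of the client-cache idea in item 2, assuming the page is served through a servlet container (Servlet 4.0+, where init()/destroy() have default implementations) and an assumed 60-second cache window:

```java
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;

// Sketch: add a short client-side cache window for the static seckill page.
public class CacheControlFilter implements Filter {

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletResponse response = (HttpServletResponse) res;
        // Let the browser cache the page for 60 seconds (assumed window).
        response.setHeader("Cache-Control", "public, max-age=60");
        chain.doFilter(req, res);
    }
}
```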

Queuing module

  1. Pre-deduct the stock in Redis for each purchase request and immediately return "grabbed successfully, please wait" to the user. Redis intercepts the bulk of the requests here, so only a small portion of the traffic flows into the next stage; a sketch of this gate follows this list.
  2. If many products are involved in the flash sale, the traffic flowing into the next stage may still be large, so a message queue is needed: requests that pass the Redis filter go straight into the message queue, which performs a second round of peak shaving.
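A minimal sketch of this two-stage flow, assuming the Jedis client, assumed key names, and a Redis list standing in for the real message queue:

```java
import redis.clients.jedis.Jedis;

// Sketch: first-stage filtering in Redis. The stock counter is pre-loaded,
// DECR pre-deducts it atomically, and only surviving requests are pushed
// to a queue (a Redis list stands in for the message queue here).
public class SeckillGate {
    private final Jedis jedis;

    public SeckillGate(Jedis jedis) {
        this.jedis = jedis;
    }

    // Returns true if the user got a slot and the order request was queued.
    public boolean tryEnqueue(long userId, long itemId) {
        String stockKey = "seckill:stock:" + itemId;
        long remaining = jedis.decr(stockKey);   // atomic pre-deduction
        if (remaining < 0) {
            jedis.incr(stockKey);                // roll the counter back
            return false;                        // tell the user it is sold out
        }
        // Second-stage peak shaving: hand the request to the queue consumer.
        jedis.lpush("seckill:orders", userId + ":" + itemId);
        return true;                             // "grabbed, please wait"
    }
}
```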

Service module

  1. The message-queue consumer's business logic uses a database transaction to create the order and decrement inventory, and places the order creation before the inventory decrement so that the row lock taken by the inventory UPDATE is held for the shortest possible time. A minimal JDBC sketch follows.
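A minimal JDBC sketch of the consumer's transaction, with assumed table and column names (orders, stock); the order insert comes first and the stock update, which takes the row lock, comes last:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Sketch: queue-consumer transaction. Insert the order first, then decrement
// stock last so the row lock taken by the UPDATE is held as briefly as possible.
public class OrderConsumer {

    public boolean placeOrder(Connection conn, long userId, long itemId) throws SQLException {
        conn.setAutoCommit(false);
        try (PreparedStatement insertOrder = conn.prepareStatement(
                     "INSERT INTO orders (user_id, item_id) VALUES (?, ?)");
             PreparedStatement decrementStock = conn.prepareStatement(
                     "UPDATE stock SET count = count - 1 WHERE item_id = ? AND count > 0")) {

            insertOrder.setLong(1, userId);
            insertOrder.setLong(2, itemId);
            insertOrder.executeUpdate();

            decrementStock.setLong(1, itemId);
            int updated = decrementStock.executeUpdate();
            if (updated == 0) {          // stock exhausted: avoid overselling
                conn.rollback();
                return false;
            }
            conn.commit();
            return true;
        } catch (SQLException e) {
            conn.rollback();
            throw e;
        }
    }
}
```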

Anti-abuse module

  1. For malicious users who grab with scripts, record their IP and user ID in Redis and restrict them for the duration of the sale.
  2. For ordinary users who click frantically, use JS to allow the buy button to be clicked only once every few seconds.
  3. Generate arithmetic verification codes on the backend: render the image with Graphics and BufferedImage, and evaluate the expression with ScriptEngineManager; a minimal sketch follows this list.
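A minimal sketch of item 3. It assumes a JavaScript script engine is available through ScriptEngineManager (Nashorn was removed in newer JDKs, so this is an assumption), and uses arbitrary image dimensions and operand ranges:

```java
import javax.imageio.ImageIO;
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import javax.script.ScriptException;
import java.awt.Color;
import java.awt.Font;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import java.util.Random;

// Sketch: arithmetic captcha. The expression is drawn into an image with
// Graphics2D/BufferedImage and its answer is computed with ScriptEngineManager.
public class MathCaptcha {
    private static final Random RANDOM = new Random();

    public static void main(String[] args) throws IOException, ScriptException {
        int a = RANDOM.nextInt(10) + 1;
        int b = RANDOM.nextInt(10) + 1;
        String expression = a + "+" + b;

        // Render the expression as a small image.
        BufferedImage image = new BufferedImage(120, 40, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = image.createGraphics();
        g.setColor(Color.WHITE);
        g.fillRect(0, 0, 120, 40);
        g.setColor(Color.BLACK);
        g.setFont(new Font("SansSerif", Font.BOLD, 24));
        g.drawString(expression + " = ?", 10, 28);
        g.dispose();
        ImageIO.write(image, "png", new File("captcha.png"));

        // Compute the expected answer on the server side
        // (assumes a JavaScript engine is present on this JDK).
        ScriptEngine engine = new ScriptEngineManager().getEngineByName("JavaScript");
        Object answer = engine.eval(expression);
        System.out.println("Expected answer: " + answer);
    }
}
```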

Exception handling process

  1. If the service crashes during the flash sale and the activity is interrupted, there is no good remedy: try to recover the crashed application or service immediately, or reschedule the flash sale for another time.
  2. If placing an order fails because of some restriction on the user, roll back the transaction and immediately tell the user why it failed.

Summary

Principles

Business optimization idea: reasonably work around the problem on the business side.
Technical optimization idea: intercept requests as far upstream of the database as possible, because once a large number of requests reach the database its performance drops dramatically.
Architecture principles: correct, simple, evolutionary (the design above is the final version; a first edition could arguably skip the message queue and use a cache-plus-database architecture directly).

Difficulties

  1. How to handle high concurrency and high volume methodically, step by step, from both the business and the technical side.
  2. How to handle abnormal situations in the code and prepare contingency plans.

Pitfalls

  1. The solution above relies on Redis clusters and a message queue to carry very high concurrency, but the operations cost is high: both Redis and the message queue have to run as clusters to guarantee stability, which drives up the operations and maintenance cost, so a professional operations team is needed.
  2. To prevent the same user from placing multiple orders for the same item at the same time, either handle it in the business logic or add a unique index on the user ID and item ID in the orders table; to avoid overselling, add a "> 0" condition to the stock-decrement UPDATE statement. A sketch of the unique index follows this list.
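A minimal sketch of the unique-index safeguard from item 2, with assumed table and column names and MySQL-style DDL issued through JDBC (the "> 0" condition is shown in the transaction sketch in the service module above):

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

// Sketch: a unique index on (user_id, item_id) rejects duplicate orders from
// the same user even under concurrent requests (assumed table/column names).
public class OrderTableGuard {

    public void addUniqueIndex(Connection conn) throws SQLException {
        try (Statement stmt = conn.createStatement()) {
            stmt.executeUpdate(
                "ALTER TABLE orders ADD UNIQUE INDEX uk_user_item (user_id, item_id)");
        }
    }
}
```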

Optimizations

  1. Combine layer-4 load balancing (LVS) with layer-7 load balancing (Nginx) to further improve concurrency.
  2. Besides optimizing the application architecture, when deploying Redis, the message queue, and the database, choose virtual machines with high bandwidth and fast disk read/write.
  3. Warm up in advance: sync the latest static resources to all CDN nodes and preload the product information to be sold into Redis (a minimal sketch follows this list).
  4. Use Redis-based distributed rate limiting to reduce access pressure, and configure connection-count and rate limits in Nginx.
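A minimal sketch of the warm-up step in item 3, assuming the Jedis client and a stock table with assumed column names; the goal is simply to have the sale items cached before the traffic arrives:

```java
import redis.clients.jedis.Jedis;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

// Sketch: pre-warm Redis before the sale starts by loading the stock count of
// every item that will be sold (assumed table/column/key names).
public class CacheWarmer {

    public void preload(Connection conn, Jedis jedis) throws SQLException {
        try (Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT item_id, count FROM stock")) {
            while (rs.next()) {
                long itemId = rs.getLong("item_id");
                long count = rs.getLong("count");
                jedis.set("seckill:stock:" + itemId, String.valueOf(count));
            }
        }
    }
}
```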


Origin: blog.csdn.net/qq_38905818/article/details/103803676