Java e-commerce project: concepts and ideas for handling high-concurrency flash-sale (seckill) and panic-buying scenarios

Here I draw on the views of some experienced engineers online:

One:
The challenges brought by high concurrency
  Reason: flash sales and panic buying often create high-concurrency scenarios of tens of thousands of requests per second, all needing results returned to users quickly.
  The throughput metric is QPS (requests processed per second). Suppose serving one request takes 100 ms, we have 10 web servers, and each allows a maximum of 500 connections.
  The idealized calculation:
  10 servers * 500 connections / 0.1 s = 50,000 QPS
  Can we really handle 50,000 concurrent requests?
  Not quite. Under high concurrency, the more connection processes the web server opens, the more CPU context switching occurs. Pressure on the CPU grows, so request response times stretch far beyond expectations; perhaps you can withstand only 20,000 concurrent requests.
  What do we need to do then?

  A: 1. Design the request interfaces sensibly. How?
    Separate static from dynamic content; static HTML can be served directly by Nginx.
    The bottleneck is in the core back-end interfaces: under high-concurrency write pressure MySQL is a poor fit for the hot data, while Redis offers fast in-memory access.
    2. Restarts and overload protection.
    If you take 20,000-30,000 concurrent requests head-on, the server ends up with no free connections to process, the system falls into an abnormal state, and response times crawl. And when the system responds slowly, some users click all the more frequently, creating a vicious cycle: an "avalanche" in which the whole system collapses, and even restarting the service does not help.
    What can be done?
    Overload protection: if the system detects that it is saturated, reject further requests to protect itself (see the semaphore sketch after this list).
    (1) The simplest way is to filter at the front end.
    (2) Set up overload protection at the CGI entry level and return rejected requests to the client directly.
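
  As a concrete illustration of the overload protection in point 2, here is a minimal Java sketch (not from the original post) that caps in-flight requests with a semaphore and rejects the overflow immediately; the capacity of 500 and all names are illustrative.

  import java.util.concurrent.Semaphore;

  // Entry-level overload protection: cap the number of in-flight requests
  // and reject the overflow right away instead of letting connections pile up.
  public class OverloadGuard {
      private final Semaphore permits = new Semaphore(500); // illustrative capacity

      public String handle(Runnable businessLogic) {
          // tryAcquire() returns false immediately when the system is full,
          // so a rejected request costs almost nothing.
          if (!permits.tryAcquire()) {
              return "system busy, please retry later"; // fast rejection
          }
          try {
              businessLogic.run();
              return "ok";
          } finally {
              permits.release();
          }
      }
  }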

Two:
Data security under high concurrency
  When multiple threads write to the same file, "thread safety" problems appear, and data security under high concurrency follows the same principle. For example, stock may be oversold.
  Solutions:
  The pessimistic-lock idea: while the data is being modified, hold it in a locked state and reject outside modification requests.
  Cons:
  Under high concurrency, some threads may never grab the "lock", and those requests die waiting there. Once enough of them pile up, the available connections are exhausted and the system throws exceptions.
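
  A minimal sketch of the pessimistic-lock idea using MySQL's SELECT ... FOR UPDATE through plain JDBC; the table and column names (stock, item_id, stock_count) are assumptions for illustration.

  import java.sql.Connection;
  import java.sql.PreparedStatement;
  import java.sql.ResultSet;
  import java.sql.SQLException;

  public class PessimisticDeduct {
      // Locks the stock row, then decrements it; concurrent writers block
      // on the row lock until this transaction commits or rolls back.
      public boolean deductStock(Connection conn, long itemId) throws SQLException {
          conn.setAutoCommit(false);
          try {
              try (PreparedStatement lock = conn.prepareStatement(
                      "SELECT stock_count FROM stock WHERE item_id = ? FOR UPDATE")) {
                  lock.setLong(1, itemId);
                  try (ResultSet rs = lock.executeQuery()) {
                      if (!rs.next() || rs.getInt(1) <= 0) {
                          conn.rollback();
                          return false; // sold out
                      }
                  }
              }
              try (PreparedStatement update = conn.prepareStatement(
                      "UPDATE stock SET stock_count = stock_count - 1 WHERE item_id = ?")) {
                  update.setLong(1, itemId);
                  update.executeUpdate();
              }
              conn.commit();
              return true;
          } catch (SQLException e) {
              conn.rollback();
              throw e;
          }
      }
  }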
  The FIFO-queue idea: requests line up in a queue, which avoids permanently starving some requests of the lock.
  Disadvantages:
  High concurrency can make the in-memory queue "explode": even if you give the queue an enormous memory budget, when the system cannot process requests as fast as they keep flooding in, they pile up, responses slow down again, and the system still ends up in an abnormal state.
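
  A minimal sketch of the FIFO-queue idea with a bounded queue from java.util.concurrent; the bound is exactly what keeps the queue from "exploding", and the capacity of 10,000 is an illustrative number.

  import java.util.concurrent.ArrayBlockingQueue;
  import java.util.concurrent.BlockingQueue;

  public class RequestQueue {
      // Bounded FIFO queue: overflow is rejected instead of eating memory.
      private final BlockingQueue<Long> queue = new ArrayBlockingQueue<>(10_000);

      /** Called by web threads; returns false when the queue is full. */
      public boolean enqueue(long userId) {
          return queue.offer(userId); // non-blocking offer rejects overflow
      }

      /** A single consumer thread serializes all stock updates. */
      public void startWorker() {
          Thread worker = new Thread(() -> {
              while (!Thread.currentThread().isInterrupted()) {
                  try {
                      long userId = queue.take();
                      // deduct stock for userId here, one request at a time
                  } catch (InterruptedException e) {
                      Thread.currentThread().interrupt();
                  }
              }
          });
          worker.setDaemon(true);
          worker.start();
      }
  }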
  The optimistic-locking idea:
  Compared with pessimistic locking, optimistic locking lets every request qualify to attempt the operation, but each carries a version number, and only an update whose version number still matches is considered successful.
  Disadvantages:
  It increases the CPU cost of the computation (losing requests must retry), but overall it is a better solution.
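
  A minimal sketch of the version-number idea as a single conditional UPDATE; again the table and column names are assumptions. A request that loses the race updates 0 rows and must re-read and retry, which is the extra CPU cost mentioned above.

  import java.sql.Connection;
  import java.sql.PreparedStatement;
  import java.sql.SQLException;

  public class OptimisticDeduct {
      // Succeeds only if the row still carries the version we read earlier.
      public boolean deductStock(Connection conn, long itemId, int expectedVersion)
              throws SQLException {
          try (PreparedStatement ps = conn.prepareStatement(
                  "UPDATE stock SET stock_count = stock_count - 1, version = version + 1 "
                  + "WHERE item_id = ? AND version = ? AND stock_count > 0")) {
              ps.setLong(1, itemId);
              ps.setInt(2, expectedVersion);
              return ps.executeUpdate() == 1; // 0 rows = another request won
          }
      }
  }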
  The cache-server idea:
  Redis can be deployed in a distributed fashion so that cached data is spread evenly across machines. The first step is data sharding; since Redis is a key-value store, the natural approach is hash sharding: hash the key to get a long value, take it modulo the number of nodes, and map the result to the corresponding node. Reads locate the right node the same way.
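
  A minimal sketch of the hash-modulo sharding just described; the node addresses are illustrative. Note that production Redis setups usually prefer consistent hashing or Redis Cluster slots, so that adding a node does not remap every key.

  import java.util.List;

  public class RedisSharding {
      private final List<String> nodes; // e.g. "10.0.0.1:6379", "10.0.0.2:6379"

      public RedisSharding(List<String> nodes) {
          this.nodes = nodes;
      }

      // Hash the key, take it modulo the node count, and route to that node.
      public String nodeFor(String key) {
          // Math.floorMod avoids a negative index for negative hash codes.
          int idx = Math.floorMod(key.hashCode(), nodes.size());
          return nodes.get(idx);
      }
  }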

Three:
Fake traffic and cheating under high concurrency.
  Reason: in massive flash sales and panic buying, requests are sometimes not really sent by users. Some people "scalp" goods using "request-brushing" tools and the like, which help them send many more requests to the server; there are also more advanced automated request scripts. These tricks inflate their share of the total request volume, so their success rate is far higher than an ordinary user's.
        These are clearly forms of cheating, but we have some countermeasures.
  A: break it down into the following cases:
  1. The same account sends multiple requests at once.
  High concurrency may cause some logical checks to be skipped.
  Solution: at the program entrance, allow each user only one request and filter out the rest, using Redis as an in-memory caching service to write a per-user flag (only one request is allowed to succeed, optionally combined with the optimistic-locking behavior of Redis WATCH); see the SETNX sketch after this list.
  2. Multiple accounts each send multiple requests at once.
  Many early registration flows had no limits, so some specialized "studios" wrote scripts to auto-register large numbers of "zombie accounts" dedicated to all kinds of brushing behavior, as well as to forwarding lotteries, greatly raising their odds of winning.
  Solution: detect the request frequency of each requesting IP; if an IP's request frequency is abnormally high, show it a captcha or block its requests (see the rate-limit sketch after this list).
  3. Multiple accounts send requests from different IPs.
  Some operators own exclusive pools of IPs and run random proxy-IP services that they rent to these "studios" for a fee. Others simply compromise ordinary users' computers and forward IP packets through them, turning ordinary users' machines into proxy-IP exits.
  Solution: these are hard to distinguish, and it is easy to hit innocent users ("friendly fire"). You can screen them out in advance with a high business threshold, or identify them through data mining.
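
  For case 1, a minimal sketch of the "only one request per account" flag using the Jedis client; the key naming and the 60-second expiry are assumptions. SETNX is atomic, so even if the same account fires many requests at once, exactly one of them sees the flag as newly created.

  import redis.clients.jedis.Jedis;

  public class OneShotFilter {
      private final Jedis jedis = new Jedis("localhost", 6379); // illustrative address

      /** Returns true only for this user's first request for this item. */
      public boolean firstRequest(long userId, long itemId) {
          String key = "seckill:done:" + itemId + ":" + userId;
          long created = jedis.setnx(key, "1"); // 1 = we set it, 0 = it existed
          if (created == 1) {
              jedis.expire(key, 60); // clean up after the sale window
              return true;
          }
          return false;
      }
  }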
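
  For case 2, a minimal sketch of per-IP frequency detection with a fixed counting window; the threshold of 100 requests per 10 seconds is an illustrative number.

  import redis.clients.jedis.Jedis;

  public class IpRateCheck {
      private final Jedis jedis = new Jedis("localhost", 6379); // illustrative address

      /** Returns false once an IP exceeds the window's threshold. */
      public boolean allow(String ip) {
          String key = "req:count:" + ip;
          long count = jedis.incr(key);
          if (count == 1) {
              jedis.expire(key, 10); // first hit starts a 10-second window
          }
          // Above the threshold: show a captcha or reject outright.
          return count <= 100;
      }
  }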

My personal summary of concurrency solutions.

A. Application level: read/write separation, caching, queues, clustering, tokens, splitting the system into services, isolation, and system upgrades (scaling, possibly horizontal).

B. Time for space: reduce the time of a single request, so that the system can handle more concurrency per unit of time.

C. Space for time: lengthen the overall business processing time in exchange for capacity headroom in the back-end systems.

Source: www.cnblogs.com/eyesCentre/p/10948377.html