How Java addresses high-concurrency flash sales (seckill)

One: The problem

The first thing to consider is where the high-concurrency bottleneck lies. As anyone who has looked into this knows, it is the database: when a large number of requests operate on the database at once, data gets corrupted, goods are oversold, the system crashes, MySQL deadlocks, and so on.

Two: The ideas

  • 1. Static pages: store the entire page in Redis, and on each subsequent visit read the page value from Redis.
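A minimal cache-aside sketch of this idea; a ConcurrentHashMap stands in for Redis here (a real setup would store the value with SETEX and an expiry), and the class and method names are made up for illustration:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Cache-aside page caching: render the page once, store the HTML,
// and serve every later visit straight from the cache.
class PageCache {
    private final Map<String, String> cache = new ConcurrentHashMap<>(); // stands in for Redis
    private int renderCount = 0;  // how many times we actually rendered

    String getPage(String goodsId) {
        String html = cache.get("page:" + goodsId);
        if (html == null) {                          // cache miss: render and store
            html = render(goodsId);
            cache.put("page:" + goodsId, html);      // Redis would use SETEX with a TTL
        }
        return html;
    }

    private String render(String goodsId) {
        renderCount++;                               // only happens on a miss
        return "<html>goods " + goodsId + "</html>";
    }

    int renders() { return renderCount; }
}
```

The second visit is served entirely from the cache, so the page is rendered only once.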

  • 2. CDN: mainly accelerates the whole site's static resource files, such as images, CSS, and JS (see the Alibaba Cloud tutorials).

  • 3. Math captcha: making the user work out an arithmetic captcha spreads requests out in time, reducing the number of simultaneous requests and the pressure on Redis, MySQL, and the server.
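A sketch of the math-captcha check, assuming a simple server-side addition question; the class name and digit range are illustrative:

```java
import java.util.Random;

// Math captcha: the user must answer a small arithmetic question before the
// seckill request is accepted, which spreads out the burst of requests.
class MathCaptcha {
    private final int a, b;

    MathCaptcha(Random rnd) {
        this.a = rnd.nextInt(10) + 1;   // operands between 1 and 10
        this.b = rnd.nextInt(10) + 1;
    }

    /** Shown to the user, e.g. "3 + 7 = ?" */
    String question() { return a + " + " + b + " = ?"; }

    /** Server-side check; reject the seckill request if the answer is wrong. */
    boolean check(int answer) { return answer == a + b; }
}
```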

  • 4. Stock flag: this is a great optimization. Keep a flag that records whether the Redis inventory is exhausted, and stop reading the inventory from Redis once it runs out. Example: boolean over = map.get(goodsId); if (over) { return Result.error("inventory shortage"); }. When the flag stored under a key is true, an error is returned to the user right away, so after sell-out the many incoming requests run only these two lines of code and never reach the operations below.
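The flag check above, fleshed out as a runnable sketch. The AtomicInteger stands in for the stock counter that idea 6 below keeps in Redis, and the class and method names are illustrative:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Local sold-out flag per goodsId: once inventory runs out, later requests
// are rejected after just the map lookup and never touch Redis or MySQL.
class StockFlag {
    private final ConcurrentHashMap<Long, Boolean> overMap = new ConcurrentHashMap<>();
    // Stands in for the per-goods stock counter kept in Redis
    private final ConcurrentHashMap<Long, AtomicInteger> stock = new ConcurrentHashMap<>();

    void addGoods(long goodsId, int initialStock) {
        overMap.put(goodsId, false);
        stock.put(goodsId, new AtomicInteger(initialStock));
    }

    /** Returns true if the purchase may proceed, false on "inventory shortage". */
    boolean trySeckill(long goodsId) {
        boolean over = overMap.get(goodsId);
        if (over) return false;          // fast path: only these two lines run

        int remaining = stock.get(goodsId).decrementAndGet();  // pre-decrement
        if (remaining < 0) {
            overMap.put(goodsId, true);  // mark sold out; stop hitting the store
            return false;
        }
        return true;
    }
}
```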

  • 5. Generate a dynamic URL: mainly to prevent malicious users from hitting a fixed flash-sale URL for a product in advance. (Security must not be taken lightly; if you skip safeguards like this, the measures below do no good.)
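A sketch of dynamic-URL issuance and verification; a ConcurrentHashMap stands in for the Redis token store, and the class and method names are made up for illustration:

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Per-user dynamic seckill path: hand out a random token when the page is
// requested, then verify it on the actual buy request, so a fixed URL
// cannot be hit in advance.
class DynamicUrl {
    private final Map<String, String> tokens = new ConcurrentHashMap<>(); // stands in for Redis

    /** Called when the user opens the seckill page: returns the random path segment. */
    String issue(long userId, long goodsId) {
        String token = UUID.randomUUID().toString().replace("-", "");
        tokens.put(userId + ":" + goodsId, token);
        return token;
    }

    /** Called on the buy request: reject any path that was not issued to this user. */
    boolean verify(long userId, long goodsId, String token) {
        return token != null && token.equals(tokens.get(userId + ":" + goodsId));
    }
}
```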

  • 6. Redis pre-decrement of stock: when a user flash-buys a product, fetch the current stock quantity from Redis and decrement the stock stored in Redis directly at purchase time. (Rest assured, the Redis and MySQL data stay synchronized: once the order is completed through the MQ queue, the stock count in the MySQL database is decremented by 1.) This avoids reading the inventory from MySQL.

  • 7. MQ message queue: a key piece of message middleware. The producer sends a message to the consumer, which performs the business operation; the producer does not need to wait for the result. That is, after the user clicks the flash-sale button there is no blocking on the result; the client polls for the processing result instead (an asynchronous operation), which avoids repeated requests hammering the database. (The polling query reads directly from Redis, because the result is written into Redis after a successful flash sale, and polling looks it up by key.)
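The asynchronous flow can be sketched with a BlockingQueue standing in for the MQ middleware and a ConcurrentHashMap standing in for the Redis result store; all names here are illustrative:

```java
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;

// Async order processing: the request thread enqueues and returns at once;
// a consumer thread works the queue; the client polls the result store by key.
class SeckillQueue {
    private final BlockingQueue<Long> orders = new LinkedBlockingQueue<>();
    // Stands in for the Redis result store that the browser polls
    private final Map<Long, String> results = new ConcurrentHashMap<>();

    SeckillQueue() {
        Thread consumer = new Thread(() -> {
            try {
                while (true) {
                    Long userId = orders.take();     // blocks until a message arrives
                    // ... create the order, decrement the MySQL stock by 1 ...
                    results.put(userId, "SUCCESS");  // write the result for polling
                }
            } catch (InterruptedException ignored) { }
        });
        consumer.setDaemon(true);
        consumer.start();
    }

    /** Producer side: returns immediately without waiting for the result. */
    void submit(long userId) {
        orders.offer(userId);
    }

    /** Polling side: query the result by key, as the browser would. */
    String pollResult(long userId) {
        return results.get(userId);
    }
}
```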

  • 8. Nginx: a good way to handle high concurrency; beyond that, we add more Tomcat servers, so that incoming user requests can be handed to an idle Tomcat server.

  • 9. Database cluster, database table hashing

  ① Large sites are complex applications, and such applications necessarily use a database. Under heavy access the database bottleneck shows up quickly, and a single database soon cannot keep up with the application, so we need a database cluster or database table hashing.

  ② For database clustering, many databases ship their own solutions: Oracle, Sybase, and others have very good offerings, and MySQL's commonly used Master/Slave replication is a similar scheme. Whichever DB you use, refer to its corresponding solution to implement this.

  ③ With the cluster architecture mentioned above, the cost of scaling out is restricted by the type of DB in use, so we also need to improve the system architecture from the application side; database table hashing is the most commonly used and most effective solution there.

  ④ We split the application by business or functional module, with different modules mapped to different databases or tables, and then hash a page or feature onto smaller databases according to some strategy. For example, the user table can be hashed by user ID, which lifts system performance at low cost and scales well.
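Point ④'s user-ID hash can be sketched as simple modulo routing; the table-name pattern user_N and the table count are assumptions for illustration:

```java
// Route a user row to one of N hashed tables by user ID.
class TableRouter {
    private final int tableCount;

    TableRouter(int tableCount) { this.tableCount = tableCount; }

    /** e.g. userId 10007 with 4 tables maps to "user_3" */
    String tableFor(long userId) {
        int idx = (int) (userId % tableCount);  // simple modulo hash on the ID
        return "user_" + idx;
    }
}
```

Every row for a given user always lands in the same table, so lookups by user ID need to touch only one of the smaller tables.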

  • 10. Load balancing

Load balancing is the go-to solution that large sites adopt to handle high-load access and large numbers of concurrent requests.

  • 11. Reverse proxy

The server the client accesses directly does not provide the service itself; it fetches resources from other servers and then returns the result to the user.

Proxy vs. reverse proxy:

A proxy server accesses resources on our behalf and returns the result, for example using a proxy server to reach an external network. A reverse proxy is when we visit a server normally, and that server in turn calls other servers itself.

With a reverse proxy, the user's request goes to the load-balancing device, which distributes the request to an idle application server for processing; once processing completes, the result is returned to the user back through the load-balancing device. For the user, this back-end distribution is invisible.

Reverse proxy implementation

1) A load-balancing device is needed to distribute user requests, handing each request to an idle server

2) The server returns its response to the load-balancing device

3) The load-balancing device returns the server's response to the user

A proxy server we use deliberately serves us, and it does not need its own domain name; a reverse proxy server is used without our knowledge, and it has its own domain name.
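The load-balancing and reverse-proxy setup described above is commonly done with Nginx sitting in front of several Tomcat instances; a minimal configuration sketch, where the addresses and ports are assumptions:

```nginx
# Pool of Tomcat application servers (idea 8): Nginx distributes requests
# across them (round-robin by default)
upstream tomcat_pool {
    server 127.0.0.1:8080;
    server 127.0.0.1:8081;
}

server {
    listen 80;

    location / {
        # Reverse proxy: the client only ever talks to Nginx;
        # the back-end distribution stays invisible to the user
        proxy_pass http://tomcat_pool;
    }
}
```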


Origin juejin.im/post/5d049f09f265da1b7f297c45