E-commerce examples: business concurrency, website concurrency, and solutions

1. How do you prevent multiple users from buying up the same product at the same time, i.e. how do you handle highly concurrent orders for a single product?

http://ask.csdn.net/questions/159351

http://www.cnblogs.com/lingmaozhijia/articles/1339222.html

Recently I have been building a flash-sale ("panic buying") system, and the headache is that under multi-user high concurrency the inventory keeps going wrong. The root cause is that the query and the insert are not atomic when multiple users place orders at the same time. My flow is roughly: SELECT ... FOR UPDATE the product row, copy the product information into an array, and only after the user's balance has been successfully deducted, UPDATE the product table to subtract the ordered quantity. The database is MySQL, and the product table is locked during the query; but as the table grows the query takes longer, and in the gap between reading the stock into a variable and writing the update back, other users are also placing orders, so the stock oversells. My question: under high concurrency, how do I ensure that only one user at a time can place an order for the same product? My current idea is to line requests up in a blocking queue.

 

1. Update directly without checking first:

update products set num = num - 1 where id = ? and num > 0

If the update affects a row, the purchase succeeded. The stock check and the decrement happen atomically in one statement, so there is no window for overselling. (Note the condition should be num > 0, not num > 1, or the last unit could never be sold.)
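A minimal sketch of this atomic-decrement approach, using SQLite in place of MySQL so it is self-contained; the `products` table and column names are invented for the demo, but the statement is the same idea as the MySQL UPDATE above.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, num INTEGER)")
conn.execute("INSERT INTO products VALUES (1, 3)")  # only 3 units in stock
conn.commit()

def try_buy(conn, product_id):
    # The stock check and the decrement happen in ONE statement, so two
    # buyers can never both "see" the last unit and oversell it.
    cur = conn.execute(
        "UPDATE products SET num = num - 1 WHERE id = ? AND num > 0",
        (product_id,),
    )
    conn.commit()
    return cur.rowcount == 1  # exactly one row changed => we got one unit

results = [try_buy(conn, 1) for _ in range(5)]  # 5 buyers, 3 units
print(results)  # three successes, then failures once stock hits zero
print(conn.execute("SELECT num FROM products WHERE id = 1").fetchone()[0])
```

The same pattern works unchanged against MySQL/InnoDB, where the conditional UPDATE takes a row lock for the duration of the statement.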

 

2. Split the flash sale into two steps. Step one is placing the order (the actual "grab"): if the order is placed successfully, the quantity is decremented and the table updated immediately. Step two is payment; a background program automatically deletes orders that are never paid and restores the quantity.

Done in two steps, the concurrency problem is avoided. If you try to do everything in one step, a race window exists no matter how short the interval is.

 

3. Add row locks in the database (e.g. InnoDB's SELECT ... FOR UPDATE) so that concurrent writers to the same stock row are serialized.

 

4. This works a bit like buying a plane ticket: placing an order decrements the quantity immediately, and if the order is not paid within half an hour it is cancelled and the quantity restored.
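The order-then-expire flow from points 2 and 4 can be sketched as follows. This is a toy SQLite version with invented table names; a real system would run the cleanup as a scheduled job (cron or similar) against MySQL.

```python
import sqlite3, time

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE products (id INTEGER PRIMARY KEY, num INTEGER);
CREATE TABLE orders (id INTEGER PRIMARY KEY AUTOINCREMENT,
                     product_id INTEGER, paid INTEGER DEFAULT 0,
                     created_at REAL);
INSERT INTO products VALUES (1, 10);
""")

def place_order(conn, product_id, now):
    # Step one: decrement stock atomically, then record the order.
    cur = conn.execute(
        "UPDATE products SET num = num - 1 WHERE id = ? AND num > 0",
        (product_id,))
    if cur.rowcount != 1:
        return None  # sold out
    cur = conn.execute(
        "INSERT INTO orders (product_id, created_at) VALUES (?, ?)",
        (product_id, now))
    conn.commit()
    return cur.lastrowid

def cancel_expired(conn, now, timeout=1800):  # e.g. half an hour
    # Step two (background job): restore stock for unpaid, expired orders.
    rows = conn.execute(
        "SELECT id, product_id FROM orders WHERE paid = 0 AND created_at < ?",
        (now - timeout,)).fetchall()
    for order_id, product_id in rows:
        conn.execute("DELETE FROM orders WHERE id = ?", (order_id,))
        conn.execute("UPDATE products SET num = num + 1 WHERE id = ?",
                     (product_id,))
    conn.commit()
    return len(rows)

t0 = time.time()
place_order(conn, 1, t0)                        # never paid
place_order(conn, 1, t0)                        # never paid
cancelled = cancel_expired(conn, now=t0 + 3600) # run the job "an hour later"
print(cancelled)  # both unpaid orders cancelled, stock restored
```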

 

5. A queue combined with table locking also works.

 

2. A company is building a production-scheduling management system for my unit on a B/S platform (MySQL 4.0 + Java). During testing I found that the developers had no concurrency control over the data at all: when two people modify the same record at the same time, both can submit, and the last submitter's data silently overwrites the earlier change.
    When I objected, the developer told me the concurrency problem could be solved with the permission management built into the production system!
    My view: because a B/S architecture does not hold a persistent database connection the way C/S does, the database's own concurrency control is weakened, so the front-end program must make the appropriate check before submitting. Restricting users' write access to the same record through permissions fundamentally cannot prevent the concurrency problem.

 

1. You can solve it in the UPDATE statement itself: in the WHERE clause, match not only the primary key but also the original value of the field being updated. For example, given the table:

ID, aNAME, age
100, anders, 40
101, arg, 56

To change arg's name to args:

update tablename set aname = 'args' where ID = 101 and aname = 'arg'

If someone else has already changed aname to something different, this UPDATE matches zero rows and the program knows the record was modified concurrently; the overwrite described above cannot happen.
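A small demonstration of that suggestion, with SQLite standing in for the MySQL table sketched above (same invented columns):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (ID INTEGER PRIMARY KEY, aname TEXT, age INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?)",
                 [(100, "anders", 40), (101, "arg", 56)])
conn.commit()

def rename(conn, row_id, old, new):
    # Include the *original* value in the WHERE clause: if another session
    # already changed it, this statement matches zero rows and we detect
    # the conflict instead of silently overwriting.
    cur = conn.execute(
        "UPDATE t SET aname = ? WHERE ID = ? AND aname = ?",
        (new, row_id, old))
    conn.commit()
    return cur.rowcount == 1

ok1 = rename(conn, 101, "arg", "args")  # first editor: succeeds
ok2 = rename(conn, 101, "arg", "argh")  # second editor, stale value: fails
print(ok1, ok2)
```

The failed second update is exactly the signal the application needs to tell the user "this record was changed by someone else; reload and try again."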

 

 3. Website Concurrency

 High concurrency comes up constantly, but it is actually the last thing to worry about. Why? Because it is largely illusory: very few websites genuinely need these techniques, and many of them are things you are already using. Being aware of the problem is enough; there is no need to stare at it all the time. Only a tiny number of websites ever truly reach high concurrency.

To make a simple summary, from the perspective of low cost, high performance and high scalability, there are the following solutions: 
  1. HTML static 
  2. Image server separation 
  3. Database clustering and library table hashing 
  4. Cache 
  5. Mirror 
  6. Load balancing; a typical strategy is to build a squid cluster on top of layer-4 switching (software or hardware). Many large websites, including search engines, use this idea. The architecture is low-cost, high-performance, and highly scalable, and nodes can be added to or removed from it at any time. 

The following is another expert's summary, partly overlapping the above: the usual performance bottlenecks under high concurrency and the common countermeasures. 

1. Database bottleneck: MySQL, on the order of 100 concurrent connections 

2. Apache, on the order of 1500 concurrent connections 

3. Program execution efficiency 

 

1. When the database is the bottleneck, the usual remedies are master-slave replication, clustering, and adding memcached (a memory object caching system).

For example, the architecture-sharing talk on Mobile Home's new system ( http://www.slideshare.net/Fenng/ss-1218991?from=ss_embed ) optimizes at the cache layer. The network architecture described at http://www.bopor.com/?p=652 solves it by adding databases and splitting tables and databases (sharding). Sina adds an MQ (message queue) to distribute data. Other sites use a key-value database, which can be understood as a persistent cache. 

2. Apache bottleneck. 

Add servers and load balancing, e.g. Sina's F5. 

Because the number of Apache processes is limited, essentially static content (css/js/images) is moved out to separate servers. 

A successful domestic case is Tom's CDN. The rise of nginx and of squid as a reverse proxy stems from the same reason. 


3. PHP execution efficiency. There are several causes. 

1) PHP itself is slow. 

Proven solutions include Zend Optimizer and Facebook's HipHop. Taobao compiles PHP code into modules to solve the efficiency problem. 

2) Database query efficiency, e.g. queries involving ORDER BY, GROUP BY and the like. 

This really comes down to database design.
The solution is to build the right indexes, and to add memcached (a distributed cache system).

For LIKE-style fuzzy queries, use a dedicated search service such as Sphinx (an SQL full-text search engine) or Lucene (a Java-based full-text retrieval library). 

Programmers should know how to use EXPLAIN to analyze SQL statements. 
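To illustrate the workflow (check the plan, add an index, check again), here is a self-contained sketch using SQLite's EXPLAIN QUERY PLAN; MySQL's EXPLAIN output looks different, but the habit is the same. The table and index names are invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER)")

# Without an index on user_id, the plan is a full table scan.
plan_before = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE user_id = 7").fetchall()

conn.execute("CREATE INDEX idx_user ON orders(user_id)")

# With the index, the plan becomes an index search.
plan_after = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE user_id = 7").fetchall()

print(plan_before[0][-1])  # e.g. a SCAN over the whole table
print(plan_after[0][-1])   # e.g. a SEARCH using idx_user
```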

 

4. Large-scale concurrency of web systems - e-commerce spikes and snap-ups

http://www.csdn.net/article/2014-11-28/2822858

2. Pessimistic locking idea

There are many ways to approach thread safety; let's start from the direction of "pessimistic locking".

Pessimistic locking means that while data is being modified, a lock is held to exclude all outside modification requests; anyone else must wait for the lock.
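A minimal in-process sketch of the pessimistic idea using a mutex; in MySQL/InnoDB the equivalent would be SELECT ... FOR UPDATE inside a transaction. The stock numbers are made up for the demo.

```python
import threading

stock = 100
stock_lock = threading.Lock()  # the "pessimistic lock"

def buy():
    global stock
    with stock_lock:  # exclusive access: every other buyer blocks here
        if stock > 0:
            stock -= 1

# 150 buyers race for 100 units.
threads = [threading.Thread(target=buy) for _ in range(150)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(stock)  # never negative: the lock serialized all the decrements
```

The check-then-decrement is safe only because both happen while holding the lock; that serialization is also exactly the waiting problem the next paragraph describes.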

 

Although this does solve the thread-safety problem, remember that our scenario is "high concurrency": there will be a flood of such modification requests, each waiting for the lock, and some threads may never get a chance to grab it, so those requests die waiting. Meanwhile the pile-up of waiting requests pushes up the system's average response time, the pool of available connections is exhausted, and the system falls into an abnormal state.

3. FIFO queue idea

Fine, then let's tweak the scenario slightly: put the requests straight into a FIFO (First In, First Out) queue, so no request is starved of the lock forever. Seeing this, doesn't it feel a bit like forcing multi-threading back into single-threading?

 

Now the lock problem is solved: all requests are processed in first-in, first-out order. But there is a new problem. Under high concurrency there are so many requests that the queue's memory may "explode" in an instant, and the system falls into an abnormal state again. You could design a huge in-memory queue, but the speed at which the system drains the queue cannot keep up with the rate at which requests pour in: the backlog grows without bound, the average response time of the web system still degrades badly, and the system still fails.
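One common mitigation for the memory blow-up described above (my addition, not stated in the text) is to bound the queue and fail fast once it is full, rather than letting the backlog grow without limit:

```python
import queue

requests = queue.Queue(maxsize=3)  # deliberately tiny bound for the demo

accepted, rejected = 0, 0
for i in range(10):  # 10 requests pour in at once
    try:
        requests.put_nowait(i)  # non-blocking enqueue
        accepted += 1
    except queue.Full:
        rejected += 1  # reject immediately instead of exploding memory

print(accepted, rejected)
```

Rejected requests get an instant "sold out, try again" response, which is usually acceptable in a flash sale and keeps the queue (and response times) bounded.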

4. Optimistic locking idea

Now we can discuss "optimistic locking", a looser mechanism than pessimistic locking, usually implemented with a version number. Every request is eligible to attempt the modification and reads the data's current version number; only an update whose version number still matches succeeds, and the others simply return a snap-up failure. This removes the queue problem, at the cost of some extra CPU work on retries, but on the whole it is the better solution.
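A sketch of version-number optimistic locking, with the version check and the write folded into one atomic UPDATE. SQLite demo with an invented schema; the same statement shape works in MySQL.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE stock (id INTEGER PRIMARY KEY, num INTEGER, version INTEGER)")
conn.execute("INSERT INTO stock VALUES (1, 5, 0)")
conn.commit()

# Both buyers read the same version number before either writes.
_, version = conn.execute(
    "SELECT num, version FROM stock WHERE id = 1").fetchone()

def try_update(conn, seen_version):
    # The version check and the write happen in ONE atomic statement;
    # whoever commits first bumps the version, so the other matches 0 rows
    # and must retry or report a snap-up failure.
    cur = conn.execute(
        "UPDATE stock SET num = num - 1, version = version + 1 "
        "WHERE id = 1 AND version = ?", (seen_version,))
    conn.commit()
    return cur.rowcount == 1

a_won = try_update(conn, version)  # succeeds, version becomes 1
b_won = try_update(conn, version)  # stale version: matches nothing, fails
print(a_won, b_won)
```

Note that nobody waits on a lock here; the loser simply gets a failure and can retry, which is what makes this approach friendlier under high concurrency.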

 

Many pieces of software and services support "optimistic locking"; for example, Redis's WATCH command is one implementation. With it we can keep the data safe.

1) A caveat about optimistic locking. For example: two requests (A and B) update the same record concurrently. A passes the version check, but its transaction has not yet ended (there is more logic afterwards); meanwhile B also passes the version check and its transaction commits; A then commits later, so A's write clobbers B's. In other words, if the version check and the write are not atomic (e.g. done as a single UPDATE ... WHERE version = ? statement), optimistic locking alone cannot prevent this kind of overwrite, often loosely called a dirty read, strictly a lost update.

 

 
