Internet Finance Inventory Control Practice

The term "spike" (seckill, or flash sale), which originated at Taobao, is now familiar to everyone. It is essentially users bidding for a scarce resource, with the whole process completed in seconds. In a distributed environment, the technical demands behind this simple term are extremely high, and architects must design with overall balance in mind in order to cope with it calmly.

When users bid for scarce resources, the entire flow should be optimized end to end: opening the page, repeated refreshing, placing the order, paying, deducting inventory, and shipping. This includes CDN caching of static resources, AJAX asynchronous data loading, data caching, IP throttling, inventory control optimization, and more.

A typical deployment layers these optimizations from front to back: CDN for static resources, asynchronous loading at the page layer, caching and throttling at the application layer, and the database behind them.

There are already many articles on high-concurrency optimization in general, so they are not repeated here. This article focuses on the evolution of inventory control in the WeX cloud finance system, which has gone through roughly three stages.

 

1. Simple database lock

In the initial stage, business volume and concurrency are low, and ordinary database row-level locks are enough to handle inventory control.

There are two approaches; the pseudocode is as follows:

 

Way 1:

begin transaction;
stock = select for update;            // lock the inventory row and read stock
if (stock - delta >= 0) {
    update set stock = stock - delta;
    insert order;
}
commit;

 

Way 2:

result = update set stock = stock - delta where stock - delta >= 0;
if (result > 0) {                     // result = number of rows affected
    insert order;
}
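As a runnable illustration of Way 2 (not from the article; the table and column names `product`, `stock`, and `orders` are my own), the conditional UPDATE pattern can be simulated with SQLite, where `rowcount` plays the role of the rows-affected result:

```python
import sqlite3

# Illustrative sketch: "Way 2" with SQLite. All schema names are assumptions.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE product (id INTEGER PRIMARY KEY, stock INTEGER)")
conn.execute("CREATE TABLE orders (product_id INTEGER, delta INTEGER)")
conn.execute("INSERT INTO product VALUES (1, 10)")

def place_order(conn, product_id, delta):
    """Atomically deduct stock; record the order only if the deduction succeeded."""
    with conn:  # one transaction per order
        cur = conn.execute(
            "UPDATE product SET stock = stock - ? WHERE id = ? AND stock - ? >= 0",
            (delta, product_id, delta),
        )
        if cur.rowcount > 0:   # rows affected > 0 => deduction succeeded
            conn.execute("INSERT INTO orders VALUES (?, ?)", (product_id, delta))
            return True
        return False           # insufficient stock

print(place_order(conn, 1, 3))   # True  (stock 10 -> 7)
print(place_order(conn, 1, 8))   # False (7 - 8 < 0, no row updated)
```

Because the stock check and the deduction happen in a single UPDATE statement, the database's own row lock serializes competing requests without an explicit `select for update`.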

As business volume grew and the brand gained traction, high-quality products quickly became scarce resources, and flash sales began to play out. Users often waited more than ten seconds, even tens of seconds, to purchase, and the user experience dropped sharply. The bottleneck was confirmed to be database row-level lock contention. To restore normal service as quickly as possible, we adopted a temporary solution: using Memcache's atomic incr operation to boost performance rapidly.

 

2. Memcache atomic increment + database lock

The idea is to reduce row-level lock contention: while inventory is plentiful, the count is maintained in Memcache and the order record is written directly, without touching the stock row at all. Once the count exceeds the stock, the counter cache is cleared, the total consumption of the orders placed so far is summed, and the inventory is updated in one pass.

 

The pseudo code is as follows:

// the counter cache is present: fast path via Memcache
if (memcacheClient.get(key) != null) {
     result = memcacheClient.incr(key, delta);
     if (result > stock) {
          // 1. lock the inventory row
          select for update;

          // 2. sum the order consumption recorded so far
          sum = select sum(delta) from order;

          // 3. update the inventory in one pass
          update set stock = stock - sum;

          // 4. clear the counter cache
          memcacheClient.delete(key);
     } else {
          insert order;
     }
}
// the counter cache is gone: fall back to a direct conditional update
else {
     result = update set stock = stock - delta where stock - delta >= 0;
     if (result > 0) {
          insert order;
     }
}
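A minimal sketch of the counter pattern above, with an in-memory counter plus a lock standing in for Memcache's `incr` and plain Python variables standing in for the database rows (none of this is the article's actual code; the class and attribute names are hypothetical):

```python
import threading

# Illustrative sketch: the Memcache-counter pattern with in-memory stand-ins.
class CounterStock:
    def __init__(self, stock):
        self.stock = stock             # stands in for the database stock row
        self.orders = []               # stands in for the order table (delta values)
        self.counter = 0               # stands in for the Memcache counter key
        self.counter_alive = True      # is the counter cache still present?
        self.lock = threading.Lock()   # stands in for incr atomicity / the row lock

    def place_order(self, delta):
        with self.lock:
            if self.counter_alive:
                self.counter += delta               # memcacheClient.incr(key, delta)
                if self.counter > self.stock:       # counter overran the stock
                    self.stock -= sum(self.orders)  # one-time inventory update
                    self.orders = []
                    self.counter_alive = False      # clear the counter cache
                    return False                    # this order does not fit
                self.orders.append(delta)           # fast path: record the order only
                return True
            # counter cache gone: fall back to a direct conditional deduction
            if self.stock - delta >= 0:
                self.stock -= delta
                return True
            return False

cs = CounterStock(5)
print(cs.place_order(2), cs.place_order(3), cs.place_order(1))  # True True False
```

Note that in this single-process model the one lock hides the real risk the article describes next: in production the counter and the stock row live in separate services, so they cannot be updated atomically together.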

After this transformation, performance improved greatly, but careful readers will notice a latent risk in this mode: the cache and the database are independent services across the network, so strong consistency between them cannot be guaranteed, and network jitter or downtime of the cache service may lead to overselling. However, this temporary change was cheap, quickly solved the immediate problem, kept the business running smoothly, and bought time for further evaluation and an overall redesign.

 

3. Database-based inventory partitioning

Under high concurrency with strong consistency requirements, the fundamental solution ultimately revolves around reducing hotspot contention, and there are basically two approaches:

1. Queue mode, which converts an instantaneous peak into relatively low-frequency, streamed contention;

2. Hotspot-splitting mode, which divides one original hot row into multiple partitions and lets requests contend per partition.

Queue mode usually implies a delayed response: after completing payment, the user must wait for the queue consumer to confirm the result. Hotspot splitting, by contrast, keeps responses synchronous, and a queue can still be stacked on top of it later without conflict, so we finally chose the hotspot-splitting scheme as the upgrade path.
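For illustration, queue mode can be sketched as a single consumer draining a queue and deducting stock serially; the names here (`run_queue_mode`, `consumer`) are hypothetical, not from the article:

```python
import queue
import threading

def run_queue_mode(stock, requests):
    """Serialize (req_id, delta) requests through a queue; one consumer deducts stock."""
    q = queue.Queue()
    results = {}

    def consumer():
        nonlocal stock
        while True:
            item = q.get()
            if item is None:            # sentinel: no more requests
                return
            req_id, delta = item
            if stock - delta >= 0:
                stock -= delta
                results[req_id] = True  # in production: notify the user asynchronously
            else:
                results[req_id] = False

    t = threading.Thread(target=consumer)
    t.start()
    for req in requests:
        q.put(req)                      # the instantaneous peak lands here, not on the row
    q.put(None)
    t.join()
    return results, stock

res, left = run_queue_mode(5, [(1, 2), (2, 2), (3, 2)])
# res == {1: True, 2: True, 3: False}; left == 1
```

The single consumer removes lock contention entirely, but request 3 only learns its fate after the queue drains, which is exactly the delayed response discussed above.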

The operation mode works as follows: the total stock is split into several sub-inventories plus a main inventory, and each request is routed to one sub-inventory.
When all the sub-inventories are consumed, consumption moves on to the main inventory. The same consumption logic is reused for every partition, so the code stays concise and performance improves greatly. On this basis, the splitting strategy can be tuned to the characteristics of one's own operation: best sellers can be split into relatively more partitions, but more is not always better, since consolidating residual inventory brings extra overhead.
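The fall-back from sub-inventories to the main inventory can be sketched as follows (an illustrative in-memory model, not the WeX implementation; per-partition locks stand in for database row-level locks):

```python
import random
import threading

class PartitionedStock:
    """Stock split into sub-inventories plus a main inventory, each with its own lock."""

    def __init__(self, main_stock, sub_stocks):
        self.parts = [[s, threading.Lock()] for s in sub_stocks]  # sub-inventories
        self.main = [main_stock, threading.Lock()]                # main inventory

    def _try_deduct(self, part, delta):
        with part[1]:                    # per-partition lock: the unit of contention
            if part[0] - delta >= 0:
                part[0] -= delta
                return True
            return False

    def place_order(self, delta):
        # route the request to one random sub-inventory to spread contention
        if self._try_deduct(random.choice(self.parts), delta):
            return True
        # that sub-inventory is short: fall back to the main inventory
        return self._try_deduct(self.main, delta)
```

For brevity this sketch falls back to the main inventory as soon as the chosen sub-inventory is short, rather than only after all sub-inventories are exhausted as the article describes; the point it illustrates is that N partitions turn one hot lock into N+1 mostly-uncontended ones.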

Finally, on the operations side, best sellers can also be released at staggered times to spread the load.

 

 
