Let's run a load test on the order transaction with 200 threads and watch the application server's resources:
The load test results:
Next, the database resources under 1000 threads:
The response time also increases:
Placing an order generally involves these steps:
1. Validation: check that the product exists, the user is legitimate, and the purchase quantity is valid.
2. Create the order and deduct inventory.
3. Persist the order and increase the product's sales count.
4. Return the result to the front end.
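The four steps above can be sketched as a single method. This is only an illustration: the class, the in-memory maps standing in for the database, and the validation limits are all hypothetical, not the article's real services.

```java
import java.util.*;

public class OrderFlowSketch {
    // hypothetical in-memory stand-ins for the database tables
    static Map<Integer, Integer> itemStock = new HashMap<>(Map.of(1, 100));
    static Map<Integer, Integer> sales = new HashMap<>();
    static Set<Integer> validUsers = new HashSet<>(Set.of(10));
    static List<String> orders = new ArrayList<>();

    static boolean placeOrder(Integer itemId, Integer userId, int amount) {
        // 1. validate the product, the user, and the purchase quantity
        if (!itemStock.containsKey(itemId) || !validUsers.contains(userId)
                || amount <= 0) {
            return false;
        }
        // 2. deduct inventory for the order
        int stock = itemStock.get(itemId);
        if (stock < amount) {
            return false;
        }
        itemStock.put(itemId, stock - amount);
        // 3. persist the order and add to the product's sales count
        orders.add("order:" + itemId + ":" + userId + ":" + amount);
        sales.merge(itemId, amount, Integer::sum);
        // 4. return the result to the front end
        return true;
    }

    public static void main(String[] args) {
        System.out.println(placeOrder(1, 10, 2)); // true
        System.out.println(itemStock.get(1));     // 98
    }
}
```

Each of steps 1-3 hits the database in the real system, which is why the article counts at least six database operations per order.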
The steps above touch the database at least six times, and the inventory deduction is an UPDATE by id, which takes a row lock, so performance needs to be optimized.
Transaction validation can be optimized in two areas:
- User risk-control strategy: cache the user model.
- Activity validation strategy: introduce an activity publishing flow, cache the activity model, and provide an emergency offline capability.
For example, we put product information in Redis (user information is handled the same way):
@Override
public ItemModel getItemByIdInCache(Integer id) {
    ItemModel itemModel = (ItemModel) redisTemplate.opsForValue().get("item_validate_" + id);
    if (itemModel == null) {
        // cache miss: load from the database, then cache with a TTL.
        // Setting the value and its expiry in one call avoids leaving
        // a key without expiry if a separate expire() call were to fail.
        itemModel = this.getItemById(id);
        redisTemplate.opsForValue().set("item_validate_" + id, itemModel, 10, TimeUnit.MINUTES);
    }
    return itemModel;
}
Load test again:
The improvement is not dramatic because server bandwidth is the bottleneck, but there is still a gain.
For the emergency offline capability, we can expose an interface that deletes the activity's key from Redis.
Optimizing the inventory row lock consists of three parts:
- Deduct inventory in the cache
- Synchronize the database asynchronously
- Guarantee eventual consistency of the inventory in the database
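The three parts above can be sketched with in-memory stand-ins: an AtomicInteger playing the role of the Redis counter and a BlockingQueue playing the role of the message queue. The class and names are illustrative, not the article's code.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

public class StockSketch {
    static AtomicInteger cachedStock = new AtomicInteger(10); // stand-in for Redis
    static AtomicInteger dbStock = new AtomicInteger(10);     // stand-in for the database
    static BlockingQueue<Integer> mq = new LinkedBlockingQueue<>(); // stand-in for MQ

    // 1. deduct from the cache first; the atomic decrement plays the
    //    role of Redis INCRBY with a negative delta
    static boolean decreaseStock(int amount) {
        int result = cachedStock.addAndGet(-amount);
        if (result < 0) {
            cachedStock.addAndGet(amount); // not enough stock: roll back
            return false;
        }
        mq.offer(amount); // 2. send the deduction message asynchronously
        return true;
    }

    // 3. the consumer applies the deduction to the database, so the
    //    database eventually converges to the cached value
    static void consumeOne() throws InterruptedException {
        dbStock.addAndGet(-mq.take());
    }

    public static void main(String[] args) throws InterruptedException {
        decreaseStock(4);
        consumeOne();
        System.out.println(cachedStock.get() + " " + dbStock.get()); // 6 6
    }
}
```

The point of the split is that the hot path touches only the in-memory counter; the row-locked database UPDATE happens later, off the critical path.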
How does the stock get into the cache in the first place? Promotional products normally go through a publishing (listing) step, so we can write an interface that synchronizes the data when the promotion is published:
@Override
public void publicPromo(Integer promoId) {
    // look up the promotion by its id
    PromoDO promoDo = promoDOMapper.selectByPrimaryKey(promoId);
    if (promoDo.getItemId() == null || promoDo.getItemId().intValue() == 0) {
        return;
    }
    ItemModel itemModel = itemService.getItemById(promoDo.getItemId());
    // synchronize the stock into Redis
    redisTemplate.opsForValue().set("promo_item_stock_" + itemModel.getId(), itemModel.getStock());
}
The controller layer then calls this method to synchronize the stock to Redis.
The next step is deducting inventory; the idea is to decrement the stock directly in Redis:
@Override
@Transactional
public boolean decreaseStock(Integer itemId, Integer amount) throws BusinessException {
    //int affectedRow = itemStockDOMapper.decreaseStock(itemId,amount); // old database path
    // decrement the cached stock; increment() with a negative delta is atomic
    long result = redisTemplate.opsForValue().increment("promo_item_stock_" + itemId, amount * -1);
    // result >= 0 means the deduction succeeded; a negative result means
    // there was not enough stock
    return result >= 0;
}
Of course, this version cannot be used in production, because the database is now inconsistent with the cache, so it needs further work:
Use an asynchronous message queue to send the deduction message to a consumer, which performs the actual database deduction. Here we use RocketMQ, a high-performance, high-concurrency distributed messaging middleware whose typical use cases are distributed transactions and asynchronous decoupling.
Installing RocketMQ is simple: wget the release, unpack it, and start it up; installation and testing are covered in the official quick start at http://rocketmq.apache.org/docs/quick-start/
Start the name server: nohup sh bin/mqnamesrv &
Start the broker: nohup sh bin/mqbroker -n localhost:9876 &
A few pitfalls to watch out for:
The freshly downloaded scripts ask for a lot of memory at startup, so you need to lower the JVM heap settings in:
bin/runserver.sh
bin/runbroker.sh
The exact sizes depend on your machine.
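For reference, a minimal sketch of the change, assuming the stock scripts' multi-gigabyte defaults; the smaller sizes below are only an example to fit a small machine:

```shell
# bin/runserver.sh and bin/runbroker.sh set a multi-gigabyte heap by
# default, e.g. JAVA_OPT="${JAVA_OPT} -server -Xms4g -Xmx4g -Xmn2g".
# Replace those sizes with something your machine can afford:
JAVA_OPT="${JAVA_OPT} -server -Xms256m -Xmx256m -Xmn128m"
```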
RocketMQ also ships many admin commands; take a look at mqadmin. For example, create a new topic:
./mqadmin updateTopic -n localhost:9876 -t stock -c DefaultCluster
This command may fail with an error:
You need to adjust the java.ext.dirs setting in bin/tools.sh; you can locate your JDK's ext directory with find / -name '*ext*' | grep jdk
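A sketch of the edit, assuming a JDK 8 layout; the ext path below is only an example, substitute whatever the find command returned on your machine:

```shell
# bin/tools.sh builds JAVA_OPT with -Djava.ext.dirs; append your JDK's
# ext directory so mqadmin can find the required classes
BASE_DIR=${BASE_DIR:-.}
JAVA_OPT="${JAVA_OPT} -Djava.ext.dirs=${BASE_DIR}/lib:/usr/lib/jvm/java-1.8.0/jre/lib/ext"
```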
Another pitfall: if RocketMQ runs on a cloud server, the broker may register its intranet IP; you can check with sh ./mqbroker -m.
If so, you need to modify conf/broker.conf, and the start command must pass that config file with -c:
nohup sh bin/mqbroker -n localhost:9876 -c conf/broker.conf &
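A sketch of the relevant conf/broker.conf entries; the cluster and broker names are the defaults, and the IP is a placeholder for your server's public address:

```properties
# conf/broker.conf
brokerClusterName = DefaultCluster
brokerName = broker-a
# brokerIP1 forces the broker to advertise this IP to clients instead
# of the intranet address it would otherwise auto-detect
brokerIP1 = 47.107.x.x
```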
Next, the actual code:
<dependency>
    <groupId>org.apache.rocketmq</groupId>
    <artifactId>rocketmq-client</artifactId>
    <version>4.3.0</version>
</dependency>
mq.nameserver.addr=47.107.*.*:9876
mq.topicname=stock
@Component
public class MqProducer {

    private DefaultMQProducer producer;

    @Value("${mq.nameserver.addr}")
    private String nameAddr;

    @Value("${mq.topicname}")
    private String topicName;

    @PostConstruct
    public void init() throws MQClientException {
        // initialize the MQ producer
        producer = new DefaultMQProducer("producer_group");
        producer.setNamesrvAddr(nameAddr);
        producer.start();
    }

    // send the stock-deduction message (the send call itself is synchronous)
    public boolean asyncReduceStock(Integer itemId, Integer amount) {
        Map<String, Object> map = new HashMap<>();
        map.put("itemId", itemId);
        map.put("amount", amount);
        Message message = new Message(topicName, "increase",
                JSON.toJSON(map).toString().getBytes(Charset.forName("UTF-8")));
        try {
            producer.send(message);
        } catch (MQClientException | RemotingException | MQBrokerException | InterruptedException e) {
            e.printStackTrace();
            return false;
        }
        return true;
    }
}
@Component
public class MqConsumer {

    private DefaultMQPushConsumer consumer;

    @Value("${mq.nameserver.addr}")
    private String nameAddr;

    @Value("${mq.topicname}")
    private String topicName;

    @Autowired
    private ItemStockDOMapper itemStockDOMapper;

    @PostConstruct
    public void init() throws MQClientException {
        consumer = new DefaultMQPushConsumer("stock_consumer_group");
        consumer.setNamesrvAddr(nameAddr);
        consumer.subscribe(topicName, "*");
        consumer.registerMessageListener(new MessageListenerConcurrently() {
            @Override
            public ConsumeConcurrentlyStatus consumeMessage(List<MessageExt> msgs, ConsumeConcurrentlyContext context) {
                // perform the real stock deduction in the database;
                // iterate over the whole batch, not just the first message
                for (MessageExt msg : msgs) {
                    String jsonString = new String(msg.getBody());
                    Map<String, Object> map = JSON.parseObject(jsonString, Map.class);
                    Integer itemId = (Integer) map.get("itemId");
                    Integer amount = (Integer) map.get("amount");
                    itemStockDOMapper.decreaseStock(itemId, amount);
                }
                return ConsumeConcurrentlyStatus.CONSUME_SUCCESS;
            }
        });
        consumer.start();
    }
}
Business Layer:
@Autowired
private MqProducer mqProducer;
@Override
@Transactional
public boolean decreaseStock(Integer itemId, Integer amount) throws BusinessException {
    long result = redisTemplate.opsForValue().increment("promo_item_stock_" + itemId, amount * -1);
    if (result >= 0) {
        // cache deduction succeeded: notify the consumer to update the database
        boolean mqResult = mqProducer.asyncReduceStock(itemId, amount);
        if (!mqResult) {
            // sending failed: roll back the Redis deduction
            redisTemplate.opsForValue().increment("promo_item_stock_" + itemId, amount);
            return false;
        }
        return true;
    } else {
        // deduction failed (stock would go negative): roll back
        redisTemplate.opsForValue().increment("promo_item_stock_" + itemId, amount);
        return false;
    }
}
This achieves the synchronization, but several problems remain:
1. What if sending the asynchronous message fails?
2. What if the database deduction on the consumer side fails?
3. What if the order fails and the stock cannot be correctly replenished?
These problems will be addressed later.