Refactoring routines: improving system concurrency

To improve a system's concurrency, three techniques are worth summarizing: asynchronous processing, caching, and parallelism.

Asynchronous

For example, we once added a synchronous Kafka write to a piece of business logic, and tp99 instantly rose by about 30 milliseconds, which made the whole monitoring curve look very conspicuous. The fix is to make that write asynchronous. In an older system this first requires sorting out the business logic: if the business does not care about a call's return value, the call can simply be made asynchronous. If the business does care about the return value, more thought is needed. In order logic, for example, many downstream services need the main order ID to connect with the rest of the flow, so it seems that writing the main order must stay synchronous. But since what downstream really needs is only the OrderId, we can build a dedicated OrderId generator: a single, fast ID service keeps latency low and still strings the whole business flow together, while the heavy order write itself happens asynchronously.
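The idea can be sketched as follows. This is a minimal, assumed illustration (the class and method names are hypothetical, and `System.out.println` stands in for the real Kafka send): the cheap ID generation stays synchronous so downstream calls can proceed, while the slow write moves to a background pool.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicLong;

public class AsyncOrderWrite {
    // Stand-in for a real distributed ID generator (e.g. a snowflake-style service).
    private static final AtomicLong SEQ = new AtomicLong();

    static long nextOrderId() {
        return SEQ.incrementAndGet();
    }

    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(4);

        // Synchronous part: only the ID, so downstream services can be called immediately.
        long orderId = nextOrderId();

        // Asynchronous part: the slow persistence / messaging step (here just a print).
        CompletableFuture<Void> write = CompletableFuture.runAsync(
                () -> System.out.println("persisted order " + orderId), pool);

        write.join();            // only for this demo; real code would not block here
        pool.shutdown();
        System.out.println("orderId=" + orderId);
    }
}
```

The caller's latency now depends only on ID generation, not on the broker round trip, which is exactly why the tp99 penalty disappears.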

Cache

To shorten response time along the whole logical chain, data should live closer to where it is accessed, so responses come back faster. Once a cache exists, however, there is a temptation to push as much data as possible into it without proper sorting and analysis. The cached data set then keeps growing, and the memory cost keeps rising. When promotion traffic surges, a cache that has not been reasonably sized and scaled will cause unexpected problems. For example, our marketing activities are cached in Redis. Each activity aggregates a lot of information and has an expiration date, which makes it a good fit for caching. But one day the product team changed its operating strategy: for Double 11 they wanted to warm activities up in advance, so the window during which users could receive coupons became much longer. If this data is simply dumped into Redis, then by the time activity information reaches its peak, the original Redis capacity is no longer enough and performance suffers for a period of time.
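One way to keep memory bounded, sketched below under assumed requirements (this is not the article's actual code): cap the number of cached entries and evict the least recently used, so a longer warm-up window cannot grow memory without limit. `LinkedHashMap` in access order gives LRU semantics almost for free.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Size-bounded LRU cache: eviction instead of unbounded growth.
public class BoundedCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public BoundedCache(int maxEntries) {
        super(16, 0.75f, true);          // accessOrder=true -> LRU iteration order
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries;      // evict the least recently used entry
    }

    public static void main(String[] args) {
        BoundedCache<String, String> cache = new BoundedCache<>(2);
        cache.put("act-1", "coupon A");
        cache.put("act-2", "coupon B");
        cache.put("act-3", "coupon C");  // evicts act-1, the least recently used
        System.out.println(cache.containsKey("act-1")); // false
        System.out.println(cache.size());               // 2
    }
}
```

In production the same principle applies to Redis itself: set a `maxmemory` limit with an eviction policy, and estimate capacity before a promotion instead of discovering the limit at peak traffic.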

Parallel

For downstream services, we create a thread pool and process calls asynchronously, so the pool must be sized sensibly. Combing through the system's code, we changed many synchronous for and while loops into a Future-based model, improving overall parallelism and achieving a measurable performance gain.
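The loop-to-Future conversion looks roughly like this sketch (`callDownstream` is a hypothetical stand-in for a slow remote call): instead of issuing the calls one after another in a for loop, fan them out over a bounded, dedicated pool and join the results.

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.Collectors;

public class ParallelCalls {
    // Stand-in for a slow downstream service call (~50 ms each).
    static int callDownstream(int id) {
        try {
            Thread.sleep(50);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return id * 2;
    }

    public static void main(String[] args) {
        // Size the pool explicitly for the downstream's capacity;
        // avoid unbounded cached pools.
        ExecutorService pool = Executors.newFixedThreadPool(4);

        // Fan out: the four 50 ms calls overlap instead of running back to back.
        List<CompletableFuture<Integer>> futures = List.of(1, 2, 3, 4).stream()
                .map(id -> CompletableFuture.supplyAsync(() -> callDownstream(id), pool))
                .collect(Collectors.toList());

        // Join: collect results once all calls have completed.
        List<Integer> results = futures.stream()
                .map(CompletableFuture::join)
                .collect(Collectors.toList());

        System.out.println(results);     // [2, 4, 6, 8]
        pool.shutdown();
    }
}
```

With four calls and a pool of four threads, wall-clock time drops from roughly 200 ms to roughly 50 ms; a dedicated pool also isolates a slow downstream from the rest of the system.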
