High concurrency and high throughput through asynchrony: sharing some experience

To raise a system's concurrency, beyond optimizing the internals of each module and applying techniques such as partitioning and traffic splitting, there are other approaches worth considering.
Here I will share my experience optimizing the concurrency and throughput of an open platform I worked on.
 
Systems A, B, C, and D invoke each other through distributed calls, and system A is accessed by clients over HTTP.
 
 
Traditional approach: calls between the systems are synchronous and blocking, and the HTTP access to system A is blocking as well.
When user concurrency grows, the typical symptom is that threads in every system sit in a blocked state: thread-related resources are exhausted, system concurrency stops rising, yet CPU utilization stays low.
This is most obvious in system A, because A must wait for all backend calls to complete before it can release the request-handling thread and return a result; indirectly, A also cannot supply enough concurrency to downstream system D.
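The blocking pattern described above can be sketched as follows. This is a hypothetical simulation, not the platform's actual code: each downstream call is stood in for by a `Thread.sleep`, and the names `callDownstream`/`handleRequest` are illustrative. The point is that one request holds a container thread for the *sum* of all downstream latencies.

```java
// Hypothetical sketch: system A calls B, C, and D synchronously.
// Each call blocks the handling thread for the full downstream latency,
// so a single request occupies a thread for the SUM of all latencies
// (~300 ms here), even though the thread is doing no CPU work.
public class BlockingCallDemo {

    // Stand-in for a blocking HTTP/RPC call to a backend system.
    static String callDownstream(String name, long millis) throws InterruptedException {
        Thread.sleep(millis);
        return name + ":ok";
    }

    // The request thread is tied up until every backend call returns.
    public static String handleRequest() throws InterruptedException {
        String b = callDownstream("B", 100);
        String c = callDownstream("C", 100);
        String d = callDownstream("D", 100);
        return b + "," + c + "," + d;
    }

    public static void main(String[] args) throws Exception {
        long start = System.nanoTime();
        String result = handleRequest();
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println(result + " in ~" + elapsedMs + " ms");
    }
}
```

With a fixed-size thread pool in front (as in a typical servlet container), a few hundred concurrent users like this exhaust every worker thread while the CPUs stay nearly idle.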
Asynchronous approach (SEDA): for high-concurrency distributed systems, NIO-based designs were proposed early on to address this problem. They consist of two parts:
One part is asynchrony between the backend systems, typically built on mature frameworks such as Netty, Mina, or xSocket.
The other part is asynchronous servlet support for front-end HTTP access (now standardized in the Servlet specification); the call remains synchronous from the client's point of view. Jetty 6, Jetty 7, Tomcat 6, Tomcat 7, and various commercial servers provide implementations.
The basic idea is non-blocking I/O: separate the accepting threads from the worker threads to improve the throughput of the whole system.
The advantages: the availability of the whole system increases greatly, coupling between systems drops sharply, and throughput is comparatively high. Each system can run at full speed without waiting on the others' resources, making full use of hardware, and the system as a whole scales well.
The disadvantage, to be fair, is that the demands on the CPU are still relatively high.
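A minimal sketch of the asynchronous idea, using `CompletableFuture` as a simple stand-in for an async-servlet or Netty pipeline (the names and latencies are illustrative assumptions, not the platform's code): the three downstream calls are in flight concurrently, so the total wait is roughly the *maximum* latency rather than the sum, and the accepting thread is not the one doing the waiting.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executor;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Minimal sketch of the asynchronous approach: downstream calls B, C, D
// run concurrently on a worker pool, separate from the accepting thread.
// Total latency is ~max(100, 100, 100) = ~100 ms instead of ~300 ms.
public class AsyncCallDemo {

    // Stand-in for a non-blocking call to a backend system.
    static CompletableFuture<String> callDownstream(String name, long millis, Executor pool) {
        return CompletableFuture.supplyAsync(() -> {
            try {
                Thread.sleep(millis); // simulated downstream latency
            } catch (InterruptedException e) {
                throw new RuntimeException(e);
            }
            return name + ":ok";
        }, pool);
    }

    public static String handleRequest() throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(3);
        try {
            // All three calls are started before any result is awaited.
            CompletableFuture<String> b = callDownstream("B", 100, pool);
            CompletableFuture<String> c = callDownstream("C", 100, pool);
            CompletableFuture<String> d = callDownstream("D", 100, pool);
            return b.join() + "," + c.join() + "," + d.join();
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        long start = System.nanoTime();
        String result = handleRequest();
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println(result + " in ~" + elapsedMs + " ms");
    }
}
```

In a real async servlet (Servlet 3.0 `startAsync`) or Netty handler, the accepting thread would be released entirely while the calls are pending, which is what lets a small thread pool serve a large number of concurrent requests.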
                     
When invoking distributed systems, consistency deserves attention alongside availability. Traditional transactions (JTA, database transactions) do not solve this well, and they also have a considerable impact on performance.
For consistency, the industry follows the CAP theorem and BASE principles, using compensating ("reverse") transactions; payment systems, bank transactions, and telecom provisioning systems all take this approach.
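The compensation pattern can be sketched as follows. This is a hypothetical illustration (the step names `debit`/`reserve`/`ship` and the `run` helper are made up for the example): each completed step registers an undo action, and on failure the completed steps are compensated in reverse order, rather than relying on a distributed two-phase commit.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Hypothetical sketch of compensating ("reverse") transactions:
// every successful step pushes an undo action; if a later step fails,
// the undo actions run in reverse order to restore consistency,
// instead of holding a distributed JTA transaction across systems.
public class CompensationDemo {

    // Runs a three-step "order" flow; 'failAtShip' simulates a failure
    // in the last step. Appends each executed action to 'log'.
    public static List<String> run(boolean failAtShip, List<String> log) {
        Deque<Runnable> undo = new ArrayDeque<>();
        try {
            log.add("debit");                        // step 1: charge the buyer
            undo.push(() -> log.add("refund"));
            log.add("reserve");                      // step 2: reserve inventory
            undo.push(() -> log.add("release"));
            if (failAtShip) throw new IllegalStateException("shipping failed");
            log.add("ship");                         // step 3: ship the goods
        } catch (RuntimeException e) {
            // Compensate completed steps in reverse order (LIFO).
            while (!undo.isEmpty()) undo.pop().run();
        }
        return log;
    }

    public static void main(String[] args) {
        System.out.println(run(false, new java.util.ArrayList<>())); // happy path
        System.out.println(run(true, new java.util.ArrayList<>()));  // compensated path
    }
}
```

The trade-off is exactly BASE: the system is briefly inconsistent between the failure and the compensation, but it stays available and converges to a consistent state, which is why payment and provisioning systems favor this over locking all participants in one global transaction.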
http://blog.csdn.net/yangbutao/article/details/7437297
