2. System optimization under high concurrency: pressure-testing dynamic-resource reads on a cluster

Next, the cluster itself is pressure-tested. The machine configuration is the same as in the single-machine test: one database + redis + mq, two application servers, and one nginx in front.
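The cluster layout above roughly corresponds to an nginx configuration like the following. This is a minimal sketch; the upstream addresses and ports are illustrative assumptions, not the author's actual settings:

```nginx
# One nginx in front of two identical application servers.
upstream app_cluster {
    server 192.168.1.101:8080;   # application server 1 (hypothetical address)
    server 192.168.1.102:8080;   # application server 2 (hypothetical address)
}

server {
    listen 80;

    location / {
        proxy_pass http://app_cluster;   # round-robin across the two apps by default
    }
}
```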

Without any optimization:

It can be seen that both the database and nginx are under heavy pressure, and there are many failed connections.


Optimize the database (same machine as before):

TPS rises, and HTTP 500 errors become fewer.
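The database tuning referred to here usually means raising the connection limit and buffer sizes. A hedged `my.cnf` sketch; the values are illustrative, not taken from the original test:

```ini
[mysqld]
max_connections         = 1000   # allow more concurrent client connections
innodb_buffer_pool_size = 4G     # cache more data and index pages in memory
```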


Then optimize Tomcat and enable long connections (keep-alive), same machine:

As in the single-machine test, performance actually drops after enabling long connections; this will be investigated later.
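Enabling long connections in Tomcat is usually done on the HTTP connector in `server.xml`. A sketch with illustrative values (the attribute names are standard Tomcat connector attributes, but the numbers are assumptions):

```xml
<!-- Tomcat HTTP connector with keep-alive enabled (illustrative values) -->
<Connector port="8080"
           protocol="org.apache.coyote.http11.Http11NioProtocol"
           maxThreads="800"
           keepAliveTimeout="30000"
           maxKeepAliveRequests="10000" />
```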


nginx establishes long connections to the upstream application servers:

Performance improves, and the relative overhead at the application layer is reduced.
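For nginx to keep upstream connections alive, the upstream block needs a keepalive pool and the proxy must speak HTTP/1.1 without the default `Connection: close` header. A minimal sketch with hypothetical upstream addresses:

```nginx
upstream app_cluster {
    server 192.168.1.101:8080;   # hypothetical application servers
    server 192.168.1.102:8080;
    keepalive 64;                # idle keep-alive connections cached per worker

}

server {
    listen 80;
    location / {
        proxy_pass http://app_cluster;
        proxy_http_version 1.1;          # upstream keep-alive requires HTTP/1.1
        proxy_set_header Connection "";  # clear the default "Connection: close"
    }
}
```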


Next, add redis:

Since 1000 threads were now a bit too few and the run finished before saturating the system, the load was increased to 3000 threads, 20 loops:

TPS improves again. This screenshot shows the peak; the average is around two thousand. The current bottleneck appears to be nginx, which will be studied later.


Use guava cache as a local cache, with redis as the second-level cache:

Change the application server configuration to 4 cores / 8 GB:

From the results, the in-memory cache actually performs worse than redis here. The pressure falls entirely on the application server rather than on redis, so although the network round trip is avoided, the application server's own capacity becomes the limit. Whether, and how, to use an in-memory cache is therefore a trade-off.
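The read path of the two-level structure can be sketched as follows. This is a minimal illustration, not the author's code: the local tier stands in for a guava `Cache` and the redis tier is simulated with a plain map (in the real setup it would be a redis client call such as Jedis `get`/`setex`):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Sketch of the local-cache + redis two-level read path described above.
class TwoLevelCache {
    private final Map<String, String> local = new ConcurrentHashMap<>(); // stands in for guava cache
    private final Map<String, String> redis = new ConcurrentHashMap<>(); // stands in for redis
    private final Function<String, String> dbLoader;                     // stands in for the DB query

    TwoLevelCache(Function<String, String> dbLoader) {
        this.dbLoader = dbLoader;
    }

    String get(String key) {
        String v = local.get(key);            // 1. local memory: fastest, but per-JVM
        if (v != null) return v;
        v = redis.get(key);                   // 2. shared redis tier: one network hop
        if (v == null) {
            v = dbLoader.apply(key);          // 3. fall through to the database
            if (v != null) redis.put(key, v); //    backfill redis
        }
        if (v != null) local.put(key, v);     // backfill the local tier
        return v;
    }
}
```

The trade-off discussed above shows up directly in this structure: every hit served from `local` costs no network round trip but consumes the application server's own memory and CPU.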


nginx shared dict:

Using nginx's shared-memory dictionary, performance improves a lot. The drawback is the same as before: the cache is hard to update, and all the pressure lands on the nginx server.
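The shared-dict approach caches responses in memory shared across nginx workers via OpenResty's `lua_shared_dict`. A minimal sketch; the dict name, size, TTL, and the internal `/backend` location (which would proxy to the application) are assumptions:

```nginx
# Requires OpenResty / ngx_lua.
http {
    lua_shared_dict item_cache 128m;   # shared across all nginx workers

    server {
        listen 80;
        location /item {
            content_by_lua_block {
                local cache = ngx.shared.item_cache
                local key = ngx.var.arg_id or "default"
                local val = cache:get(key)
                if not val then
                    -- miss: fetch from the backend app and cache for 60s
                    local res = ngx.location.capture("/backend" .. ngx.var.request_uri)
                    val = res.body
                    cache:set(key, val, 60)
                end
                ngx.say(val)
            }
        }
    }
}
```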


nginx uses lua to call redis:

Because of the network round trip, performance is not as good as reading from nginx's own shared dict, but cache management is easier.
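Calling redis from nginx is typically done with the lua-resty-redis client inside a `content_by_lua_block`. A minimal sketch; the host, port, and key are assumptions, and real code would fall back to the application server on a miss:

```nginx
location /item {
    content_by_lua_block {
        local redis = require "resty.redis"
        local red = redis:new()
        red:set_timeout(1000)                 -- 1s connect/read timeout
        local ok, err = red:connect("127.0.0.1", 6379)
        if not ok then
            ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
        end
        local val, err = red:get(ngx.var.arg_id or "default")
        if val == ngx.null then
            ngx.exit(ngx.HTTP_NOT_FOUND)      -- miss: real code would query the app/DB
        end
        red:set_keepalive(10000, 100)         -- return the connection to the pool
        ngx.say(val)
    }
}
```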


This concludes the pressure test of dynamic-resource reads. Static resources, page staticization, and CDN acceleration are straightforward and are not measured here.

To summarize: on a single machine, indexed queries are much faster. In theory, primary key index > unique index > non-unique index > no index, but in the tests the primary key index was slower than the non-unique index; this needs further study.


Increasing the number of socket connections, enabling the database cache, and similar tweaks improve performance slightly, but enabling long connections actually reduced it in the tests; Tomcat's Http11NioProtocol needs further study here.


Under concurrency, optimizing the database configuration raises TPS substantially; here too, TPS drops after enabling long connections.

Offloading the database with redis improves performance further. With a multi-level local-cache + redis structure, the application server itself becomes the stress point, so the machine's specs directly affect speed.

With nginx's shared dict, reads are served straight from nginx's own cache without touching the application server or database, so speed improves greatly. Because of the limits of an in-memory cache it can be combined with redis, but the extra network round trip means this is not as fast as reading directly inside nginx, and it also puts the nginx server's hardware to the test.



Origin blog.csdn.net/haozi_rou/article/details/105488134