1. System optimization under high concurrency: overview and standalone stress test

There are many possible optimizations under high concurrency. Let's summarize them and then test each one:

Read operations:

First, standalone optimizations:

1. Database: optimize SQL statements; avoid fuzzy (LIKE '%...') queries, implicit multi-table comma joins, and so on.

2. Database: add indexes to speed up read queries.

3. Database: raise the maximum number of socket connections, enlarge the caches, and so on.

4. Container: increase Tomcat's wait-queue length, worker-thread count, and so on.

5. Container: use keep-alive (long) connections with the client.
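Item 4 above can be done declaratively. A sketch of the relevant Spring Boot 2.3+ `application.properties` keys (the values here are illustrative placeholders, not recommendations; they must be tuned from stress-test results):

```properties
# Illustrative values only; tune from stress-test results.
# Wait-queue (accept queue) length once all worker threads are busy.
server.tomcat.accept-count=1000
# Maximum worker threads.
server.tomcat.threads.max=800
# Threads kept warm even when idle.
server.tomcat.threads.min-spare=100
# Maximum connections Tomcat will accept and process at one time.
server.tomcat.max-connections=10000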

Then move to a cluster:

6. Nginx reverse proxy with load balancing.

7. Keep-alive (long) connections between Nginx and the upstream servers.

8. Add a cache layer with Redis.

9. Add an Nginx Lua read cache.

10. CDN acceleration for static resources.

11. Use a crawler to pre-render pages as static files, then serve them through the CDN as well.
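Items 6 and 7 together look roughly like the nginx fragment below (upstream addresses are hypothetical; keep-alive to upstreams requires HTTP/1.1 and a cleared Connection header):

```nginx
upstream app_servers {
    # Hypothetical backend addresses
    server 192.168.1.101:8080;
    server 192.168.1.102:8080;
    # Idle keep-alive connections cached per worker toward the upstreams
    keepalive 64;
}

server {
    listen 80;
    location / {
        proxy_pass http://app_servers;
        # Keep-alive to upstreams needs HTTP/1.1
        proxy_http_version 1.1;
        # Clear the "close" header so connections get reused
        proxy_set_header Connection "";
    }
}
```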

Next, optimizations for the flash-sale (spike) transaction:

12. Database: batch writes with MyBatis to reduce time spent on SQL pre-compilation.

13. Introduce MQ and use transactional messages for inventory deduction and similar operations.

14. Issue a limited number of spike tokens, sized by stock on hand, to control traffic and cut it off once goods sell out.

15. Use a thread pool; let its queue, sized against the downstream congestion window, buffer and release traffic bursts.

16. Add a CAPTCHA.

17. Use Guava's RateLimiter, a token-bucket style limiter, for rate limiting.
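Guava's RateLimiter (item 17) is built on the token-bucket idea: permits accumulate at a fixed rate, and a request proceeds only if it can take one. A minimal plain-Java sketch of that idea (this `TokenBucket` class is my own illustration, not Guava's API):

```java
// Minimal token-bucket sketch: tokens refill at a fixed rate up to a cap;
// a request is admitted only if it can take a token.
public class TokenBucket {
    private final long capacity;        // max tokens the bucket can hold
    private final double refillPerNano; // tokens added per nanosecond
    private double tokens;              // current token count
    private long lastRefill;            // timestamp of the last refill

    public TokenBucket(long capacity, double refillPerSecond) {
        this.capacity = capacity;
        this.refillPerNano = refillPerSecond / 1_000_000_000.0;
        this.tokens = capacity;         // start full: allows an initial burst
        this.lastRefill = System.nanoTime();
    }

    // Try to take one token; returns false when the bucket is empty.
    public synchronized boolean tryAcquire() {
        long now = System.nanoTime();
        tokens = Math.min(capacity, tokens + (now - lastRefill) * refillPerNano);
        lastRefill = now;
        if (tokens >= 1.0) {
            tokens -= 1.0;
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        TokenBucket limiter = new TokenBucket(5, 5.0); // burst of 5, 5 permits/s
        int allowed = 0;
        for (int i = 0; i < 100; i++) {
            if (limiter.tryAcquire()) allowed++;
        }
        // Under a rapid burst, roughly only the initial 5 permits get through.
        System.out.println("allowed=" + allowed);
    }
}
```

With Guava itself, `RateLimiter.create(permitsPerSecond)` plus `tryAcquire()` gives the same effect, with request smoothing built in.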

These are the scenarios we will stress-test, to find the most effective optimizations.


Bandwidth is not a concern for now, so I set up a virtual machine locally. Test configuration:

mysql version:

Data volume:

jdk:

 

Stress-testing tool: JMeter

Tests below.


Stand-alone:


Baseline (query on a non-primary-key column with no index):

1000 threads, 20 times

I had to stop it partway through; the results are really terrible.


Add an ordinary index (on this table only; the other million-row table is left unindexed):

Since the primary key auto-increments, I didn't test a unique index separately.

1000 threads, 20 times

Still had to stop midway, but some improvement is already visible.


With both tables indexed:

1000 threads, 20 times

TPS shot up immediately, showing how important indexing is.
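The indexing steps above amount to statements like these (table and column names are hypothetical, standing in for the test tables):

```sql
-- Add an ordinary (secondary) index on the column used in the WHERE clause.
ALTER TABLE item_stock ADD INDEX idx_item_id (item_id);

-- EXPLAIN shows whether the query now hits the index
-- (the `key` column is non-NULL when it does).
EXPLAIN SELECT * FROM item_stock WHERE item_id = 42;
```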


Query by primary key (only one table can use its primary key here; the other still uses its ordinary index):

It turns out the primary key is used less efficiently than expected. Normally a primary-key lookup should beat an ordinary index, so let's verify with a single-table query:

Primary-key query:

Ordinary-index query:

Indeed, the ordinary index is faster here. My guess: since id is the primary key, InnoDB builds a clustered index on it, whose leaf pages hold the full rows. With a large data set, each key value may map to one or more wide data pages, so a lookup by id ends up touching more data pages. An ordinary (secondary) index is much narrower and can be answered directly from the index itself, so it comes out faster than the primary key. Of course, this question needs further study.


Increase the database's socket connection limit, caches, etc.:

Two things worth noting:

When changing innodb_data_file_path, you need to delete the original data files first, or move them somewhere else.

Also delete ib_logfile0 and ib_logfile1 under /var/lib/mysql.

The connection timeout settings were changed slightly. Of course, these values still need to be tuned through testing.
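The database tweaks described above would sit in my.cnf roughly like this (values are illustrative placeholders, not the actual test configuration, and must be tuned per machine):

```ini
[mysqld]
# Allow more concurrent client connections.
max_connections = 1000
# Bigger InnoDB buffer pool (cache for data and index pages).
innodb_buffer_pool_size = 1G
# If this is changed after initialization, remove or relocate the old
# ibdata files and delete ib_logfile0/ib_logfile1 under /var/lib/mysql first.
innodb_data_file_path = ibdata1:1G:autoextend
```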


Optimizing a single container:

Thread counts at runtime, without and with the extra parameters:

 


Now add keep-alive (long) connections:

import org.apache.catalina.connector.Connector;
import org.apache.coyote.http11.Http11NioProtocol;
import org.springframework.boot.web.embedded.tomcat.TomcatConnectorCustomizer;
import org.springframework.boot.web.embedded.tomcat.TomcatServletWebServerFactory;
import org.springframework.boot.web.server.ConfigurableWebServerFactory;
import org.springframework.boot.web.server.WebServerFactoryCustomizer;
import org.springframework.stereotype.Component;

/**
 * When the Spring container has no Tomcat web server factory bean of its own,
 * this bean is loaded into the container to customize the embedded Tomcat.
 */
@Component
public class WebServerConfiguration implements WebServerFactoryCustomizer<ConfigurableWebServerFactory> {
    @Override
    public void customize(ConfigurableWebServerFactory factory) {
        // Customize the Tomcat connector through the corresponding factory class
        ((TomcatServletWebServerFactory) factory).addConnectorCustomizers(new TomcatConnectorCustomizer() {
            @Override
            public void customize(Connector connector) {
                Http11NioProtocol protocol = (Http11NioProtocol) connector.getProtocolHandler();
                // Keep idle keep-alive connections open for 30 seconds
                protocol.setKeepAliveTimeout(30000);
                // Close the connection after a client has sent 400 requests over it
                protocol.setMaxKeepAliveRequests(400);
            }
        });
    }
}

Surprisingly, TPS went down instead, so Http11NioProtocol deserves more study later.



Origin blog.csdn.net/haozi_rou/article/details/105477951