Configuring nginx keepalive (persistent connections)

Detailed explanation of nginx keepalive and its configuration — Hengzhe Zutianxia Blog (CSDN)

Why keepalive?

Every new TCP connection requires a three-way handshake, which adds latency. To avoid paying this cost on every request, you can enable keepalive (persistent) connections so that a single TCP connection is reused for multiple requests.

Client-side keepalive configuration in nginx

keepalive_disable : disables keepalive for certain browsers with broken keepalive support (by default, msie6).

keepalive_requests : the maximum number of requests a client may send over one keepalive connection. The client does not need to open a new TCP connection for each request; a single connection can carry many requests in turn. The default of 1000 is usually more than enough.

send_timeout : a timeout for transmitting the response to the client, measured between two successive write operations. If the client accepts no data for longer than this, nginx closes the connection.

keepalive_timeout : how long an idle keepalive connection stays open. Once a connection has been idle for longer than this, it is closed.

keepalive_time : the maximum total lifetime of a single keepalive connection (the client cannot be allowed to reuse one TCP connection indefinitely). After this time, the connection is closed once the current request completes.
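The four keepalive directives above (plus send_timeout) go in the `http` or `server` block. A minimal sketch, with all values chosen purely for illustration:

```nginx
http {
    # Disable keepalive for browsers with broken keepalive support
    keepalive_disable  msie6;

    # Serve up to 1000 requests over one client connection
    keepalive_requests 1000;

    # Close a connection that has been idle for more than 65 seconds
    keepalive_timeout  65s;

    # Never let a single connection live longer than one hour in total
    keepalive_time     1h;

    # Close the connection if the client accepts no data for 60 seconds
    send_timeout       60s;
}
```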

nginx keepalive for upstream backend services

Configuration purpose: when nginx proxies requests to backend servers, configuring keepalive lets nginx reuse its upstream connections, improving transfer efficiency.

Configurable parameters in the upstream server list:

keepalive: the maximum number of idle keepalive connections to the upstream that each nginx worker process keeps cached.

keepalive_requests: the maximum number of requests that can be proxied over one upstream connection before it is closed.

keepalive_timeout: how long an idle upstream connection is kept open before being closed.

Parameters configured in the proxying server/location block:

proxy_http_version 1.1; — sets the HTTP version used for requests to the backend service. By default nginx speaks HTTP/1.0 to the upstream, and HTTP/1.0 closes the connection after every request, so the next request must establish a new connection, which wastes time.

proxy_set_header Connection ""; — by default nginx sends `Connection: close` in the requests it forwards, so no keepalive connection is established with the backend server. This directive controls the Connection header nginx sends upstream; setting it to an empty string (or to `keep-alive`) allows the connection to stay open (HTTP/1.1 uses persistent connections by default).
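Putting the upstream and proxy directives together; a minimal sketch, where the upstream name `backend`, the address, and all values are assumptions for illustration:

```nginx
upstream backend {
    server 192.168.44.120:8080;

    # Cache up to 32 idle connections to the upstream per worker process
    keepalive 32;

    # Recycle an upstream connection after 1000 proxied requests
    keepalive_requests 1000;

    # Close an upstream connection that has been idle for 60 seconds
    keepalive_timeout 60s;
}

server {
    listen 80;

    location / {
        proxy_pass http://backend;

        # HTTP/1.1 is required for upstream keepalive
        proxy_http_version 1.1;

        # Clear the Connection header so nginx does not send "close"
        proxy_set_header Connection "";
    }
}
```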

Use the ab stress-testing tool to compare performance before and after tuning nginx's keepalive parameters

Install the ab stress-testing tool

yum install httpd-tools -y

Use the ab tool to stress-test the nginx server

1. First perform a stress test directly on the back-end server

-n: number of requests

-c: number of concurrency

ab -n 10000 -c 30 http://192.168.44.120/

Transfer rate: throughput, i.e. bytes transferred per second

Requests per second (QPS): the number of requests completed per second

2. Run an ab stress test against the nginx proxy server (without keepalive configuration)

The results show that, behind the nginx proxy, throughput and QPS drop significantly. nginx must forward every request to the backend server, and without a keepalive configuration it re-establishes the upstream connection for each request, so performance decreases.

3. Run an ab stress test against the nginx proxy server (with keepalive configuration)

Add the configuration
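The screenshot of the added configuration is not available in this copy; in essence this step adds the upstream keepalive setup, for example (the upstream name, address, and count are illustrative):

```nginx
upstream backend {
    server 192.168.44.120:8080;
    keepalive 32;                        # cache idle upstream connections
}

server {
    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;          # needed for upstream keepalive
        proxy_set_header Connection "";  # do not send "Connection: close"
    }
}
```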

 

Observed effect:

Concurrency and throughput improve, and response latency drops, showing that the keepalive configuration effectively optimizes request handling.

 

Use the ab stress-testing tool to compare nginx keepalive performance before and after tuning (Tomcat backend)

Testing Tomcat with ab over a direct connection: performance is mediocre.

Testing Tomcat through an nginx proxy without keepalive: performance is slightly lower than the direct connection.

Testing Tomcat through an nginx proxy with keepalive: performance is greatly improved compared to the direct connection.

Conclusion: putting an nginx proxy in front of Tomcat is not only for static/dynamic separation and load balancing, but also for keepalive performance optimization to increase concurrency.

Note:

There are some special scenarios, such as a client browser that does not support keepalive, or a Tomcat interface that is exposed and accessed directly; in those cases you can put nginx in front and add proxy keepalive for tuning. Since browsers generally do support keepalive, there is usually no need to use nginx solely for Tomcat keepalive performance tuning.


Reprinted from: blog.csdn.net/h2728677716/article/details/132483708