nginx 499 error handling and nginx configuration parameters

Background

Recently, as part of the group's push to reduce costs and increase efficiency (saving CI and staging machines), our project was containerized. During the migration the access link changed, which started producing 499 errors. The analysis and solution follow.

Access link: domain name -> ELB (intranet access) -> OpenResty (stg environment; supports custom Lua scripts) -> ELB (provides a fixed service IP) -> Pod (container cluster)

Handling the 499

nginx reported 499 because the backend response timed out and nginx actively closed the connection. (499 is nginx's non-standard status for "client closed request"; in a proxy chain like the one above, the layer in front times out and drops the connection, and the nginx behind it logs 499.) For example: nginx's default proxy timeout is 60s, so if the backend interface takes 62s to return its result, nginx disconnects at the 60s mark. Two fixes:
1. Increase the proxy_xxx_timeout directives, as shown below
2. Optimize the interface so it responds faster

server {
    location / {
        # proxy_pass http://backend;   # point this at your own upstream
        proxy_connect_timeout 200;
        proxy_send_timeout 200;
        proxy_read_timeout 200;
    }
}

proxy_connect_timeout: the timeout for establishing a connection with the backend server, in seconds; default 60s (per the official docs, this one usually cannot exceed 75 seconds).

proxy_read_timeout: the timeout for waiting for a response from the backend server, in seconds; default 60s.

proxy_send_timeout: the timeout for sending a request to the backend server, in seconds; default 60s.

These three parameters bound the time nginx, acting as a reverse proxy, may spend communicating with the backend, so the proxy is never blocked by a backend response and left unable to answer the client. For example, if the backend server fails or the network drops, the response may be delayed or never arrive; the proxy then waits out the timeout and returns an error response.

Note: choose timeouts to fit your application's network and operating environment. Too short, and requests fail spuriously; too long, and the front-end site responds slowly. It is not the case that larger values are better: adjust to the actual situation and test to ensure reliability and performance, and for interfaces that genuinely respond slowly, the interface itself must be optimized.

Knowledge expansion: nginx configuration parameters

server {
    location / {
        proxy_ignore_client_abort on;
        proxy_connect_timeout 200;
        proxy_send_timeout 200;
        proxy_read_timeout 200;
    }
}

proxy_ignore_client_abort implements "ignore client abort". By default, nginx ties the client connection to the connection it opens to the backend, so if the client disconnects midway, the backend connection is usually closed early as well. The backend program may continue executing for a while, but it can no longer send its response back through the proxy, which can surface as an error on the proxy server.

proxy_ignore_client_abort off; (default) a client disconnect aborts the backend response.
proxy_ignore_client_abort on; a client disconnect does not abort the response: the proxy ignores the client's termination and lets the backend server keep responding until it finishes.

http {
    fastcgi_connect_timeout 200;
    fastcgi_send_timeout 200;
    fastcgi_read_timeout 200;
}

If FastCGI is part of your technology stack, consider tuning the fastcgi_xxx_timeout parameters above as well.

If you haven't come across it, briefly: FastCGI (Fast Common Gateway Interface) is an interface protocol for web applications that introduced a new way to serve dynamic content. It differs from the earlier CGI approach, where a new process is started for every request, wasting resources and hurting performance. FastCGI reuses processes: each application stays resident in memory waiting for the next request, which greatly improves performance.
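For concreteness, here is a minimal sketch of nginx speaking FastCGI, assuming a PHP-FPM backend listening on 127.0.0.1:9000 (the address and the PHP setup are assumptions, not part of the original setup):

location ~ \.php$ {
    include fastcgi_params;                # standard FastCGI parameter set shipped with nginx
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass 127.0.0.1:9000;           # assumed PHP-FPM address
    fastcgi_connect_timeout 200;           # the timeouts discussed above
    fastcgi_send_timeout 200;
    fastcgi_read_timeout 200;
}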

The Tomcat container has no built-in FastCGI support. In a standard Spring Boot application you would need to install and configure a third-party FastCGI library or connector to speak the FastCGI protocol; connectors such as mod_jk are used to link Tomcat to a front-end web server, though mod_jk actually speaks the AJP protocol rather than FastCGI.
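In practice, a Spring Boot/Tomcat service is usually fronted over plain HTTP rather than FastCGI. A minimal sketch, assuming the application listens on 127.0.0.1:8080 (an assumed address):

location / {
    proxy_pass http://127.0.0.1:8080;     # assumed Spring Boot/Tomcat address
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
}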

http {
    client_max_body_size 500m;      # cap on the client request body; larger bodies get 413
    client_body_buffer_size 10m;    # memory buffer for request bodies
    # client_body_temp_path /xx;    # disk directory used when the body exceeds client_body_buffer_size
    keepalive_timeout  65s;
    client_header_timeout 600s;
    client_body_timeout  600s;
    proxy_connect_timeout 200s;
    proxy_send_timeout    200s;
    proxy_read_timeout    200s;
    proxy_max_temp_file_size 0;

    limit_req_zone $binary_remote_addr zone=burstLimit:10m rate=40r/s;

    gzip on;
    gzip_min_length 1k;
    gzip_buffers 4 16k;
    gzip_types text/plain text/css application/json application/x-javascript application/javascript text/javascript application/xml;
    gzip_comp_level 3;
    gzip_disable "MSIE [1-6].(?!.*SV1)";
    gzip_vary off;
}

client_max_body_size: limits the size of the client request body; a request whose body exceeds the value gets 413 Request Entity Too Large. It can be configured in the http, server, and location contexts. The limit applies to a single request body, not to the combined size of all requests.
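Because the directive is valid in all three contexts, a conservative server-wide default can be raised for a specific route. A sketch (the path and sizes are examples):

server {
    client_max_body_size 10m;         # default for this virtual host
    location /upload/ {
        client_max_body_size 500m;    # allow large bodies only on the upload endpoint
    }
}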

client_body_buffer_size: the size of the memory buffer allocated for reading the client request body. A body smaller than this value is held directly in memory; one larger than client_body_buffer_size but smaller than client_max_body_size is written to a temporary file first. The temporary directory can be specified with the client_body_temp_path directive.

client_body_temp_path: when the request body is larger than client_body_buffer_size and smaller than client_max_body_size, this directive sets the disk path where it is stored. Note that the directory must be writable by the nginx worker user (fix ownership/permissions with chmod or chown), otherwise "Permission denied" is reported.
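The directive also accepts optional subdirectory levels to spread temporary files across directories. A sketch (the path is an example; it must exist and be writable by the worker user):

http {
    # e.g. chown nginx:nginx /data/nginx/client_body beforehand
    client_body_temp_path /data/nginx/client_body 1 2;
}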

keepalive_timeout: with HTTP/1.1, keepalive reuses the TCP connection, saving the three-way handshake (and four-way teardown) needed to set up a connection. keepalive_timeout keeps the connection open for the specified time after each request completes, avoiding the cost of constantly creating and closing connections. The default is 75s; setting it to 0 disables keepalive connections.
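keepalive_timeout covers connections from clients. If you also want to reuse connections to backend servers, the upstream keepalive directive handles that side; a sketch (the upstream name and pool size are examples):

upstream backend {
    server 192.168.0.1:80;
    keepalive 32;                        # idle keepalive connections cached per worker
}

server {
    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;          # upstream keepalive requires HTTP/1.1
        proxy_set_header Connection "";  # clear Connection so nginx does not send "close"
    }
}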

client_header_timeout: the timeout for the client to send a complete request header to the server; default 60s. If the complete header does not arrive within the specified time, nginx returns 408 (Request Time-out).

client_body_timeout: the timeout for the client to send the request body; default 60s. Note that this does not bound the transmission of the entire body. Per the official documentation: "The timeout is set only for a period between two successive read operations, not for the transmission of the whole request body. If a client does not transmit anything within this time, the request is terminated with the 408 (Request Time-out) error."

proxy_max_temp_file_size: the maximum size of the temporary file used to buffer an upstream response on disk. Setting it to 0 disables disk buffering. Once the limit is reached, nginx passes the remaining data to the client synchronously as it arrives from the proxied server instead of buffering it to disk.
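This directive works together with nginx's response-buffering settings; a sketch of the related directives (buffer sizes are examples):

location / {
    proxy_pass http://backend;       # assumed upstream
    proxy_buffering on;              # buffer upstream responses (the default)
    proxy_buffers 8 16k;             # in-memory buffers per connection
    proxy_max_temp_file_size 0;      # never spill buffered responses to disk
}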

limit_req_zone: nginx's rate-limiting module. $binary_remote_addr is the key being limited: $remote_addr stores the IP as text, taking 7 to 15 bytes, while $binary_remote_addr is the binary form, taking only 4 bytes (for IPv4). zone=name:size allocates a shared-memory zone with the given name and size to store request-frequency state; rate=40r/s allows at most 40 requests per second from the same IP.
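The zone definition alone does nothing; it takes effect where it is referenced with limit_req in a server or location block. A sketch (the path and burst value are examples):

server {
    location /api/ {
        # apply the zone defined above; queue up to 20 excess requests without
        # extra delay, and reject anything beyond that (503 by default)
        limit_req zone=burstLimit burst=20 nodelay;
        proxy_pass http://backend;   # assumed upstream
    }
}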

gzip on: enables gzip compression of responses (off disables it).

gzip_min_length 1k: the size threshold for compression; only responses larger than this are compressed, smaller ones are left alone.

gzip_buffers 4 16k: the compression buffers: how many blocks are allocated in memory, and how large each block is.

gzip_types text/plain text/css application/json application/x-javascript application/javascript text/javascript application/xml: which MIME types to compress.

gzip_comp_level 3: the compression level (1-9: the higher the level, the smaller the output and the more CPU time spent).

gzip_disable "MSIE [1-6].(?!.*SV1)": a regular expression matched against the User-Agent header; matching clients are not served gzip. This one disables gzip for IE6 and below, since those versions do not support it properly.

gzip_vary off: whether to send the Vary: Accept-Encoding response header.

Adding a health check to nginx

The open-source version of nginx has no built-in health check for load-balanced backend nodes; the third-party nginx_upstream_check_module adds active health checking of backend nodes.

upstream name {
    server 192.168.0.1:80;
    server 192.168.0.2:80;
    check interval=3000 rise=2 fall=5 timeout=1000 type=http;
}

For the load-balanced upstream name, each node is probed every 3 seconds (interval=3000 ms) with a 1-second timeout. After 2 consecutive successful checks (rise=2) a real server is marked up; after 5 consecutive failures (fall=5) it is marked down.
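If the module is compiled in, it can also customize the probe request and expose a status page; a sketch (directive values are examples):

upstream name {
    server 192.168.0.1:80;
    server 192.168.0.2:80;
    check interval=3000 rise=2 fall=5 timeout=1000 type=http;
    check_http_send "HEAD / HTTP/1.0\r\n\r\n";      # the probe request to send
    check_http_expect_alive http_2xx http_3xx;      # which responses count as healthy
}

server {
    location /status {
        check_status;                                # status page provided by the module
    }
}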

Various load balancing software and hardware

ELB, HAProxy, nginx, OpenResty, F5, LVS

HAProxy: load balancing software, similar in role to nginx. It supports load balancing for both TCP and HTTP, with a very rich feature set: around 8 load balancing algorithms, several of them especially practical in HTTP mode; an excellent built-in monitoring page that shows system state in real time; and powerful ACL support (access control lists: requests matching an ACL rule are load-balanced to the backend server pool designated for that rule, while non-matching requests can be rejected outright or handed to other servers). Health checking is built in, whereas nginx needs an extra module for it. The drawbacks are cumbersome configuration files and the expertise required for deployment and maintenance.

ELB: the managed software load balancing service provided by AWS. It is practical for cloud deployments and fairly easy to use; the drawback is limited customization, which may fall short of some enterprise-level needs.

F5: a hardware load balancer with excellent performance, handling up to millions of requests per second. It is also very expensive, from hundreds of thousands to a million yuan, and suits large enterprises with especially heavy traffic.

LVS: software load balancing developed by Dr. Zhang Wensong of Alibaba, one of China's earliest free software projects. It is simple to configure and withstands heavy load well. Disadvantages: no regex-based processing, no dynamic/static separation, and a relatively strong dependence on the network environment.

OpenResty: an extension of nginx with a high degree of freedom. It bundles practical Lua support and a large number of third-party modules, making it easy to build highly concurrent, scalable web applications and gateways, turning nginx from a simple load balancer into a powerful gateway and general-purpose web application platform.

References

nginx official documentation: Module ngx_http_core_module

Detailed explanation of the Nginx keepalive_timeout configuration

nginx rate limiting, method one: limit_req & limit_req_zone to limit the processing rate

HAProxy ACL rules and practical cases

Comparing Nginx and HAProxy: the advantages and disadvantages of each

nginx load balancing configuration with automatic failover on downtime

Origin blog.csdn.net/u013565163/article/details/131271824