Summary of common Nginx questions

Why is Nginx performance so high?

  • Mainly because of its event-handling mechanism: an asynchronous, non-blocking, event-driven model. Using epoll, Nginx creates event objects and registers them with the event driver; when an event occurs, the driver notifies Nginx to handle it, so user requests are processed asynchronously.
  • An event queue is also maintained, and the events in the queue are processed one by one. This lets Nginx handle a large number of connection requests efficiently while keeping the system load low.
  • If one I/O event blocks, it does not affect the processing of other I/O events, because the event-driven model lets Nginx multiplex many events at once. When an I/O event is not ready, Nginx removes it from the epoll listening queue and places it in a waiting queue until epoll next reports the event as ready, at which point it is processed again. This keeps Nginx stable and reliable under high concurrency.
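The readiness-notification idea behind epoll can be sketched with Python's selectors module (which selects epoll on Linux); the socket pair below is only a stand-in for a real client connection:

```python
# Minimal sketch of readiness-based event handling, the model epoll provides:
# register file descriptors, then block only until one of them is ready.
import selectors
import socket

sel = selectors.DefaultSelector()          # uses epoll on Linux
a, b = socket.socketpair()                 # stand-in for a client connection
b.setblocking(False)                       # non-blocking, as in Nginx workers
sel.register(b, selectors.EVENT_READ)      # "tell me when b is readable"

a.send(b"request")                         # data arrives on the peer socket
events = sel.select(timeout=1)             # wakes up because b is now readable
for key, _ in events:
    data = key.fileobj.recv(1024)          # guaranteed not to block here
print(data)
```

While waiting in `select`, a single thread can watch thousands of descriptors; work is done only for the ones that are actually ready.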

What are forward proxy and reverse proxy?

  1. A forward proxy acts on behalf of the client. The client's request specifies the address and port of the target server; when Nginx receives the request, it forwards the data directly to that server, and returns the response to the client once it arrives. A forward proxy can hide the actual client: from the server's point of view, Nginx is the client.
  2. A reverse proxy acts on behalf of the server. Client requests (which do not specify a backend address or port) are all received by Nginx, which distributes them to backend servers according to configured rules (load balancing). After a backend business server finishes processing, Nginx receives its response and sends it back to the client. A reverse proxy can do load balancing, and can also hide the existence and characteristics of the origin servers, which improves security.
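As a sketch, a minimal reverse-proxy configuration looks like this (the upstream name and backend addresses are invented for illustration):

```nginx
# Nginx receives all client requests, picks a backend according to the
# load-balancing rule, forwards the request, and relays the response.
upstream backend_pool {                   # hypothetical pool of business servers
    server 192.168.0.11:8080;
    server 192.168.0.12:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://backend_pool;
    }
}
```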

How does Nginx handle requests?

  • After a worker process accepts a request, it first parses the request. For a simple GET request, say to port 80, Nginx consults its configuration: the listen and server_name directives select the matching server block, then the request URI is matched against the location blocks inside that server block; the matched location determines the actual resource to serve.
server {                                # start of the first server block: one virtual host
    listen       80;                    # port to serve on, default 80
    server_name  localhost;             # domain/host name to serve
    location / {                        # start of the first location block
        root   html;                    # site root, relative to the Nginx install directory
        index  index.html index.htm;    # default index files, separated by spaces
    }                                   # end of the first location block
}

What is the role of location?

  • The location directive selects how a request is handled according to its URI: Nginx matches the requested URL against the configured location patterns and, when a match succeeds, executes the corresponding block.
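For illustration, a few common location match forms (the paths and backend address here are invented):

```nginx
server {
    listen 80;
    location = /ping {                    # "=": exact match, checked first
        return 200 "pong\n";
    }
    location ^~ /static/ {                # "^~": prefix match that skips regex checks
        root /var/www;
    }
    location ~* \.(jpg|png)$ {            # "~*": case-insensitive regex match
        expires 7d;
    }
    location / {                          # plain prefix: matches everything else
        proxy_pass http://127.0.0.1:8080;
    }
}
```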

Nginx rate limiting

  1. Limit the request rate

The ngx_http_limit_req_module module in Nginx is used to limit the request rate:

# Define the limiting key: one request per client per minute; excess requests are dropped
limit_req_zone $binary_remote_addr zone=one:10m rate=1r/m;

# Apply the limit
server {
    location /seckill.html {
        limit_req zone=one;
        proxy_pass http://lj_seckill;
    }
}

1r/s means one request per second; 1r/m means one request per minute. If requests arrive faster than the configured rate, Nginx refuses to process the excess requests.

  2. Limit burst traffic
location /seckill.html {
    limit_req zone=one burst=5 nodelay;
    proxy_pass http://lj_seckill;
}

The only addition is burst=5 nodelay: up to five requests above the configured rate are accepted per client, and with nodelay they are processed immediately rather than queued and delayed. Requests beyond the burst allowance are rejected outright.

  3. Limit the number of concurrent connections

The ngx_http_limit_conn_module module in Nginx provides the function of limiting the number of concurrent connections

http {
    limit_conn_zone $binary_remote_addr zone=myip:10m;
    limit_conn_zone $server_name zone=myServerName:10m;

    server {
        location / {
            limit_conn myip 10;
            limit_conn myServerName 100;
            rewrite / http://www.lijie.net permanent;
        }
    }
}

This configures at most 10 concurrent connections per client IP, and at most 100 concurrent connections for the whole virtual server. Note that a connection is counted only once the server has finished processing its request headers.
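The counting idea behind ngx_http_limit_conn_module can be sketched as follows (the class and method names are invented for illustration, not Nginx internals):

```python
# Per-key concurrent-connection counting: acquire a slot when a connection
# starts being counted, release it when the connection ends.
from collections import defaultdict

class ConnLimiter:
    def __init__(self, limit):
        self.limit = limit                 # e.g. 10 per client IP
        self.active = defaultdict(int)     # key (IP or server name) -> open count

    def acquire(self, key):
        if self.active[key] >= self.limit:
            return False                   # over the limit: Nginx would return 503
        self.active[key] += 1
        return True

    def release(self, key):
        self.active[key] -= 1
```

Nginx applies one such counter per limit_conn_zone, so a request must pass both the per-IP and the per-server check above.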

These rate-limiting directives are implemented on the principle of the leaky bucket algorithm.

Leaky bucket algorithm and token bucket algorithm

Leaky bucket algorithm: incoming (possibly bursty) traffic enters a leaky bucket, and the bucket releases requests at the fixed rate we define. If the inflow is too large, i.e. the burst is too big, the bucket overflows and the excess requests are rejected. The leaky bucket algorithm therefore smooths the data transmission rate.
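A minimal leaky-bucket sketch (an illustration of the principle, not Nginx's actual implementation): the bucket drains at a fixed rate, and an arrival that would overflow it is rejected.

```python
import time

class LeakyBucket:
    """Water level = queued requests; it drains at `rate` per second,
    and an arrival that would exceed `capacity` overflows (is rejected)."""

    def __init__(self, rate, capacity):
        self.rate = rate                    # drain rate, requests per second
        self.capacity = capacity            # bucket size (tolerated backlog)
        self.water = 0.0                    # current level
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Drain whatever leaked out since the last arrival.
        self.water = max(0.0, self.water - (now - self.last) * self.rate)
        self.last = now
        if self.water + 1 > self.capacity:
            return False                    # overflow: reject the request
        self.water += 1
        return True
```

However fast requests arrive, they leave the bucket at the configured rate, which is exactly the smoothing effect described above.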

The mechanism of the token bucket algorithm is as follows: there is a token bucket of fixed size, into which tokens are generated at a constant rate. If the token consumption rate is less than the production rate, tokens accumulate until the bucket is full, and further tokens are discarded. Each request must take a token from the bucket to be served; if no token is available, the request is rejected (or made to wait). Because up to a full bucket of tokens can be spent at once, the algorithm tolerates bursts up to the bucket size.
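A matching token-bucket sketch under the same assumptions: tokens accumulate at a constant rate up to the bucket size, and each request must consume one.

```python
import time

class TokenBucket:
    """Tokens refill at `rate` per second up to `capacity`;
    each request spends one token or is rejected."""

    def __init__(self, rate, capacity):
        self.rate = rate                    # token production rate per second
        self.capacity = capacity            # bucket size (allowed burst)
        self.tokens = float(capacity)       # start full
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Add the tokens produced since the last request, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1                # spend a token: request accepted
            return True
        return False                        # bucket empty: request rejected
```

The contrast with the leaky bucket: a full token bucket lets a burst of `capacity` requests through instantly, while a leaky bucket always releases them at the fixed drain rate.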

Nginx load balancing

• To keep any single server from being overwhelmed, load balancing is used to share the pressure: the servers form a cluster, and user requests first reach a forwarding server, which distributes each request to a less-loaded backend server.

Balance algorithm:

  • Round robin (polling): requests are distributed to the backend servers in turn; this is the default
  • Weight: weighted round robin, where servers with higher weights receive proportionally more requests
  • ip_hash: each request is assigned according to a hash of the client IP, so visitors from the same IP consistently reach the same backend server, which also effectively solves the session-sharing problem of dynamic pages
  • **url_hash:** requests are assigned according to a hash of the requested URL, so each URL is directed to the same backend server, which further improves the hit rate of backend cache servers
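The round-robin and hash strategies can be sketched in a few lines of Python (backend addresses are invented; Nginx's real ip_hash hashes only part of the client address, so the hash below is purely illustrative):

```python
import hashlib
from itertools import cycle

backends = ["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"]  # hypothetical

# Round robin: hand out backends in turn.
rr = cycle(backends)
picks = [next(rr) for _ in range(4)]        # wraps back to the first backend

# ip_hash: the same client IP always maps to the same backend,
# so session state kept on that backend keeps working.
def ip_hash(client_ip):
    h = int(hashlib.md5(client_ip.encode()).hexdigest(), 16)
    return backends[h % len(backends)]
```

url_hash works the same way as `ip_hash`, just keyed on the request URL instead of the client address.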

Why doesn't Nginx use multithreading?

Nginx:

Nginx handles requests asynchronously and non-blockingly in single-threaded worker processes (the administrator configures how many workers the Nginx master process spawns). It does not allocate CPU and memory per request, which saves a great deal of resources and avoids many CPU context switches, allowing Nginx to support higher concurrency.

Nginx modules: the handler module, filter module, and upstream module

A handler module produces the response for a client request: when Nginx receives the request, the handler processes it and returns the result.

A filter module operates on the response on its way from the backend through Nginx to the client: once the backend data reaches Nginx, a filter can modify the response headers and response body, adding to or transforming the response.

An upstream module is what Nginx uses to forward requests to a backend, such as the fastcgi module.
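The three roles can be seen together in one hypothetical location block:

```nginx
location /api/ {
    proxy_pass http://127.0.0.1:9000;   # upstream module: forward to the backend
    add_header X-Via nginx;             # header-filter module: edit the response header
    sub_filter "http://" "https://";    # body-filter module: rewrite the response body
}
```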


Origin blog.csdn.net/weixin_44477424/article/details/131565258