One article to fully understand what Nginx is

Original address: https://blog.csdn.net/wuzhiwei549/article/details/122758937

What is Nginx?

Nginx is a lightweight, high-performance web server and reverse proxy for the HTTP, HTTPS, SMTP, POP3 and IMAP protocols. It implements very efficient reverse proxying and load balancing. It can handle 20,000-30,000 concurrent connections, and official tests report support for up to 50,000 concurrent connections. Many large Chinese websites, such as Sina, NetEase and Tencent, run on Nginx.

What are the advantages of Nginx?

  • Cross-platform and simple to configure.
  • Non-blocking, high-concurrency connections: handles 20,000-30,000 concurrent connections, with official tests reporting up to 50,000.
  • Low memory consumption: 10 Nginx processes take up only about 150 MB of memory.
  • Low cost and open source.
  • High stability and a very low probability of downtime.
  • Built-in health checks: if a backend server goes down, the health check marks it as failed, subsequent requests are no longer sent to it, and those requests are re-dispatched to healthy nodes.

Nginx application scenarios?

  • HTTP server. Nginx can provide HTTP services independently and serve as a static web server.
  • Virtual hosting. Multiple websites can be hosted on a single server, as with the shared hosts used by personal websites.
  • Reverse proxy and load balancing. When traffic grows beyond what a single server can handle, a cluster of servers is needed and Nginx can act as the reverse proxy in front of it, spreading the load evenly so that no server goes down under high load while another sits idle.
  • Security management. For example, Nginx can be used to build an API gateway that intercepts each interface request.

How does Nginx handle requests?

server {
     # The first server block begins: an independent virtual-host site
    listen       80;        # port to serve on, default 80
    server_name  localhost; # domain/host name to serve
    location / {
         # The first location block begins
        root   html;                 # site root, relative to the Nginx install directory
        index  index.html index.htm; # default index files, separated by spaces
    } # the first location block ends
}     # the first server block ends
  • First, when Nginx starts, it parses the configuration file to obtain the ports and IP addresses it needs to listen on, and the Nginx master process initializes the listening sockets (create the socket, set options such as addr and reuse, bind to the specified IP address and port, and listen).
  • Then it calls fork (an existing process calls the fork function to create a new process; the new process is called a child process) to create multiple worker processes.
  • Afterwards, the worker processes compete to accept new connections. At this point a client can initiate a connection to Nginx. When the client completes the three-way handshake and establishes a connection, one of the workers accepts successfully, obtains the socket of the established connection, and creates Nginx's wrapper for the connection, the ngx_connection_t structure.
  • Next, the read and write event handlers are set, and read/write events are registered to exchange data with the client.
  • Finally, Nginx or the client actively closes the connection, and the connection comes to an end.
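The startup sequence above can be sketched in Python. This is a simplified, hypothetical model of the master/worker pattern, not Nginx's actual C code; it uses os.fork, so it runs on Unix-like systems only, and the port and message are invented for the example:

```python
import os
import socket

# Simplified model of Nginx's startup: the master creates, binds and
# listens on the socket, then forks workers that compete to accept.

def run_master(n_workers=2):
    lsock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    lsock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    lsock.bind(("127.0.0.1", 0))       # port 0: let the OS pick a free port
    lsock.listen(128)
    for _ in range(n_workers):
        if os.fork() == 0:             # child process: a "worker"
            conn, _ = lsock.accept()   # workers compete to accept
            conn.sendall(b"hello from worker %d" % os.getpid())
            conn.close()
            os._exit(0)
    return lsock                       # master keeps the listening socket

if __name__ == "__main__":
    listener = run_master()
    with socket.create_connection(listener.getsockname()) as client:
        print(client.recv(1024)[:17])  # b'hello from worker'
```

Real Nginx workers then switch the accepted socket to non-blocking mode and hand it to the event loop instead of serving it synchronously.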

How does Nginx achieve high concurrency?

If a server uses one process (or thread) per request, then the number of processes equals the number of concurrent connections, and obviously many of those processes will simply be waiting. Waiting for what? Most likely, network transmission.

Nginx's asynchronous, non-blocking approach takes advantage of this waiting time: while a process would otherwise be waiting, it is free to serve other connections. A small number of processes can therefore handle a large number of concurrent requests.

How does the traditional model compare? To put it simply: with four processes and one process per request, when four requests arrive at the same time, each process handles one until its session closes. If a fifth request arrives during that time, it cannot be served promptly because all four processes are busy; such servers therefore usually rely on a scheduler that spawns a new process for each new request.

Nginx does not work this way. Each incoming request is handled by a worker process, but not for the whole of its lifetime, only up to the point where blocking might occur, for example when the request is forwarded to the upstream (backend) server and a reply is awaited. The worker does not wait idly: after sending the request it registers an event, "tell me when the upstream returns, and I'll continue", and moves on. If another request arrives in the meantime, the worker handles it in the same way. Once the upstream returns, the event fires, the worker picks the request back up, and processing continues.

This is why Nginx is built on an event model.
Since the nature of a web server means that each request spends most of its life in network transmission, the time actually spent on the server machine is small. That is the secret to handling high concurrency with just a few processes:
a web server happens to be a network-I/O-intensive application, not a compute-intensive one.
Asynchronous, non-blocking I/O built on epoll, plus a great deal of detailed optimization, is the technical cornerstone of Nginx.
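A toy illustration of that event model in Python, using the standard-library selectors module (which wraps epoll/kqueue); the socket pair and the callback are invented for the example:

```python
import selectors
import socket

# Toy event loop: one thread watches many sockets via epoll/kqueue
# (wrapped by selectors) and runs work only when a socket is ready,
# instead of blocking one process per connection.

sel = selectors.DefaultSelector()
a, b = socket.socketpair()            # stand-ins for a client connection
a.setblocking(False)
b.setblocking(False)

def on_readable(sock):
    # Called only when data is already waiting -- no blocking wait here.
    sock.sendall(b"echo:" + sock.recv(1024))

sel.register(a, selectors.EVENT_READ, on_readable)

b.sendall(b"hi")                      # the "client" sends a request
for key, _ in sel.select(timeout=1):  # wakes only for ready sockets
    key.data(key.fileobj)             # dispatch the registered callback

print(b.recv(1024))                   # b'echo:hi'
```

A real worker keeps looping over sel.select(), so thousands of idle connections cost almost nothing while they wait.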

What is a forward proxy?

A forward proxy is a server that sits between the client and the origin server. To obtain content from the origin server, the client sends a request to the proxy and names the target (origin server); the proxy then forwards the request to the origin server, fetches the content, and returns it to the client.

Only clients use a forward proxy. Summarized in one sentence: a forward proxy acts on behalf of the client. For example: VPN clients such as OpenVPN.

What is a reverse proxy?

A reverse proxy accepts connection requests from the Internet on behalf of servers on an internal network: it forwards each request to a server on the internal network and returns the result obtained from that server to the client that requested the connection. To the outside world, the proxy itself appears to be the server.
Summarized in one sentence: a reverse proxy acts on behalf of the server.
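A minimal reverse-proxy server block illustrates the idea (the addresses here are placeholders, not from the original article):

```nginx
server {
    listen 80;
    server_name example.com;   # the address clients actually see

    location / {
        # Clients never learn the backend's address; Nginx forwards
        # the request and relays the response back to the client.
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```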

What are the advantages of reverse proxy servers?

A reverse proxy server can hide the existence and characteristics of the origin servers. It acts as an intermediate layer between the Internet and the web servers, which is valuable for security, especially when you use a web hosting service.

What is the directory structure of Nginx?

├── client_body_temp
├── conf                             # directory for all Nginx configuration files
│   ├── fastcgi.conf                 # FastCGI parameter configuration
│   ├── fastcgi.conf.default         # pristine backup of fastcgi.conf
│   ├── fastcgi_params               # FastCGI parameter file
│   ├── fastcgi_params.default
│   ├── koi-utf
│   ├── koi-win
│   ├── mime.types                   # media (MIME) types
│   ├── mime.types.default
│   ├── nginx.conf                   # main Nginx configuration file
│   ├── nginx.conf.default
│   ├── scgi_params                  # SCGI parameter file
│   ├── scgi_params.default
│   ├── uwsgi_params                 # uWSGI parameter file
│   ├── uwsgi_params.default
│   └── win-utf
├── fastcgi_temp                     # FastCGI temporary data directory
├── html                             # default Nginx site directory
│   ├── 50x.html                     # friendly error page, served e.g. on a 502 error
│   └── index.html                   # default index page
├── logs                             # Nginx log directory
│   ├── access.log                   # access log
│   ├── error.log                    # error log
│   └── nginx.pid                    # pid file; after startup Nginx writes its process IDs here
├── proxy_temp                       # temporary directory
├── sbin                             # Nginx command directory
│   └── nginx                        # the Nginx executable
├── scgi_temp                        # temporary directory
└── uwsgi_temp                       # temporary directory

What attribute modules does the Nginx configuration file nginx.conf have?

worker_processes  1;                        # number of worker processes
events {
     # events block begins
    worker_connections  1024;               # maximum connections per worker process
} # events block ends
http {
     # HTTP block begins
    include       mime.types;               # media types supported by Nginx
    default_type  application/octet-stream; # default media type
    sendfile      on;                       # enable efficient file transfer
    keepalive_timeout  65;                  # keep-alive timeout
    server {
         # the first server block begins: an independent virtual-host site
        listen       80;                    # port to serve on, default 80
        server_name  localhost;             # domain/host name to serve
        location / {
             # the first location block begins
            root   html;                    # site root, relative to the Nginx install directory
            index  index.html index.htm;    # default index files, separated by spaces
        } # the first location block ends
        error_page  500 502 503 504  /50x.html; # serve 50x.html for these HTTP status codes
        location = /50x.html {
             # location block for /50x.html
            root   html;                    # the site directory is html
        }
    }
    ......

Why doesn't Nginx use multi-threading?

Apache: creates a process or thread per connection, and each process or thread is allocated CPU and memory (threads are much cheaper than processes, so the worker MPM supports higher concurrency than prefork). Excessive concurrency drains server resources.

Nginx: handles requests asynchronously and non-blockingly with single-threaded worker processes (the administrator configures the number of workers in the Nginx master process) using epoll. It does not allocate CPU and memory per request, which saves a great deal of resources and avoids a lot of CPU context switching. That is why Nginx supports higher concurrency.

The difference between nginx and apache

Lightweight: as a web server, it occupies less memory and fewer resources than Apache.
Concurrency: Nginx handles requests asynchronously and non-blockingly, while Apache blocks; under high concurrency Nginx maintains low resource usage, low consumption and high performance.
Highly modular design: writing modules is relatively simple.
The core difference: Apache is a synchronous multi-process model in which one connection corresponds to one process, while Nginx is asynchronous and many connections can correspond to one process.


What is the separation of dynamic resources and static resources?

Separating dynamic and static resources means distinguishing, by some rule, the resources of a dynamic website that rarely change (static) from those that change frequently (dynamic). Once split, the static resources can be cached according to their characteristics, which is the core idea of static-content handling.
A simple one-sentence summary: dynamic files and static files are served separately.

Why do we need to separate dynamic and static?

In software development, some requests need backend processing (e.g. .jsp, .do) while others do not (e.g. css, html, jpg, js files). Files that need no backend processing are called static files; the rest are dynamic files.

The backend could of course serve the static files as well, but the number of backend requests would rise significantly. When the response speed of resources matters, the dynamic/static separation strategy applies: deploy the static resources (HTML, JavaScript, CSS, images and so on) separately from the backend application. This speeds up access to static content, reduces requests to the backend application, and improves the user experience.

Here we put the static resources in Nginx and forward the dynamic resources to the Tomcat server.
Of course, because CDN services such as Qiniu and Alibaba Cloud are now very mature, the mainstream approach is to cache static resources into CDN services to increase access speed.

Compared with the local Nginx, the CDN server has more nodes in the country and can provide users with nearby access. Moreover, CDN services can provide larger bandwidth, unlike our own application services, which provide limited bandwidth.

What is a CDN service?

CDN, content delivery network.
Its purpose is to add a new layer of network architecture to the existing Internet to publish website content to the edge of the network closest to users, so that users can obtain the content they need nearby and improve the speed at which users access the website.
Generally speaking, because CDN services are relatively popular nowadays, basically all companies will use CDN services.

How does Nginx separate dynamic from static?

You only need to map each path to a directory. location blocks can be matched with regular expressions, each pointing at the corresponding directory on disk.
For example (all operations are on Linux):

location /image/ {
    
    
    root /usr/local/static/;
    autoindex on;
}
Steps:
# create the directory
mkdir /usr/local/static/image

# enter the directory
cd /usr/local/static/image

# upload an image
1.jpg

# reload nginx
sudo nginx -s reload

Open a browser and visit server_name/image/1.jpg to access the static image.

How is the Nginx load balancing algorithm implemented? What are the strategies?

To avoid overloading any single server, load balancing is used to share the pressure: the servers form a cluster, a user's request first reaches a forwarding server, and the forwarding server distributes it to a less-loaded server.

Nginx implements five load balancing strategies:
  • (1) Polling (default)
    Each request is assigned to the backend servers one by one in order. If a backend server goes down, it is automatically removed from rotation.
upstream backserver {
    server 192.168.0.1;
    server 192.168.0.2;
}
  • (2) Weight
    The greater the weight, the higher the probability of being selected. It is used mainly when the performance of the backend servers is uneven, or to set different weights in a primary/secondary setup so that host resources are used effectively.
# The higher the weight, the more likely the server is chosen; here 20% and 80%.
upstream backserver {
    server 192.168.0.1 weight=2;
    server 192.168.0.2 weight=8;
}
  • (3) ip_hash (IP binding)
    Each request is assigned according to a hash of the visiting IP, so visitors from the same IP always reach the same backend server; this effectively solves the session-sharing problem of dynamic pages.
upstream backserver {
    ip_hash;
    server 192.168.0.1:88;
    server 192.168.0.2:80;
}
  • (4) fair (third-party plug-in)
    Requires installing the upstream_fair module.
    More intelligent than weight and ip_hash: the fair algorithm balances load based on page size and load time, preferring servers with short response times.
# Requests go to whichever server responds fastest.
upstream backserver {
    server server1;
    server server2;
    fair;
}
  • (5) url_hash (third-party plug-in)
    Requires installing the Nginx hash package.
    Requests are assigned according to a hash of the requested URL, so each URL is always directed to the same backend server; this further improves the hit rate of backend cache servers.
upstream backserver {
    server squid1:3128;
    server squid2:3128;
    hash $request_uri;
    hash_method crc32;
}

How to use Nginx to solve front-end cross-domain problems?

Use Nginx to forward requests: expose the cross-domain interfaces as same-origin interfaces on the local domain, then have Nginx forward those interfaces to the real destination addresses.
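A sketch of that approach (the domain names and the /api/ path are hypothetical):

```nginx
server {
    listen 80;
    server_name www.example.com;

    # Front-end pages are served from the same origin...
    location / {
        root html;
    }

    # ...and call /api/ as if it were local; Nginx forwards it to the
    # real (cross-origin) backend, so the browser never issues a
    # cross-domain request at all.
    location /api/ {
        proxy_pass http://api.example.com/;
    }
}
```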

How to configure Nginx virtual host?

  • 1. Name-based virtual hosts, distinguished by domain name. Typical use: public-facing websites.
  • 2. Port-based virtual hosts, distinguished by port number. Typical use: internal company sites, or the admin backend of a public website.
  • 3. IP-based virtual hosts.

Configure domain name based on virtual host

Create the /data/www and /data/bbs directories, add hosts entries on the local Windows machine resolving the domain names to the VM's IP address, and put an index.html file in each domain's site directory.

# When a client visits www.lijie.com on port 80, serve files from the data/www directory
server {
    listen       80;
    server_name  www.lijie.com;
    location / {
        root   data/www;
        index  index.html index.htm;
    }
}

# When a client visits bbs.lijie.com on port 80, serve files from the data/bbs directory
server {
    listen       80;
    server_name  bbs.lijie.com;
    location / {
        root   data/bbs;
        index  index.html index.htm;
    }
}

Port-based virtual hosts
are distinguished by port number; the browser accesses them by domain name or IP address plus port number.

# When a client visits 8080.lijie.com on port 8080, serve files from the data/www directory
server {
    listen       8080;
    server_name  8080.lijie.com;
    location / {
        root   data/www;
        index  index.html index.htm;
    }
}

# When a client visits www.lijie.com on port 80, proxy to the real server at 127.0.0.1:8080
server {
    listen       80;
    server_name  www.lijie.com;
    location / {
        proxy_pass http://127.0.0.1:8080;
        index  index.html index.htm;
    }
}

What is the role of location?

The location directive selects different handling depending on the URI the user requests, i.e. it matches the requested URL; when a match succeeds, the corresponding actions are performed.

~ introduces a (case-sensitive) regular-expression match on the request URI.


Location regular example:

# Priority 1: exact match of the root path
location = / {
    return 400;
}

# Priority 2: prefix match with ^~; URIs beginning with /av match here first, case-sensitively
location ^~ /av {
    root /data/av/;
}

# Priority 3: case-sensitive regex match for /media... paths
location ~ /media {
    alias /data/static/;
}

# Priority 4: case-insensitive regex match; all *.jpg|gif|png|js|css requests go here
location ~* .*\.(jpg|gif|png|js|css)$ {
    root /data/av/;
}

# Lowest priority: generic catch-all match
location / {
    return 403;
}

How does Nginx limit the request rate?

Nginx rate limiting restricts the speed of user requests to keep the server from being overwhelmed.
There are three kinds of limits:

  • Limiting the normal access frequency (normal traffic)
  • Limiting the burst access frequency (burst traffic)
  • Limiting the number of concurrent connections
Nginx's rate limiting is based on the leaky bucket algorithm.
  • 1. Limiting the normal access frequency (normal traffic):

This limits how often Nginx accepts requests from one user.
Nginx uses the ngx_http_limit_req_module module to limit access frequency; the limit is essentially based on the leaky bucket algorithm. In nginx.conf, the limit_req_zone and limit_req directives restrict the request-processing rate of a single IP.

# Define the limiting key: one request per user per minute; the excess leaks away
limit_req_zone $binary_remote_addr zone=one:10m rate=1r/m;

# Apply the limit
server {
    location /seckill.html {
        limit_req zone=one;
        proxy_pass http://lj_seckill;
    }
}

1r/s means one request per second; 1r/m means one request per minute. If a user's requests arrive faster than the configured rate, Nginx refuses the excess requests.

  • 2. Limiting the burst access frequency (burst traffic):

This likewise limits how often Nginx accepts requests from one user.
The configuration above limits the access frequency to some extent, but there is a problem: traffic beyond the rate is rejected outright, so a burst during an event cannot be served at all. How can this be handled?
Nginx provides the burst parameter, combined with nodelay, to deal with traffic bursts: it sets how many requests beyond the configured rate may still be handled. We can add burst and nodelay to the previous example:

# Define the limiting key: one request per user per minute; the excess leaks away
limit_req_zone $binary_remote_addr zone=one:10m rate=1r/m;

# Apply the limit
server {
    location /seckill.html {
        limit_req zone=one burst=5 nodelay;
        proxy_pass http://lj_seckill;
    }
}

Why the extra burst=5 nodelay? With this setting, Nginx processes a user's first five requests beyond the rate immediately; requests beyond the burst allowance are rejected. Without nodelay, the excess requests would instead be queued and handled slowly at the configured rate.

  • 3. Limit the number of concurrent connections

The ngx_http_limit_conn_module module in Nginx provides concurrent-connection limiting, configured with the limit_conn_zone and limit_conn directives. A simple example:

http {
    limit_conn_zone $binary_remote_addr zone=myip:10m;
    limit_conn_zone $server_name zone=myServerName:10m;
}

server {
    location / {
        limit_conn myip 10;
        limit_conn myServerName 100;
        rewrite / http://www.lijie.net permanent;
    }
}

This configures a maximum of 10 concurrent connections per IP and a maximum of 100 simultaneous connections for the whole virtual server. Note that connections are counted against the virtual server only after the request header has been processed. As mentioned, Nginx's limiting is based on the leaky bucket algorithm; in general, rate limiting is implemented with either the leaky bucket or the token bucket algorithm.

Do you know the leaky bucket flow algorithm and the token bucket algorithm?

  • leaky bucket algorithm

The idea of the leaky bucket algorithm is simple. Compare water to requests and the leaky bucket to the system's processing capacity: water enters the bucket and flows out at a fixed rate. When water flows in faster than it flows out, the bucket's limited capacity causes the excess to overflow (requests are rejected), thereby limiting the flow.
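The leaky bucket described above can be sketched in a few lines of Python. This is a simplified, hypothetical model (a real limiter such as Nginx's tracks a bucket per key in a shared-memory zone):

```python
class LeakyBucket:
    """Simplified leaky bucket: water (requests) drains at a fixed
    rate; requests that would overflow the capacity are rejected."""

    def __init__(self, capacity, leak_rate):
        self.capacity = capacity      # max queued requests
        self.leak_rate = leak_rate    # requests drained per second
        self.water = 0.0              # current fill level
        self.last = 0.0               # time of the last update

    def allow(self, now):
        # Drain at the constant rate since the last check.
        self.water = max(0.0, self.water - (now - self.last) * self.leak_rate)
        self.last = now
        if self.water + 1 <= self.capacity:
            self.water += 1
            return True
        return False                  # bucket full: the request overflows


bucket = LeakyBucket(capacity=3, leak_rate=1.0)
print([bucket.allow(0.0) for _ in range(5)])  # burst of 5 at t=0: 3 pass
print(bucket.allow(2.0))                      # 2s later, 2 have drained: True
```

Passing the clock in as `now` keeps the model deterministic; a production limiter would read a monotonic clock instead.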


  • Token Bucket Algorithm

The principle of the token bucket algorithm is also simple. Think of registering at a hospital: only after taking a number can you see the doctor.
The system maintains a token bucket and puts tokens into it at a constant rate. When a request arrives and wants to be processed, it must first obtain a token from the bucket; when no token is available, the request is denied service. The token bucket algorithm limits requests by controlling the bucket's capacity and the rate at which tokens are issued.
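For contrast with the leaky bucket, here is an equally simplified token bucket sketch in Python (a hypothetical model, not any library's API):

```python
class TokenBucket:
    """Simplified token bucket: tokens refill at a constant rate up to
    the bucket capacity; each request consumes one token."""

    def __init__(self, capacity, fill_rate):
        self.capacity = capacity
        self.fill_rate = fill_rate     # tokens added per second
        self.tokens = float(capacity)  # start full: bursts are allowed
        self.last = 0.0

    def allow(self, now):
        # Credit the tokens accumulated since the last request.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.fill_rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False                   # no token available: deny service


tb = TokenBucket(capacity=2, fill_rate=1.0)
print([tb.allow(0.0) for _ in range(3)])  # burst: 2 allowed, 1 denied
print(tb.allow(1.0))                      # one token refilled after 1s: True
```

Note the difference from the leaky bucket: a full token bucket lets an idle client burst up to `capacity` requests at once, while the leaky bucket smooths output to a constant rate.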


How to configure high availability in Nginx?

When an upstream server (the server actually handling the request) fails or does not respond in time, the request should be rotated directly to the next server, ensuring high availability.

Nginx configuration code:

server {
    listen       80;
    server_name  www.lijie.com;
    location / {
        ### the upstream load-balancing server group
        proxy_pass http://backServer;
        ### timeout for Nginx connecting to the upstream (handshake and waiting for a response)
        proxy_connect_timeout 1s;
        ### timeout for Nginx sending to the upstream
        proxy_send_timeout 1s;
        ### timeout for Nginx reading from the upstream
        proxy_read_timeout 1s;
        index  index.html index.htm;
    }
}

How does Nginx deny access from a particular IP?

# If the visiting IP address is 192.168.0.111, return 403
if ($remote_addr = 192.168.0.111) {
    return 403;
}

How do you prevent requests with an undefined server name from being processed in Nginx?

Simply define a catch-all server to which such requests are dropped:
the server name is left as an empty string, which matches requests without a Host header field, and a special non-standard Nginx code is returned, terminating the connection.

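A minimal sketch of such a catch-all server (444 is the non-standard code Nginx uses to close the connection without sending a response):

```nginx
server {
    listen 80 default_server;
    server_name "";   # matches requests with no Host header field
    return 444;       # close the connection without a response
}
```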
How to restrict browser access?

## Block Google Chrome: if the browser is Chrome, return 500
if ($http_user_agent ~ Chrome) {
    return 500;
}
What are the global variables available to rewrite?

$remote_addr        // client IP
$binary_remote_addr // client IP (binary form)
$remote_port        // client port, e.g. 50472
$remote_user        // user name authenticated by the Auth Basic module
$host               // the request Host header, or the server name if absent, e.g. blog.sakmon.com
$request            // the user's request line, e.g. GET ?a=1&b=2 HTTP/1.1
$request_filename   // file path of the current request, combining root or alias with the request URI, e.g. /2013/81.html
$status             // response status code, e.g. 200
$body_bytes_sent    // number of body bytes sent in the response; accurate even if the connection is interrupted, e.g. 40
$content_length     // value of the request's Content-Length header
$content_type       // value of the request's Content-Type header
$http_referer       // referring address
$http_user_agent    // client agent string, e.g. Mozilla/5.0 (Windows NT 5.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/29.0.1547.76 Safari/537.36
$args               // same as $query_string: the URL's GET parameters, e.g. a=1&b=2
$document_uri       // same as $uri: the current request URI without any parameters ($args), e.g. /2013/81.html
$document_root      // root path configured for the current request
$hostname           // e.g. centos53.localdomain
$http_cookie        // client cookie information
$cookie_COOKIE      // value of the cookie named COOKIE
$is_args            // "?" if $args is set, otherwise the empty string
$limit_rate         // limits the connection rate; 0 means unlimited
$query_string       // same as $args: the URL's GET parameters, e.g. a=1&b=2
$request_body       // data sent via POST
$request_body_file  // name of the temporary file holding the client request body
$request_method     // the client's request method, usually GET or POST
$request_uri        // original URI including parameters, without the host name, e.g. /2013/81.html?a=1&b=2
$scheme             // request scheme (http or https), e.g. http
$uri                // the current request URI without any parameters ($args), e.g. /2013/81.html
$request_completion // "OK" if the request completed; empty if it did not complete or is not the last request of a chain
$server_protocol    // protocol of the request, usually HTTP/1.0 or HTTP/1.1
$server_addr        // server IP address, determined after a system call completes
$server_name        // server name, e.g. blog.sakmon.com
$server_port        // port on which the request arrived, e.g. 80

How does Nginx implement health check of backend services?

  • 1. Use nginx’s own modules ngx_http_proxy_module and ngx_http_upstream_module to perform health checks on the back-end nodes.
  • 2. (Recommended), use the nginx_upstream_check_module module to perform health checks on the backend nodes.
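With only the stock modules (option 1), the check is passive and is driven by parameters of the server directive inside upstream; a sketch (the addresses are placeholders):

```nginx
upstream backserver {
    # After max_fails failed attempts within fail_timeout, the server
    # is considered down for fail_timeout seconds, then tried again.
    server 192.168.0.1 max_fails=3 fail_timeout=30s;
    server 192.168.0.2 max_fails=3 fail_timeout=30s;
}
```

The nginx_upstream_check_module of option 2, by contrast, probes the backends actively on a timer rather than waiting for real requests to fail.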

How to enable compression in Nginx?

With Nginx gzip compression enabled, static resources such as pages, CSS and JS shrink considerably, which saves a lot of bandwidth, improves transfer efficiency, and gives users a faster experience. It costs some CPU, but the better user experience is worth it.
The configuration to enable it is as follows:

http {
  # enable gzip
  gzip on;

  # minimum file size for gzip; files smaller than this are not compressed
  gzip_min_length 1k;

  # gzip compression level, 1-9 (higher = smaller output, more CPU)
  gzip_comp_level 2;

  # file types to compress
  gzip_types text/plain application/javascript application/x-javascript text/css application/xml text/javascript application/x-httpd-php image/jpeg image/gif image/png;

  # add "Vary: Accept-Encoding" to response headers; recommended
  gzip_vary on;
}

Put the above configuration into the http{...} node of nginx.conf.

Save and restart nginx, refresh the page (to avoid caching, please force refresh) and you will see the effect. Taking Google Chrome as an example, use F12 to view the response header of the request.

What is the role of ngx_http_upstream_module?

ngx_http_upstream_module is used to define server groups that can be referenced by the proxy_pass, fastcgi_pass, uwsgi_pass, scgi_pass and memcached_pass directives.

What is the C10K problem?

The C10K problem refers to the difficulty of a server handling a large number of clients (10,000 concurrent connections) simultaneously.

Does Nginx support compressing requests upstream?

You can use the Nginx gunzip module for the related case. The gunzip module is a filter that decompresses responses delivered with "Content-Encoding: gzip" for clients or servers that do not support the gzip encoding method.

How to get the current time in Nginx?

To get the current time in Nginx, use the variables provided by the SSI module, $date_local and $date_gmt, for example:
proxy_set_header THE-TIME $date_gmt;

What is the purpose of the -s option of the nginx executable?

The -s option sends a signal to the running Nginx master process, e.g. nginx -s stop, nginx -s quit, nginx -s reopen or nginx -s reload.

How to add modules on Nginx server?

During compilation, Nginx modules must be selected because Nginx does not support runtime selection of modules.

How to set the number of worker processes in production?

With multiple CPUs, multiple workers can be configured, and the number of worker processes is usually set equal to the number of CPU cores. Starting several worker processes on a single CPU forces the operating system to schedule among them, which lowers system performance; if there is only one CPU, start only one worker process.
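In practice this is expressed directly in nginx.conf; recent Nginx versions accept the value auto, which derives the worker count from the detected number of cores (worker_cpu_affinity auto also requires a reasonably recent Nginx):

```nginx
# One worker per CPU core; "auto" takes the count from the machine.
worker_processes auto;

# Optionally pin each worker to a core to avoid cross-core migration.
worker_cpu_affinity auto;
```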

nginx status code

499:
The server took too long to process the request, and the client actively closed the connection.

502:
(1) Check whether the FastCGI process has been started.
(2) Check whether the number of FastCGI worker processes is sufficient.
(3) FastCGI execution time is too long; raise the timeouts:

fastcgi_connect_timeout 300;
fastcgi_send_timeout 300;
fastcgi_read_timeout 300;

(4) The FastCGI buffer is insufficient. Like Apache, Nginx has front-end buffer limits; adjust the buffer parameters:
fastcgi_buffer_size 32k;
fastcgi_buffers 8 32k;

(5) The proxy buffer is insufficient. If you use proxying, adjust:
proxy_buffer_size 16k;
proxy_buffers 4 16k;

(6) PHP script execution time is too long:
change the 0s in php-fpm.conf to a suitable time limit.
