A detailed introduction to Nginx (2023)

Download: https://nginx.org/en/download.html

Introduction

Nginx is a high-performance web server and reverse proxy. Its main job is to accept client requests and distribute them to backend servers.

Forward proxy and reverse proxy

A proxy is a layer of servers sitting between the client and the server. The proxy receives the client's request, forwards it to the server, and then returns the server's response to the client.
With a forward proxy, the client sends its request to the proxy first, and the proxy forwards it to the target server on the client's behalf. A typical example is a VPN-style proxy: to reach sites it cannot connect to directly, the client connects to a proxy server that can reach the external network.

A reverse proxy is the opposite: a forward proxy acts on behalf of the client, while a reverse proxy acts on behalf of the server. When a site is served by several distributed backend servers, a reverse proxy lets clients access a single address while requests are dispatched to the backends behind it.

Reverse proxy [proxy_pass]
A reverse proxy is simple to configure: in a location block, replace root with proxy_pass. root means Nginx returns a static resource itself; proxy_pass means the request is dynamic and must be forwarded upstream, for example to Tomcat.

Load balancing [upstream]
In the reverse proxy above, proxy_pass points at a single Tomcat address. What if we want several Tomcat instances behind one entry point? First, define a group of Tomcats with upstream and specify the load-balancing policy (ip_hash, weighted round robin, least connections) and health-check behaviour (Nginx can monitor the state of the servers in the group). Then point proxy_pass at the name defined by upstream.
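A minimal sketch of this pattern (the upstream name tomcat_cluster and the addresses are placeholders, not from the original article):

upstream tomcat_cluster {
    least_conn;                      # load-balancing policy: fewest active connections
    server 127.0.0.1:8080 weight=2;  # weighted round-robin weight
    server 127.0.0.1:8081 max_fails=3 fail_timeout=30s;
}

server {
    listen 80;
    location / {
        proxy_pass http://tomcat_cluster;  # forward dynamic requests to the group
    }
}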





nginx.conf basic configuration


# main context: global configuration
worker_processes  2;        # default 1; usually set to 1-2x the number of CPU cores
error_log  logs/error.log;  # error log path
pid        logs/nginx.pid;  # process id file

events {
    # Nginx event-handling (working mode) configuration.
    # Use the epoll I/O model for event polling; this can be omitted,
    # in which case Nginx picks a suitable model for the operating system.
    use epoll;

    # Maximum number of connections per worker process (default 1024)
    worker_connections  2048;

    # Note: keepalive_timeout and client_header_buffer_size are http-level
    # directives and are configured in the http block below.
}

http {
    # http-level settings

    # Include external configuration to keep this file readable and avoid one
    # huge file: the file-extension to MIME-type mapping table and the default type.
    include       mime.types;
    default_type  application/octet-stream;

    # Log format and access log path
    log_format   main  '$remote_addr - $remote_user [$time_local]  $status '
                       '"$request" $body_bytes_sent "$http_referer" '
                       '"$http_user_agent" "$http_x_forwarded_for"';
    access_log   logs/access.log  main;

    # Buffer size for the client request header
    client_header_buffer_size 2k;

    # Enable efficient file transfer to improve throughput. tcp_nopush (which
    # requires sendfile) sends data only after a certain amount has accumulated.
    sendfile        on;
    #tcp_nopush     on;

    # Keep-alive timeout between client and server, so that repeated requests
    # from the same client do not have to establish a new connection each time.
    keepalive_timeout  65;

    # =========================== static file / gzip configuration ======================================
    # Enable gzip compression
    gzip on;

    # Minimum response size worth compressing; below about 10k compression gains little.
    gzip_min_length 10k;

    # Compression level: 1 compresses fastest but transfers more data;
    # 9 compresses most but is slowest. 6 is the usual recommendation.
    gzip_comp_level 6;

    # Compression buffers: here 16 buffers of 8K each.
    gzip_buffers 16 8k;

    # Which MIME types to compress. Plain text, CSS and JS are usually worth compressing;
    # images are generally already compressed, so compress them only if really needed.
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
    # =========================== static file / gzip configuration ======================================

    # A typical server block serving static content
    server {
        listen       80;          # listening port
        server_name  localhost;   # IP or domain name

        # Request routing: match and handle incoming paths
        location / {
            root   html;                   # document root
            index  index.html index.htm;   # default index pages
        }

        location /manage {
            root /ctp-manage-ui/dist;
            index index.html;
            try_files $uri $uri/ /manage/index.html;
        }

        # Gateway route prefixes; keep this list in sync as new microservices are added
        location ~* ^/(code|auth|admin|gen|inst|order) {
            proxy_pass http://127.0.0.1:9999;
            #proxy_set_header Host $http_host;
            proxy_connect_timeout 15s;
            proxy_send_timeout 15s;
            proxy_read_timeout 15s;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto http;
        }

        # Cache static files on the client via expires, valid for 10 days
        location ~ ^/(images|javascript|js|css|flash|media|static)/ {
            root    /var/www/big.server.com/static_files;
            expires 10d;
        }
    }

    # ============================= simple reverse proxy =============================================

    # Timeout settings

    # Timeout for establishing a connection to the upstream server;
    # keeping it at or below 75 seconds is recommended.
    proxy_connect_timeout 60;

    # Timeout for reading a response from the upstream server (default 60s).
    proxy_read_timeout 60;

    # Timeout for sending the request to the upstream server.
    proxy_send_timeout 60;

    # max_fails is the number of failed attempts to communicate with an upstream server.
    # If that many failures occur within the window defined by fail_timeout,
    # Nginx considers the server unavailable.
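    # As an illustration (this upstream block is a sketch, not part of the original
    # configuration), max_fails and fail_timeout are set per server inside upstream:
    #
    #   upstream app_servers {
    #       server 192.168.0.1:8000 max_fails=3 fail_timeout=30s;
    #       server 192.168.0.2:8000 max_fails=3 fail_timeout=30s;
    #   }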
 
    server {
        listen       88;
        server_name  domain2.com www.domain2.com;
        access_log   logs/domain2.access.log  main;

        # Forward dynamic requests to the web application server
        location / {
            proxy_pass      http://127.0.0.1:8000;
            deny 192.24.40.8;  # denied IP
            allow 192.24.40.6; # allowed IP
        }

        # Error pages
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }

    # ===================== load-balancing server =================================================
    server {
        listen          80;
        server_name     big.server.com;
        access_log      logs/big.server.access.log main;

        charset utf-8;
        client_max_body_size 10M;  # limit on uploaded file size (default 1M)

        location / {
            # Use proxy_pass to forward requests to the group of application
            # servers defined by the upstream block below
            proxy_pass      http://backend_server;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_set_header X-Real-IP  $remote_addr;
        }
    }

    # Load balancing
    upstream backend_server {
        server 192.168.0.1:8000 weight=5; # higher weight means more requests
        server 192.168.0.2:8000 weight=1;
        server 192.168.0.3:8000;
        server 192.168.0.4:8001 backup;   # hot standby
    }
}

location

For a plain prefix location, a trailing slash usually makes little practical difference: location /homepage/ and location /homepage match the same typical requests (strictly speaking, the form without the slash also matches longer names such as /homepagefoo).

location = / {
  # Exact match for /; no extra characters may follow the host name
  [ configuration A ]
}
location / {
  # Matches every request, since all URIs start with /.
  # A longer matching prefix location of the same type wins over this one,
  # and a matching regular-expression location takes precedence over both.
  [ configuration B ]
}
location /documents/ {
  # Matches requests starting with /documents/; after this prefix matches,
  # the search continues.
  # A longer matching prefix of the same type wins, and a matching
  # regular-expression location takes precedence.
  [ configuration C ]
}
location ^~ /images/ {
  # Matches requests starting with /images/; on a match, the search stops here,
  # so even a matching regular-expression location will not be used.
  [ configuration D ]
}
URL matching method and priority

The location modifiers, listed from highest to lowest matching priority (a worked example follows the list):

  • = exact match (priority 1)
  • ^~ prefix match that stops the search (priority 2)
  • ~ case-sensitive regular-expression match (priority 3)
  • ~* case-insensitive regular-expression match (priority 4)
  • !~ case-sensitive regular expression that must not match (priority 5)
  • !~* case-insensitive regular expression that must not match (priority 6)
  • / catch-all prefix; every request matches it (priority 7)
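A small sketch of how these priorities interact (the paths and location blocks here are made-up examples, not taken from the original article):

server {
    listen 80;

    location = /login {            # GET /login hits the exact match
        return 200 "exact";
    }
    location ^~ /static/ {         # GET /static/logo.png stops here; the regex below is skipped
        root /var/www;
    }
    location ~* \.(png|jpg)$ {     # GET /photos/a.PNG matches this case-insensitive regex
        root /var/www/images;
    }
    location / {                   # everything else falls through to the catch-all
        root /var/www/html;
    }
}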

The difference between root and alias

root: the location prefix is appended to the configured path (concatenation); alias: the configured path replaces the location prefix (an alias for it).
For example, with root /Users/admin/www and location /h5, a request for /h5/index.html maps to /Users/admin/www/h5/index.html.
When the location has no extra prefix, root and alias behave the same, and root is generally used;
when there is a prefix, use root if the prefix matches the last part of the directory path, and alias if it does not.

The alias path must end with "/"; otherwise it is treated as a file and the directory will not be found. For root the trailing "/" is optional.
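A small sketch of the two mappings (the /Users/admin/static/ directory is a made-up example; the root case follows the paths above):

# With root, the location prefix is kept in the final path:
#   GET /h5/index.html  ->  /Users/admin/www/h5/index.html
location /h5 {
    root /Users/admin/www;
}

# With alias, the location prefix is replaced by the alias path:
#   GET /h5/index.html  ->  /Users/admin/static/index.html
location /h5/ {
    alias /Users/admin/static/;
}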

History mode
location /h5 {
  root /Users/admin/www/;
  index index.html;
  try_files $uri $uri/ /h5/index.html;
}

This means:

  • try_files can list several file paths, with the last argument acting as a URI for an internal redirect; each file path is built from the location's alias or root directive;
  • the first file that is found is the one served;
  • an entry ending in "/" is checked as a directory;
  • if none of the entries can be found, Nginx performs an internal redirect to the last URI.

Variable interpretation for the try_files syntax:

$uri is the request path after the host (for 127.0.0.1/index/a.png it is /index/a.png)
$uri/ is the same path treated as a directory
/index.html means an internal request is made for ip/index.html

For example:

  • with try_files $uri $uri/ /h5/index.html and root /Users/admin/www;
  • two entries are checked, $uri and $uri/. For a request to testhistory.com/h5/about, $uri is /h5/about; prepending the root yields no such file, and $uri/ yields no such directory;
  • since neither can be found, Nginx redirects internally to /h5/index.html, which is equivalent to requesting testhistory.com/h5/index.html.

try_files $uri $uri/ /index.html;
tries to resolve the two entries $uri and $uri/ (Nginx automatically distinguishes whether the path after the host is a file or a directory);
if one of them resolves, that one is returned;
if neither resolves, an internal request is made to 127.0.0.1/index.html (this route must really exist, otherwise an error is returned).

Request forwarding and redirection

# Forward dynamic requests
server {
  listen 80;
  server_name  localhost;
  client_max_body_size 1024M;

  location / {
    proxy_pass http://localhost:8080;
    proxy_set_header Host $host:$server_port;
  }
}

# Redirect HTTP requests to HTTPS
server {
  listen 80;
  server_name Domain.com;
  return 301 https://$server_name$request_uri;
}

Both the forwarding and the redirect examples use variables beginning with $. These are built-in global variables provided by Nginx; their meanings are as follows:

$args, the arguments (query parameters) in the request;
 $content_length, the "Content-Length" header of the request;
 $content_type, the "Content-Type" header of the request;
 $document_root, the root path configured for the current request;
 $document_uri, same as $uri;
 $host, the "Host" header of the request, or the configured server name if the request has no Host line;
 $limit_rate, the connection rate limit;
 $request_method, the request method, e.g. "GET" or "POST";
 $remote_addr, the client address;
 $remote_port, the client port;
 $remote_user, the client user name (used for authentication);
 $request_filename, the file path of the current request;
 $request_body_file, the temporary file holding the current request body;
 $request_uri, the requested URI, including the query string;
 $query_string, same as $args;
 $scheme, the protocol in use, http or https, e.g. rewrite ^(.+)$ $scheme://example.com$1 redirect;
 $server_protocol, the protocol version of the request, "HTTP/1.0" or "HTTP/1.1";
 $server_addr, the server address;
 $server_name, the name of the server that received the request;
 $server_port, the port of the server that received the request;
 $uri, the request URI, which may differ from the original value, for example after a redirect.
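A few of these variables combined in a small sketch (the timing log format and the X-Served-By header are illustrative, not part of the original configuration):

http {
    # Custom log format built from the variables above
    log_format timing '$remote_addr $request_method "$request_uri" '
                      'host=$host scheme=$scheme status=$status';

    server {
        listen 80;
        access_log logs/timing.access.log timing;

        location / {
            # Echo a couple of variables back to the client for debugging
            add_header X-Served-By "$server_name:$server_port";
            root html;
        }
    }
}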

file download server

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name  _;

    location /download {
        # Directory containing the downloadable files
        root /usr/share/nginx/html;

        # Enable directory listing
        autoindex on;

        # Do not compute exact file sizes in bytes; show approximate sizes (KB, MB, GB)
        autoindex_exact_size off;

        # Show local time instead of GMT
        autoindex_localtime on;

        # Force txt, jpg and png files to be downloaded as attachments
        # instead of being opened directly in the browser
        if ($request_filename ~* ^.*?\.(txt|jpg|png)$) {
            add_header Content-Disposition 'attachment';
        }
    }
}

Nginx HTTPS configuration

# Load balancing across the application servers
upstream backend_server {
    server APP_SERVER_1_IP;
    server APP_SERVER_2_IP;
}

# Refuse access that does not use a bound domain name (e.g. direct IP access).
# 444 closes the connection without sending any data.
server {
    listen 80 default_server;
    server_name _;
    return 444;
}

# Redirect HTTP requests to HTTPS.
# Note: the server-level return 301 applies to every request,
# so the location block below is never reached.
server {
    listen 80;
    listen [::]:80;
    server_name your_domain.com;

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://backend_server;
    }

    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name your_domain.com;

    # SSL certificate and private key paths
    ssl_certificate /path/to/your/fullchain.pem;
    ssl_certificate_key /path/to/your/privkey.pem;

    # Request size and keep-alive settings
    client_max_body_size 75M;
    keepalive_timeout 10;

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://backend_server;  # application servers (e.g. Django+uwsgi) are on other machines
    }
}

nginx load balancing

upstream defines the list of backend server addresses; requests handled in the server block are forwarded to the servers configured in the upstream group.

# By default the round-robin policy is used: client requests are distributed
# to the backend servers in turn
upstream balanceServer {
    server 10.1.22.33:12345;
    server 10.1.22.34:12345;
    server 10.1.22.35:12345;
}

server {
    server_name  fe.server.com;
    listen 80;

    location /api {
        proxy_pass http://balanceServer;
    }
}

Round robin (default)
Each request is assigned to the backend servers one by one, in the order the requests arrive.
weight
Sets a server's weight (default 1); the higher the weight, the more client requests that server receives.
ip_hash
Each request is assigned according to a hash of the client IP, so a given client always reaches the same backend server, which avoids losing session state. A configuration sketch follows.
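Here is a sketch of both policies, reusing the addresses from the example above (the upstream names weightedServer and stickyServer are illustrative):

# Weighted round robin: 10.1.22.33 receives roughly three times as many requests
upstream weightedServer {
    server 10.1.22.33:12345 weight=3;
    server 10.1.22.34:12345 weight=1;
}

# ip_hash: requests from the same client IP always go to the same backend
upstream stickyServer {
    ip_hash;
    server 10.1.22.33:12345;
    server 10.1.22.34:12345;
}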

nginx common commands

# Start nginx
start nginx
# Stop Nginx quickly: terminates the web service immediately, possibly without saving state
nginx -s stop
# Stop Nginx gracefully: saves state and finishes in-flight work before ending the web service
nginx -s quit
# Reload the configuration after nginx.conf has been changed
nginx -s reload
# Reopen the log files
nginx -s reopen
# Use the specified configuration file instead of the default one
nginx -c filename
# Do not run; only test the configuration file. nginx checks the syntax and
# tries to open the files referenced by the configuration
nginx -t
# Show the nginx version
nginx -v
# Show the nginx version, compiler version and configure arguments
nginx -V
# Show the configure arguments one per line
nginx -V 2>&1 | xargs -n1
nginx -V 2>&1 | xargs -n1 | grep lua

Question:

How does Nginx achieve hot deployment?

Hot deployment means that after the configuration file nginx.conf has been modified, the new configuration takes effect without stopping Nginx or interrupting in-flight requests.

Master-Worker mode of Nginx

After Nginx starts, it opens a listening socket on port 80.

What are the roles of the Master process and the Worker processes?
The Master process reads and validates the configuration file nginx.conf and manages the Worker processes.
What is the role of a Worker process?
Each Worker process runs a single thread (avoiding thread switching) to handle connections and requests; the number of Worker processes is set in the configuration file and is usually tied to the number of CPU cores (which keeps process switching cheap).
Option 1:
After nginx.conf is modified, the master process pushes the updated configuration to the worker processes, and each worker process updates its internal state (somewhat like volatile-style propagation).
Option 2 (nginx -s reload):
After nginx.conf is modified, new worker processes are started and handle new requests with the new configuration; new requests go to the new workers, while the old workers are killed once their outstanding requests are finished.
Nginx uses the second approach to achieve hot deployment.

What should I do if Nginx is down?

Keepalived + Nginx for high availability.
Keepalived is a high-availability solution, mainly used to prevent a single point of failure on a server; combined with Nginx it provides a highly available web service.
How Keepalived + Nginx achieves high availability:
First, requests do not hit Nginx directly; they go through Keepalived first (via the so-called virtual IP, or VIP).
Second, Keepalived monitors the health of Nginx (through a user-defined script that periodically checks the state of the Nginx process and adjusts the weight accordingly, enabling Nginx failover).

Distributed session sharing problem


Solutions:

Nginx ip_hash:
  • every request from a given client is handled by the same server; simple to configure, no changes to the application code are required
  • shortcomings:
    • sessions are lost if that server restarts
    • risk of high load on a single node
    • single point of failure
Session sharing: centralized session management (e.g. in Redis)


Note:

alias: whether or not the alias path ends with "/" makes a difference.

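A sketch of that difference (the /static prefix and the /data/files path are illustrative examples):

location /static/ {
    # With a trailing "/", the part of the URI after /static/ is appended to the path:
    #   GET /static/a.png  ->  /data/files/a.png
    alias /data/files/;

    # Without the trailing slash (alias /data/files;), the remainder is appended
    # directly, giving /data/filesa.png, which is usually not what you want.
}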

References:

nginx configuration reference

https://zhuanlan.zhihu.com/p/372610935
https://blog.csdn.net/qq_34817440/article/details/121501802

OSI seven layer model

The seven-layer model, also known as the OSI (Open Systems Interconnection) reference model, is a standard framework developed by the International Organization for Standardization (ISO) for interconnecting computers and communication systems; it is commonly called the OSI reference model or the seven-layer model.
It is an abstract seven-layer model that includes not only a set of abstract terms and concepts but also concrete protocols.

TCP/IP four-layer model

The TCP/IP protocol suite draws on the OSI architecture to some extent. The OSI model has seven layers, but they are complex, so the TCP/IP model simplifies them into four layers.

Origin: blog.csdn.net/weixin_44824381/article/details/130201063