Detailed Nginx configuration & use of include directives in Nginx

Detailed Nginx configuration

Preamble

Nginx was designed and developed by Igor Sysoev for rambler.ru, at the time the second most visited site in Russia. Since its first release in 2004, driven by open source, it has steadily approached maturity.

Nginx is rich in features and can serve as an HTTP server, a reverse proxy server, and a mail proxy server. It supports FastCGI, SSL, virtual hosts, URL rewriting, gzip compression, and more, and it can be extended with a large number of third-party modules.

Nginx's stability, rich feature set, well-documented sample configuration files, and low system resource consumption have let it come from behind and overtake older servers. It is reported to serve about 12.18% of the world's active websites, roughly 22.2 million sites.

That is enough praise; if you want more, Baidu Encyclopedia and plenty of books are full of it.

Common functions of Nginx

1. HTTP proxy and reverse proxy: this is one of the most commonly used features of Nginx as a web server, especially the reverse proxy.

A forward proxy acts on behalf of clients, while a reverse proxy acts on behalf of servers; for more detail, consult further references.

When used as a reverse proxy, Nginx offers stable performance and flexible forwarding. It can apply different forwarding rules based on regular-expression matches, for example sending requests for image files to a file server and requests for dynamic pages to an application server. As long as your regular expressions are correct and a server exists to handle each case, you can route requests however you like. Nginx also handles error-page redirection and checks the results returned by the backends: if one of the distributed servers fails, it can re-forward the request to another server and automatically take the faulty server out of rotation.
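A minimal sketch of this kind of routing, assuming hypothetical backend addresses (a file server at 10.0.0.10 and two application servers):

http {
    upstream app_servers {
        # A backend is taken out of rotation after repeated failures.
        server 10.0.0.20:8080 max_fails=3 fail_timeout=30s;
        server 10.0.0.21:8080 max_fails=3 fail_timeout=30s;
    }

    server {
        listen 80;

        # Requests for image files go to the file server.
        location ~* \.(jpg|jpeg|gif|png)$ {
            proxy_pass http://10.0.0.10;
        }

        # Dynamic pages go to the application servers.
        location / {
            proxy_pass http://app_servers;
            # On errors or timeouts, retry the next upstream and show a friendly error page.
            proxy_next_upstream error timeout http_502 http_503;
            error_page 502 503 504 /50x.html;
        }

        location = /50x.html {
            root html;
        }
    }
}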

2. Load balancing

Nginx provides two kinds of load-balancing strategies: built-in strategies and extended strategies. The built-in strategies are round robin, weighted round robin, and IP hash. Extended strategies are limited only by your imagination: practically any load-balancing algorithm can be implemented for Nginx as a module. A minimal configuration sketch of the built-in strategies follows below.

The IP hash algorithm hashes the IP address of the requesting client and, based on the result, dispatches all requests from the same client IP to the same backend server. This solves the problem of sessions not being shared across servers.
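A minimal upstream sketch of the three built-in strategies, assuming hypothetical backend addresses (these upstream blocks belong inside the http block):

# Round robin (the default): requests are distributed to the backends in turn.
upstream backend_rr {
    server 192.168.0.11:8080;
    server 192.168.0.12:8080;
}

# Weighted round robin: a higher weight receives proportionally more requests.
upstream backend_weighted {
    server 192.168.0.11:8080 weight=3;
    server 192.168.0.12:8080 weight=1;
}

# IP hash: requests from the same client IP always go to the same backend.
upstream backend_iphash {
    ip_hash;
    server 192.168.0.11:8080;
    server 192.168.0.12:8080;
}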

3. Web cache

Nginx can apply different caching rules to different files, with flexible configuration. It supports FastCGI_Cache, which is mainly used to cache the responses of FastCGI dynamic programs. Combined with the third-party ngx_cache_purge module, specified URLs can also be added to and purged from the cache.
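A minimal FastCGI cache sketch, assuming a hypothetical cache directory and a FastCGI backend on 127.0.0.1:9000; the commented purge location additionally assumes the third-party ngx_cache_purge module is compiled in:

http {
    # On-disk cache zone (hypothetical path and sizes).
    fastcgi_cache_path /usr/local/nginx/fastcgi_cache levels=1:2
                       keys_zone=fcgi_cache:128m inactive=1d max_size=1g;

    server {
        listen 80;
        root   html;

        location ~ \.php$ {
            fastcgi_pass   127.0.0.1:9000;
            fastcgi_index  index.php;
            fastcgi_param  SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include        fastcgi_params;

            fastcgi_cache       fcgi_cache;
            fastcgi_cache_key   $host$request_uri;
            fastcgi_cache_valid 200 302 10m;
        }

        # Requires ngx_cache_purge; e.g. /purge/index.php removes that cache entry.
        #location ~ /purge(/.*) {
        #    fastcgi_cache_purge fcgi_cache $host$1;
        #}
    }
}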

4. Nginx-related links

Source code: https://trac.nginx.org/nginx/browser

Official website: http://www.nginx.org/

Nginx configuration file structure

If you have already downloaded Nginx, open the nginx.conf file in the conf folder of your installation directory. The basic and default configuration of the Nginx server lives here.

The comment character in nginx.conf is #.

The structure of the configuration file is shown below; if you are just getting started, it is worth a close look.

default config

#user  nobody;
worker_processes  1;

#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

#pid        logs/nginx.pid;


events {
    worker_connections  1024;
}


http {
    include       mime.types;
    default_type  application/octet-stream;

    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log  logs/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;

    #gzip  on;

    server {
        listen       80;
        server_name  localhost;

        #charset koi8-r;

        #access_log  logs/host.access.log  main;

        location / {
            root   html;
            index  index.html index.htm;
        }

        #error_page  404              /404.html;

        # redirect server error pages to the static page /50x.html
        #
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }

        # proxy the PHP scripts to Apache listening on 127.0.0.1:80
        #
        #location ~ \.php$ {
        #    proxy_pass   http://127.0.0.1;
        #}

        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        #
        #location ~ \.php$ {
        #    root           html;
        #    fastcgi_pass   127.0.0.1:9000;
        #    fastcgi_index  index.php;
        #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
        #    include        fastcgi_params;
        #}

        # deny access to .htaccess files, if Apache's document root
        # concurs with nginx's one
        #
        #location ~ /\.ht {
        #    deny  all;
        #}
    }


    # another virtual host using mix of IP-, name-, and port-based configuration
    #
    #server {
    #    listen       8000;
    #    listen       somename:8080;
    #    server_name  somename  alias  another.alias;

    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}


    # HTTPS server
    #
    #server {
    #    listen       443 ssl;
    #    server_name  localhost;

    #    ssl_certificate      cert.pem;
    #    ssl_certificate_key  cert.key;

    #    ssl_session_cache    shared:SSL:1m;
    #    ssl_session_timeout  5m;

    #    ssl_ciphers  HIGH:!aNULL:!MD5;
    #    ssl_prefer_server_ciphers  on;

    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}

}

nginx file structure

...              # global block

events {         # events block
   ...
}

http             # http block
{
    ...          # http global block
    server       # server block
    {
        ...      # server global block
        location [PATTERN]   # location block
        {
            ...
        }
        location [PATTERN]
        {
            ...
        }
    }
    server
    {
      ...
    }
    ...          # http global block
}

1. Global block: directives that affect Nginx as a whole. Typically this includes the user and group that run the Nginx server, the storage path of the Nginx process PID file, the log paths, configuration file imports, and the number of worker processes allowed.

2. Events block: directives that affect the network connections between the Nginx server and its clients, such as the maximum number of connections per worker process, which event-driven model to use for handling connection requests, whether a worker may accept multiple connections at once, and whether to serialize the acceptance of connections.

3. http block: can contain multiple server blocks and configures most features, such as proxying, caching, log definitions, and third-party modules, for example file imports, MIME-type definitions, log customization, whether to use sendfile to transfer files, connection timeouts, and the number of requests per connection.

4. Server block: parameters of a virtual host. One http block can contain multiple server blocks.

5. Location block: routing of requests and the handling of specific pages.

Below is a configuration file for reference; it is also deployed on a test machine I set up, so you can use it as a working example.

########### Every directive must end with a semicolon. #################
#user administrator administrators;  #Configure the user or group; default is nobody nobody.
#worker_processes 2;  #Number of worker processes to spawn; default is 1
#pid /nginx/pid/nginx.pid;   #Path where the nginx process PID file is stored
error_log log/error.log debug;  #Log path and level. This directive may appear in the global, http, or server block. Levels: debug|info|notice|warn|error|crit|alert|emerg
events {
    accept_mutex on;   #Serialize connection acceptance to prevent the thundering-herd problem; default is on
    multi_accept on;  #Whether a worker process may accept several connections at once; default is off
    #use epoll;      #Event-driven model: select|poll|kqueue|epoll|resig|/dev/poll|eventport
    worker_connections  1024;    #Maximum number of connections per worker; default is 512
}
http {
    include       mime.types;   #Map of file extensions to MIME types
    default_type  application/octet-stream; #Default MIME type; default is text/plain
    #access_log off; #Disable the access log
    log_format myFormat '$remote_addr - $remote_user [$time_local] $request $status $body_bytes_sent $http_referer $http_user_agent $http_x_forwarded_for'; #Custom log format
    access_log log/access.log myFormat;  #combined is the default log format
    sendfile on;   #Allow sendfile for file transfers; default is off; may appear in the http, server, or location block
    sendfile_max_chunk 100k;  #Maximum amount of data per sendfile() call; default is 0, i.e. no limit
    keepalive_timeout 65;  #Connection timeout; default is 75s; may appear in the http, server, or location block

    upstream mysvr {
      server 127.0.0.1:7878;
      server 192.168.10.121:3333 backup;  #hot standby
    }
    error_page 404 https://www.baidu.com; #error page
    server {
        keepalive_requests 120; #Maximum number of requests per keep-alive connection
        listen       4545;   #listening port
        server_name  127.0.0.1;   #listening address
        location  ~* ^.+$ {       #URL filtering with a regular expression; ~ is case-sensitive, ~* is case-insensitive
           #root path;  #document root
           #index vv.txt;  #default page
           proxy_pass  http://mysvr;  #forward the request to the server list defined by mysvr
           deny 127.0.0.1;  #denied IP
           allow 172.18.5.54; #allowed IP
        }
    }
}

The above is a basic Nginx configuration. The following points are worth noting:

1. Log variables: $remote_addr and $http_x_forwarded_for record the client's IP address; $remote_user records the client user name; $time_local records the access time and time zone; $request records the requested URL and HTTP protocol; $status records the request status (200 on success); $body_bytes_sent records the size of the response body sent to the client; $http_referer records the page from which the request was linked; $http_user_agent records information about the client's browser.

2. Thundering herd: when a network connection arrives, multiple sleeping worker processes are woken up at the same time, but only one of them can accept the connection, which wastes wake-ups and hurts system performance.

3. Every directive must end with a semicolon.




Detailed Nginx configuration parameters

1. Basic configuration

  • cluster configuration
  • proxy configuration
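A minimal sketch covering both, mirroring the upstream and proxy settings used in the reference configuration in section 8 (addresses are illustrative):

upstream cwbb {
    server 192.168.137.121:8080;
    server 192.168.137.121:8081;
}

server {
    listen 80;
    location / {
        proxy_pass http://cwbb;                                     # forward to the cluster
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}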

2. Optimize configuration

  • Number of Nginx worker processes, generally the number of CPU cores - 1. Check with: cat /proc/cpuinfo | grep name | cut -f2 -d: | uniq -c
  • Log and PID file paths
  • Maximum number of files a single Nginx process may open
  • Maximum number of client connections
  • Event-processing model optimization
  • Log format
  • Log cache
  • Upload size limit
  • Compression configuration
  • Buffer settings
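A minimal sketch of these optimization settings (values are illustrative and mirror the reference configuration in section 8):

worker_processes  3;                 # number of CPU cores - 1
error_log  logs/error.log  notice;   # error log path and level
pid        logs/nginx.pid;           # PID file path
worker_rlimit_nofile 1024;           # maximum open files per worker process

events {
    use epoll;                       # event-processing model
    worker_connections  1024;        # maximum client connections per worker
}

http {
    # Custom log format.
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" "$http_user_agent"';
    # Cache open log file descriptors to reduce logging I/O.
    open_log_file_cache max=10240 inactive=60s valid=1m min_uses=2;
    client_max_body_size 100m;       # upload size limit
    # Compression.
    gzip on;
    gzip_min_length 2k;
    gzip_comp_level 3;
    # Buffer settings.
    client_header_buffer_size 64k;
    large_client_header_buffers 4 4k;
}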

3. Security optimization

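A minimal sketch; hiding the version string in response headers and error pages is the setting used in the reference configuration in section 8:

http {
    server_tokens off;   # hide the nginx version information
}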

4. Session persistence (sticky sessions) configuration

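A minimal sketch, assuming the third-party sticky module is compiled in (as in the reference configuration in section 8):

upstream cwbb {
    sticky name="hellosticky";             # requires the third-party sticky module
    server 192.168.137.121:8080 weight=10;
    server 192.168.137.121:8081 weight=10;
}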

5. Status view configuration


  • The check module also needs to be configured inside the cluster (upstream) configuration; see the sketch below.

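A minimal sketch; stub_status comes from the standard ngx_http_stub_status_module, while the check and check_status directives assume the third-party nginx_upstream_check_module is compiled in:

upstream cwbb {
    server 192.168.137.121:8080;
    server 192.168.137.121:8081;
    check interval=3000 rise=2 fall=5 timeout=1000 type=http;   # third-party health check
}

server {
    listen 80;
    location /stub_status {
        stub_status;          # built-in connection statistics
        access_log off;
    }
    location /check_status {
        check_status;         # upstream health-check status page (third-party)
        access_log off;
    }
}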

6. SSL configuration

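A minimal sketch, assuming hypothetical certificate paths (the same settings appear, commented out, in the reference configuration in section 8):

server {
    listen       443 ssl;
    server_name  localhost;

    ssl_certificate      /usr/local/nginx1.20/cert/xxx.crt;   # hypothetical certificate path
    ssl_certificate_key  /usr/local/nginx1.20/cert/xxx.key;   # hypothetical key path
    ssl_session_cache    shared:SSL:10m;
    ssl_session_timeout  5m;
    ssl_ciphers  HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers  on;

    location / {
        root   html;
        index  index.html index.htm;
    }
}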

7. Nginx service management

  • Start
[root@localhost nginx-1.20.1]# /usr/local/nginx1.20/sbin/nginx
  • Stop
[root@localhost nginx-1.20.1]# /usr/local/nginx1.20/sbin/nginx -s stop
  • Reload the configuration
[root@localhost nginx-1.20.1]# /usr/local/nginx1.20/sbin/nginx -s reload
  • Check the processes
[root@localhost nginx-1.20.1]# ps -ef | grep nginx
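It is also common to validate the configuration file before reloading; the -t option is a standard nginx flag:

[root@localhost nginx-1.20.1]# /usr/local/nginx1.20/sbin/nginx -t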


8. Reference configuration

#user  nobody;
# Number of CPU cores - 1
worker_processes  3;
# nginx error log path
#error_log  logs/error.log;
error_log  logs/error.log  notice;
#error_log  logs/error.log  info;
# Path of the file recording the nginx process ID
pid        logs/nginx.pid;
# Maximum number of files a single worker process may open
worker_rlimit_nofile 1024;
events {
	# Use the epoll model to optimize event handling
	use epoll;
	# Maximum number of client connections; keep this consistent with the per-process open-file limit
    worker_connections  1024;
}
http {
	# Hide the nginx version information
	server_tokens off;
    include       mime.types;
    default_type  application/octet-stream;
	# Log format
	log_format  main  '[time:$request_time s] $remote_addr - $remote_user [$time_local] "$request" '  
			  '$status $body_bytes_sent "$http_referer" '
			  '"$http_user_agent" "$http_x_forwarded_for"'
			  '$upstream_addr $upstream_response_time $request_time $upstream_status '
					  '"$http_range" "$sent_http_content_range"'
					  '"$gzip_ratio"'
					  '"$query_string"' 
	'"-http_refer:$http_referer"';	
	# Cache open log file descriptors to reduce logging I/O
	open_log_file_cache max=10240 inactive=60s valid=1m min_uses=2;
	# Maximum upload size
	client_max_body_size 100m;
	client_header_buffer_size 64k;
	large_client_header_buffers 4 4k;
	# Compression settings
	gzip on;
	gzip_min_length 2k;
	gzip_buffers 4 16k;
	gzip_comp_level 3;
	gzip_vary on;
	gzip_types text/plain application/x-javascript application/javascript application/css  text/css application/xml application/json;
	# Cache / proxy settings
	proxy_connect_timeout 3600s;	# Timeout for Nginx to establish a connection with the proxied service
	proxy_read_timeout 3600s;   # Timeout for Nginx to read the response from the proxied service
	proxy_send_timeout 3600s;	# Timeout for Nginx to send the request to the proxied service
	proxy_buffer_size 512k;		# Buffer size for the first part (headers) of the proxied response
	proxy_buffers 64 512k;		# Number and size of buffers for the proxied response
	proxy_busy_buffers_size 512k;	# Buffers that may be busy sending the response to the client
	proxy_temp_file_write_size 512k;	# Amount of data written to a temporary file at a time
	## Disk path where Nginx stores temporary files when the upstream response does not fit into the configured buffers; set it to a directory that exists on the server
	proxy_temp_path /usr/local/nginx1.20/cache_temp_path;
	# Note the zone name "cache_one"; later location blocks refer to it
	proxy_cache_path /usr/local/nginx1.20/cache_path levels=1:2 keys_zone=cache_one:500m inactive=1d max_size=10g use_temp_path=off;
	# proxy_cache_key $host$request_uri;
	client_body_buffer_size 10240k;
	output_buffers 8 64k;
	postpone_output 1460;
	client_header_timeout 120s;
	client_body_timeout 120s;
    sendfile        on;
    keepalive_timeout  65;
	upstream cwbb {
	# Session persistence; requires the third-party sticky module
	sticky name="hellosticky";
	server 192.168.137.121:8080 max_fails=5  fail_timeout=600s weight=10;
	server 192.168.137.121:8081 max_fails=5  fail_timeout=600s weight=10;
	server 192.168.137.121:8083 max_fails=5  fail_timeout=600s weight=10;
	server 192.168.137.121:8084 max_fails=5  fail_timeout=600s weight=10;
	check interval=3000 rise=2 fall=5 timeout=1000 type=http;
	}
    server {
        listen       80;
        server_name  localhost;
		
		# If no HTTPS certificate is configured, the listen 443 ssl; ssl_certificate; ssl_certificate_key; ssl_session_cache; and ssl_session_timeout directives can all be commented out with #
		#listen       443 ssl;
		#ssl_certificate      /usr/local/nginx1.20/cert/xxx.crt;
		#ssl_certificate_key  /usr/local/nginx1.20/cert/xxx.key;
		#ssl_session_cache    shared:SSL:10m;
		#ssl_session_timeout  5m;
		#ssl_ciphers  HIGH:!aNULL:!MD5;
		#ssl_prefer_server_ciphers  on;

		location ~* ^.+\.(jpg|jpeg|gif|png|js|ttf|css|json)$ {
			proxy_pass http://cwbb;
			proxy_ignore_headers X-Accel-Expires Expires Cache-Control Set-Cookie;
			proxy_cache off;
			proxy_redirect off;
			proxy_set_header X-Real-IP $remote_addr;
			proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
			proxy_connect_timeout 180;
			proxy_send_timeout 180;
			proxy_read_timeout 180;
			proxy_buffer_size 128k;
			proxy_buffers 4 128k;
			proxy_busy_buffers_size 128k;
			proxy_temp_file_write_size 128k;
			proxy_cache_valid 200 304 302 24h;
			proxy_cache_key   $server_addr$uri$is_args$args;
			add_header Cache-Control no-cache;
		}
		# check module configuration
        location /check_status {
                   check_status;
                   access_log off;
            }
        # stub_status module configuration
        location /stub_status {
                   stub_status;
                   access_log off;
            }
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
		## Root path access; add further location blocks for any other paths that need to be proxied
		location / {
			## If unsafe request methods must be blocked, add the following; GET|POST|HEAD are the allowed methods
			if ($request_method !~ ^(GET|POST|HEAD)$) {
			      return 403 '{"timestamp":"2019-05-30T12:39:03.593","success":false,"errorCode":"403","errorMessage":"Unsafe request method: $request_method","errorDetail":"Unsafe URL: $request_uri","data":null}';
			}
			proxy_pass http://cwbb;
			limit_rate 400k;
			limit_rate_after 5m;
			proxy_connect_timeout 1200;
			proxy_send_timeout 1200s;
			proxy_read_timeout 1200s;
			proxy_redirect off;
			proxy_set_header Host $host;
			proxy_set_header X-Real-IP $remote_addr;
			proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
			add_header Cache-Control no-cache;
		}
    }
}
  • Proxy configuration in effect (four local Tomcat services are proxied)




The use of include directives in Nginx

1. Application scenarios:

When there are multiple domain names, writing every configuration in the main nginx.conf file inevitably makes it messy and bloated.

To make the configuration file easier to maintain, it should be split into separate files.

2. Create a vhost folder in the conf directory of nginx:

[root@Centos conf]# pwd
/usr/local/nginx/conf
[root@Centos conf]# mkdir vhost

3. Create test1.com.conf and test2.com.conf files in the vhost folder:

(1) Contents of the test1.com.conf file (note that the echo directive used for test output requires the third-party echo module; without it, return 200 "test1.com"; achieves the same effect):

server {
        listen 8000;
        server_name test1.com;
        location / {
            proxy_set_header Host $host:$server_port;
            proxy_set_header X-Real-Ip $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            # proxy_pass http://xxx.xxx.xxx;
            echo "test1.com";    # 输出测试

        }

}

(2) Contents of test2.com.conf file:

server {
        listen 8000;
        server_name test2.com;
        location / {
            proxy_set_header Host $host:$server_port;
            proxy_set_header X-Real-Ip $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            # proxy_pass http://xxx.xxx.xxx;
            echo "test2.com";    # 输出测试

        }

}

4. Add the following content to the http{…} section of the nginx.conf main configuration file:

include vhost/*.conf;    # the include directive pulls in the split configuration files

5. Edit the /etc/hosts file

# vim /etc/hosts
127.0.0.1 test1.com
127.0.0.1 test2.com

6. Test:

[root@Centos conf]# curl http://test1.com:8000
test1.com
[root@Centos conf]# curl http://test2.com:8000
test2.com
[root@Centos conf]# 

Origin: blog.csdn.net/qq_43842093/article/details/130439341