Common configuration of Nginx

Before introducing the configuration of nginx, let's briefly cover installing nginx on a Linux system. Follow the steps below for a successful installation.

Download address of nginx: https://nginx.org/en/download.html

Install the build toolchain: yum -y install gcc gcc-c++ automake autoconf libtool make

Install pcre: yum -y install pcre pcre-devel

Install zlib: yum -y install zlib zlib-devel

Install openssl: yum -y install openssl openssl-devel

Extract the archive: tar -zvxf nginx-1.20.2.tar.gz

Enter the nginx source directory: cd /usr/local/nginx/nginx-1.20.2

Execute: **./configure**

Compile: make

Install: **make install**

Execute: cd /usr/local/nginx/sbin

Start the nginx service: **./nginx**

Stop the nginx service: ./nginx -s stop

Reload the nginx configuration (graceful restart): ./nginx -s reload

Check a configuration file: nginx -t -c ~/youSite.conf

First, strip the comments from nginx.conf; the simplest configuration looks like this:

worker_processes  1;
events {
    worker_connections  1024;
}
http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile        on;
    keepalive_timeout  65;
    server {
        listen       80;
        server_name  localhost;

        location / {
            root   html;
            index  index.html index.htm;
        }
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}
Block configuration items

In the above code, events, http, server, location, and so on are all block configuration items. Block configuration items can be nested; an inner block inherits from its enclosing outer block, and when the inner and outer configurations conflict, the inner block's configuration takes precedence.

Syntax format of configuration items
directive_name value1 value2 ...;

Note: each configuration directive must end with a semicolon.

Number of Nginx worker processes

Syntax: worker_processes number;

Default: worker_processes 1;

Typical configuration: set the number of worker processes according to the number of CPU cores.
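
For example, on a machine with 4 CPU cores, a minimal sketch (the exact value depends on your hardware) might be:

worker_processes  4;     # one worker per CPU core
# since nginx 1.2.5 / 1.3.8 you can also let nginx detect the core count:
# worker_processes  auto;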

Maximum number of connections per worker

Syntax: worker_connections number;
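
Together with worker_processes, this directive bounds the server's capacity: roughly worker_processes × worker_connections simultaneous connections for a static server, and fewer for a proxy, since each client connection may also need an upstream connection. A minimal sketch:

events {
    # each worker process may hold up to 1024 simultaneous connections
    worker_connections  1024;
}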

Configure a static web server with the HTTP core module

Listening port

Syntax: listen address:port [default (deprecated in 0.8.21) | default_server | ...];

Default: listen 80;

Configuration block: server

After listen you may specify an IP address, a port, or a host name.
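
For illustration, all of the following forms are accepted (the addresses are examples, not recommendations):

listen 127.0.0.1:8000;
listen 127.0.0.1;                 # port 80 is assumed
listen 8000;
listen *:8000;
listen localhost:8000 default_server;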

Host name

Syntax: server_name name[…];

Default: server_name "";

Configuration block: server

The server_name configuration item lets you serve different content for requests carrying particular Host header domain names; the server_name value can also be an IP address.
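
A hedged sketch with two virtual hosts (the host names and document roots are made up for illustration):

# requests whose Host header is www.example.com or example.com land here
server {
    listen       80;
    server_name  www.example.com example.com;
    root         /var/www/main;     # hypothetical document root
}

# requests for static.example.com land here instead
server {
    listen       80;
    server_name  static.example.com;
    root         /var/www/static;   # hypothetical document root
}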

server_names_hash_bucket_size

Syntax: server_names_hash_bucket_size size;

Default: server_names_hash_bucket_size 32|64|128;

Configuration block: http, server, location

To speed up finding the matching server name, Nginx stores server names in a hash table; server_names_hash_bucket_size sets the amount of memory occupied by each hash bucket.

server_names_hash_max_size

Syntax: server_names_hash_max_size size;

Default: server_names_hash_max_size 512;

Configuration block: http, server, location

The larger the server_names_hash_max_size, the more memory will be consumed, but the collision rate of the hash key will be reduced and the retrieval speed will be faster.

The smaller the server_names_hash_max_size, the smaller the memory consumption, but the collision rate of the hash key may increase.
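
As a hedged illustration (the right values depend on how many server names you host and how long they are), a site with many long virtual-host names might raise both settings:

http {
    server_names_hash_max_size     1024;
    server_names_hash_bucket_size  128;

    # ... server blocks ...
}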

Handling of redirected hostnames

Syntax: server_name_in_redirect on|off;

Default: server_name_in_redirect on;

Configuration block: http, server, location

When set to on, redirects issued by Nginx replace the Host header of the original request with the first host name configured in server_name.

When set to off, redirects use the Host header from the request itself.
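
A minimal sketch (the host names are examples):

server {
    listen       80;
    server_name  primary.example.com alias.example.com;
    # off: redirects keep whatever Host name the client actually used
    server_name_in_redirect  off;
}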

location

Syntax: location [=|~|~*|^~|@] /uri/ { ... }

Configuration block: server

location tries to match the URI of the user request against the /uri expression; if it matches, the configuration inside the location {} block is used to process the request. There are several matching methods, described by the following location matching rules.

= means the request URI must exactly match the uri parameter as a literal string
location = / {    # used only when the request URI is exactly /
		…
	}
~ means the URI is matched against a case-sensitive regular expression.
~* means the URI is matched against a case-insensitive regular expression.
^~ means only the front part (prefix) of the request URI needs to match the uri parameter.
location ^~ /images/ {    # any request whose URI starts with /images/ matches
		…
	}
Regular expressions can be used in the uri parameter
location ~* \.(gif|jpg|jpeg)$ {    # matches requests ending in .gif, .jpg or .jpeg
		…
	}
/ means match all requests
location / {    # / matches every request
		…
	}
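
When several location blocks could match the same request, nginx uses the exact (=) match first; otherwise it remembers the longest matching prefix, uses it immediately if it is marked ^~, and only then consults the regular-expression locations in order, falling back to the remembered prefix if no regex matches. A hedged sketch of that order (the URIs in the comments are illustrative):

server {
    listen       80;
    server_name  example.com;                # example host, for illustration only

    location = / { }                         # 1. only the request "/" lands here
    location ^~ /images/ { }                 # 2. /images/a.gif lands here; regex locations are skipped
    location ~* \.(gif|jpg|jpeg)$ { }        # 3. /docs/photo.GIF lands here
    location / { }                           # 4. everything else, e.g. /docs/index.html
}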

Definition of file path (commonly used)

Set the resource path as root

Syntax: root path;

Default: root html;

Configuration block: http, server, location, if

Defines the root directory of resource files relative to the HTTP request.

In the following configuration, if a request's URI is /download/index/test.html, the web server will return the contents of the /opt/web/html/download/index/test.html file on the server.

location /download/ {
	root /opt/web/html;
}
Set the resource path in alias mode

Syntax: alias path;

Configuration block: location

alias is also used to set the file resource path. It differs from root in how the uri parameter following location is interpreted, which makes alias and root map user requests onto real disk files in different ways. Suppose the URI of a request is /download/image/avator.jpg and the file the user actually wants is /usr/image/avator.jpg; with alias this can be configured as follows:

location /download/image {
	alias /usr/image/;
}

If you use root instead, the configuration looks like this, and the URI to access becomes /image/avator.jpg:

location /image {
	root /usr/;
}

Regular expressions can also be used after alias. With the configuration below, when a request for /test/nginx.conf arrives, Nginx returns the contents of the /usr/local/nginx/conf/nginx.conf file:

location ~ ^/test/(\w+)\.(\w+)$ {
	alias /usr/local/nginx/$2/$1.$2;
}
Accessing the home page

Syntax: index file...;

Default: index index.html;

Configuration block: http, server, location

index can be followed by multiple file parameters, and Nginx tries them in order. For example:

location / {
    root   path;
    index  /index.html /html/index.php /index.php;
}

After receiving a request, Nginx first tries to access the path/index.html file. If it can be read, its contents are returned and the request ends; otherwise Nginx tries to return the contents of the path/html/index.php file, and so on.
Redirect pages based on HTTP return codes

Syntax: error_page code [code ...] [= | =answer-code] uri | @named_location;

Configuration block: http, server, location, if

When an error code is returned for a request, if it matches the code set in error_page, redirect to the new URI. For example:

error_page 404 /404.html;
error_page 502 503 504 /50x.html;
error_page 403 http://example.com/forbidden.html;
error_page 404 = @fetch;

If you do not want to modify the URI but only want such requests handled by another location, you can configure it as follows; requests that return 404 are then reverse-proxied to the http://backend upstream server for processing.

location / {
	error_page 404 @fallback;
}
location @fallback {
	proxy_pass http://backend;
}
try_files

Syntax: try_files path1 [path2 ...] uri;

Configuration block: server, location

try_files must be followed by one or more paths (path1, path2, ...) and must end with a uri parameter. Nginx tries to access each path in order; if a path can be read successfully, the corresponding file is returned to the user and the request ends, otherwise the next path is tried. If none of the paths yields a valid file, the request is redirected to the final uri parameter, so that parameter must exist and must be a valid redirect target. For example:

try_files /system/maintenance.html $uri $uri/index.html $uri.html @other;
location @other {
    proxy_pass http://backend;
}

Allocation of memory and disk resources

The HTTP packet body is only stored in a disk file

Syntax: client_body_in_file_only on|clean|off;

Default: client_body_in_file_only off;

Configuration block: http, server, location

When the value is not off, the HTTP packet body of a user request is always stored in a disk file; even a 0-byte body is stored as a file. When the request ends, the file is not deleted if the value is on (this is generally used for debugging and troubleshooting), but it is deleted if the value is clean.
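
A minimal sketch of using this while debugging uploads (the location path and spool directory are assumptions for illustration; http://backend refers to the upstream used in later examples):

location /upload/ {
    # spool every request body to disk, but remove the file once the request ends
    client_body_in_file_only  clean;
    client_body_temp_path     /var/tmp/nginx_body;   # hypothetical spool directory
    proxy_pass                http://backend;
}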

Try to write the HTTP packet body into a memory buffer

Syntax: client_body_in_single_buffer on|off;

Default: client_body_in_single_buffer off;

Configuration block: http, server, location

When enabled, the HTTP packet body of a user request is kept in a memory buffer. If the body exceeds the value set by client_body_buffer_size, it is still written to a disk file.

Memory buffer size for storing HTTP headers

Syntax: client_header_buffer_size size;

Default: client_header_buffer_size 1k;

Configuration block: http, server

This configuration item defines the memory buffer size Nginx normally allocates for receiving the HTTP header part of a user request (the request line plus the HTTP headers). If the header part of a request exceeds this size, the larger buffers defined by large_client_header_buffers take over.

Memory buffer size for storing very large HTTP headers

Syntax: large_client_header_buffers number size;

Default: large_client_header_buffers 4 8k;

Configuration block: http, server

Defines the number and size of the buffers Nginx uses to receive a request with oversized HTTP headers. If the HTTP request line (for example GET /index HTTP/1.1) exceeds a single buffer, Nginx returns 414 (Request-URI Too Large). A request usually carries many headers, and no single header may exceed the size of one buffer, otherwise 400 (Bad Request) is returned. The request line plus all headers must also not exceed number × size in total.

The memory buffer size for storing the HTTP packet body

Syntax: client_body_buffer_size size;

Default: client_body_buffer_size 8k/16k;

Configuration block: http, server, location

This configuration item defines the memory buffer Nginx uses while receiving the HTTP packet body: the body is first received into this buffer before Nginx decides whether to write it to disk. If the request carries a Content-Length header whose declared length is smaller than the configured buffer size, Nginx automatically shrinks the buffer for this request to reduce memory consumption.
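
Putting the buffer directives together, a hedged sketch (the values are illustrative, not tuning advice):

http {
    client_header_buffer_size    1k;      # normal request line + headers
    large_client_header_buffers  4 8k;    # fallback buffers for oversized headers
    client_body_buffer_size      16k;     # the body is buffered here before any disk write
}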

Network connection settings

Timeout for reading HTTP headers

Syntax: client_header_timeout time (default unit: seconds)

Default: client_header_timeout 60;

Configuration block: http, server, location

After the client establishes a connection with the server, the server begins reading the HTTP header. If no bytes are received from the client within this interval (the timeout period), the read is considered to have timed out and a 408 (Request Time-out) response is returned to the client.

Timeout for reading the HTTP packet body

Syntax: client_body_timeout time (default unit: seconds);

Default: client_body_timeout 60;

Configuration block: http, server, location

Similar to client_header_timeout, this timeout applies to reading the HTTP packet body; it covers the interval between two successive read operations rather than the transmission of the whole body.

Timeout for sending responses

Syntax: send_timeout time;

Default: send_timeout 60;

Configuration block: http, server, location

This is the timeout for sending the response: it applies when the Nginx server has sent data to the client but the client has not yet read it. If a connection exceeds the timeout defined by send_timeout, Nginx closes the connection.

keepalive timeout

Syntax: keepalive_timeout time (default unit: seconds);

Default: keepalive_timeout 75;

Configuration block: http, server, location

After a keepalive connection has been idle for this long (75 seconds by default), both server and browser close the connection. The keepalive_timeout configuration item constrains the Nginx server; Nginx also passes this value on to the browser as the specification allows, but each browser may apply its own keep-alive policy.

The maximum number of requests allowed on a keepalive long connection

Syntax: keepalive_requests n;

Default: keepalive_requests 100;

Configuration block: http, server, location

By default a keepalive connection can serve at most 100 requests.
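
A hedged sketch that simply spells out the connection-related defaults described above:

http {
    client_header_timeout  60;     # reading the request headers
    client_body_timeout    60;     # reading the request body
    send_timeout           60;     # waiting for the client to accept response data
    keepalive_timeout      75;     # idle time before a keepalive connection is closed
    keepalive_requests     100;    # maximum requests served over one keepalive connection
}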

Limitations on Client Requests

Limit user requests by HTTP method name

Syntax: limit_except method...{...}

Configuration block: location

Nginx limits user requests through the method names specified after limit_except. Possible method names are: GET, HEAD, POST, PUT, DELETE, MKCOL, COPY, MOVE, OPTIONS, PROPFIND, PROPPATCH, LOCK, UNLOCK, and PATCH. For example:

limit_except GET {
    allow 192.168.1.0/32;
    deny all;
}

Note: allowing the GET method implies allowing the HEAD method as well. The access directives inside the braces apply to every method except GET and HEAD, so this example permits GET and HEAD from any client, while all other HTTP methods are allowed only from 192.168.1.0/32 and denied for everyone else.

The maximum size of the HTTP request body

Syntax: client_max_body_size size;

Default: client_max_body_size 1m;

Configuration block: http, server, location

When a browser sends a request with a large HTTP packet body, its header carries a Content-Length field, and client_max_body_size limits the value that Content-Length may declare. This limit is very useful because Nginx can tell the user that the request is too large without first receiving the entire packet body, which might take a long time. For example, if a user tries to upload a 10 GB file, Nginx finds after reading the header that Content-Length exceeds the value defined by client_max_body_size and immediately sends a 413 (Request Entity Too Large) response to the client.
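
A minimal sketch that raises the limit only for a hypothetical upload location while keeping the 1 MB default elsewhere:

http {
    client_max_body_size  1m;              # global default

    server {
        location /upload/ {                # hypothetical upload endpoint
            client_max_body_size  20m;     # allow request bodies up to 20 MB here
        }
    }
}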

Rate limit on requests

Syntax: limit_rate speed;

Default: limit_rate 0;

Configuration block: http, server, location, if

This configuration limits the number of bytes per second transferred to the client. speed can use the various units mentioned in section 2.2.4 of the book, and the default value 0 means no rate limit. Different clients can be limited differently by setting the $limit_rate variable, for example:

server {
    if ($slow) {
    	set $limit_rate 4k;
    }
}
limit_rate_after

Syntax: limit_rate_after time;

Default: limit_rate_after 1m;

Configuration block: http, server, location, if

This configuration means that Nginx starts to limit the rate after the length of the response sent to the client exceeds limit_rate_after. For example:

limit_rate_after 1m;
limit_rate 100k;

Configure a reverse proxy server with the HTTP proxy module

(Figure: the flow of request forwarding when Nginx acts as a reverse proxy server)

Basic configuration of reverse proxy

proxy_pass

Syntax: proxy_pass URL;

Configuration block: location, if

This configuration item reverse-proxies the current request to the server specified by the URL parameter.

The URL can be a host name or an IP address plus port:

proxy_pass http://localhost:8000/uri/;

You can also point it at an upstream block directly, as shown in the load balancing section below:

upstream backend {
	…
}
server {
    location / {
        proxy_pass http://backend;
    }
}

proxy_pass can also forward requests to an HTTPS upstream:

proxy_pass https://192.168.0.1;
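
In practice a reverse-proxy location is usually paired with a few proxy_set_header directives so the upstream sees the original request information; a hedged sketch (which headers to pass varies by deployment):

location / {
    proxy_pass        http://backend;
    # forward the original Host name and client address to the upstream
    proxy_set_header  Host             $host;
    proxy_set_header  X-Real-IP        $remote_addr;
    proxy_set_header  X-Forwarded-For  $proxy_add_x_forwarded_for;
}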

Basic configuration of load balancing

upstream block

Syntax: upstream name{...}

Configuration block: http

The upstream block defines a cluster of upstream servers that proxy_pass can refer to in a reverse proxy. For example:

upstream backend {
    server backend1.example.com;
    server backend2.example.com;
    server backend3.example.com;
}
server {
    location / {
        proxy_pass http://backend;
    }
}
server

Syntax: server name[parameters];

Configuration block: upstream

The server configuration item specifies the address of an upstream server, which can be a domain name, an IP address with a port, or a UNIX domain socket, optionally followed by parameters such as the ones below.

weight=number: sets the weight for forwarding to this upstream server; the default is 1. The example below also uses max_fails and fail_timeout, which mark a server as temporarily unavailable after the given number of failed attempts within the fail_timeout window, and a UNIX domain socket address.

upstream backend {
    server backend1.example.com weight=5;
    server 127.0.0.1:8080 max_fails=3 fail_timeout=30s;
    server unix:/tmp/backend3;
}
ip_hash

Syntax: ip_hash;

Configuration block: upstream

A key is computed from the client's IP address and taken modulo the number of upstream servers in the cluster; the remainder determines which upstream server the request is forwarded to. This ensures that requests from the same client are always forwarded to the same upstream server.

upstream backend {
    ip_hash;
    server backend1.example.com;
    server backend2.example.com;
    server backend3.example.com down;
    server backend4.example.com;
}

The content of this article comes from the book "In-depth Understanding of Nginx Module Development and Architecture Analysis". This article is just a study note record. If there is any infringement, please contact me to delete the article.
