The most complete nginx installation and upgrade security configuration guide in history

Source: DevOpSec Official Account
Author: DevOpSec

Background

Nginx is a commonly used proxy server. The proxy layer usually sits closest to the user, so its security matters a great deal, and maintaining and upgrading its security-related configuration is part of our daily work.

Here we choose to deploy OpenResty. OpenResty is a web platform built around Nginx that can parse and execute Lua scripts, which makes it convenient to later build web services on top of nginx or develop our own WAF.

1. Download OpenResty

Visit the official website https://openresty.org/cn/ and download the latest version of OpenResty:

cd /root/
wget https://openresty.org/download/openresty-1.21.4.1.tar.gz

2. nginx compile-time security configuration

tar xvf openresty-1.21.4.1.tar.gz
cd /root/openresty-1.21.4.1/bundle/nginx-1.21.4/

# - 1. Hide the version
vim src/core/nginx.h
#define NGINX_VERSION      "6666"
#define NGINX_VER          "FW/" NGINX_VERSION ".6"

#define NGINX_VAR          "FW"

# - 2. Modify the Server header
vim  src/http/ngx_http_header_filter_module.c
# 49 static u_char ngx_http_server_string[] = "Server: FW" CRLF;

# - 3. Modify the error-page response headers
vim src/http/ngx_http_special_response.c
# 22 "<hr><center>FW</center>" CRLF
# ...
# 29 "<hr><center>FW</center>" CRLF
# ...
# 36 "<hr><center>FW</center>" CRLF
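If you prefer a scripted alternative to the interactive vim edits above, sed can make the same substitutions. The block below is only a sketch: it rehearses the nginx.h edit against a stand-in file containing the stock 1.21.4 defines; against the real tree, point the sed command at src/core/nginx.h instead.

```shell
# Rehearse the version-hiding edit on a stand-in nginx.h
# (defines copied from stock nginx 1.21.4; verify yours match).
mkdir -p demo/src/core
cat > demo/src/core/nginx.h <<'EOF'
#define NGINX_VERSION      "1.21.4"
#define NGINX_VER          "nginx/" NGINX_VERSION
#define NGINX_VAR          "NGINX"
EOF

# The same three substitutions the vim session performs:
sed -i \
  -e 's|^#define NGINX_VERSION .*|#define NGINX_VERSION      "6666"|' \
  -e 's|^#define NGINX_VER .*|#define NGINX_VER          "FW/" NGINX_VERSION ".6"|' \
  -e 's|^#define NGINX_VAR .*|#define NGINX_VAR          "FW"|' \
  demo/src/core/nginx.h

cat demo/src/core/nginx.h
```

The `NGINX_VER ` pattern (with a trailing space) deliberately does not match the `NGINX_VERSION` line, so the three substitutions are independent.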

3. Add third-party modules

3.1 Add the dynamic upstream configuration module ngx_http_dyups_module

Check out the github code

cd /root
git clone https://github.com/yzprofile/ngx_http_dyups_module.git

3.2 Add the upstream health-check module nginx_upstream_check_module

Check out the github code

git clone https://github.com/yaoweibin/nginx_upstream_check_module.git

3.3 Add the nginx monitoring module nginx-module-vts

Check out the github code

git clone https://github.com/vozlt/nginx-module-vts.git

4. Compile secure nginx

You need to patch nginx before compiling, because the nginx-module-vts module's upstream status monitoring depends on the nginx_upstream_check_module patch being applied.

# Switch to the nginx source directory
cd /root/openresty-1.21.4.1/bundle/nginx-1.21.4/
# Apply the patch
patch -p1 < /root/nginx_upstream_check_module/check_1.20.1+.patch

Compile the hardened nginx:

cd /root/openresty-1.21.4.1/

./configure --prefix=/apps/nginx \
    --with-http_realip_module \
    --with-http_v2_module \
    --with-http_image_filter_module \
    --with-http_iconv_module \
    --with-stream_realip_module \
    --with-stream \
    --with-stream_ssl_module \
    --with-stream_geoip_module \
    --with-http_slice_module \
    --with-http_sub_module \
    --with-http_stub_status_module \
    --with-http_geoip_module \
    --with-http_gzip_static_module \
    --add-module=/root/ngx_http_dyups_module \
    --add-module=/root/nginx_upstream_check_module \
    --add-module=/root/nginx-module-vts

make

make install

If the following error is reported, check whether the patch was actually applied:

/root/nginx-module-vts/src/ngx_http_vhost_traffic_status_display_json.c: In function ‘ngx_http_vhost_traffic_status_display_set_upstream_grou’:
/root/nginx-module-vts/src/ngx_http_vhost_traffic_status_display_json.c:604:61: error: ‘ngx_http_upstream_rr_peer_t’ {aka ‘struct ngx_http_upstream_rr_peer_s’} has no member named ‘check_index’; did you mean ‘checked’?
                 if (ngx_http_upstream_check_peer_down(peer->check_index)) {
                                                             ^~~~~~~~~~~
                                                             checked
make[2]: *** [objs/Makefile:3330: objs/addon/src/ngx_http_vhost_traffic_status_display_json.o] Error 1
make[2]: Leaving directory '/root/openresty-1.21.4.1/build/nginx-1.21.4'
make[1]: *** [Makefile:10: build] Error 2
make[1]: Leaving directory '/root/openresty-1.21.4.1/build/nginx-1.21.4'
make: *** [Makefile:9: all] Error 2

Solution:

yum install patch

cd /root/openresty-1.21.4.1/bundle/nginx-1.21.4/

patch -p1 < /root/nginx_upstream_check_module/check_1.20.1+.patch    

Start nginx

# Start:
/apps/nginx/nginx/sbin/nginx -c /apps/nginx/nginx/conf/nginx.conf

# Reload:
/apps/nginx/nginx/sbin/nginx -s reload -c /apps/nginx/nginx/conf/nginx.conf
 

5. nginx upgrade

In practice we run into nginx vulnerabilities, such as openssl vulnerabilities, that require upgrading the nginx version, or we need to upgrade nginx for some feature.
There are two ways to upgrade (here we mainly discuss virtual-machine deployments; containers can simply be rebuilt from a new image):
the first is to bring up a new virtual machine, install the upgraded nginx there, copy over the nginx configuration, start it, verify there is no problem, mount it on the LB, and gradually replace the old nginx instances;
the second is to upgrade in place on the original machine. Here we mainly cover the second method.
Upgrade steps:

Prerequisites:
1. There are multiple nginx instances, and removing one from the LB does not affect service
2. pid path: /data/data/nginx/conf/nginx.pid
3. The conf directory is independent: /data/data/nginx/conf/


Upgrade steps:
1. Remove the nginx to be upgraded from the LB, and watch the nginx logs to confirm no traffic remains before taking the next step
2. At configure time, specify a new prefix: ./configure --prefix=/apps/nginx_new
3. After installation, symlink the conf directory under nginx_new to /data/data/nginx/conf/
4. Reload nginx: /apps/nginx_new/nginx/sbin/nginx -s reload -c /data/data/nginx/conf/nginx.conf
5. Verify the upgraded nginx; if all is well, mount it back on the LB, then repeat the steps above to upgrade the remaining nginx instances
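Step 3 hinges on repointing the new install's conf directory at the shared configuration via a symlink; `ln -sfn` replaces any existing link in place. Below is a sandboxed sketch of the idiom using throwaway paths — on a real host, the link target is the /data/data/nginx/conf/ path from the prerequisites and the link lives under /apps/nginx_new.

```shell
# Sketch of the conf-symlink idiom from step 3, with placeholder paths.
mkdir -p demo/shared_conf demo/nginx_new/nginx
ln -sfn "$PWD/demo/shared_conf" demo/nginx_new/nginx/conf

# The new install now reads the same config the old one used:
readlink demo/nginx_new/nginx/conf
```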

6. nginx security configuration

6.1 Information disclosure, turn off nginx version number display

http {
    server_tokens off;
    ....
}

6.2 Disable unnecessary Nginx modules

A default Nginx build comes with many modules, and not all of them are required; non-essential modules can be disabled at build time. For example, to disable the autoindex module:

# ./configure --without-http_autoindex_module
# make
# make install

6.3 Control resources and limits

To mitigate potential DoS attacks against Nginx, you can set a buffer size limit for all clients, configured as follows:

## Start: Size Limits & Buffer Overflows ##
client_body_buffer_size  1K;
client_header_buffer_size 1k;
client_max_body_size 1k;
large_client_header_buffers 2 1k;
## END: Size Limits & Buffer Overflows ##

client_body_buffer_size 1k;: (default 8k or 16k) Sets the buffer size for the client request body. If the request body is larger than the buffer, all or part of it is written to a temporary file.
client_header_buffer_size 1k;: Sets the buffer size for the client request header. In most cases a request header is no larger than 1k, but a large cookie (for example from a WAP client) may exceed 1k, in which case Nginx allocates a larger buffer; that size is controlled by large_client_header_buffers.
client_max_body_size 1k;: Sets the maximum request body size the client is allowed to send, as given by the Content-Length field of the request header. If the request is larger than this value, the client receives a "Request Entity Too Large" (413) error. Remember, browsers don't know how to display this error.
large_client_header_buffers 2 1k;: Sets the number and size of buffers used for large client request headers. The request line cannot be larger than one buffer; if the client sends a larger one, nginx returns "Request URI too large" (414). Likewise, no single header field can be larger than one buffer, otherwise the server returns "Bad request" (400). Buffers are only allocated on demand. By default a buffer is the size of a memory page, usually 4k or 8k. When a connection transitions to keep-alive, the buffers it occupies are released.

You also need to control timeouts, both to improve server performance and to cut off idle clients. The configuration is as follows:

## Start: Timeouts ##
client_body_timeout   10;
client_header_timeout 10;
keepalive_timeout     5 5;
send_timeout          10;
## End: Timeouts ##

client_body_timeout 10;: Sets the timeout for reading the client request body. The timeout applies between two successive read operations, not to the transmission of the whole body. If the client sends nothing within this time, Nginx returns a "Request time out" (408) error.
client_header_timeout 10;: Sets the timeout for reading the client request header. The timeout likewise applies to the reading step; if the client sends nothing within this time, Nginx returns a "Request time out" (408) error.
keepalive_timeout 5 5;: The first value sets the timeout for keep-alive connections with the client; the server closes the connection once it is exceeded. The optional second value sets the time in the "Keep-Alive: timeout=time" response header, which lets some browsers know when to close the connection themselves so the server does not have to close it repeatedly. If this parameter is not specified, nginx does not send Keep-Alive information in the response header. (This does not itself control how a connection is kept alive.) The two values may differ.
send_timeout 10;: Sets the timeout for transmitting a response to the client. The timeout applies between two successive write operations, not to the whole response. If the client accepts nothing within this time, nginx closes the connection.

6.4 Disable all unnecessary HTTP methods

Disable all unnecessary HTTP methods. The following settings mean only GET, HEAD, and POST methods are allowed, and methods such as DELETE and TRACE are filtered out.

location / {
limit_except GET HEAD POST { deny all; }
}

Another method is to set this in the server block, but that applies globally to the server, so evaluate the impact first:

if ($request_method !~ ^(GET|HEAD|POST)$ ) {
    return 444;
}

6.5 Preventing Host Header Attacks

Add a default server. When the Host header is modified and matches no configured server, the request falls through to the default server, which simply returns a 403 error.

server {
       listen 80 default;

       server_name _;

       location / {
            return 403;
       }
}

6.6 Configure SSL and cipher suites

By default, Nginx allows insecure old SSL/TLS protocols (ssl_protocols TLSv1 TLSv1.1 TLSv1.2). It is recommended to change this to:

ssl_protocols TLSv1.2 TLSv1.3;

In addition, when specifying cipher suites, you can make sure the server's preferences are used during the TLS handshake, which enhances security:

ssl_prefer_server_ciphers on;
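Putting these directives together, a fuller TLS server block might look like the sketch below. The certificate paths and the cipher list are illustrative assumptions (the ciphers follow the common ECDHE-with-AESGCM/ChaCha20 pattern); substitute your own certificates and a vetted cipher suite.

```nginx
# Sketch of a hardened TLS server block; certificate paths and the
# cipher list are placeholders — replace them with your own.
server {
    listen 443 ssl;
    server_name www.example.com;

    ssl_certificate     /apps/nginx/conf/certs/example.com.crt;
    ssl_certificate_key /apps/nginx/conf/certs/example.com.key;

    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_prefer_server_ciphers on;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305;

    # Session resumption reduces handshake cost without weakening the above.
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;
}
```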

6.7 Prevent image hotlinking

Image or HTML hotlinking means someone embeds your website's image URLs directly on their own site. The end result: you pay for the extra bandwidth. This is common on forums and blogs. I strongly recommend you block and prevent hotlinking.

location /images/ {
  valid_referers none blocked www.example.com example.com;
  if ($invalid_referer) {
    return 403;
  }
}

For example, to redirect hotlinked requests and display a designated image instead:

valid_referers blocked www.example.com example.com;
if ($invalid_referer) {
    rewrite ^/images/uploads.*\.(gif|jpg|jpeg|png)$ http://www.example.com/banned.jpg last;
}

6.8 Directory restrictions

You can set access permissions for specific directories. All website directories should be configured one by one, allowing only the access that is necessary.

You can restrict access to directories by IP address

location /docs/ {
  ## block one workstation
  deny    192.168.1.1;
 
  ## allow anyone in 192.168.1.0/24
  allow   192.168.1.0/24;
 
  ## drop rest of the world
  deny    all;
}

You can also password-protect directories.
First create a password file and add a user (literally named "user" here):

mkdir /app/nginx/nginx/conf/.htpasswd/
htpasswd -c /app/nginx/nginx/conf/.htpasswd/passwd user

Edit nginx.conf and add the directories to be protected:

location ~ /(personal-images/.*|delta/.*) {
  auth_basic  "Restricted";
  auth_basic_user_file   /usr/local/nginx/conf/.htpasswd/passwd;
}

Once the password file has been generated, you can also add users who are allowed access with the following command

htpasswd -s /usr/local/nginx/conf/.htpasswd/passwd userName

6.9 Deny some User-Agents

You can easily block User-Agents such as scanners, bots, and spammers that abuse your server.

## Block download agents ##
if ($http_user_agent ~* LWP::Simple|BBBike|wget) {
return 403;
}
##

6.10 Do not expose nginx directly on a public IP

If nginx has a vulnerability, remote code execution becomes possible: an attacker can download attack tools onto the nginx machine through its public IP and use the machine as a jump host for further attacks.
Front nginx with an LB so that traffic passes through the LB first and then reaches nginx; do not expose nginx directly on a public IP.
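One way to enforce this, assuming the machine also has an internal network interface, is to bind nginx's listen directive to the internal address only. The 10.0.8.5 address and the 10.0.0.0/8 LB range below are made-up examples; this sketch also assumes the --with-http_realip_module build flag from section 4.

```nginx
# Sketch: listen only on an assumed internal address so the LB in
# front is the sole public entry point.
server {
    listen 10.0.8.5:80;
    server_name www.example.com;

    # Optionally recover the real client IP the LB forwards
    # (requires http_realip_module; the CIDR is an assumed LB range):
    set_real_ip_from 10.0.0.0/8;
    real_ip_header X-Forwarded-For;
}
```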

6.11 Configure a reasonable response header

To further harden Nginx web services, several different response headers can be added.
X-Frame-Options
The X-Frame-Options HTTP response header indicates whether the browser should be allowed to render the page in a \<frame\> or \<iframe\>. This prevents clickjacking attacks.
Add to the configuration file:

add_header X-Frame-Options "SAMEORIGIN";

Strict-Transport-Security
HTTP Strict Transport Security, or HSTS, lets an HTTPS website require that browsers always access it over HTTPS and reject plain-HTTP requests. The configuration is as follows:

add_header Strict-Transport-Security "max-age=31536000; includeSubdomains; preload";

CSP
Content Security Policy (CSP) protects your website from XSS and other content-injection attacks. The configuration is as follows:

add_header Content-Security-Policy "default-src 'self' http: https: data: blob: 'unsafe-inline'" always;

When serving user-provided content, include the X-Content-Type-Options: nosniff header, in conjunction with the Content-Type header, to disable content-type sniffing in some browsers.

add_header X-Content-Type-Options nosniff;

X-XSS-Protection: 1 enables the browser's XSS filter (X-XSS-Protection: 0 disables it); mode=block stops rendering the page entirely if an XSS attack is detected:

add_header X-XSS-Protection "1; mode=block";

6.12 Full site https

redirect all http to https

server {
  listen 80 default_server;
  listen [::]:80 default_server;
  server_name .example.com;
  return 301 https://$host$request_uri;
}

6.13 Controlling the Number of Concurrent Connections

The number of concurrent connections of an IP can be limited by the ngx_http_limit_conn_module module

http {
    limit_conn_zone $binary_remote_addr zone=limit1:10m;

    server {
        listen 80;
        server_name example.com;
           
        root /apps/project/webapp;
        index index.html;
        location / {
            limit_conn limit1 10;
        }
        access_log /data/log/nginx/nginx_access.log main;
    }
}

limit_conn_zone: Defines a shared memory zone that stores the state for each key value (here $binary_remote_addr), in the form zone=name:size. The required size depends on the key variable: $binary_remote_addr is fixed at 4 bytes for an IPv4 address and 16 bytes for IPv6. A stored state occupies 32 or 64 bytes on 32-bit platforms, and 64 bytes on 64-bit platforms. 1m of shared memory can hold roughly 32,000 32-byte states or 16,000 64-byte states.
limit_conn: References a previously defined shared memory zone (for example, the one named limit1) and sets the maximum number of connections allowed per key value.

The above example means that only 10 connections are allowed at the same time for the same IP

When multiple limit_conn directives are configured, all connection limits will take effect

http {
    limit_conn_zone $binary_remote_addr zone=limit1:10m;
    limit_conn_zone $server_name zone=limit2:10m;
    
    server {
        listen 80;
        server_name example.com;
        
        root /data/project/webapp;
        index index.html;
        
        location / {
            limit_conn limit1 10;
            limit_conn limit2 2000;
        }
    }
}

The above configuration will not only limit the number of connections from a single IP source to 10, but also limit the total number of connections to a single virtual server to 2000

6.14 Connection authority control

In fact, the maximum number of client connections nginx can handle is worker_processes multiplied by worker_connections.

In other words, the configuration below allows 4 × 65535 connections. We usually stress that worker_processes should equal the number of CPU cores, while worker_connections gets little attention. But this also leaves room for an attacker, who can open that many connections at once and overwhelm your server. These two parameters should therefore be configured more deliberately.

user  www;
worker_processes  4;
error_log  /data/log/nginx/nginx_error.log  crit;
pid        /data/data/nginx/conf/nginx.pid;
events {
    use epoll;
    worker_connections 65535;
}
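For reference, the theoretical ceiling described above works out as follows (in practice each worker is additionally capped by the process's open-file limit):

```shell
# worker_processes × worker_connections from the config above:
echo $((4 * 65535))    # prints 262140

# Each worker is also bounded by the open-file limit; check it with:
ulimit -n
```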

However, this can certainly be limited. Since nginx 0.7, two modules are available:

HttpLimitReqModule:    limits the number of requests per second from a single IP
HttpLimitZoneModule:   limits the number of connections from a single IP

Both modules must first be defined at the http level and then applied in http, server, or location context. They use a leaky-bucket algorithm to restrict access per IP: requests beyond the defined limit receive a 503 error, which reins in bursts of CC attacks. Of course, dozens of people in one company may share a single IP and be caught by accident, so make sure to serve a proper 503 error page.

Look at HttpLimitReqModule first:

http {
    limit_req_zone $binary_remote_addr zone=test_req:10m rate=20r/s;
     …
     server {
         …
         location /download/ {
            limit_req zone=test_req burst=5 nodelay;
         }
     }
}

The http-level directive above is the definition: a limit_req_zone named test_req that stores session state in 10M of memory (1M can hold about 16,000 IP states; size it as you need). The key is $binary_remote_addr, i.e. the client IP; it can be changed to $server_name or another variable. The average request rate is limited to 20 per second; written as 20r/m it would be per minute. Tune it to your traffic volume.

The location block below applies this limit. Matching the definition above, requests for the download directory are capped at 20 per second per IP, with a burst bucket of 5. burst means that if seconds 1 through 4 each see 19 requests, then 25 requests in second 5 are still allowed; but 25 requests in the first second followed by more than 20 in the second second return a 503 error. Without nodelay, 25 requests arriving in the first second would have 5 of them held over and executed in the second second; with nodelay set, all 25 execute in the first second.

As far as this kind of limit goes, capping requests per IP has an obvious effect against massive CC request attacks — limiting to 1r/s, one request per second, is even more pronounced — but as noted at the start, large companies with many people behind one IP will suffer collateral damage, so weigh it carefully.
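As noted above, it pays to serve a friendly page for the 503s these limits generate. A sketch, reusing the test_req zone from the example above — the /503.html location and its docroot are assumptions, and on nginx 1.3.15 or later, limit_req_status could return a different code such as 429 instead:

```nginx
# Sketch: serve a custom page when limit_req rejects a request.
server {
    listen 80;
    server_name example.com;

    location /download/ {
        limit_req zone=test_req burst=5 nodelay;
        error_page 503 /503.html;    # friendlier page for limited clients
    }

    location = /503.html {
        root /apps/nginx/html;       # assumed docroot holding 503.html
        internal;                    # not directly requestable
    }
}
```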

Then look at HttpLimitZoneModule:

http {
    limit_conn_zone $binary_remote_addr zone=test_zone:10m;
    server {
        location /download/ {
            limit_conn test_zone 10;
            limit_rate 500k;
        }
    }
}

Similar to the above, the http level holds the general definition: a limit_conn_zone named test_zone, also 10M in size, keyed by the client IP address. The connection limit itself is not set here; it is applied in the definition below.

The location block is where the real limits are applied. Because the key is the client IP, limit_conn test_zone 10 allows 10 connections per IP (with $server_name as the key, it would be 10 connections per domain name). limit_rate then caps the bandwidth of a single connection: an IP with two connections can use 500×2K, and with the 10-connection cap here, a single IP can reach at most 5000K.

6.15 Regular upgrades

As time passes and technology iterates, nginx itself and the third-party libraries it uses may turn out to have major vulnerabilities. Those of us responsible for nginx-related services need to regularly follow nginx version updates and related vulnerability announcements, and upgrade selectively.

Origin blog.csdn.net/linuxxin/article/details/129376658