NGINX installation and operation notes

Table of contents

1. Introduction to NGINX

2. NGINX installation

   1. Download

   2. Unzip

   3. Compile

   4. Install

3. NGINX startup

   1. Manual start

   2. System service startup

4. NGINX configuration

   1. Nginx basic configuration

   2. Nginx forwards dynamic services

   3. Nginx forwards static resources

   4. Nginx supports HTTPS access

   5. Nginx logs request header parameters

   6. Nginx prevents DoS and CC attacks

5. Nginx log segmentation

6. Nginx monitoring

   a. Install the nginx-module-vts module

   b. Install nginx-vts-exporter

   c. View the Nginx monitoring panel

   d. Traffic monitoring tool iftop

   e. Project log management

7. Nginx high concurrency (load balancing)

   1. The principle of Nginx high concurrency

   2. The role of Nginx high concurrency

   3. Nginx high concurrency optimization

   4. Nginx concurrency capacity and resource requirements

   5. Nginx load balancing configuration

      a. Load balancing configuration

      b. Load balancing strategies

8. Nginx high availability

   1. How Keepalived works

   2. Keepalived functions

   3. Install Keepalived

   4. Configure Keepalived

      1) Modify the master NGINX configuration

      2) Modify the backup NGINX configuration

   5. Nginx service detection

9. Nginx handles CC attacks

   1. CC attack principle

   2. Types of CC attacks

   3. Slow attacks and Apache slow-attack protection

   4. Nginx CC attack protection

10. Nginx prohibits access to specific IP addresses

1. Introduction to NGINX

Nginx is a high-performance HTTP and reverse proxy web server that also provides IMAP/POP3/SMTP mail services. Nginx serves HTTP traffic in the REST (Representational State Transfer) style; REST runs on top of HTTP, which is a stateless protocol. HTTP's seven common methods are GET, POST, PUT, PATCH, DELETE, HEAD and OPTIONS, and over its lifetime a resource may be created, modified, queried and deleted as requirements change.

Official website: nginx: download

① Forward proxy vs. reverse proxy

  • Forward proxy: a forward proxy sits between the client and the target server. To fetch a resource, the client sends a request to the proxy and names the target server; the proxy forwards the request to the target server and returns the response to the client. Only the client needs to be configured to use a forward proxy.

  • Reverse proxy: a reverse proxy also sits between clients and target servers, but from the client's point of view the reverse proxy is the target server: the client simply accesses the reverse proxy and receives the target server's resources. The client neither knows the target server's address nor needs any special configuration. Reverse proxies are commonly used for web acceleration, i.e. placed in front of web servers to reduce network and server load and improve access efficiency.

② Typical Nginx architecture in a project

③ Nginx advantages

a. Has its own function library; apart from zlib, PCRE and OpenSSL, the standard modules use only system C library functions

b. Low memory footprint (with 30,000 concurrent connections, 10 nginx processes consume roughly 150 MB of memory)

c. Strong concurrency (supports 50,000 concurrent connections; 20,000-30,000 in production environments); simple configuration; open source and free

d. Saves bandwidth (supports gzip-compressed transfer and can add headers that enable browser-side caching)

e. Supports rewrite rules (can route HTTP requests to different backend server groups by domain name or URL)
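Point d above maps onto the stock ngx_http_gzip_module directives; a minimal, illustrative fragment (the values are assumptions, not taken from this deployment):

```nginx
http {
    gzip on;                     # compress responses on the fly
    gzip_min_length 1024;        # skip responses too small to benefit
    gzip_comp_level 5;           # trade CPU for compression ratio (1-9)
    gzip_types text/plain text/css application/json application/javascript;
}
```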

2. NGINX installation

1. Download

   Pick a suitable nginx version from the official site, download it, then upload it to the Linux server with a file transfer tool.

Download path: /data/nginx-1.16.1.tar.gz

2. Unzip

  Unpack: tar -zxvf nginx-1.16.1.tar.gz

        chown -R root:root nginx-1.16.1/   —— (best to fix the ownership)

Choose the nginx install path according to available disk space: /data/nginx

3. Compile

[root@hjr23 ~]# cd /data/nginx-1.16.1

[root@hjr23 nginx-1.16.1]# ./configure --prefix=/data/nginx  —— nginx install path

--with-http_gzip_static_module

--with-http_stub_status_module

--with-http_ssl_module           —— module required for HTTPS access

--with-pcre

--with-file-aio

--with-http_realip_module

--add-module=/data/nginx-module-vts-master    —— module required for nginx monitoring

## To add a module on a server where nginx is already deployed, just re-run configure and then make (do NOT run make install, or it will overwrite the existing installation), then replace the old binary with the freshly compiled one:

   cd /data/nginx/sbin

   mv nginx nginx_bak

cp /data/nginx-1.16.1/objs/nginx   /data/nginx/sbin/

4. Install

  [root@hjr23 nginx-1.16.1]# make

  [root@hjr23 nginx-1.16.1]# make install

  Or in one step: make && make install

## If the build fails because of missing dependencies, install them as needed. For the configure flags above, the following packages are required:

yum -y install gcc zlib zlib-devel pcre pcre-devel openssl openssl-devel

  Check the installed nginx tree:

   ll /data/nginx/

3. NGINX startup

1. Manual start

[root@hjr23 nginx]# /data/nginx/sbin/nginx   —— start

[root@hjr23 nginx]# /data/nginx/sbin/nginx -s reload  —— reload

[root@hjr23 nginx]# /data/nginx/sbin/nginx -s stop    —— stop

[root@hjr23 nginx]# /data/nginx/sbin/nginx -V        —— show nginx version and build flags

   Check the process and ports: ps -ef | grep nginx

                    netstat -anp | grep <master PID>

  ## Settings used in nginx.conf:

worker_processes   8

nginx server listen  9091 and 9092

2. System service startup

 > Register nginx as a systemd service and enable it at boot

[root@centos7-min7 system]# cat /etc/systemd/system/nginx.service

[Unit]

Description=nginx

After=network.target

[Service]

Type=forking

#PIDFile=/md5/nginx/logs/nginx.pid

ExecStart=/md5/nginx/sbin/nginx

ExecReload=/md5/nginx/sbin/nginx -s reload

ExecStop=/md5/nginx/sbin/nginx -s stop

Restart=on-failure

RestartSec=5

#PrivateTmp=true

[Install]

WantedBy=multi-user.target

systemctl daemon-reload

systemctl start nginx.service

systemctl enable nginx.service

reboot

systemctl status nginx.service

4. NGINX configuration

1. Nginx basic configuration

# vi /data/nginx/conf/nginx.conf —— nginx configuration file

user  root;

worker_processes 8; —— number of CPU cores (check with lscpu)

events {

    worker_connections  8192;  —— 8*1024=8192

}

http {

    server_tokens off; ——Hide the nginx version from the outside world

    include       mime.types;

    vhost_traffic_status_zone;

    vhost_traffic_status_filter_by_host on; ——Information configured for nginx monitoring

  

default_type  application/octet-stream;

log_format main '{"time_local":"$time_local","request":"$request","http_host": "$http_host", "source_addr": "$http_x_forwarded_for", "http_code": "$status", "bytes_sent": "$bytes_sent","hostname":"$hostname", "player_id":"$upstream_http_player_id","request_body":"$request_body","remote_addr":"$remote_addr"}';                    —— nginx log format

access_log  logs/access.log  main;

sendfile        on;

#tcp_nopush     on;

keepalive_timeout  65;

#gzip  on;

##server{                     —— catch requests with a bogus Host header (header attack)

##    listen 9091 default;

##    server_name _;

##    return 500;

##}

server {

        #listen       9092;

        #server_name  ****;

        #charset koi8-r;

        #access_log  logs/host.access.log  main;

        # redirect server error pages to the static page /50x.html

        error_page   500 502 503 504  /50x.html;

        location = /50x.html {

            root   html;

        }

}

}

 

2. Nginx forwards dynamic services

Clients access backend (dynamic) services through Nginx.

http{

  upstream admins{

      server 192.168.0.72:8083 weight=5; —— load weight

}

server {

  listen       9091 ssl;

  server_name  ****.com;

  ssl_certificate      /data/nginx/ssl/***.cer;

  ssl_certificate_key  /data/nginx/ssl/***.pem;

  ssl_session_cache    shared:SSL:1m;

  ssl_session_timeout  5m;

  ssl_ciphers  HIGH:!aNULL:!MD5;

  ssl_prefer_server_ciphers  on;

      #Management side

      location /admin {

         proxy_set_header X-Real-IP $remote_addr;

         proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

         proxy_set_header Host $http_host;

         proxy_set_header X-Nginx-Proxy true;

         proxy_pass http://admins/admin;

         proxy_redirect off;

      }

   }

}

3.  Nginx forwards static resources

Static resources requested by clients can be served by Nginx directly. Note that Nginx can only serve static resources that live on the Nginx server itself.

Static/dynamic separation: ① put static resources on the nginx proxy server

② move static files to a dedicated domain on a dedicated server (preferred approach)
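A minimal sketch of option ① (the listen port, paths and cache times are illustrative assumptions):

```nginx
server {
    listen 9093;

    # serve static assets straight from the nginx host, no upstream round trip
    location ~* \.(js|css|png|jpg|gif|ico)$ {
        root       /data/nginx/html/static;
        expires    7d;      # let browsers cache static files for a week
        access_log off;     # cut log noise for asset requests
    }
}
```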

4.  Nginx supports HTTPS access

To serve traffic over HTTPS, Nginx needs an SSL certificate. You can apply for an official certificate or generate one yourself; production environments require an official certificate.

① Generate the SSL certificate and key files

Use openssl to generate the HTTPS certificate and key files

mkdir /usr/local/nginx/conf/ssl

# create the server key file server.key

openssl genrsa -des3 -out server.key 2048 --------------(1qaz@WSX3edc)

# create the certificate signing request server.csr

openssl req -new -key server.key -out server.csr

# back up the server key file

cp server.key server.key.org

# strip the passphrase from the key

openssl rsa -in server.key.org -out server.key

# generate the certificate file server.crt

openssl x509 -req -days 3650 -in server.csr -signkey server.key -out server.crt
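The same flow can be compressed into one self-signed command for quick tests (the CN is a made-up example; -nodes skips the passphrase, so the later "strip the passphrase" step is unnecessary), followed by a check of the result:

```shell
# one-shot key + self-signed cert, then inspect the certificate's subject
openssl req -x509 -newkey rsa:2048 -nodes -keyout server.key -out server.crt \
        -days 3650 -subj "/CN=example.test" 2>/dev/null
openssl x509 -in server.crt -noout -subject -dates
```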

② 修改nginx.conf

server {

       listen       9092 ssl;

       server_name  ****.com;

       ssl_certificate      /data/nginx/ssl/***.cer;  —— SSL certificate

       ssl_certificate_key  /data/nginx/ssl/***.pem; —— SSL key

       ssl_session_cache    shared:SSL:1m;

       ssl_session_timeout  5m;

       ssl_ciphers  HIGH:!aNULL:!MD5;

       ssl_prefer_server_ciphers  on;

       location /status {

          vhost_traffic_status_display;

          vhost_traffic_status_display_format html;

       }

 }

5.  Nginx logs request header parameters

① Install LuaJIT

Lua (www.lua.org) is a scripting language designed to be embedded in other applications; LuaJIT (www.luajit.org) is a just-in-time compiler for Lua. Nginx captures request header parameters through functionality provided by LuaJIT.

1. Install LuaJIT

wget http://luajit.org/download/LuaJIT-2.0.5.tar.gz

tar -zxvf LuaJIT-2.0.5.tar.gz && cd LuaJIT-2.0.5

make && make install PREFIX=/usr/local/luajit

#Import two environment variables

export LUAJIT_LIB=/usr/local/luajit/lib

export LUAJIT_INC=/usr/local/luajit/include/luajit-2.0

2. Download ngx_devel_kit and lua-nginx-module

wget https://github.com/simplresty/ngx_devel_kit/archive/v0.3.1rc1.tar.gz -O ngx_devel_kit-0.3.1rc1.tar.gz

tar -xzvf ngx_devel_kit-0.3.1rc1.tar.gz

wget https://github.com/openresty/lua-nginx-module/archive/v0.10.14rc3.tar.gz

tar -xzvf v0.10.14rc3.tar.gz

3. Enter the nginx source data directory and recompile nginx

./configure --prefix=/usr/local/nginx --with-http_stub_status_module --with-http_ssl_module --with-threads --with-stream --add-module=/root/nginx_modules/lua-nginx-module-0.10.14rc3 --add-module=/root/nginx_modules/ngx_devel_kit-0.3.1rc1

make

[root@sjjy03 nginx-1.16.1]# cp objs/nginx /usr/local/nginx/sbin/

[root@gp-master sbin]# ./nginx

./nginx: error while loading shared libraries: libluajit-5.1.so.2: cannot open shared object file: No such file or directory

vi /etc/ld.so.conf

Add a line: /usr/local/luajit/lib

Save the file and run ldconfig.

② Modify nginx.conf

1. Define a custom access.log format inside http{}

log_format log_req_resp '$remote_addr - $remote_user [$time_local] "$request"  $status $body_bytes_sent "$http_referer" "$http_user_agent" $request_time '

 'req_header:"$req_header" resp_header:"$resp_header" - $server_name - $server_port - $server_protocol ';

2. Capture HTTP request and response headers

3. Capture HTTPS request and response headers
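The two steps above have no body in the source; a minimal, illustrative sketch using lua-nginx-module is shown below. The $req_header and $resp_header variables match the log_format above; the backend address is hypothetical. For HTTPS the same location simply lives in a `listen ... ssl` server block.

```nginx
server {
    listen 8090;
    set $req_header  "";
    set $resp_header "";

    location / {
        # collect all request headers into $req_header before proxying
        rewrite_by_lua_block {
            local s = ""
            for k, v in pairs(ngx.req.get_headers()) do
                s = s .. k .. ": " .. tostring(v) .. " | "
            end
            ngx.var.req_header = s
        }
        # collect all response headers into $resp_header
        header_filter_by_lua_block {
            local s = ""
            for k, v in pairs(ngx.resp.get_headers()) do
                s = s .. k .. ": " .. tostring(v) .. " | "
            end
            ngx.var.resp_header = s
        }
        proxy_pass http://127.0.0.1:8080;
    }
}
```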

6.  Nginx prevents DoS and CC attacks

DoS stands for Denial of Service; an attack that causes denial of service is called a DoS attack, and its goal is to make a computer or network unable to provide normal service. The most common DoS attacks are network bandwidth attacks and connectivity attacks.

A DoS attack deliberately exploits flaws in network protocol implementations, or simply exhausts the victim's resources by brute force, so that the target computer or network can no longer provide normal service or resource access and its service system stops responding or even crashes; breaking into the target server or network device is not part of this attack. The resources consumed include network bandwidth, file system capacity, open processes and allowed connections. The attack leads to resource starvation: no matter how fast the CPU, how large the memory or how wide the network link, the consequences cannot be avoided.

Most DoS attacks still require considerable bandwidth, which individual attackers rarely have. To overcome this, attackers developed distributed attacks: tools combine the bandwidth of many machines to launch a large number of requests at the same target simultaneously. This is a DDoS (Distributed Denial of Service) attack.

When the attacker uses proxy servers to generate seemingly legitimate requests aimed at the victim host, achieving DDoS while staying camouflaged, the attack is called CC (Challenge Collapsar).

Practical test reference: https://www.cnblogs.com/wpjamer/p/9030259.html

5. Nginx log segmentation

[root@hjr23 logs]# cat nginx_log.sh

#!/bin/bash

##Set the log file storage directory

LOG_HOME="/data/nginx/logs"

## backup file name

LOG_NAME_BAK="$(date -d yesterday +%Y%m%d%H%M)".access.log

## Rename the log file

mv ${LOG_HOME}/access.log ${LOG_HOME}/${LOG_NAME_BAK}

## Signal the nginx master process to reopen the log

## USR1 is conventionally used to tell an application to reopen its files

kill -USR1 `cat /data/nginx/logs/nginx.pid`

##crontab -l

##crontab -e

##0 2 * * * export DISPLAY=:0; sh /data/nginx/logs/nginx_log.sh

## delete rotated logs older than 30 days

find ${LOG_HOME} -type f -name "*.access.log" -mtime +30 -exec rm -f {} \;

Scheduled execution depends on the crond service.

 

> Common statistical analysis commands for nginx logs

IP related statistics

Statistical IP visits (number of independent ip visits):

awk '{print $1}' access.log | sort -n | uniq | wc -l

Check the number of visiting IPs during a given period (04:00-05:59):

grep "07/Apr/2017:0[4-5]" access.log | awk '{print $1}' | sort | uniq -c| sort -nr | wc -l

View the top 100 most visited IPs:

awk '{print $1}' access.log | sort -n |uniq -c | sort -rn | head -n 100

View IPs with more than 100 visits:

awk '{print $1}' access.log | sort -n |uniq -c |awk '{if($1 >100) print $0}'|sort -rn

Query the detailed access status of an IP, sorted by access frequency:

grep '127.0.0.1' access.log |awk '{print $7}'|sort |uniq -c |sort -rn |head -n 100

Page Access Statistics

View the most frequently visited pages (TOP100):

awk '{print $7}' access.log | sort |uniq -c | sort -rn | head -n 100

View the most frequently visited pages, excluding php pages (TOP100):

grep -v ".php" access.log | awk '{print $7}' | sort |uniq -c | sort -rn | head -n 100

View pages with more than 100 page visits:

cat access.log | cut -d ' ' -f 7 | sort |uniq -c | awk '{if ($1 > 100) print $0}' | less

View the most recent 1000 records, the most visited pages:

tail -1000 access.log |awk '{print $7}'|sort|uniq -c|sort -nr|less

Statistics of requests per second

Count the number of requests per second, the time point of top100 (accurate to the second)

awk '{print $4}' access.log |cut -c 14-21|sort|uniq -c|sort -nr|head -n 100

Statistics per minute

Count the number of requests per minute, the time point of top100 (accurate to the minute)

awk '{print $4}' access.log |cut -c 14-18|sort|uniq -c|sort -nr|head -n 100

Hourly request statistics

Count the number of requests per hour, the time point of top100 (accurate to the hour)

awk '{print $4}' access.log |cut -c 14-15|sort|uniq -c|sort -nr|head -n 100

performance analysis

Add $request_time to the last field in nginx log

List pages with transfer times longer than 3 seconds, displaying the first 20

cat access.log|awk '($NF > 3){print $7}'|sort -n|uniq -c|sort -nr|head -20

List the pages whose php page request time exceeds 3 seconds, count the number of occurrences, and display the first 100 pages

cat access.log|awk '($NF > 3 && $7~/\.php/){print $7}'|sort -n|uniq -c|sort -nr|head -100

View the current number of TCP connections

netstat -tan | grep "ESTABLISHED" | grep ":80" | wc -l

Use tcpdump to sniff access to port 80 to see who is the highest

tcpdump -i eth0 -tnn dst port 80 -c 1000 | awk -F"." '{print $1"."$2"."$3"."$4}' | sort | uniq -c | sort -nr

6. Nginx monitoring

Nginx exposes its metrics through the nginx-module-vts module, and Prometheus collects them through the nginx-vts-exporter component.

  a. Install the nginx-module-vts module

./configure --prefix=/data/nginx --with-http_gzip_static_module --with-http_stub_status_module --with-http_ssl_module --with-pcre --with-file-aio --with-http_realip_module --add-module=/data/nginx-module-vts

make

## To add a module on a server where nginx is already deployed, just re-run configure and then make (do not run make install, or the previous installation will be overwritten), then replace the old nginx binary with the newly compiled one:

      cd /data/nginx/sbin

      mv nginx nginx_bak

cp /data/nginx-1.16.1/objs/nginx   /data/nginx/sbin/

  • Modify the nginx.conf file

http{

    vhost_traffic_status_zone;

    vhost_traffic_status_filter_by_host on;

location /status {

vhost_traffic_status_display;

vhost_traffic_status_display_format html;

    }

}

     

b. Install nginx-vts-exporter

cat /etc/systemd/system/nginx-vts-exporter.service

[Unit]

Description=nginx_exporter

After=network.target

[Service]

Type=simple

User=root

ExecStart=/data/nginx-vts-exporter/nginx-vts-exporter -nginx.scrape_uri=https://****:9092/status/format/json

Restart=on-failure

[Install]

WantedBy=multi-user.target

##systemctl enable nginx-vts-exporter

##systemctl start nginx-vts-exporter

c. View the Nginx monitoring panel

https://****:9091/status

Import nginx monitoring panel nginx-vts-exporter 2949 on grafana and view the panel

 

d. Traffic monitoring tool iftop

    [root@centos7-vpn opt]# yum install flex byacc libpcap ncurses ncurses-devel libpcap-devel

    [root@centos7-vpn opt]# rpm -ivh iftop-1.0-0.pre3.el7.rf.x86_64.rpm

   

 

   [root@centos7-vpn opt]# iftop -i ens33

 

e. Project log management

> Projects deployed by the root user

[root@hjr24 logs]# cat /data/logs/log.sh

#!/bin/bash

backup='/data/logs'

find ${backup} -type f -mtime +10 -exec rm -rf {} \;

[root@hjr24 logs]# crontab -l

0 5 * * * /usr/sbin/ntpdate -u cn.pool.ntp.org

0 22 * * * sh /data/logs/log.sh

10 22 * * * sh /data/logs/log.sh

> Projects deployed by a regular (non-root) user

[hjrypt@hjr25 logs]$ cat log.sh

#!/bin/bash

backup='/data/logs'

find ${backup} -type f -mtime +10 -exec rm -rf {} \;

[root@hjr25 logs]# crontab -u hjrypt -e

[hjrypt@hjr25 logs]$ crontab -l

00 20 * * * sh /data/logs/log.sh

7. Nginx high concurrency (load balancing)

1. The principle of Nginx high concurrency

Large numbers of users send requests from their browsers. Nginx receives each request first, then forwards it to a program instance on one of several servers according to the load balancing policy; the instance processes the request and returns the result to Nginx, which finally forwards it back to the user's browser.

As shown in the figure:

2. The role of Nginx high concurrency

When many users access an application at the same time and a single instance's resources and response speed cannot meet demand, Nginx's high-concurrency features provide a "horizontal" solution: deploy the application on several machines and distribute the request load according to each machine's capacity. Nginx forwards user requests in proportion to the configured weights, the machines process their requests in parallel, and response times improve while resource exhaustion and crashes on any single server are largely avoided.

3. Nginx high concurrency optimization

① Tune the Linux kernel

   Adjust the relevant parameters in /etc/sysctl.conf

② Tune nginx

   Adjust the relevant parameters in nginx.conf

worker_processes   = CPU cores * 2

worker_rlimit_nofile  = (ulimit -n)  maximum number of file handles a worker process may open

events {

    use epoll;

   worker_connections 65535;   maximum connections allowed per worker process

    multi_accept on;

}

③ Scale up the server's CPU and memory

4. Nginx concurrency capacity and resource requirements

The maximum concurrency C that nginx sustains within one keepalive_timeout window is

C = worker_processes * worker_connections / 2

and the concurrency per second CS is

CS = worker_processes * worker_connections / (2 * keepalive_timeout)

The minimum server resources needed for load balancing can be derived from the peak concurrency seen in production.
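Plugging in the values used earlier in these notes (worker_processes 8, worker_connections 8192, keepalive_timeout 65) gives:

```shell
worker_processes=8
worker_connections=8192
keepalive_timeout=65

# C  = worker_processes * worker_connections / 2
C=$(( worker_processes * worker_connections / 2 ))

# CS = worker_processes * worker_connections / (2 * keepalive_timeout)
CS=$(( worker_processes * worker_connections / (2 * keepalive_timeout) ))

echo "C  = $C"    # maximum concurrency per keepalive window -> 32768
echo "CS = $CS"   # sustainable new requests per second      -> 504
```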

5. Nginx load balancing configuration

a. Load balancing configuration

http{

    upstream dockers{

        server 192.168.20.77:9000 weight=1;

        server 192.168.20.78:9000 weight=2;

    }

    server {

        listen 8081;

        server_name 192.168.20.77;

        location / {

            proxy_pass http://dockers;

            proxy_redirect off;

        }

        ...

    }

}

b. Load balancing strategies

1. Round robin

2. Weighted round robin (different weights)

http{

    upstream tomcats{

        server tomcat-01:8080 weight=1;

        server tomcat-02:8080 weight=1;

        server tomcat-02:8080 weight=2;   the higher the weight, the more requests the server receives

    }

}

3. Source-address hashing

The client's IP address is fed to a hash function, and the result is taken modulo the size of the server list; the remainder is the index of the server that client will reach. As long as the server list does not change, the same client IP always lands on the same server, which solves the session affinity problem.

upstream tomcats{

    ip_hash;

    server tomcat-01:8080 weight=1;

    server tomcat-02:8080 weight=1;

}
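The hash-then-modulo mapping can be illustrated with a short shell sketch; cksum stands in for nginx's internal hash, and the IP and backend names are made up:

```shell
ip="203.0.113.42"

# stable numeric hash of the client IP, then modulo the pool size (2 backends)
h=$(printf '%s' "$ip" | cksum | awk '{print $1}')
idx=$(( h % 2 ))

# the same IP always yields the same index, hence the same backend
if [ "$idx" -eq 0 ]; then
    echo "client $ip -> tomcat-01:8080"
else
    echo "client $ip -> tomcat-02:8080"
fi
```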

4. Least connections

Based on the backend servers' current connection state, each request is dynamically sent to the server with the smallest backlog of connections, making the best possible use of backend capacity.

upstream tomcats{

    least_conn;

    server tomcat-01:8080 weight=1;

    server tomcat-02:8080 weight=1;

}

5. fair

Balances load intelligently by page size and load time, preferring servers with shorter response times.

Nginx does not support fair out of the box; the upstream_fair module must be installed.

upstream tomcats{

    fair;

    server tomcat-01:8080 weight=1;

    server tomcat-02:8080 weight=1;

}

6. url_hash

Distributes requests by the hash of the requested URL, so that each URL always lands on the same backend server.

Nginx did not support this natively; the nginx hash module must be installed.

upstream tomcats{

    hash $request_uri;

    server tomcat-01:8080 weight=1;

    server tomcat-02:8080 weight=1;

}

8. Nginx high availability

1. How Keepalived works

Keepalived can check servers at layers 3, 4 and 5 of the IP/TCP stack (the IP layer, the TCP layer and the application layer):

Layer 3: Keepalived periodically sends an ICMP packet (an ordinary ping) to the servers in the group. If a server's IP address does not respond, Keepalived declares the server failed and removes it from the pool. A typical case is a server that was shut down abnormally. Layer 3 uses the reachability of the server's IP address as the health criterion.

Layer 4: once Layer 3 is understood, Layer 4 is easy: it decides health by the state of a TCP port. For example, a web server usually serves on port 80; if Keepalived detects that port 80 is not listening, it removes the server from the pool.

Layer 5: Keepalived performs an HTTP GET against a configured URL and checks the MD5 sum of the result. If the sum does not match the expected value, the check fails and the server is removed from the pool. The module can run multi-URL checks against the same service, which is useful when one server hosts several application servers, because it lets you verify that each application is actually working. The MD5 digests are generated with the genhash utility (shipped with the keepalived package).

Note: two nginx services can run on one machine, but keepalived cannot provide nginx high availability on a single machine; keepalived needs at least two machines.

2. Keepalived functions

Keepalived monitors server state. If a web server goes down or malfunctions, Keepalived detects it, removes the faulty server from the system and lets the other servers take over its work; once the server works normally again, Keepalived automatically adds it back to the pool. All of this happens automatically, with no manual intervention; the only manual task is repairing the failed server.

High concurrency metrics:

① Response time: how long the system takes to respond to a request

② Throughput: number of requests handled per unit of time

③ QPS: requests answered per second

Ways to raise a distributed internet architecture's concurrency:

1. Vertical scaling (stronger single-machine hardware | better single-machine architecture)

2. Horizontal scaling (more servers)

Nginx rate-limiting options under high concurrency; the first two limit individual clients (i.e. per single IP):

1. limit_conn_zone

2. limit_req_zone

3. ngx_http_upstream_module


3. Install Keepalived

Download keepalived from Keepalived for Linux, or install directly with yum: yum install keepalived -y

All of Keepalived's behavior is driven by the keepalived.conf file.

cd /opt/

tar -zxvf keepalived-2.0.20.tar.gz -C /opt/

chown root:root keepalived-2.0.20

cd keepalived-2.0.20

./configure --prefix=/usr/local/keepalived

make && make install

Register Keepalived as a Linux system service:

mkdir /etc/keepalived

Copy the configuration file:

cp /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/

Copy the keepalived scripts:

cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/

cp /opt/keepalived-2.0.20/keepalived/etc/init.d/keepalived /etc/init.d/ ----- copied from the unpacked source tree

ln -s /usr/local/keepalived/sbin/keepalived /sbin/

ln -s /usr/local/keepalived/sbin/keepalived /usr/sbin/

chkconfig keepalived on | systemctl enable keepalived.service ---------- enable at boot

systemctl start keepalived

systemctl status keepalived

4. Configure Keepalived

Note that keepalived does not check the configuration file's syntax at startup, so write the file carefully; a mistake can produce very unexpected behavior.

Configure automatic restart for the NGINX master/backup pair.

Edit the configuration file: vim /etc/keepalived/keepalived.conf

  1) Modify the master NGINX configuration

global_defs {

router_id node7 #hostname

#vrrp_strict —— must be commented out, otherwise the VIP will not answer ping

}

vrrp_script chk_nginx {

script "/etc/keepalived/nginx_check.sh"

interval 2 #check nginx status every 2 seconds

weight -20 #priority subtract 20 when failure

fall 3

rise 2

}

vrrp_instance VI_1 {

state MASTER

interface ens33 # Bind VIP's network interface

virtual_router_id 51 #The ID number of the virtual router, the settings of the active and standby nodes are the same

priority 100 #Node priority, 0-254, MASTER is higher than BACKUP

advert_int 1 #Multicast information sending time interval, the settings of the active and standby nodes are the same

authentication { # authentication info; must be identical on master and backup

auth_type PASS

auth_pass 1111

}

virtual_ipaddress {

#192.168.200.16

#192.168.200.17

192.168.20.10/24 #Primary node 192.168.20.77 Standby node 192.168.20.78

}

track_script{

chk_nginx #nginx survival status detection script

}

}

  2) Modify the backup NGINX configuration

global_defs {

router_id node8 #hostname

#vrrp_strict —— must be commented out, otherwise the VIP will not answer ping

}

vrrp_script chk_nginx {   —— same as MASTER

script "/etc/keepalived/nginx_check.sh"

interval 2 #check nginx status every 2 seconds

weight -20 #priority subtract 20 when failure

fall 3

rise 2

}

vrrp_instance VI_1 {

state BACKUP

interface ens33

virtual_router_id 51

priority 90

advert_int 1

authentication {

auth_type PASS

auth_pass 1111

}

virtual_ipaddress {

192.168.20.10/24

}

track_script{

chk_nginx #nginx survival status detection script

}

}

5. Nginx service detection

Copy the nginx_check.sh script to /etc/keepalived/ on both machines.

Grant executable permission: chmod +x /etc/keepalived/nginx_check.sh

#!/bin/bash

A=`ps -C nginx --no-header | wc -l`

if [ $A -eq 0 ];then

/usr/local/nginx/sbin/nginx #Try to restart nginx

sleep 2 # sleep for 2 seconds

if [ `ps -C nginx --no-header | wc -l` -eq 0 ];then

systemctl stop keepalived #Failed to start, kill the keepalived service. Migrate vip to other backup nodes

fi

fi

After nginx is running on both machines, start keepalived on both machines:

/usr/local/nginx/sbin/nginx

systemctl start keepalived

If keepalived fails to start, check the logs: vi /var/log/messages   ps -ef | grep nginx   ps -ef | grep keepalived

Run ip a on both machines; the virtual IP should appear. Test: with Keepalived running, kill nginx and watch whether it is restarted; then stop Keepalived, kill nginx and confirm it stays down.

Start the keepalived service on both the MASTER and BACKUP machines with systemctl start keepalived.

The nginx configuration on the master and backup machines looks like:

server {

listen 80;

server_name 192.168.20.10; —— VIP

When the nginx service dies, as long as keepalived on the MASTER is alive it will pull nginx back up; if nginx cannot be started, keepalived on the MASTER shuts itself down and the VIP fails over to the BACKUP's keepalived, whose nginx continues serving external traffic.

###

Running two nginx services on the same machine

[root@node7 keepalived]# cat nginx_check.sh

#!/bin/bash

A8081=`netstat -anp | grep 8081 | awk '{print $7}'`

B8081=${A8081%%/*}

echo $B8081

ng1=`ps -C $B8081 --no-header | wc -l`

if [ $ng1 -eq 0 ];then

/usr/local/nginx/sbin/nginx # try to restart nginx1

fi

A8082=`netstat -anp | grep 8082 | awk '{print $7}'`

B8082=${A8082%%/*}

ng2=`ps -C $B8082 --no-header | wc -l`

if [ $ng2 -eq 0 ];then

/usr/local/nginx2/sbin/nginx # try to restart nginx2

fi

9. Nginx handles CC attacks

1. CC attack principle

DoS stands for Denial of Service; an attack that causes denial of service is called a DoS attack, and its goal is to make a computer or network unable to provide normal service. The most common DoS attacks are network bandwidth attacks and connectivity attacks.

A DoS attack deliberately exploits flaws in network protocol implementations, or simply exhausts the victim's resources by brute force, so that the target computer or network can no longer provide normal service or resource access and its service system stops responding or even crashes; breaking into the target server or network device is not part of this attack. The resources consumed include network bandwidth, file system capacity, open processes and allowed connections. The attack leads to resource starvation: no matter how fast the CPU, how large the memory or how wide the network link, the consequences cannot be avoided.

Most DoS attacks still require considerable bandwidth, which individual attackers rarely have. To overcome this, attackers developed distributed attacks: tools combine the bandwidth of many machines to launch a large number of requests at the same target simultaneously. This is a DDoS (Distributed Denial of Service) attack.

When the attacker uses proxy servers to generate seemingly legitimate requests aimed at the victim host, achieving DDoS while staying camouflaged, the attack is called CC (Challenge Collapsar).

2. Types of CC attacks

There are three kinds of CC attack: direct attacks, proxy attacks and botnet attacks. Direct attacks target web applications with serious flaws; they generally only appear when the program is badly written and are rare. Botnet attacks resemble DDoS attacks and cannot be defended at the web application level. Proxy attacks are the common case: the attacker drives a batch of proxy servers, say 100 proxies each issuing 10 simultaneous requests, so the web server sees 1,000 concurrent requests; after issuing each request the attacker immediately drops the connection to the proxy, so the proxy's response cannot clog the attacker's own bandwidth and the proxy is free to issue the next request. The web server queues the processes answering these requests, and so does the database server, so legitimate requests end up far back in the queue, like arriving at a canteen where a thousand people have suddenly cut in ahead of the usual ten; pages then load extremely slowly or show a blank screen.

3. Slow attacks and Apache slow-attack protection

The basic principle of this attack: against any server with HTTP access open, establish a connection, declare a fairly large Content-Length, then send the body extremely slowly, e.g. one byte every 1-10 seconds, keeping the connection open. If the client keeps building such connections, the server's available connections gradually fill up, resulting in denial of service.

As with CC attacks, any open web service can be a target: HTTP does not validate the request content before receiving it, so the attack works even if the web application has no usable form.

Establishing a large number of useless single-threaded connections and keeping them trickling data is extremely cheap for the client. In practice an ordinary PC can hold more than 3,000 such connections, a fatal blow for an ordinary web server, let alone a distributed DoS run from a botnet.

Given how easy the attack is to mount, the denial of service it causes and its evasive character, this class of attack became instantly popular and a favorite research subject for attackers.

The most used slow-attack tools are Slowhttptest and Slowloris.

Slow attacks fall into these categories:

① Slow headers: a web application must receive the complete HTTP header before processing a request, because the header carries information the application may need. The attacker exploits this by starting an HTTP request and drip-feeding header lines forever, consuming the server's connections and memory. Packet captures show the attacking client sending one header line every 30 seconds after the TCP handshake; as long as the server has not seen two consecutive \r\n sequences, it assumes the header is unfinished and keeps waiting for more client data.

② Slow body: the attacker sends an HTTP POST with a very large Content-Length, so the server or proxy expects a large upload and keeps the connection open to receive it, but the attacking client sends only tiny chunks, keeping the connection alive and consuming connections and memory. Captures show a complete header with a large Content-Length, then one random parameter every 10 seconds; the server keeps waiting for a body matching the declared Content-Length.

③ Slow read: the client establishes a connection, sends a complete HTTP request, then reads the response extremely slowly, for example by advertising a TCP Zero Window so the server believes the client is busy; the client reads barely a byte until the connection is about to time out, tying up the server's connections and memory. Captures show the server receiving ZeroWindow notices from the client (meaning it has no buffer space to receive data) and having to send ZeroWindowProbe packets repeatedly, asking whether the client can receive again.

Slow attacks mainly exploit thread-based server architectures: such a server dedicates a thread to each new connection and releases it only after the entire HTTP request has been received. Apache, for example, waits for an incomplete request with a timeout (300 s by default), but the timer resets whenever client data arrives, so an attacker can keep a connection alive cheaply by sending a single character just before the timeout. A client needs few resources to hold many such connections, while the server burns a lot.

It has been verified that Apache httpd, with its thread-based architecture, is vulnerable to slow attacks, while event-based servers such as nginx and lighttpd are much harder to hit with slow attacks.

Three Ways to Protect Apache Server from Slow Attacks

  • mod_reqtimeout

After Apache2.2.15, this module has been included by default, and users can configure the timeout and minimum rate for receiving HTTP headers and HTTP bodies from a client. If a client cannot send the header or body data within the configured time, the server will return a 408 REQUEST TIME OUT error. The configuration file is as follows:

<IfModule mod_reqtimeout.c>

RequestReadTimeout header=20-40,MinRate=500 body=20,MinRate=500

</IfModule>

  • mod_qos

   An Apache quality-of-service module; users can configure HTTP request thresholds at various granularities. Example configuration:

<IfModule mod_qos.c>

# handle connections from up to 100000 different IPs

QS_ClientEntries 100000

# allow only 50 connections per IP

QS_SrvMaxConnPerIP 50

# limit maximum number of active TCP connections to 256

MaxClients 256

# disable keep-alive when 180 (70%) TCP connections are occupied

QS_SrvMaxConnClose 180

# minimum request/response speed (deny slow clients blocking the server, keeping connections open without requesting anything)

QS_SrvMinDataRate 150 1200

</IfModule>

  • mod_security

An open source WAF module has rules specifically for slow attack protection. The configuration is as follows:

SecRule RESPONSE_STATUS "@streq 408" "phase:5,t:none,nolog,pass, setvar:ip.slow_dos_counter=+1, expirevar:ip.slow_dos_counter=60, id:'1234123456'"

SecRule IP:SLOW_DOS_COUNTER "@gt 5" "phase:1,t:none,log,drop,

msg:'Client Connection Dropped due to high number of slow DoS alerts', id:'1234123457'"

Traditional traffic-scrubbing appliances defend against CC attacks mainly with thresholds: when a client's request volume within a period exceeds the threshold, the appliance replies with a CAPTCHA or a piece of JavaScript. The rationale is that attackers use DDoS tools on zombie hosts to simulate masses of HTTP requests, and such tools generally do not parse the server's response, let alone execute JavaScript; a real user's browser processes the JavaScript and is redirected normally, while the attack program ends up hitting nothing.

Against slow attacks, CAPTCHAs and JavaScript challenges still achieve partial results, but the attack's characteristics suggest two additional defenses: 1. Count packets per period: within one TCP connection carrying HTTP requests, both too few and too many packets are suspicious; very few packets in a period suggests a slow attack, very many suggests a CC attack. 2. Cap the maximum time allowed for the HTTP request headers: if the data has not finished arriving within the allowed time, it may well be a slow attack.
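On nginx, the second idea (capping how long a client may take to send its request) maps onto the stock timeout directives; the values below are illustrative assumptions, not recommendations:

```nginx
server {
    client_header_timeout 10s;   # the whole request header must arrive within 10s
    client_body_timeout   10s;   # each successive body read must arrive within 10s
    send_timeout          10s;   # drop clients that stop reading the response
    keepalive_timeout     65s;
}
```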

4. Nginx CC attack protection

Nginx configuration

$binary_remote_addr  the client IP address

zone  the name of the leaky bucket

rate  how fast nginx processes requests

burst  the allowed peak above the rate

nodelay  whether requests above the rate are queued (delayed) or immediately answered with 503

For details see the official documentation: Module ngx_http_limit_req_module

Two nginx modules can throttle client requests, intercepting them and returning status 503 once the limit is reached, which provides a degree of attack protection:

1. ngx_http_limit_conn_module: limits simultaneous connections, i.e. a concurrent connection cap

2. ngx_http_limit_req_module: limits requests within a time window; it takes precedence over ngx_http_limit_conn_module
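A minimal sketch combining the two modules; the zone names, rates and the 9091 listener are illustrative assumptions:

```nginx
http {
    # at most 10 requests/second per client IP, tracked in a 10 MB shared zone
    limit_req_zone  $binary_remote_addr zone=req_per_ip:10m rate=10r/s;
    # shared zone for counting simultaneous connections per client IP
    limit_conn_zone $binary_remote_addr zone=conn_per_ip:10m;

    server {
        listen 9091;
        location / {
            limit_req  zone=req_per_ip burst=20 nodelay;  # reject excess at once
            limit_conn conn_per_ip 20;                    # max 20 connections per IP
            # over-limit requests get 503 by default; limit_req_status and
            # limit_conn_status can change the returned code
        }
    }
}
```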

Install ab to simulate requests

Here we use the small Apache Benchmark tool to generate requests.

  • Install ab

Download ab (Apache Benchmark) from Download - The Apache HTTP Server Project

[root@centos7-vpn opt]# tar -zxvf httpd-2.4.46.tar.gz

[root@centos7-vpn opt]# chown -R root.root httpd-2.4.46

[root@centos7-vpn httpd-2.4.46]# ./configure

Install APR (Apache Portable Runtime library)

[root@centos7-vpn opt]# tar -zxvf apr-1.4.5.tar.gz

[root@centos7-vpn opt]# chown root.root apr-1.4.5

[root@centos7-vpn apr-1.4.5]# ./configure && make && make install

After installing APR, retrying the HTTPD build complains that APR-util is not found.

Install APR-util (Apache Portable Runtime Utility library)

[root@centos7-vpn opt]# wget http://archive.apache.org/dist/apr/apr-util-1.3.12.tar.gz

Installation (you must point --with-apr at the APR build path, i.e. the directory produced by unpacking the APR tarball, not an install prefix)

[root@centos7-vpn opt]# tar -zxvf apr-util-1.3.12.tar.gz

[root@centos7-vpn opt]# chown -R root.root apr-util-1.3.12

[root@centos7-vpn apr-util-1.3.12]# ./configure --with-apr=/opt/apr-1.4.5

[root@centos7-vpn apr-util-1.3.12]# make && make install

With both APR and APR-util installed, the HTTP server can now be built.

After installation, the ab test tool lives under HTTPD's default install path, in /bin.

[root@centos7-vpn httpd-2.4.46]# ./configure

[root@centos7-vpn httpd-2.4.46]# make && make install

 

 http://192.168.16.40:8090/

  • Run a pressure test with ab

Throughput (Requests per second)
Definition: a quantitative measure of the server's concurrent processing capacity, in reqs/s: the number of requests handled per unit time at a given concurrency level. The maximum over all concurrency levels is called the peak throughput.
Formula: total requests / time taken to complete them, i.e.
Requests per second = Complete requests / Time taken for tests

Number of concurrent connections
Definition: the number of requests the server is holding at a given moment; put simply, a session.

Number of concurrent users (Concurrency Level)
Definition: distinguish this from the number of concurrent connections; one user may hold several sessions, i.e. connections, at the same time.

Average user-perceived request latency (Time per request)
Formula: time to complete all requests / (total requests / concurrent users), i.e.
Time per request = Time taken for tests / (Complete requests / Concurrency Level)

Average server request latency (Time per request: across all concurrent requests)
Formula: time to complete all requests / total requests, i.e.
Time taken for tests / Complete requests
As you can see, this is the reciprocal of the throughput.
It also equals the user-perceived latency divided by the number of concurrent users, i.e.
Time per request / Concurrency Level

[root@centos7-vpn bin]# ./ab -h

[root@centos7-vpn nginx]# ./ab -t 30 -c 1 http://192.168.16.40:8090/

This is ApacheBench, Version 2.3 <$Revision: 1879490 $>

Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/

Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 192.168.16.40 (be patient)

Completed 5000 requests

Completed 10000 requests

Completed 15000 requests

Completed 20000 requests

Completed 25000 requests

Completed 30000 requests

Completed 35000 requests

Completed 40000 requests

Completed 45000 requests

Completed 50000 requests

Finished 50000 requests

Server Software:        nginx/1.18.0

Server Hostname:        192.168.16.40

Server Port:            8090

Document Path:          /

Document Length:        612 bytes

Concurrency Level:      1

Time taken for tests:   9.272 seconds

Complete requests:      50000

Failed requests:        49989

   (Connect: 0, Receive: 0, Length: 49989, Exceptions: 0)

Non-2xx responses:      49989

Total transferred:      34401727 bytes

HTML transferred:       24701298 bytes

Requests per second:    5392.40 [#/sec] (mean)

Time per request:       0.185 [ms] (mean)

Time per request:       0.185 [ms] (mean, across all concurrent requests)

Transfer rate:          3623.20 [Kbytes/sec] received

Connection Times (ms)

              min  mean[+/-sd] median   max

Connect:        0    0   0.0      0       1

Processing:     0    0   0.1      0      11

Waiting:        0    0   0.1      0      11

Total:          0    0   0.1      0      11

Percentage of the requests served within a certain time (ms)

  50%      0

  66%      0

  75%      0

  80%      0

  90%      0

  95%      0

  98%      0

  99%      0

 100%     11 (longest request)

10. Nginx prohibits access to specific IP addresses
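The source provides no body for this chapter; this is typically done with the allow/deny directives of the stock ngx_http_access_module (the addresses below are illustrative):

```nginx
location / {
    deny  192.168.1.100;   # block a single address
    deny  10.0.0.0/8;      # block a whole range
    allow all;             # everyone else gets through (rules apply in order)
}
```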


Origin blog.csdn.net/Wemesun/article/details/126383342