[Nginx] Practical application (server-side cluster construction, download site, user authentication module)

Nginx implements server-side cluster construction

Nginx and Tomcat deployment

Nginx performs extremely well in high-concurrency scenarios and when serving static resources, but real projects also contain back-end business modules. These are usually deployed on web servers such as Tomcat, WebLogic, or WebSphere. So how do we use Nginx to receive user requests and forward them to the back-end web server?


Step analysis:

1. Prepare the Tomcat environment and deploy a web project on Tomcat.
2. Prepare the Nginx environment; use Nginx to receive requests and dispatch them to Tomcat.

Environment preparation (Tomcat)

Browser access:

http://192.168.200.146:8080/demo/index.html


Obtain the link address of the dynamic resource:

http://192.168.200.146:8080/demo/getAddress

Here Tomcat is used as the background web server

(1) Prepare a Tomcat on CentOS

1. Tomcat official site: https://tomcat.apache.org/
2. Download Tomcat; this walkthrough uses apache-tomcat-8.5.59.tar.gz
3. Extract the archive:
mkdir /web_tomcat
tar -zxf apache-tomcat-8.5.59.tar.gz -C /web_tomcat

(2) Prepare a web project and package it as war

1. Upload demo.war from the course materials into the webapps directory under the tomcat8 directory
2. Start Tomcat: go to the bin directory of tomcat8 and run
./startup.sh

(3) Start tomcat for access testing.

Static resource: http://192.168.200.146:8080/demo/index.html
Dynamic resource: http://192.168.200.146:8080/demo/getAddress

Environment preparation (Nginx)

(1) Use the reverse proxy of Nginx to forward the request to Tomcat for processing.

upstream webservice {
	server 192.168.200.146:8080;
}
server{
    listen		80;
    server_name localhost;
    location /demo {
    	proxy_pass http://webservice;
    }
}

(2) Start the access test


Having come this far, you may wonder: why add an extra Nginx layer when Tomcat can be accessed directly? Doesn't this increase the complexity of the system? We will analyze this question from two aspects:

  • Use Nginx to achieve dynamic and static separation

  • Use Nginx to build a Tomcat cluster

Nginx implements dynamic and static separation

What is static and dynamic separation?

  • Dynamic: business processing handled by the back-end application

  • Static: static resources of the website (HTML, JavaScript, CSS, images, and other files)

Dynamic/static separation means that dynamic requests and static requests are processed separately: dynamic requests are handed to the application server, while static requests are returned directly by the web server. This has several advantages:

  1. Reduce the pressure on the application server and let it focus on handling dynamic requests.
  2. Static resources can be directly cached and returned by the web server, which is faster.
  3. Different servers can be used to process dynamic requests and static requests separately to optimize performance.

Nginx can achieve dynamic and static separation very well; the principle is:

  1. First, distinguish dynamic requests from static requests. This can be done with regular expressions or path prefixes in the location directive, for example:
    location ~* \.(jpg|gif|png)$ {   # match static-resource requests
        ...
    }

    location /api/ {    # match dynamic requests
        ...
    }
    
  2. Static requests are served directly by Nginx, which reads the resources from the local file system and returns them.
  3. Dynamic requests are proxied to the application server for processing. Nginx forwards them via FastCGI or proxy_pass and then returns the application server's response to the client.
  4. Nginx can also enable caching for static resources, which can further improve the return speed and reduce the pressure on the application server.
  5. Use Keepalived etc. to implement hot standby to ensure high availability.
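The matching in step 1 can be sanity-checked outside nginx. A small sketch with hypothetical URIs — nginx uses PCRE, but this simple pattern behaves the same under grep's extended regex, and `-i` mirrors the case-insensitive `~*` modifier:

```shell
# Hypothetical request URIs, checked against the static-resource pattern
# from the location example above.
for uri in /images/logo.png /js/app.js /api/getAddress /banner.GIF; do
  if echo "$uri" | grep -Eiq '\.(jpg|gif|png)$'; then
    echo "$uri -> static"
  else
    echo "$uri -> dynamic/other"
  fi
done
```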

In general, by identifying different types of requests and dispatching them to the servers best suited to handle them, Nginx lets each server play to its strengths, yielding higher performance and efficiency. This is the essence of Nginx's dynamic/static separation.

Through dynamic and static separation, a web service can use the architecture of Nginx + application server + cache + static resource server, so that each part can work efficiently and stably. This is the basic architecture choice for many high-traffic websites.

How to achieve dynamic and static separation?

  • There are many ways to implement dynamic/static separation. For example, static resources can be deployed on CDN or Nginx servers, while dynamic resources are deployed on Tomcat, WebLogic, or WebSphere. Here we use Nginx + Tomcat to achieve it.

Requirement analysis


Steps to achieve dynamic and static separation

1. Delete all static resources from the demo.war project and repackage it as a war; a ready-made package is provided in the course materials.

2. Deploy the war package to tomcat and delete the previously deployed content

Go to Tomcat's webapps directory and delete the previous content
Copy the new war package into webapps
Start Tomcat

3. On the server where Nginx runs, create a directory such as html/web and put the static resources (images, js) in the corresponding locations


The content of the index.html page is as follows:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Title</title>
    <script src="js/jquery.min.js"></script>
    <script>
        $(function(){
           $.get('http://192.168.200.133/demo/getAddress',function(data){
               $("#msg").html(data);
           });
        });
    </script>
</head>
<body>
    <img src="images/logo.png"/>
    <h1>Nginx如何将请求转发到后端服务器</h1>
    <h3 id="msg"></h3>
    <img src="images/mv.png"/>
</body>
</html>

4. Configure the access of static resources and dynamic resources of Nginx

upstream webservice{
   server 192.168.200.146:8080;
}
server {
        listen       80;
        server_name  localhost;

        # dynamic resources
        location /demo {
                proxy_pass http://webservice;
        }
        # static resources
        location ~ .*\.(png|jpg|gif|js)$ {
                root html/web;
                gzip on;
        }

        location / {
            root   html/web;
            index  index.html index.htm;
        }
}
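One of the advantages listed earlier — faster static delivery — can be pushed further by adding browser caching to the static-resource location. A hedged sketch of a modified version of that block; the 7d lifetime is an arbitrary example, not from the original setup:

```nginx
# static resources, with client-side caching added
location ~ .*\.(png|jpg|gif|js)$ {
    root html/web;
    gzip on;
    expires 7d;                      # browsers may cache static files for 7 days
    add_header Cache-Control public;
}
```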

5. Start the test, visit http://192.168.200.133/index.html


If at some point the Tomcat server goes down for some reason and we visit Nginx again, we get the following effect: the user can still see the page, but the visit-count statistic is missing.


Nginx implements Tomcat cluster construction

So far, when deploying projects with Nginx and Tomcat, we have used one Nginx server and one Tomcat server; the architecture is as follows:


Here comes the problem: if Tomcat really goes down, the whole system becomes incomplete. Since a single server is prone to failure, the solution is to set up several more Tomcat servers, which improves the availability of the back end. This is what we usually call a cluster. Building a Tomcat cluster relies on Nginx's reverse proxy and load balancing. How is it implemented? Let's analyze the principle first.


Nginx can implement a Tomcat cluster very well; the steps are as follows:

  1. Install Tomcat and configure the cluster nodes. In server.xml on each Tomcat server, set a different port (such as 8080, 8081) and configure session-related (sticky) parameters as needed.
  2. Install Nginx. worker_processes is usually set to the number of CPU cores (or auto) to make full use of server resources.
  3. Configure an upstream block in Nginx pointing to the Tomcat cluster nodes, for example:
    upstream tomcat_cluster {
        server localhost:8080;
        server localhost:8081;
    }
    
  4. Set a proxy location that forwards requests to the Tomcat cluster, for example:
    location / {
        proxy_pass http://tomcat_cluster;
    }
    
  5. Keep each session on the same Tomcat by configuring ip_hash, or a sticky-cookie module, for example:
    upstream tomcat_cluster {
        ip_hash;  # requests from the same client IP go to the same server
        server localhost:8080;
        server localhost:8081;
    }
    
    # or, with a cookie-based approach (the sticky directive requires the
    # third-party nginx-sticky-module or NGINX Plus):

    upstream tomcat_cluster {
        server localhost:8080;
        server localhost:8081;
    }

    location / {
        proxy_pass http://tomcat_cluster;
        sticky cookie srv_id expires=1h domain=.example.com; # route by cookie
    }
    
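The upstream block also supports per-server tuning that is often useful in a Tomcat cluster. A sketch with illustrative values — the ports and parameters are examples, not from the original setup:

```nginx
upstream tomcat_cluster {
    server localhost:8080 weight=2;                      # receives twice as many requests
    server localhost:8081 max_fails=3 fail_timeout=30s;  # marked down after 3 failures, for 30s
    server localhost:8082 backup;                        # used only when the others are down
}
```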

Well, after completing the deployment above, we have solved Tomcat's high availability: if one server goes down, the other two still serve the outside world, and the back-end servers can be updated without downtime. But a new problem arises: in the environment above, if Nginx itself goes down, the entire system stops providing services. How do we solve this?

Nginx High Availability Solution

Given the problem above, let's analyze what we need in order to solve it.


We need two or more Nginx servers providing the service, so that when one goes down the other can take over. But with two Nginx servers there are two IP addresses: which one should users visit? How do users know which one is up and which one is down?

Keepalived

We use Keepalived to solve this problem. Keepalived is written in C and was originally designed for the LVS load-balancing software. It implements high availability mainly through the VRRP protocol.

VRRP


VRRP (Virtual Router Redundancy Protocol) is a protocol for achieving router high availability.

VRRP works as follows:

  1. VRRP is usually configured on a group of routers in the same LAN; one router in the group is the master and the others are backups.
  2. The group communicates via multicast, electing a master through VRRP advertisements while the other routers act as backups.
  3. While the master is available, it handles all routing for the group and the backup routers stay inactive.
  4. If the master fails, one of the backup routers takes over its role and becomes the new master, while the other routers remain in the backup state.
  5. When the original master recovers, it sends a preemptive VRRP advertisement announcing that it is taking over again, and the current master returns to the backup state.
  6. VRRP continuously monitors the master; if no advertisement is received within a certain period, a new election is started to choose a new master.
  7. To the hosts, the whole group looks like a single virtual router: no matter how the actual master changes, the hosts' default-gateway address stays the same.

Therefore, the main function of VRRP is to detect router failures and perform automatic switching, and provide highly available default gateway services for hosts in the LAN. It can realize router status monitoring and automatic failover, with minimal impact on the network.

After using Keepalived, the solution is as follows:


Environment setup

Environment preparation

VIP              IP               Host name     Role
192.168.200.222  192.168.200.133  keepalived1   Master
                 192.168.200.122  keepalived2   Backup

Installation of keepalived

Step 1: Download keepalived from the official site: https://keepalived.org/
Step 2: Upload the downloaded archive to the server
	keepalived-2.0.20.tar.gz
Step 3: Create a keepalived directory to keep things organized
	mkdir keepalived
Step 4: Extract the archive into that directory
	tar -zxf keepalived-2.0.20.tar.gz -C keepalived/
Step 5: Configure, compile, and install keepalived
	cd keepalived/keepalived-2.0.20
	./configure --sysconf=/etc --prefix=/usr/local
	make && make install

After installation there are two files to know about: /etc/keepalived/keepalived.conf (keepalived's configuration file, which is what we mainly edit), and the keepalived executable in /usr/local/sbin, which is used to start and stop keepalived.

Introduction to Keepalived configuration file

Open the keepalived.conf configuration file

It is divided into three parts: global configuration, VRRP-related configuration, and LVS-related configuration.
This walkthrough uses keepalived only for high-availability deployment and does not use LVS, so we focus on the first two parts.

The global section:
global_defs {
   # notification emails: when keepalived performs a failover, email these addresses
   notification_email {
     [email protected]
     [email protected]
   }
   # sender address for notification mail
   notification_email_from [email protected]
   # SMTP server address
   smtp_server 192.168.200.1
   # SMTP connection timeout
   smtp_connect_timeout 30
   # an identifier for this keepalived server; can be used as the subject of notification mail
   router_id LVS_DEVEL

   # by default, every address in a received VRRP advertisement is checked, which can be
   # time-consuming; with this option the check is skipped when the advertisement comes
   # from the same master as the previous one
   vrrp_skip_check_adv_addr
   # strictly follow the VRRP protocol
   vrrp_strict
   # delay between two gratuitous ARP messages sent on an interface; millisecond precision; default 0
   vrrp_garp_interval 0
   # delay between groups of NA messages on an interface; default 0
   vrrp_gna_interval 0
}
The VRRP section can contain the following four sub-modules:
1. vrrp_script
2. vrrp_sync_group
3. garp_group
4. vrrp_instance
We will use the first and the fourth.
# settings for a keepalived instance; VI_1 is the VRRP instance name
vrrp_instance VI_1 {
    state MASTER  		# one of two values: MASTER or BACKUP
    interface ens33		# interface the VRRP instance is bound to, used to send VRRP packets (this server's NIC name)
    virtual_router_id 51	# VRRP instance ID, in the range 0-255
    priority 100		# priority; the highest priority becomes MASTER
    advert_int 1		# interval between VRRP advertisements, in seconds
    authentication {	# authentication for VRRP communication
        auth_type PASS	# authentication method; PASS is simple password authentication (recommended)
        auth_pass 1111	# authentication password, at most 8 characters
    }
    virtual_ipaddress { # virtual IP addresses (VIPs) for users to access; several may be listed, one per line
        192.168.200.222
    }
}

The configuration content is as follows:

server 1

global_defs {
   notification_email {
        [email protected]
        [email protected]
   }
   notification_email_from [email protected]
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id keepalived1
   vrrp_skip_check_adv_addr
   vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.200.222
    }
}

server 2

! Configuration File for keepalived

global_defs {
   notification_email {
        [email protected]
        [email protected]
   }
   notification_email_from [email protected]
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id keepalived2
   vrrp_skip_check_adv_addr
   vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.200.222
    }
}

Access test

  1. Before starting keepalived, use the command ip a to check the IP status of the two servers, 192.168.200.133 and 192.168.200.122.


  2. Start the keepalived of the two servers separately

cd /usr/local/sbin
./keepalived

Check the IPs again with ip a:


  3. After stopping keepalived on the 192.168.200.133 server, check the IPs again


Through the above tests we find that the virtual IP (VIP) sits on the MASTER node. When keepalived on the MASTER node fails, the BACKUP stops receiving the MASTER's VRRP advertisements, promotes itself to MASTER, and the VIP "drifts" over to the new MASTER.

What does the above test have to do with Nginx?

We start keepalived on 192.168.200.133 again; because its priority is higher than that of 192.168.200.122, it becomes MASTER again and the VIP "drifts" back. Then we visit again through the browser:

http://192.168.200.222/


If the keepalived of the 192.168.200.133 server is turned off, visit the same address again


Having achieved this effect, we find that to move the VIP we must stop keepalived on the server. And when should keepalived be stopped? After the Nginx on that server has a problem: stop keepalived, and the VIP drifts to the other server. But right now all of this is done by hand. How can the system automatically judge whether the Nginx on the current server is running correctly, and if not, let the VIP "drift" automatically?

Keepalived's vrrp_script

Keepalived by itself only detects network failures and failures of keepalived itself; that is, it switches over only when the network fails or keepalived has a problem. That is not enough: we also need to monitor other services on the server where keepalived runs, such as Nginx. If Nginx fails while keepalived stays healthy, the system still cannot do its job. The master/backup switch must therefore also depend on the state of the business process, and we can monitor that process by writing a script.

Implementation steps:

  1. Add the corresponding configuration to the keepalived configuration file, for example:
vrrp_script script_name
{
    script "script path"
    interval 3 # execution interval
    weight -20 # dynamically adjusts the priority of the vrrp_instance
}
  2. Write the script

ck_nginx.sh

#!/bin/bash
# count running nginx processes
num=`ps -C nginx --no-header | wc -l`
if [ $num -eq 0 ];then
 # nginx is not running: try to start it
 /usr/local/nginx/sbin/nginx
 sleep 2
 # if nginx still is not running, stop keepalived so the VIP drifts away
 if [ `ps -C nginx --no-header | wc -l` -eq 0 ]; then
  killall keepalived
 fi
fi

The Linux ps command is used to display the status of the current process.

-C (command): select processes by command name

--no-header: suppress the header line
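A quick check of how the process count behaves — for a name with no running process, ps -C prints nothing and wc -l yields 0, which is exactly the condition the script uses to decide that nginx needs to be (re)started:

```shell
# count nginx processes; prints 0 when nginx is not running
num=$(ps -C nginx --no-header | wc -l)
echo "nginx processes: $num"
```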

  3. Make the script file executable:
chmod 755 ck_nginx.sh
  4. Reference the script in the keepalived configuration:
vrrp_script ck_nginx {
   script "/etc/keepalived/ck_nginx.sh" # path of the script to execute
   interval 2		# execution interval, in seconds
   weight -20		# how the priority is adjusted when the script fails
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 10
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.200.111
    }
    track_script {
      ck_nginx
    }
}
  5. If it does not work, use tail -f /var/log/messages to view the log and find the corresponding error message.
  6. Test.

Questions to think about:

Usually, when the master dies the backup becomes master; but when the master comes back up, it seizes the VIP again, causing a second switchover, which is bad for a busy website. The fix is to add nopreempt to the configuration file, but this parameter only works when state is BACKUP. So for HA it is best to set the state of both master and backup to BACKUP and let them compete via priority.
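A minimal sketch of such a non-preempting pair: both nodes declare state BACKUP plus nopreempt, and the priorities still decide which one initially holds the VIP (the values mirror the earlier examples):

```
vrrp_instance VI_1 {
    state BACKUP        # both nodes use BACKUP
    nopreempt           # a recovered node does not seize the VIP back
    interface ens33
    virtual_router_id 51
    priority 100        # the peer node would use a lower value, e.g. 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.200.222
    }
}
```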

Using Nginx to build a download site

First of all, what is a download site?

Let's look at a website first: http://nginx.org/download/. We showed this site when we first started learning Nginx. It mainly provides related resources for users to download, and such a site is called a download site.


How to make a download site:

Nginx implements this with the ngx_http_autoindex_module module, which processes requests ending with a slash ("/") and produces a directory listing.

This module is compiled into nginx automatically, but it is disabled by default; we enable and tune it with the following directives.

(1) autoindex: enable or disable directory listing output

Syntax:  autoindex on | off;
Default: autoindex off;
Context: http, server, location

(2) autoindex_exact_size: for the HTML format, whether to show exact file sizes in the directory listing

The default is on: the exact size in bytes is shown.
When set to off, an approximate size in KB, MB, or GB is shown.

Syntax:  autoindex_exact_size on | off;
Default: autoindex_exact_size on;
Context: http, server, location

(3) autoindex_format: Set the format of the directory listing

Syntax:  autoindex_format html | xml | json | jsonp;
Default: autoindex_format html;
Context: http, server, location

Note: this directive appeared in version 1.7.9.

(4) autoindex_localtime: for the HTML format, whether file times in the directory listing use local time.

The default is off: file times are shown in GMT.
When set to on, file times are shown in the server's local time.

Syntax:  autoindex_localtime on | off;
Default: autoindex_localtime off;
Context: http, server, location

The configuration is as follows:

location /download{
    root /usr/local;
    autoindex on;
    autoindex_exact_size on;
    autoindex_format html;
    autoindex_localtime on;
}
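If a machine-readable listing is ever needed, the same directory can be exposed as JSON on a separate path. A sketch — the /download-json path is an arbitrary example, not part of the original setup:

```nginx
location /download-json {
    alias /usr/local/download;   # same directory as /download above
    autoindex on;
    autoindex_format json;       # the listing is returned as JSON instead of HTML
}
```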

In practice, the XML and JSON output formats are rarely used.

Nginx user authentication module

For access to system resources, we often need to restrict who can and cannot access them. This is what we usually call authentication: based on the user name and password entered, decide whether the user is legitimate; if so, allow access, otherwise deny it.

Nginx implements user authentication through the ngx_http_auth_basic_module module, which restricts access to resources by verifying user names and passwords using the "HTTP Basic Authentication" protocol. nginx includes this module by default; to exclude it, build with --without-http_auth_basic_module.

The directives of this module are quite simple.

(1) auth_basic: Use the "HTTP Basic Authentication" protocol to enable verification of usernames and passwords

Syntax:  auth_basic string | off;
Default: auth_basic off;
Context: http, server, location, limit_except

When enabled, unauthenticated requests get a 401 response, and the specified string is sent to the client as the prompt; different browsers display it differently.

(2) auth_basic_user_file: Specify the file where the username and password are located

Syntax:  auth_basic_user_file file;
Default: —
Context: http, server, location, limit_except

Specify the path of the file containing the user names and passwords; the passwords must be stored encrypted. The file can be generated automatically with tools.

Implementation steps:

1. Add the following to nginx.conf

location /download{
    root /usr/local;
    autoindex on;
    autoindex_exact_size on;
    autoindex_format html;
    autoindex_localtime on;
    auth_basic 'please input your auth';
    auth_basic_user_file htpasswd;
}

2. Use the htpasswd tool to generate the password file

yum install -y httpd-tools
htpasswd -c /usr/local/nginx/conf/htpasswd username   # create a new file and record a user name and password
htpasswd -b /usr/local/nginx/conf/htpasswd username password   # add a user name and password to the given file
htpasswd -D /usr/local/nginx/conf/htpasswd username   # delete a user from the given file
htpasswd -v /usr/local/nginx/conf/htpasswd username   # verify that the user name and password are correct
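When httpd-tools is unavailable, an alternative (an assumption, not part of the original walkthrough) is to generate the entry with openssl, which can produce the same APR1/MD5 hash scheme that htpasswd uses and that nginx's auth_basic_user_file accepts:

```shell
# generate an htpasswd-compatible entry without httpd-tools
user=admin                                # example user name
hash=$(openssl passwd -apr1 'secret123')  # example password, APR1/MD5 scheme
echo "$user:$hash" >> ./htpasswd          # copy this file into nginx's conf directory afterwards
cat ./htpasswd
```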


Although this method verifies user names and passwords, as you can see, all credentials are recorded in a file. With a large number of users this becomes cumbersome; at that point, user permissions should instead be verified by back-end business code.

Origin blog.csdn.net/zyb18507175502/article/details/130841857