Nginx study notes

The role of Nginx

Nginx is a high-performance HTTP and reverse-proxy server. It excels at handling high concurrency and can withstand heavy loads; reports indicate it can support up to 50,000 concurrent connections.

What you need to know before learning Nginx

Forward proxy

Suppose we want to reach our school's intranet but cannot access it directly; we need a proxy server. The flow is:
1. Configure a proxy server on the client
2. The client sends its request to the proxy server
3. The proxy server then accesses the school's intranet on our behalf
This way we can reach the intranet. The defining feature of a forward proxy is that the proxy is configured on the client, which accesses the target site through it.
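As a concrete sketch of step 1, on a Linux client the proxy can be configured through environment variables (the proxy and intranet addresses below are hypothetical, for illustration only):

```shell
# hypothetical proxy server address -- replace with your real proxy
export http_proxy=http://proxy.example.com:3128
export https_proxy=http://proxy.example.com:3128

# clients that honor these variables (curl, wget, ...) now send their
# requests to the proxy, which fetches the target site on their behalf
curl http://intranet.example.edu/
```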

Reverse proxy

With a reverse proxy, the client accesses port 9001 on our server, and the server then forwards the request to Tomcat on port 8001. Users never learn Tomcat's port; from their point of view the reverse proxy server and Tomcat are a single unit. The defining feature of a reverse proxy is that only the proxy server's address is exposed, hiding the real server's IP address.
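The scenario above can be sketched as an nginx server block (ports as in the description; the Tomcat address is assumed to be local):

```nginx
server {
    # the only port exposed to users
    listen      9001;
    server_name localhost;

    location / {
        # nginx forwards every request to Tomcat on 8001;
        # the client never sees this address
        proxy_pass http://127.0.0.1:8001;
    }
}
```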

Load balancing

Instead of sending every request to a single server, we increase the number of servers and distribute incoming requests among them, spreading the load across different machines. This is what we call load balancing.

Suppose 900 requests arrive at the nginx server and there are three backend servers. nginx can hand 300 requests to each of them so that they share the work equally.
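The 900-request scenario can be sketched as an upstream group of three backends (the addresses are hypothetical). With nginx's default round-robin strategy each backend receives roughly 300 of the 900 requests:

```nginx
upstream backend {
    # hypothetical backend addresses
    server 127.0.0.1:8001;
    server 127.0.0.1:8002;
    server 127.0.0.1:8003;
}

server {
    listen 80;
    location / {
        # requests are distributed across the three backends
        proxy_pass http://backend;
    }
}
```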

Dynamic and static separation

The common setup we used before puts dynamic and static resources together on Tomcat. To reduce Tomcat's burden, nginx separates them: nginx serves the static resources itself and forwards only dynamic requests to Tomcat.

Nginx installation

https://segmentfault.com/a/1190000040125857

Common commands of Nginx

First, enter the nginx sbin directory (here nginx is installed under /usr/local/nginx).

1. Check the nginx version number
./nginx -v
2. Start nginx
./nginx
3. Stop nginx
./nginx -s stop
4. Reload nginx
./nginx -s reload
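A habit worth adding to the list above: nginx can validate the configuration file before a reload with the -t flag, so a syntax error never takes the running server down:

```shell
# test the configuration file, then reload only if it is valid
./nginx -t && ./nginx -s reload
```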

Nginx configuration file

[root@localhost ~]# vi /usr/local/nginx/conf/nginx.conf
# Part 1, the global block: directives that configure the server as a whole
# number of worker processes; the larger it is, the more concurrency can be handled
worker_processes  1;

# Part 2, the events block: network connections between the nginx server and users
events {
    # maximum number of connections supported is 1024
    worker_connections  1024;
}

# Part 3, the http block: proxying, caching, logging, etc.
# The http block is itself divided into 2 kinds of blocks:
    # the http global block
        # its directives include things such as file includes
    # server blocks
        # each one configures a virtual host
http {
    # MIME types, defined by the mime.types file
    include       mime.types;
    default_type  application/octet-stream;

    sendfile        on;

    keepalive_timeout  65;
    server {
        # port
        listen       80;
        # domain name
        server_name  localhost;
        # under the / path
        location / {
            # serve files from the html directory
            root   html;
        }
    }
}

Reverse proxy implementation example 1

The effect to be achieved

Open the browser, enter www.123.com in the address bar (I use a domain name I purchased and configured myself), and land on the Tomcat homepage of the Linux system

Preparation

Install Tomcat on the Linux system, using the default port 8080
* place the Tomcat installation archive on the Linux system and decompress it
* enter Tomcat's bin directory and run ./startup.sh to start the Tomcat server

Open the port to the outside world
firewall-cmd --add-port=8080/tcp --permanent
firewall-cmd --reload
View the open port numbers
firewall-cmd --list-all

The access flow

The browser reaches nginx through the domain name, and nginx forwards the request to Tomcat

nginx request-forwarding configuration (reverse proxy)

Any access to port 80 of this machine is forwarded to 127.0.0.1:8080, which is the Tomcat homepage.
After the modification, restart nginx for it to take effect
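A minimal sketch of this forwarding configuration (server_name uses the domain from this example; adjust to your own):

```nginx
server {
    listen       80;
    # the purchased domain resolves to this host
    server_name  www.123.com;

    location / {
        # forward everything to Tomcat on this machine
        proxy_pass http://127.0.0.1:8080;
    }
}
```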

Reverse proxy implementation example 2

The effect to be achieved

Using the nginx reverse proxy, jump to services on different ports according to the access path. nginx listens on port 9001:
accessing http://192.168.17.129:9001/edu/ jumps directly to 127.0.0.1:8080
accessing http://192.168.17.129:9001/vod/ jumps directly to 127.0.0.1:8081

Preparation

(1) Prepare two Tomcat servers, one on port 8080 and one on port 8081
(2) Create folders and test pages

Reference instructions:

# create 2 directories
mkdir tomcat8080 tomcat8081
# copy the tomcat archive into both directories
cp apache-tomcat-7.0.70.tar.gz tomcat8080
cp apache-tomcat-7.0.70.tar.gz tomcat8081
# shut down the previous tomcat
apache-tomcat-7.0.70/bin/shutdown.sh
# enter the first tomcat
cd tomcat8080
# decompress tomcat
tar -zxvf apache-tomcat-7.0.70.tar.gz
# add a test page
mkdir /root/tomcat8080/apache-tomcat-7.0.70/webapps/aaa
vi /root/tomcat8080/apache-tomcat-7.0.70/webapps/aaa/aaa.html
this is 8080!!!
# start tomcat
apache-tomcat-7.0.70/bin/startup.sh

# enter the second tomcat
cd tomcat8081
# decompress the archive
tar -zxvf apache-tomcat-7.0.70.tar.gz
# change the ports
vi apache-tomcat-7.0.70/conf/server.xml
# line 22: change the shutdown port to 8015
<Server port="8015" shutdown="SHUTDOWN">
# line 71: change the connector port to 8081
<Connector port="8081" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443" />
# save and exit the configuration file
:wq
mkdir /root/tomcat8081/apache-tomcat-7.0.70/webapps/aaa
vi /root/tomcat8081/apache-tomcat-7.0.70/webapps/aaa/aaa.html
this is 8081!!!
# start tomcat
apache-tomcat-7.0.70/bin/startup.sh
# check the listening port numbers
netstat -nultp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      31853/nginx: master 
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      988/sshd            
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      1250/master         
tcp6       0      0 127.0.0.1:8015          :::*                    LISTEN      32369/java          
tcp6       0      0 :::8080                 :::*                    LISTEN      32274/java          
tcp6       0      0 :::8081                 :::*                    LISTEN      32369/java          
tcp6       0      0 :::22                   :::*                    LISTEN      988/sshd            
tcp6       0      0 ::1:25                  :::*                    LISTEN      1250/master         
tcp6       0      0 127.0.0.1:8005          :::*                    LISTEN      32274/java          
tcp6       0      0 :::8009                 :::*                    LISTEN      32274/java          
udp        0      0 127.0.0.1:323           0.0.0.0:*                           673/chronyd         
udp6       0      0 ::1:323                 :::*                                673/chronyd    
# nginx configuration file settings
# add another server block inside the http block
    server {
        # listen on 9001
        listen       9001;
        server_name  localhost;
        # URIs containing /edu/ are proxied to 8080
        location ~ /edu/ {
            proxy_pass http://127.0.0.1:8080;
        }
        # URIs containing /vod/ are proxied to 8081
        location ~ /vod/ {
            proxy_pass http://127.0.0.1:8081;
        }
    }
#####################################################
1. = : used before a URI with no regular expression; the request string must match the URI exactly, and on a match nginx stops searching and handles the request immediately.
2. ~ : the URI is a regular expression, matched case-sensitively.
3. ~* : the URI is a regular expression, matched case-insensitively.
4. ^~ : used before a URI with no regular expression; once nginx finds the location whose prefix best matches the request string, it uses that location immediately, without trying the regex locations against the request string.

Note: if the URI contains a regular expression, it must be marked with ~ or ~*.
#####################################################
# reload nginx
/usr/local/nginx/sbin/nginx -s reload
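A side-by-side illustration of the four modifiers (the port and paths here are invented for the example):

```nginx
server {
    listen 8088;

    # "=": exact match -- only the URI /status itself
    location = /status {
        return 200 "ok\n";
    }

    # "^~": prefix match that, once chosen, skips the regex locations
    location ^~ /static/ {
        root /data;
    }

    # "~": case-sensitive regex -- /Edu/ would NOT match
    location ~ /edu/ {
        proxy_pass http://127.0.0.1:8080;
    }

    # "~*": case-insensitive regex -- /VOD/ also matches
    location ~* /vod/ {
        proxy_pass http://127.0.0.1:8081;
    }
}
```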

Load balancing

The effect to be achieved

(1) Enter http://192.168.17.129/edu/a.html in the browser address bar; the load is balanced evenly between ports 8080 and 8081

Preparation

(1) Prepare two Tomcat servers, one on 8080 and one on 8081
(2) In the webapps directory of each Tomcat, create a folder named edu, and inside it create a test page a.html

Load balancing configuration in the nginx configuration file

Any request to the host address whose path starts with / is forwarded to the two addresses configured in the upstream myserver.
Reference instructions:


# nginx configuration file settings
vi /usr/local/nginx/conf/nginx.conf
http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile        on;
    keepalive_timeout  65;
    ##################################
    upstream myserver {
        server 192.168.2.177:8080;
        server 192.168.2.177:8081;
    }
    ##################################
    server {
        listen       80;
        server_name  localhost;
        location / {
            root   html;
            ##################################
            proxy_pass http://myserver;
            ##################################
            index  index.html index.htm;
        }
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}
:wq
# reload the server
./nginx -s reload

nginx server allocation strategies

The first strategy: round robin (default)

Each request is assigned to a different backend server in turn, in the order requests arrive. If a backend server goes down, it is automatically removed.

The second strategy: weight

weight defaults to 1; the higher a server's weight, the more requests it is assigned

upstream myserver {
    server 192.168.2.177:8080 weight=5;
    server 192.168.2.177:8081 weight=10;
}

The third strategy: ip_hash

Each request is assigned according to a hash of the client IP, so each visitor consistently reaches the same backend server

upstream server_pool {
    ip_hash;
    server 192.168.5.21:8080;
    server 192.168.5.22:8081;
}

The fourth strategy: fair (third party)

Requests are assigned according to the backend servers' response times; servers with shorter response times are served first. This requires the third-party fair module.

upstream server_pool {
    server 192.168.5.21:80;
    server 192.168.5.22:80;
    fair;
}

Dynamic and static separation configuration

What is dynamic and static separation?

Requests can be forwarded differently by matching file suffixes in location blocks. Setting the expires directive gives browser-cached resources an expiry time, reducing requests and traffic to the server. Specifically, Expires sets an expiration time on a resource: the browser can check validity itself without contacting the server, so no extra traffic is generated. This approach is ideal for resources that change infrequently (if a file is updated often, Expires caching is not recommended). I set 3d here, which means: when the URL is accessed within these 3 days, a conditional request is sent comparing the file's last-modified time on the server; if it has not changed, the server returns status code 304 and the file is not re-downloaded; if it has been modified, the file is downloaded from the server with status code 200.

Simply put, dynamic resources stay in Tomcat while static resources are placed separately, reducing Tomcat's workload

Preparation

Prepare static resources on the Linux system for access

Configure nginx.conf

With autoindex on, requesting a directory makes nginx list its contents in the browser.
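A minimal sketch of such a dynamic/static-separation configuration, assuming the static files live under a hypothetical /data directory (www for pages, image for pictures -- adjust the paths to your own layout):

```nginx
server {
    listen       80;
    server_name  localhost;

    # static pages, served by nginx directly
    location /www/ {
        root    /data;
        # the 3d browser-cache expiry discussed above
        expires 3d;
    }

    # static images, with directory listing enabled
    location /image/ {
        root      /data;
        autoindex on;   # list directory contents in the browser
    }

    # everything else is dynamic and goes to Tomcat
    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}
```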

High availability

What is high availability

High availability means that when the master server fails, the backup server can take over its work.

In this setup, the two servers expose the same virtual IP address to the outside world. Users normally reach the master server; if the master fails, the backup takes over the virtual IP, so the user does not notice anything.

Reference instructions

# master server
vi /etc/keepalived/keepalived.conf
global_defs {
   notification_email {
       [email protected]
       [email protected]
       [email protected]
   }
   notification_email_from [email protected]
   smtp_server 192.168.2.177
   smtp_connect_timeout 30
   # hostname of this machine; changing it is optional
   router_id LVS_DEVEL
}
vrrp_script chk_http_port {
   # location of the health-check script
   script "/usr/local/src/nginx_check.sh"
   # interval between script runs, in seconds
   interval 2
   # how much the priority changes based on the script result
   weight 2
}
vrrp_instance VI_1 {
   # MASTER on the master server, BACKUP on the backup server
   state MASTER
   # network interface name
   interface ens33
   # virtual_router_id must be identical on master and backup
   virtual_router_id 51
   # master and backup take different priorities: higher on the master, lower on the backup
   priority 100
   # heartbeat check every second
   advert_int 1
   # authentication by password; the password is 1111
   authentication {
       auth_type PASS
       auth_pass 1111
   }
   virtual_ipaddress {
       # the virtual IP is 192.168.2.50
       192.168.2.50
   }
}
:wq
# backup server
vi /etc/keepalived/keepalived.conf
global_defs {
   notification_email {
       [email protected]
       [email protected]
       [email protected]
   }
   notification_email_from [email protected]
   smtp_server 192.168.2.177
   smtp_connect_timeout 30
   # hostname of this machine; changing it is optional
   router_id LVS_DEVEL
}
vrrp_script chk_http_port {
   # location of the health-check script
   script "/usr/local/src/nginx_check.sh"
   # interval between script runs, in seconds
   interval 2
   # how much the priority changes based on the script result
   weight 2
}
vrrp_instance VI_1 {
   # MASTER on the master server, BACKUP on the backup server
   state BACKUP
   # network interface name
   interface ens33
   # virtual_router_id must be identical on master and backup
   virtual_router_id 51
   # master and backup take different priorities: higher on the master, lower on the backup
   priority 90
   # heartbeat check every second
   advert_int 1
   # authentication by password; the password is 1111
   authentication {
       auth_type PASS
       auth_pass 1111
   }
   virtual_ipaddress {
       # the virtual IP is 192.168.2.50
       192.168.2.50
   }
}
:wq
# put this script on both servers under /usr/local/src
vi /usr/local/src/nginx_check.sh
#!/bin/bash
A=`ps -C nginx --no-header | wc -l`
if [ $A -eq 0 ];then
    # nginx is not running: try to restart it
    /usr/local/nginx/sbin/nginx
    sleep 2
    if [ `ps -C nginx --no-header | wc -l` -eq 0 ];then
        # restart failed: kill keepalived so the backup takes over
        killall keepalived
    fi
fi
:wq
# start nginx and keepalived on both servers
systemctl restart keepalived
/usr/local/nginx/sbin/nginx
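To see which machine currently holds the virtual IP, inspect the interface configured in keepalived.conf (ens33 above); the VIP 192.168.2.50 appears as an extra address on the active node:

```shell
# the virtual IP shows up as a secondary address on the active server
ip addr show ens33
```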

Stop the master server

systemctl stop keepalived
[root@localhost ~]# netstat -nultp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1003/sshd           
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      1220/master         
tcp6       0      0 :::22                   :::*                    LISTEN      1003/sshd           
tcp6       0      0 ::1:25                  :::*                    LISTEN      1220/master         
udp        0      0 127.0.0.1:323           0.0.0.0:*                           661/chronyd         
udp6       0      0 ::1:323                 :::*                                661/chronyd 

Then visit the virtual IP again; the backup server now responds
