Nginx quick start and configuration file details

Nginx

All the installation files required for this document have been packaged. Download
link: https://pan.baidu.com/s/1yWUDj3lXK9nm-huyzSwu8g
Extraction code: u7dl

One, Nginx introduction

Nginx is a high-performance HTTP and reverse proxy server. It has a small memory footprint and handles high concurrency very well.

Nginx can provide reverse proxying, load balancing, and dynamic/static separation.

Some content could not be uploaded due to platform restrictions; for the points not covered here, the Shang Silicon Valley (尚硅谷) Nginx video tutorial is recommended.

1. Load balancing

Increase the number of servers and distribute requests across them, instead of concentrating all requests on a single server.

2. Dynamic/static separation

Serve dynamic resources and static resources from different servers, reducing the pressure on a single server.

Two, Nginx installation (Linux)

  1. Upload the pcre source archive to the /usr/src directory on Linux

  2. Enter that directory: cd /usr/src

  3. Unpack the pcre archive: tar -zxvf pcre-8.37.tar.gz

  4. Enter the unpacked directory: cd pcre-8.37/

  5. Run ./configure, then make && make install, to complete the installation

  6. Check that the installation succeeded: pcre-config --version

  7. Install the openssl, zlib, and gcc dependencies:
yum -y install make zlib zlib-devel gcc-c++ libtool openssl openssl-devel

  8. Upload the Nginx source archive to the /usr/src directory on Linux

  9. Enter that directory: cd /usr/src

  10. Unpack the Nginx archive: tar -zxvf nginx-1.12.2.tar.gz

  11. Enter the unpacked directory: cd nginx-1.12.2/

  12. Run ./configure, then make && make install, to complete the installation

  13. Open port 80:

(1) firewall-cmd --permanent --add-port=80/tcp; "success" means the rule was added

(2) firewall-cmd --reload restarts the firewall so the rule takes effect; "success" means it worked

(3) firewall-cmd --query-port=80/tcp; "yes" means port 80 is now open

  14. Start Nginx: cd /usr/local/nginx/sbin and execute ./nginx

  15. Enter http://[Linux IP address]:80 in the browser address bar (Nginx listens on port 80 by default)

Three, Nginx common commands

1. Start Nginx

First cd /usr/local/nginx/sbin, then ./nginx, or directly /usr/local/nginx/sbin/nginx

2. Stop Nginx

First cd /usr/local/nginx/sbin, then ./nginx -s stop, or directly /usr/local/nginx/sbin/nginx -s stop

3. Reload Nginx (hot deployment)

(Reloads a modified configuration file without restarting Nginx)

First cd /usr/local/nginx/sbin, then ./nginx -s reload, or directly /usr/local/nginx/sbin/nginx -s reload

Four, Nginx configuration file

Nginx configuration files are placed in the /usr/local/nginx/conf/ directory


The content after opening the nginx.conf file:

# Part 1: global block
worker_processes  1;  # number of worker processes; the larger the value, the more concurrency is supported -- ideally equal to the number of CPUs

# Part 2: events block
events {
    worker_connections  1024;  # maximum number of connections per worker process (default 1024)
}

# Part 3: http block
http {
    # http global block
    include       mime.types;
    default_type  application/octet-stream;
    sendfile        on;
    keepalive_timeout  65;

    # server block -- this is the part you usually configure (there can be multiple server blocks)
    server {
        listen       80;
        server_name  localhost;
        location / {
            root   html;
            index  index.html index.htm;
        }
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}

Five, Nginx configuration example (1)

1. Achieve the effect

Enter the address www.123.com in the browser and be taken to the homepage of the Tomcat server on the Linux system

2. Step analysis

(1) Install tomcat in Linux, use the default port 8080

(2) Access process analysis


(3) Map the domain name to the IP address in the Windows hosts file
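The hosts entry looks like the following sketch, assuming the Linux VM's address is 192.168.17.129 (substitute your own IP):

```text
192.168.17.129  www.123.com
```

On Windows this file lives at C:\Windows\System32\drivers\etc\hosts.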


(4) Edit the Nginx configuration file: vim /usr/local/nginx/conf/nginx.conf
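The configuration amounts to a reverse-proxy server block; a minimal sketch (the server_name IP is illustrative, and the Tomcat port follows step (1)):

```nginx
server {
    listen       80;
    server_name  192.168.17.129;          # the Linux machine's IP (illustrative)

    location / {
        proxy_pass http://127.0.0.1:8080; # forward every request to the local Tomcat
    }
}
```

With the hosts mapping in place, a request to www.123.com reaches Nginx on port 80, which forwards it to Tomcat on port 8080.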


(5) Start Nginx and open the address in a browser; the effect is as shown in the figure

Six, Nginx configuration example (2)

1. Achieve the effect

Change the Nginx listening port to 9001 and route to different server pages according to the request address

Visit http://[Linux ip address]:9001/edu/ Jump to 127.0.0.1:8080
Visit http://[Linux ip address]:9001/vod/ Jump to 127.0.0.1:8081

2. Step analysis

(1) Create two Tomcat servers, one port 8080 and one port 8081 (and open these two ports)

(2) Create edu, vod folders and a.html files in the webapps directories of the two Tomcat servers respectively

(3) Add the configuration to the Nginx configuration file: vim /usr/local/nginx/conf/nginx.conf
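A minimal sketch of this configuration, using regex locations to route by path (the server_name IP is illustrative):

```nginx
server {
    listen       9001;
    server_name  192.168.17.129;          # the Linux machine's IP (illustrative)

    location ~ /edu/ {
        proxy_pass http://127.0.0.1:8080; # requests containing /edu/ go to Tomcat 8080
    }

    location ~ /vod/ {
        proxy_pass http://127.0.0.1:8081; # requests containing /vod/ go to Tomcat 8081
    }
}
```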


(4) Open the browser to test


3. Description of the location directive in the Nginx configuration file

(1) Introduction: used to match the requested address

(2) Syntax

location [ = | ~ | ~* | ^~ ] uri { ... }

(3) Match modifiers

=: used before a uri without regular expressions; the request string must match the uri exactly. On a successful match, searching stops and the request is processed immediately

~: the uri is treated as a case-sensitive regular expression to match against the request address

~*: the uri is treated as a case-insensitive regular expression to match against the request address

^~: used before a uri without regular expressions; Nginx takes the location whose uri has the longest prefix match with the request string and processes the request with it immediately, without matching the request against the regular-expression locations
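A few hedged examples of the modifiers (the paths, roots, and proxy targets are illustrative):

```nginx
location = /50x.html {          # exact match only
    root   html;
}
location ^~ /static/ {          # prefix match; if this wins, regex locations are not checked
    root   /data/;
}
location ~ \.jsp$ {             # case-sensitive regex match
    proxy_pass http://127.0.0.1:8080;
}
location ~* \.(jpg|png|gif)$ {  # case-insensitive regex match
    root   /data/;
}
```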

Seven, Nginx configuration example-load balancing

1. Achieve the effect

Enter http://[Linux ip address]/edu/a.html in the browser address bar and have the requests distributed across the two Tomcat servers

2. Step analysis

(1) Create two Tomcat servers, one port 8080 and one port 8081 (and open these two ports)

(2) Create the edu folder and a.html file in the webapps directory of the two Tomcat servers respectively

(3) Add the load balancing configuration to the Nginx configuration file: vim /usr/local/nginx/conf/nginx.conf

upstream myserver {
    # list the Tomcat servers to be load balanced
    server 192.168.206.128:8080;
    server 192.168.206.128:8081;
}

server {
    listen       80;
    server_name  192.168.206.128;

    #charset koi8-r;

    #access_log  logs/host.access.log  main;

    location / {
        proxy_pass http://myserver;
    }
}

(4) Result: enter the address in the address bar; each refresh is served by the other Tomcat server in turn

3. Load balancing distribution strategy

(1) Round robin (default)

Requests are assigned to the servers one by one in time order; if a server goes down, it is automatically removed

(2) weight

weight is the server's weight, default 1; the higher the weight, the more requests the server receives. Usage:
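A sketch of the usage (the weight values here are illustrative):

```nginx
upstream myserver {
    server 192.168.206.128:8080 weight=5;  # receives roughly 5x the requests
    server 192.168.206.128:8081 weight=1;
}
```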


(3) ip_hash

Each request is assigned according to a hash of the client's IP address, so a given client is always routed to the same server it reached on its first visit; this can solve session problems. Usage:
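A sketch of the usage:

```nginx
upstream myserver {
    ip_hash;                      # pin each client IP to one backend
    server 192.168.206.128:8080;
    server 192.168.206.128:8081;
}
```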


(4) fair

Requests are assigned according to server response time: servers with shorter response times are preferred. Usage:
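A sketch of the usage (note that fair comes from the third-party nginx-upstream-fair module, which must be compiled into Nginx):

```nginx
upstream myserver {
    server 192.168.206.128:8080;
    server 192.168.206.128:8081;
    fair;                         # prefer the backend with the shortest response time
}
```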


Eight, Nginx configuration example-separation of dynamic and static

1 Overview

Nginx dynamic/static separation simply means handling dynamic and static requests separately; it does not merely mean physically splitting dynamic pages from static pages. Strictly speaking, it separates dynamic requests from static requests, which can be understood as using Nginx to serve static pages and Tomcat to serve dynamic pages. In current practice there are roughly two approaches: one is to put static files under a separate domain name on a dedicated server, which is the mainstream solution today; the other is to deploy dynamic and static files mixed together and let Nginx separate them.

2. Achieve the effect

Create a /data/www/ folder in the Linux root filesystem holding the static resource a.html, and also create a www folder containing a.html under the webapps directory of the Tomcat server on port 8080. When this static resource is requested, the copy in /data/www/ should be served, not the one inside the Tomcat server.

3. Step analysis

(1) Create the above folders and files

(2) Configure dynamic/static separation in the Nginx configuration file: vim /usr/local/nginx/conf/nginx.conf

server {
    listen       80;
    server_name  192.168.206.128;
    #charset koi8-r;
    #access_log  logs/host.access.log  main;
    location /www/ {
        root   /data/;
        index  index.html index.htm;
    }
}

Nine, Nginx configuration example-high availability cluster

1 Overview

When the Nginx main server goes down, the standby server is used to ensure the high availability of the service. The idea is as follows:


2. Step analysis

(1) Two Nginx servers (i.e., two Linux virtual machines) are required, with addresses 192.168.17.129 and 192.168.17.131 respectively

(2) Install Nginx in two virtual machines

(3) Install keepalived on the main server

i. Enter the usr directory: cd /usr/

ii. Install the environment dependency:

1. wget http://www.percona.com/redir/downloads/Percona-XtraDB-Cluster/5.5.37-25.10/RPM/rhel6/x86_64/Percona-XtraDB-Cluster-shared-55-5.5.37-25.10.756.el6.x86_64.rpm

2. rpm -ivh Percona-XtraDB-Cluster-shared-55-5.5.37-25.10.756.el6.x86_64.rpm

iii. Install with yum: yum install keepalived -y

iv. After installation, a keepalived directory is created under /etc containing the configuration file keepalived.conf

(4) Delete the original configuration file: rm -rf /etc/keepalived/keepalived.conf

(5) Replace with the new configuration file keepalived.conf, the content is as follows:

global_defs {
   notification_email {
     [email protected]
     [email protected]
     [email protected]
   }
   notification_email_from [email protected]
   # mail server address (leave the default for now; defaults to the current VM's IP)
   smtp_server 192.168.17.129
   # mail server timeout (leave the default for now)
   smtp_connect_timeout 30
   # IP address of the current VM
   router_id 192.168.17.129
}

vrrp_script Monitor_Nginx {
   script "/etc/keepalived/nginx_check.sh"  # path of the health-check script
   interval 2                               # interval between script runs (seconds)
   weight 2                                 # priority adjustment applied by the script
}

vrrp_instance VI_1 {
    state MASTER          # MASTER on the main server, BACKUP on the standby
    interface ens33       # network interface name of this machine
    virtual_router_id 51  # virtual router ID; must be identical on master and backup
    priority 100          # master and backup use different priorities; the master's is larger
    advert_int 1          # VRRP multicast advertisement interval in seconds (i.e., check for failure every second)
    authentication {
        auth_type PASS    # VRRP authentication type
        auth_pass 1111    # password
    }
    track_script {
        Monitor_Nginx     # invoke the Nginx health-check script
    }
    virtual_ipaddress {
        192.168.17.50     # virtual IP address bound to the two Nginx servers
    }
}

(6) Add the keepalived health-check script: vim /etc/keepalived/nginx_check.sh

#!/bin/bash
# If the nginx master process is not running, stop keepalived
# so that the virtual IP fails over to the standby server
if [ -z "$(ps -ef | grep 'nginx: master process' | grep -v grep)" ]
then
    killall keepalived
fi

(7) Start the keepalived service: service keepalived start

(8) Install Nginx and keepalived on the other Linux system as the standby server, following the same steps, but change the machine-specific values in the keepalived configuration file (state BACKUP instead of MASTER, a lower priority, and this machine's own IP addresses)

3. Operation results

Enter the virtual IP address 192.168.17.50 in the browser to reach the main server; kill the main server's Nginx process, and the virtual IP address will then be served by the standby server

Ten, Nginx principle analysis

Looking at the Nginx processes, you will find two kinds: master and worker.


The worker model is shown in the figure:


The working principle is shown in the figure:


The benefits of one master and multiple workers:

  1. Hot deployment is convenient: workers that are busy handling requests keep running, idle workers load the updated configuration, and the remaining workers switch over once their current tasks finish.

  2. Each worker is an independent process, so if one worker has a problem it does not affect the others.

Three common questions:

  1. How many workers are appropriate?

Answer: set the number of workers equal to the number of CPUs on the server; this is best changed in the global block of the Nginx configuration file.

  2. How many of a worker's connections does one request occupy?

Answer:
(1) For a purely static resource, 2 connections (one for receiving the request and one for returning the response)

(2) If Nginx acts as a proxy and Tomcat handles the dynamic resource, 4 connections

  3. How is the maximum concurrency of Nginx calculated?

Answer:
(1) For static resources only: maximum concurrency = number of workers × maximum connections per worker / 2

(2) With Nginx as a proxy and Tomcat handling dynamic resources: maximum concurrency = number of workers × maximum connections per worker / 4
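The two formulas can be checked with a quick sketch (the worker and connection counts below are illustrative):

```python
def max_concurrency(workers: int, worker_connections: int, proxied: bool) -> int:
    """Estimate Nginx's maximum concurrent requests.

    A purely static request occupies 2 connections;
    proxying to a backend such as Tomcat occupies 4.
    """
    per_request = 4 if proxied else 2
    return workers * worker_connections // per_request

# e.g. 4 workers, each with the default worker_connections of 1024
print(max_concurrency(4, 1024, proxied=False))  # 2048 concurrent static requests
print(max_concurrency(4, 1024, proxied=True))   # 1024 concurrent proxied requests
```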

Origin: blog.csdn.net/weixin_49343190/article/details/112006564