nginx installation and use (study notes)

1. Introduction to nginx

  • Nginx is a high-performance HTTP and reverse proxy web server. It is very good at handling high concurrency and can withstand heavy load.
  • It is characterized by low memory usage and strong concurrency; in fact, nginx's concurrency performance is among the best of web servers of the same type.

1.1 Forward Proxy

Nginx can act not only as a reverse proxy to achieve load balancing, but also as a forward proxy. Accessing a server through a proxy server is called a forward proxy: a proxy server must be configured on the client side in order to access the specified websites

1.2 Reverse proxy

Reverse proxy: the client needs no configuration; it simply sends the request to the reverse proxy server, which selects a target server, obtains the data, and returns it to the client. To the outside world, the reverse proxy server and the target server appear as one server: the proxy server's address is exposed while the real server's ip address is hidden.

1.3 Load Balancing

  • The client sends multiple requests to the server; the server processes them, some of which may require interacting with a database, and then returns the results to the client.
  • Increasing the number of servers and distributing the requests among them, instead of concentrating the requests on a single server, spreads the load across different servers. This is what we call load balancing.


1.4 Dynamic and static separation

Traditional way:

  • There are static resources (JS, HTML, etc.) and dynamic resources (jsp, servlet, etc.) deployed on the server side

To speed up website parsing, the dynamic-static separation approach is adopted:

  • Dynamic pages and static pages are parsed by different servers, which speeds up parsing and reduces the pressure on the original single server.


2 nginx installation, common commands and configuration files

2.1 Install nginx in Linux system

download

First open the nginx official website: http://nginx.org/
Browse to the 2017 releases, find the nginx 1.12.2 version, and click to download the compressed package (nginx-1.12.2.tar.gz).
Then download the pcre 8.37 version from its official page: https://sourceforge.net/projects/pcre/files/pcre/8.37/

Install

1. Install openssl, zlib, gcc dependencies

yum -y install make zlib zlib-devel gcc-c++ libtool openssl openssl-devel


2. Install pcre dependencies

  • Enter the /usr/src directory and drag the downloaded pcre-8.37.tar.gz compressed package into xshell
  • Decompress it with the command tar -xvf pcre-8.37.tar.gz
  • Enter the decompressed pcre-8.37 folder and run the ./configure command
    If the error "error: no acceptable C compiler found in $PATH" appears, the cause is that a suitable compiler is not installed.
    Execute the following command:

sudo yum install gcc-c++

(When using sudo yum install gcc-c++, gcc and other dependent packages will be automatically installed/upgraded.)
Re-execute ./configure. Success!

  • In the pcre-8.37 folder, finally run make && make install, i.e. compile first and then install
  • Check the version with pcre-config --version

3. Similarly, in the /usr/src directory, drag the downloaded nginx compressed package into xshell

  • Decompress it, then run ./configure and make && make install
  • Run /usr/local/nginx/sbin/nginx to start the service

4. Check whether the installation is successful
  • Enter the linux address in the browser, such as http://192.168.xx.xxx/
  • If the page is not displayed, you can turn off the firewall
Stop the firewall: systemctl stop firewalld

Start the firewall: systemctl start firewalld

Check the firewall status: systemctl status firewalld

Successful installation!

In order to prevent the firewall from blocking the port of Nginx, the following settings can be made:

  • View open port numbers: firewall-cmd --list-all
  • Set the open port number: firewall-cmd --add-port=80/tcp --permanent
  • Restart the firewall: firewall-cmd --reload

2.2 nginx common commands

Enter the nginx directory; the nginx commands must be run from nginx's sbin directory:

cd /usr/local/nginx/sbin

View version number

./nginx -v

start nginx

./nginx

stop nginx

./nginx -s stop

reload nginx

./nginx -s reload

2.3 nginx configuration file

nginx configuration file location

/usr/local/nginx/conf/nginx.conf

Contents of the configuration file

  • Global block:
    From the start of the configuration file to the events block. It mainly sets configuration directives that affect the overall operation of the nginx server, including the user (group) that runs the server, the number of worker processes allowed, the PID file path, the log path and type, the import of other configuration files, etc.
    For example: the larger the value of worker_processes, the more concurrent processing nginx can support
  • events block:
    The directives in the events block mainly affect the network connections between the nginx server and users. Common settings include whether to serialize network connections across multiple worker processes, whether to allow multiple connections to be accepted at the same time, which event-driven model to use for handling connection requests, the maximum number of connections each worker process can support simultaneously, etc.
  • http block:
    The most frequently configured part: most features such as proxying, caching and log definition, as well as third-party module configuration, are set here. Note that the http block in turn contains an http global block and server blocks.
    • http global block
      Its directives include file imports, MIME-TYPE definitions, log customization, connection timeouts, the maximum number of requests per connection, etc.
    • server block
      • Each http block can contain multiple server blocks, and each server block is equivalent to a virtual host
      • Each server block in turn has a global server block and can contain multiple location blocks
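
A minimal sketch of this structure, based on a typical default nginx.conf (values are illustrative):

# Global block: affects the nginx server as a whole
worker_processes  1;            # number of worker processes; larger values support more concurrency

# events block: affects network connections between nginx and users
events {
    worker_connections  1024;   # maximum simultaneous connections per worker process
}

# http block: proxying, caching, logging, third-party modules
http {
    include       mime.types;   # http global block: file import, MIME-TYPE definition
    default_type  application/octet-stream;
    keepalive_timeout  65;      # connection timeout

    # server block: each one is equivalent to a virtual host
    server {
        listen       80;
        server_name  localhost;

        # location block: matches request URIs
        location / {
            root   html;
            index  index.html index.htm;
        }
    }
}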

3. nginx configuration example-1-reverse proxy

Achieved effect

Open the browser, enter the address www.123.com in the browser address bar, and jump to the tomcat main page of the linux system

Preparation

  • Install Tomcat in the linux system, using the default port 8080

    • Put the tomcat installation file in the /usr/src folder in the linux system and unzip it
    • Enter tomcat's bin directory and run ./startup.sh to start the tomcat server
  • Add permissions to open access ports to the outside world

Add port 8080: firewall-cmd --add-port=8080/tcp --permanent

Reload the firewall: firewall-cmd --reload

View the opened ports: firewall-cmd --list-all
  • Test: in the windows system, enter the linux server's address in a browser to see whether the tomcat server page can be accessed

  • Specific operation:
    1. In the hosts file on the windows system (C:\Windows\System32\drivers\etc\hosts), add a mapping between the domain name and the ip address, i.e. map www.123.com to your own ip address, so that after entering www.123.com the browser reaches the nginx server
    2. In the nginx configuration file, configure the request forwarding (reverse proxy configuration), as sketched below
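
A sketch of what that reverse proxy configuration might look like inside the http block (assuming tomcat runs locally on port 8080; the server_name ip is illustrative):

server {
    listen       80;
    server_name  192.168.17.129;           # the nginx server's own ip (illustrative)

    location / {
        proxy_pass http://127.0.0.1:8080;  # forward all requests to the local tomcat
    }
}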

3. Test: after entering www.123.com, the request is forwarded by the nginx server to the tomcat server, and the tomcat main page is displayed

Reverse proxy instance two

Achieved effect

Use the nginx reverse proxy to redirect requests to different port services according to the access path; the nginx server listens on port 9001

  • Visit http://192.168.11.129:9001/edu/ to jump directly to 127.0.0.1:8080
  • Visit http://192.168.11.129:9001/vod/ and jump directly to 127.0.0.1:8081

Preparation

  1. Step 1: prepare two tomcat servers, one on port 8080 and one on port 8081, and prepare the test pages
  2. Step 2: create folders (edu and vod) and test pages

specific configuration

  • Find the nginx configuration file and add the reverse proxy configuration, as sketched below
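A sketch of the added server block under the assumptions above (listening on 9001 and matching paths with ~):

server {
    listen       9001;
    server_name  192.168.11.129;           # the nginx server's ip

    location ~ /edu/ {
        proxy_pass http://127.0.0.1:8080;  # paths containing /edu/ go to the first tomcat
    }

    location ~ /vod/ {
        proxy_pass http://127.0.0.1:8081;  # paths containing /vod/ go to the second tomcat
    }
}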

  • location directive description

  • This directive is used to match URLs

  • The syntax is as follows:

    • = : used before a uri without regular expressions; the request string must match the uri exactly, and if the match succeeds, searching stops and the request is processed immediately.
    • ~ : indicates that the uri contains a regular expression, case-sensitive.
    • ~* : indicates that the uri contains a regular expression, case-insensitive.
    • ^~ : used before a uri without regular expressions; nginx finds the location whose uri has the highest degree of match with the request string and uses it immediately to process the request, without further matching against the regular-expression locations in other location blocks.
    • Note: if the uri contains a regular expression, it must be marked with ~ or ~*.
  • Port numbers for external access are 9001 8080 8081
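
A hedged illustration of the four match types (the paths and bodies are hypothetical):

location = /50x.html { ... }       # =  : exact match of the request string, no regex
location ~ \.(gif|jpg)$ { ... }    # ~  : regex match, case-sensitive
location ~* \.(gif|jpg)$ { ... }   # ~* : regex match, case-insensitive
location ^~ /static/ { ... }       # ^~ : prefix match that, when it is the best match, skips regex locations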

test


4. nginx configuration instance-2-load balancing

Achieved effect

In the browser address bar, enter the address http://192.168.17.129/edu/a.html; the result shows load balancing, i.e. requests are distributed evenly between ports 8080 and 8081

Preparation

  • Step 1: Prepare two tomcat servers, one 8080 and one 8081
  • Step 2: In the webapps directory of the two tomcats, create a folder named edu, and create a page a.html in the edu folder for testing

Specific configuration (here it is even distribution, i.e. round robin)

Configure load balancing in the nginx configuration file (mainly in the http block for configuration):

  • Use the upstream directive: add the two servers that take the load
  • Configure the server block: server_name and listen are the address and port of the nginx server; configure proxy_pass in the location block to reference the two load-bearing servers, and nginx will distribute each request to a different server, as sketched below
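
A sketch of that load balancing configuration, assuming both tomcats run on the nginx host as in the example (ips are illustrative):

http {
    upstream myserver {                 # the group of servers sharing the load
        server 192.168.17.129:8080;
        server 192.168.17.129:8081;
    }

    server {
        listen       80;
        server_name  192.168.17.129;    # address and port of the nginx server itself

        location / {
            proxy_pass http://myserver; # forward to the upstream group; nginx balances between its servers
        }
    }
}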

test

Repeated requests display the pages from ports 8080 and 8081, alternating back and forth

nginx server distribution strategies

  • round robin (default)

    • Each request is assigned to a different backend server in chronological order; if a backend server goes down, it is automatically removed.
  • weight

    • weight denotes the weight; the default is 1, and the higher the weight, the more requests the server is assigned
    • It specifies the polling probability: the weight is proportional to the access ratio, and it is used when the backend servers' performance is uneven
  • ip_hash

    • Each request is assigned according to the hash of the client's ip, so that each visitor always reaches the same backend server, which can solve the session problem
  • fair (third party)

    • Requests are allocated according to the backend servers' response time; those with shorter response times are served first (sketches of each strategy's configuration follow this list)
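
Hedged sketches of how each strategy is declared, each shown as an alternative form of the upstream block (only one form would be used at a time; addresses as above, and fair requires a third-party module):

upstream myserver {                       # weight: distribution proportional to the weights
    server 192.168.17.129:8080 weight=5;
    server 192.168.17.129:8081 weight=10; # receives roughly twice as many requests
}

upstream myserver {                       # ip_hash: a given client ip always reaches the same backend
    ip_hash;
    server 192.168.17.129:8080;
    server 192.168.17.129:8081;
}

upstream myserver {                       # fair: shortest response time served first
    fair;
    server 192.168.17.129:8080;
    server 192.168.17.129:8081;
}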

5. nginx configuration example-3-dynamic and static separation

basic introduction

  • Separate dynamic requests from static requests. In terms of current implementations, dynamic-static separation can be roughly divided into two kinds:

    • 1. Purely separating static files onto a separate domain name, hosted on independent servers; this is the current mainstream recommended solution
    • 2. Publishing dynamic and static files mixed together and separating them through nginx

Different suffix names can be matched by location to implement different request forwarding.
By setting the expires parameter (added in a location block), browser-cached content can be given an expiry time, reducing requests and traffic to the server. Specifically, Expires sets an expiration time for a resource: the browser itself can check whether the resource has expired, without going to the server for verification, so no extra traffic is generated. This approach is very suitable for resources that change infrequently (if a file is updated frequently, Expires caching is not recommended). Here it is set to 3d, meaning that within 3 days, a request for this URL sends the file's last modification time for comparison with the server; if unchanged, the file is not fetched from the server and status code 304 is returned, and if modified, the file is downloaded directly from the server and status code 200 is returned.
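For example, a location block with expires might look like this (a sketch; the path and root are illustrative):

location /image/ {
    root    /data/;
    expires 3d;    # let the browser cache these resources for 3 days
}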

Preparation

Prepare static resources in the linux system, such as image and www folders, to store static resources

specific configuration

  1. Configure in the nginx configuration file: find the nginx installation directory and open the /conf/nginx.conf configuration file
  • Add a listening port and access address
  • Each static resource access directory corresponds to one location block, where root indicates the location of the resource's root directory on linux
  • autoindex on: display the list of resources in the static resource directory (a sketch follows)
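
A sketch of the two location blocks described above, assuming the static resources live under /data/ on the linux system:

server {
    listen       80;
    server_name  192.168.17.129;    # illustrative

    location /www/ {
        root  /data/;               # pages are served from /data/www/
        index index.html index.htm;
    }

    location /image/ {
        root      /data/;           # images are served from /data/image/
        autoindex on;               # display the list of files in the directory
    }
}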

test

  • Finally, check that the nginx configuration is correct, then test whether dynamic-static separation works: delete a static file on the back-end tomcat server and check whether it can still be accessed. If it can, the static resource is returned directly by nginx, without going to the back-end tomcat server

  • Access the image (you can also directly add the name of the static resource you want to access in the path)

  • visit www

6. nginx configuration high availability cluster

Keepalived + Nginx high availability cluster (master-slave mode)

What is nginx high availability

  • In the previous configuration there is only one nginx server; if it goes down, clients cannot access anything. A high-availability cluster solves exactly this kind of problem.
  • Two nginx servers are used, one master and one backup, to ensure that the nginx service does not become unavailable because one server goes down
  • This requires the software Keepalived
    • The function of Keepalived is to detect the server's state. If a web server goes down or fails, Keepalived detects it, removes the faulty server from the system, and lets other servers take over its work. Once the faulty server works normally again, Keepalived automatically adds it back to the server group. All of this is done automatically, without manual intervention; the only manual work is repairing the faulty server.
  • A virtual ip is exposed to the outside and routed to the two nginx servers, which handle the request forwarding

Preparations for configuring a high-availability cluster

  • Two nginx servers are required, for example: 192.168.11.129 and 192.168.11.133
  • Install nginx on both servers
  • Install keepalived on both servers
    • yum install keepalived -y
    • After installation, the directory keepalived is generated under /etc, and it contains the file keepalived.conf
  • Complete the high availability configuration (master-slave configuration)
    • Modify the /etc/keepalived/keepalived.conf configuration file
global_defs {
    # global configuration
    notification_email {
        acassen@firewall.loc
        failover@firewall.loc
        sysadmin@firewall.loc
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 192.168.11.129
    smtp_connect_timeout 30
    router_id LVS_DEVEL # name of this host
}
vrrp_script chk_http_port {
    # detection script configuration
    script "/usr/local/src/nginx_check.sh" # location of the script file
    interval 2 # interval (seconds) between runs of the detection script
    weight 2 # weight change applied once the condition in the detection script holds (e.g. if the server is detected as down, lower its priority)
}
vrrp_instance VI_1 {
    # virtual ip configuration
    state BACKUP # on the backup server, change MASTER to BACKUP
    interface ens33 # network interface
    virtual_router_id 51 # master and backup must use the same virtual_router_id
    priority 90 # master and backup take different priorities: higher on the master, lower on the backup (e.g. master 100, backup 90)
    advert_int 1 # check the master's state every second by default
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.11.88 # VRRP virtual address (the virtual ip exposed to the outside)
    }
}
  • Add a detection script in /usr/local/src (to detect whether the master server's nginx is down)

#!/bin/bash
A=`ps -C nginx --no-header | wc -l`
if [ $A -eq 0 ];then
    /usr/local/nginx/sbin/nginx    # location of nginx: try to restart it
    sleep 2
    if [ `ps -C nginx --no-header | wc -l` -eq 0 ];then
        killall keepalived         # nginx failed to restart: stop keepalived so the backup takes over
    fi
fi
  • Start nginx and keepalived on both servers
    • Start nginx: ./nginx
    • Start keepalived: systemctl start keepalived.service

final test

  • Enter the virtual ip address 192.168.11.88 in the browser address bar

  • Stop nginx and keepalived on the main server (192.168.11.129), then enter 192.168.11.88 again; now it is the standby nginx server (192.168.11.133) that is accessed internally
    Still accessible!

7. Principle of nginx

1. Adopt the working mode of master and worker

  • The master acts as the manager: it manages and monitors the worker processes

  • The workers do the actual processing, competing (scrambling) with each other to take requests

2. How workers work: each worker process independently competes to accept and handle client requests

3. Benefits of one master and multiple workers

  • You can hot-deploy with nginx -s reload

    • No need to restart nginx, avoiding server downtime
    • If a worker is processing a request, only the remaining workers perform the hot deployment
  • Each worker is an independent process. If one worker has a problem, the other workers continue to compete for requests independently, so the service is not interrupted.

4. How many workers is it appropriate to set

  • It is most appropriate for the number of workers to equal the number of CPUs on the server.

5. The number of connections worker_connections

  • How many connections does a worker occupy for one request?
    • 2 or 4
      • For a static resource, the client sends a request and the worker responds (2 connections)
      • For a dynamic resource, on top of the above, the worker adds two more connections to interact with tomcat and access the database (4 connections)
  • Nginx has one master and four workers, and each worker supports a maximum of 1024 connections. What is the maximum concurrency supported?
    • The maximum concurrency for ordinary static resources is: worker_connections * worker_processes / 2 (maximum connections per worker × number of workers, divided by the 2 connections each static access occupies)
    • With HTTP used as a reverse proxy, the maximum concurrency is: worker_connections * worker_processes / 4
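
For example, with the numbers above: 4 workers × 1024 connections each gives 4096 connections in total, so static resources support at most 4096 / 2 = 2048 concurrent requests, while reverse proxying supports at most 4096 / 4 = 1024.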

Come on! Programmer!
