[Nginx] Introductory knowledge

Overview

  • Nginx is a high-performance HTTP and reverse proxy server that also provides IMAP/POP3/SMTP proxy services. Nginx was developed by Igor Sysoev of Russia; the first public version, 0.1.0, was released on October 4, 2004.
    Its source code is released under a BSD-like license. Nginx is known for its stability, rich feature set, simple configuration file, and low consumption of system resources. On June 1, 2011, Nginx 1.0.4 was released.

    Nginx is a lightweight web server / reverse proxy server and email (IMAP/POP3) proxy server released under a BSD-like license. It is characterized by low memory usage and strong concurrency; in fact, Nginx's concurrency capability is better than that of other web servers of the same type.

  • Nginx can be used as a web server for static pages. It does not support Java directly; Java programs can only be served by pairing Nginx with Tomcat. Nginx was developed specifically for performance optimization and can withstand high load: reports show that it can support up to 50,000 concurrent connections.

  • Tomcat is also a server, but it is more of an application server, that is, a container in which Java web applications are deployed and run. Nginx, by contrast, receives HTTP requests directly and, through reverse proxying and other techniques, forwards them to Tomcat in the background. In other words, HTTP servers such as Nginx focus on transmission and access control at the HTTP protocol level, receiving and forwarding requests, while application servers such as Tomcat focus on the logical processing of those requests.

Reverse proxy

Forward proxy: if you imagine the Internet outside the LAN as a huge resource library, a client inside the LAN must go through a proxy server to access the Internet. We send our request to the proxy server, and the proxy server fetches the resource we want on our behalf; this kind of proxy server is called a forward proxy.
The client must configure the proxy server in the browser. The defining feature of a forward proxy is that the client knows exactly which server address it wants to access, while the target server only knows which proxy server the request came from, not the specific client. In other words, a forward proxy hides the real client's information: what is being "proxied" is the client.

Reverse proxy: mainly used when a server cluster is deployed in a distributed fashion. After Nginx receives requests from multiple clients, it distributes them to the back-end business servers for processing according to certain rules. In this case the source of the request (the client) is known, but it is not clear which server actually processes the request.
The client is in fact unaware of the proxy, because it needs no configuration at all: we simply send the request to the reverse proxy server, the reverse proxy server selects a target server to obtain the data, and then returns it to the client. To the outside world, the reverse proxy server and the target servers appear as a single server; only the proxy server's address is exposed, and the IP addresses of the real servers are hidden. What is being "proxied" is the server.

Load balancing

The client sends requests to the server, and the server processes them; some may require interacting with the database. After the server completes the processing, it returns the data to the client. This is a very basic and simple B/S interaction model.

This architecture suits early systems that were relatively simple and low-cost, with relatively few concurrent requests. However, as the amount of information, traffic, and data grows rapidly and system business logic becomes more complex, this architecture makes the server respond to client requests slowly; when concurrency is particularly high, the server can easily crash outright.

To solve this problem, our first thought might be to upgrade the server's configuration, for example by adding memory. But we know that Moore's Law is increasingly breaking down, and hardware improvements alone can no longer keep up with ever-growing demand.

Since scaling vertically does not solve the problem, we try scaling horizontally. This is where clusters come in: we increase the number of servers and distribute requests among them, so that instead of concentrating all requests on a single server, the load is spread across multiple servers. This is what we call load balancing.

Distributing the load across different service units not only ensures service availability but also keeps responses fast enough to give users a good experience.

Static and dynamic separation

To speed up website page delivery, dynamic pages and static pages can be served by different servers, which accelerates parsing and reduces the pressure on what was originally a single server. Simply put, this means separating dynamic requests from static ones. It should not be understood as merely physically separating dynamic pages from static pages, but rather as, for example, letting Nginx handle static pages and Tomcat handle dynamic pages. There are roughly two ways to implement dynamic/static separation:

  1. Separate static files out completely under their own domain name and place them on dedicated servers
  2. Publish dynamic and static files mixed together and separate them through Nginx

In the Nginx configuration file, different file suffixes can be matched via location blocks so that different requests are forwarded to different destinations, as sketched below.
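A minimal sketch of suffix-based forwarding, assuming dynamic requests go to a Tomcat on port 8080 and static files live under /data/static (both paths and suffix lists are illustrative assumptions):

location ~ \.(jsp|do)$ {
    # dynamic requests are proxied to Tomcat
    proxy_pass http://127.0.0.1:8080;
}

location ~* \.(html|css|js|png|jpg|gif)$ {
    # static files are served directly by Nginx
    root /data/static;
}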

The browser cache expiration time can be set through the expires parameter to reduce requests and traffic between browser and server. expires sets an expiration time on a resource, so the browser can decide on its own whether its cached copy is still fresh, without first contacting the server, and thus no extra traffic is generated. This approach is not recommended for frequently updated files. For example, with expires set to 3 days: when the URL is accessed within those 3 days, a request is sent and the file's last-modified time on the server is compared; if it has not changed, the file is not fetched from the server again and status code 304 is returned; if it has changed, the file is re-downloaded from the server and status code 200 is returned.
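A sketch of such a location block, using the 3-day value from the text (the path and suffix list are assumptions):

location ~* \.(css|js|png|jpg|gif)$ {
    root    /data/static;
    # the browser may treat its cached copy as fresh for 3 days
    expires 3d;
}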

Install on Linux

The local environment used here is a CentOS 7 virtual machine built on VMware.

Connect remotely to Linux

Execute the ifconfig command in the virtual machine to view its IP address (the virtual machine's network connection mode is NAT), then use XShell on the local Windows machine to connect to the virtual machine and operate it remotely; there is no need to work directly in the virtual machine's own console.

Install required dependencies

Dependencies required to install Nginx: openssl (http://www.openssl.org/), zlib (http://www.zlib.net/), pcre (http://www.pcre.org/)

CentOS 7 needs gcc-c++ installed before pcre can be compiled, so install the other dependencies (everything except pcre) first.

yum -y install make zlib zlib-devel gcc-c++ libtool openssl openssl-devel
# Install pcre into this directory
cd /usr/local/src
wget http://downloads.sourceforge.net/project/pcre/pcre/8.37/pcre-8.37.tar.gz
# Extract the archive after downloading
tar -zxvf pcre-8.37.tar.gz
# Enter the source directory, configure, then compile and install
cd pcre-8.37
./configure
make && make install
# Check the pcre version afterwards; if 8.37 is displayed, the installation succeeded
pcre-config --version

Install Nginx

After the dependencies are installed, you can install Nginx.

cd /usr/local/src
wget http://nginx.org/download/nginx-1.21.6.tar.gz
# Extract the archive
tar -zxvf nginx-1.21.6.tar.gz
# Enter the extracted directory and run configure to check the environment
cd nginx-1.21.6
./configure
# Finally, compile and install
make && make install

Verify the installation

After installation, you will find a new nginx directory under /usr/local. Enter the sbin directory beneath it, where there is an nginx executable; run it to start Nginx.

After starting it, run ps -ef | grep nginx and you will see the Nginx-related processes running.
Now go back to the /usr/local/nginx directory and you will find several new subdirectories. Enter the conf directory and open the configuration file nginx.conf.

[Figure: the default server block in nginx.conf]
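For reference, a sketch of what that default server block looks like in a stock nginx.conf (comments added):

server {
    listen       80;              # the port Nginx listens on
    server_name  localhost;

    location / {
        root   html;              # serve files from the html directory
        index  index.html index.htm;
    }
}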

From this block we can see that Nginx listens on port 80. Back on the local machine, enter the virtual machine's IP address in the browser. Because the browser's default port is 80, you can access it without typing a port number. If the Nginx welcome page is displayed, the operation succeeded.

If the Nginx page is not displayed, port 80 on the virtual machine may not be open. You can open it with firewall-cmd --add-port=80/tcp (append the --permanent option to make it permanent; otherwise the port will be closed again after a restart). Then use firewall-cmd --list-ports (or firewall-cmd --list-all) to view all open ports; if port 80 is listed, it was opened successfully. If not, try firewall-cmd --reload to reload the firewall. You should now be able to access the page.

Common commands

You must enter the Nginx sbin directory before using the Nginx commands: /usr/local/nginx/sbin
Check the version number: ./nginx -v
Stop Nginx: ./nginx -s stop
Reopen the log files: ./nginx -s reopen
Start Nginx: ./nginx
Check the Nginx processes: ps -ef | grep nginx
Reload the Nginx configuration file: ./nginx -s reload

Nginx configuration file

Location: /usr/local/nginx/conf/nginx.conf
Composition:

  • The global block runs from the beginning of the configuration file to the events block and mainly sets directives that affect the overall operation of the Nginx server. For example, worker_processes 1; is the key directive for the server's concurrent processing: the larger the value, the more concurrency can be supported, though it is constrained by the available hardware and software.
  • The events block contains directives that mainly affect the network connections between the Nginx server and its users. For example, worker_connections 1024; means that the maximum number of connections supported by each worker process is 1024.
    This part has a great impact on Nginx's performance and should be configured flexibly in practice.
  • The http block is the most frequently configured part of the Nginx server. Most functions such as proxying, caching, and log definition, as well as the configuration of third-party modules, live here. The http block can in turn contain an http global block and server blocks; a minimal skeleton is sketched below.
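A minimal skeleton of nginx.conf showing the three blocks (the values follow the stock defaults):

worker_processes  1;           # global block

events {
    worker_connections  1024;  # events block
}

http {                         # http block: http global block + server blocks
    include  mime.types;

    server {
        listen       80;
        server_name  localhost;

        location / {
            root   html;
            index  index.html index.htm;
        }
    }
}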

Configuration example

Reverse proxy

Requirement: access the Tomcat homepage running on the Linux system by entering the domain name www.123.com in a Windows browser.

  1. Configure the mapping between the domain name and the IP address (192.168.17.129, the IP address of the Linux system here) in the hosts file under C:\Windows\System32\drivers\etc. With the mapping configured directly in this file, accessing the domain name resolves straight to the IP address without going through the DNS system; because DNS is bypassed, this also avoids DNS-level domain hijacking.
  2. Configure the reverse proxy (request forwarding) in the Nginx configuration file
    [Figure: reverse proxy configuration in nginx.conf, sketched below]
    After this, requests to 192.168.17.129 are forwarded to http://127.0.0.1:8080. The location block specifies which request paths are matched.
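A sketch of that server block, assuming the Linux host's IP is 192.168.17.129 and Tomcat listens on its default port 8080:

server {
    listen       80;
    server_name  192.168.17.129;

    location / {
        # forward every request to the local Tomcat
        proxy_pass http://127.0.0.1:8080;
    }
}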

Note that the port to be accessed (80) must be open in the firewall on the Nginx host:
firewall-cmd --add-port=80/tcp --permanent
firewall-cmd --reload (reload after configuring to make the change take effect)
If you are told that the firewall is not running, start it with systemctl start firewalld.
View the open ports: firewall-cmd --list-all
Remove an open port: firewall-cmd --remove-port=80/tcp --permanent

The syntax of location: location [ = | ~ | ~* | ^~ ] uri { ... }
=: used before a uri that contains no regular expression; the request string must match the uri exactly. On a successful match, the search stops and the request is processed immediately.
~: indicates that the uri contains a regular expression, matched case-sensitively.
~*: indicates that the uri contains a regular expression, matched case-insensitively.
^~: used before a uri that contains no regular expression; Nginx takes the location whose uri has the longest prefix match with the request string and uses it immediately, without going on to match the regular-expression locations in the block. If a uri contains a regular expression, it must be marked with ~ or ~*.
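A hypothetical illustration of the four modifiers side by side (the paths, suffixes, and back-end port are made up for the example):

location = / {
    # exact match for "/" only
    root  html;
    index index.html;
}
location ^~ /static/ {
    # prefix match; the regex locations below are not consulted
    root /data;
}
location ~ \.(jsp|do)$ {
    # regex, case-sensitive
    proxy_pass http://127.0.0.1:8080;
}
location ~* \.(gif|jpg|png)$ {
    # regex, case-insensitive
    root /data/image;
}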

Load balancing

[Figure: load-balancing configuration in nginx.conf, sketched below]
Here, both servers (ports) have a weight of 1, meaning each is equally likely to receive a request.
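A sketch of that configuration, assuming two Tomcat instances on ports 8080 and 8081 of the same host; the upstream name myserver is an arbitrary choice:

http {
    upstream myserver {
        server 192.168.17.129:8080 weight=1;
        server 192.168.17.129:8081 weight=1;
    }

    server {
        listen       80;
        server_name  192.168.17.129;

        location / {
            # requests are distributed across the upstream group
            proxy_pass http://myserver;
        }
    }
}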

Allocation strategy

  1. Round robin (the default strategy): requests are assigned to the back-end servers one by one in chronological order. If a back-end server goes down, it is automatically removed from rotation.

  2. weight: the weight of a server, 1 by default; the higher the weight, the more requests the server is assigned. The weight sets the polling probability in proportion to the desired share of traffic, and is used when the back-end servers' performance is uneven.

  3. ip_hash: each request is assigned according to the hash of the client's IP address, so that each visitor consistently reaches the same back-end server, which can solve the session problem
    (configuration sketched after this list)

  4. fair (third party): requests are assigned according to the back-end servers' response times, with shorter response times served first
    (configuration sketched after this list)
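Sketches of the ip_hash and fair strategies; these are alternatives for the same upstream block, with hosts and ports assumed as above (fair additionally requires the third-party nginx-upstream-fair module to be compiled in):

# ip_hash: pin each client IP to one back-end server
upstream myserver {
    ip_hash;
    server 192.168.17.129:8080;
    server 192.168.17.129:8081;
}

# fair: prefer the back end with the shortest response time
upstream myserver {
    server 192.168.17.129:8080;
    server 192.168.17.129:8081;
    fair;
}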

Static and dynamic separation

[Figure: dynamic/static separation configuration in nginx.conf, sketched below]
With this in place, the static files in the data folder of the Linux system can be accessed by file name through http://192.168.17.128/www/ and http://192.168.17.128/image/
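A sketch of that configuration, assuming the static files live under /data/www and /data/image on the Linux host:

server {
    listen       80;
    server_name  192.168.17.128;

    location /www/ {
        root   /data/;
        index  index.html index.htm;
    }

    location /image/ {
        root   /data/;
        autoindex on;   # list the directory contents in the browser
    }
}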

Source: blog.csdn.net/Pacifica_/article/details/120712396