Detailed Nginx usage

As a high-performance web server, nginx has long been on many developers' wish lists. Its configuration syntax is easy to find online, so instead of repeating it, this post describes and analyzes several of nginx's most commonly used features; once you know them, everyday development and deployment should pose no problem. I encourage you to install nginx and test each configuration as you read, so that you truly master it.

1. Forward Proxy

Forward proxy: a server on the internal network actively requesting an external network service through an intermediary.

Looking at the concept alone, some readers may still be unsure what "forward" and "proxy" mean here, so let's unpack the two terms separately.

Forward: going in the same direction as the request, from the client outward.

Proxy: entrusting to someone else a task you cannot, or do not intend to, do yourself.

Putting the two together and returning to nginx: a forward proxy means the client cannot, or does not intend to, send a request to a server directly, so it entrusts an nginx proxy server to initiate the request on its behalf, obtain the result, and return it to the client.

As the figure below shows, the request the client wants to make to the target server is actually initiated by the proxy server on its behalf; once the result is obtained, it is returned to the client through the proxy server.

Here is an example: under normal circumstances in mainland China, many foreign sites cannot be reached directly, yet programmers without search engines such as Google would find writing code much harder. So circumvention ("over the wall") tools such as VPNs exist for people who need them. The principle of a VPN is broadly similar to a forward proxy: a computer that needs to reach the external network sends its request, via the VPN client on the machine, to a proxy server that can access foreign websites; the proxy server makes the request to the foreign site and then returns the result to the local machine.

Forward proxy configuration:

server {
    # DNS server used to resolve the requested host names
    resolver 114.114.114.114;
    # port the proxy listens on
    listen 8080;
    location / {
        # protocol and address of the target server (fixed form)
        proxy_pass http://$http_host$request_uri;
    }
}

In this way, hosts on the intranet can actively request external hosts through port 8080 of the proxy, for example under Linux:

curl --proxy proxy_server:8080 http://www.taobao.com/ 

The key configuration of forward proxy:

  1. resolver: the DNS server nginx uses to resolve the requested host names
  2. listen: the port on which the proxy accepts requests from intranet clients
  3. proxy_pass: the protocol and address of the target, rebuilt from the client's request
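To make `proxy_pass http://$http_host$request_uri;` concrete, here is a small Python sketch of rebuilding a target URL from the host and URI of the client's request. The function name is made up for illustration, and this only approximates the intent of the nginx variables, not nginx's exact request parsing:

```python
from urllib.parse import urlsplit

def rebuild_target(absolute_url):
    """Mimic the intent of proxy_pass http://$http_host$request_uri
    for a proxy-style request carrying an absolute URL."""
    parts = urlsplit(absolute_url)
    http_host = parts.netloc            # roughly nginx's $http_host
    request_uri = parts.path or "/"     # roughly nginx's $request_uri
    if parts.query:
        request_uri += "?" + parts.query
    return "http://" + http_host + request_uri

print(rebuild_target("http://www.taobao.com/item?id=42"))
# -> http://www.taobao.com/item?id=42
```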

2. Reverse proxy

Reverse proxy: the proxy server accepts requests from clients, forwards them to an upstream server on the intranet, and, after the upstream server has processed the request, returns the result to the client through nginx.

Having described the forward proxy above, the reverse proxy should now be easy to understand.

A reverse proxy uses nginx to accept all requests from the outside world uniformly, forwards them to intranet servers as needed, and returns the processed results to the external clients. To the outside, the proxy server acts as the web server itself; the client is not aware of the "upstream servers" at all.

For example: a server exposes only port 80 but hosts several projects: project A on port 8081, project B on 8082, project C on 8083, with the domain www.xxx.com pointing at the server. To reach project B you would have to visit www.xxx.com:8082, and likewise every other project's URL needs a port number, which is very unsightly. Instead, we hand port 80 to the nginx server, assign each project its own subdomain (say a.xxx.com for project A), and set the forwarding rules for each project in nginx. All requests are then accepted by the nginx server and dispatched to the right server according to the configuration. The process is shown in the figure below:

Reverse proxy configuration:

server {
    # listening port
    listen 80;
    # server name: the domain name the client uses to reach us
    server_name a.xxx.com;
    # nginx access log file
    access_log logs/nginx.access.log main;
    # nginx error log file
    error_log logs/nginx.error.log;
    root html;
    index index.html index.htm index.php;
    location / {
        # address of the upstream server
        proxy_pass http://localhost:8081;
        # do not rewrite Location headers sent back to the client
        proxy_redirect off;
        # pass the original host and client address on to the upstream
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # try the next upstream on errors and these status codes
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
        proxy_max_temp_file_size 0;
    }
}

In this way, you can access project A's website through a.xxx.com, without an ugly port number.

The key points of reverse proxy configuration are:

  1. server_name: the domain name the client uses when it sends the request
  2. proxy_pass: the address of the upstream server that actually processes the request (localhost plus the project's port number)
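The `proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;` line in the config above deserves a note: it appends the connecting client's address to any X-Forwarded-For value already in the request, so the upstream sees the whole chain of addresses. A minimal Python sketch of that append rule (the function name is invented for the demo):

```python
def proxy_add_x_forwarded_for(existing, remote_addr):
    """Mimic nginx's $proxy_add_x_forwarded_for: append the connecting
    client's address to an existing X-Forwarded-For header, or start
    a new one if the request had none."""
    if existing:
        return existing + ", " + remote_addr
    return remote_addr

# First hop: the client talks straight to our proxy.
print(proxy_add_x_forwarded_for(None, "203.0.113.7"))        # 203.0.113.7
# Second hop: the request already passed through one proxy.
print(proxy_add_x_forwarded_for("203.0.113.7", "10.0.0.2"))  # 203.0.113.7, 10.0.0.2
```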

3. Transparent proxy

Transparent proxy: also called a simple proxy. When the client sends a request to a server, the request first reaches the transparent proxy server, which then forwards it to the real origin server for processing; the client is unaware that any proxy server exists.

Here is an example: it works a bit like an interceptor. In companies with strict rules, the security department can inspect anything we send from our office computers, because outbound traffic first passes through a transparent proxy server on the network, which processes it before it goes out to the Internet. While browsing, we never notice that an interceptor is capturing our data and information.

Some say a transparent proxy resembles a reverse proxy, since in both cases the proxy accepts the request first and then forwards it to the origin server. There is an essential difference, though: with a transparent proxy the client does not perceive the proxy server at all, while with a reverse proxy the client perceives only one server, the proxy. One hides itself; the other hides the origin servers. A transparent proxy is actually closer to a forward proxy: the client initiates the request and a proxy server handles it. The difference is that with a forward proxy the client explicitly entrusts the proxy to make the request on its behalf, whereas with a transparent proxy the client's own request simply passes through the proxy on its way to the server, without the client being aware of it.

4. Load Balancing

Load balancing: distributing the requests a server receives across multiple servers according to configured rules. Load balancing is one application of the reverse proxy.

Most web projects people encounter start out on a single server, but as traffic grows, one server can no longer hold up. Servers must then be added to form a cluster that shares the traffic. Once those servers are in place, nginx plays the role of receiving and splitting the traffic: when a request reaches the nginx server, nginx distributes it to one of the servers according to the configured load rules, collects the result once that server has processed it, and returns it to the client. In this way, nginx's reverse proxy is used to achieve load balancing.

There are several modes for nginx to achieve load balancing:

1. Round robin: each request is assigned to a different back-end server in turn, in order of arrival. This is nginx's default mode, and its configuration is the simplest: just list the servers in an upstream block.

The following configuration means: the pool holds three servers, and as requests arrive, nginx hands them to the three servers in turn for processing.

upstream serverList {
    server 1.2.3.4;
    server 1.2.3.5;
    server 1.2.3.6;
}
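The round-robin behaviour of this pool can be sketched in a few lines of Python (an illustration of the scheduling idea, not nginx code):

```python
from itertools import cycle

servers = ["1.2.3.4", "1.2.3.5", "1.2.3.6"]
rr = cycle(servers)  # endless round-robin iterator over the pool

# Six incoming requests: each server is picked in turn, twice around.
picks = [next(rr) for _ in range(6)]
print(picks)
# -> ['1.2.3.4', '1.2.3.5', '1.2.3.6', '1.2.3.4', '1.2.3.5', '1.2.3.6']
```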

2. ip_hash: each request is assigned according to a hash of the client's IP address, so the same client IP always reaches the same back-end server. This pins requests from one IP to a fixed machine, which helps with session problems.

The following configuration means: the pool holds three servers; as requests arrive, nginx picks the server based on the ip_hash result, so requests from the same IP always land on the same server.

upstream serverList {
    ip_hash;
    server 1.2.3.4;
    server 1.2.3.5;
    server 1.2.3.6;
}
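The pinning effect of ip_hash can be sketched as hashing the client IP onto the server list. This is only an illustration: real ip_hash uses nginx's own hash over the first three octets of an IPv4 address, while the sketch below uses crc32 over the whole string:

```python
import zlib

servers = ["1.2.3.4", "1.2.3.5", "1.2.3.6"]

def pick_by_ip(client_ip):
    """Pick one fixed upstream server for a given client IP.
    Illustrative only: not nginx's actual hash function."""
    return servers[zlib.crc32(client_ip.encode()) % len(servers)]

# The same client IP always lands on the same server.
first = pick_by_ip("198.51.100.9")
assert all(pick_by_ip("198.51.100.9") == first for _ in range(10))
print(first)
```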

3. url_hash: requests are assigned according to a hash of the visited URL, so the same URL is always forwarded to the same back-end server. (The hash_method directive below comes from a third-party upstream-hash module; recent open-source nginx releases support hash $request_uri; natively.)

upstream serverList {
    server 1.2.3.4;
    server 1.2.3.5;
    server 1.2.3.6;
    hash $request_uri;
    hash_method crc32;
}

4. fair: requests are assigned according to back-end response time, with shorter response times served first. (This mode is provided by a third-party module.)

upstream serverList {
    server 1.2.3.4;
    server 1.2.3.5;
    server 1.2.3.6;
    fair;
}

In every mode, each server line can carry these parameters:

  1. down: the server temporarily does not participate in the load.
  2. weight: the larger the value, the larger the share of requests the server receives.
  3. max_fails: the number of failed requests allowed; the default is 1.
  4. fail_timeout: how long the server is paused after max_fails failures.
  5. backup: a backup machine, requested only when all non-backup machines are down or busy.

For example, the following configuration means: the pool holds three servers, and nginx distributes arriving requests in turn, proportionally to the weights. Out of 100 requests, roughly 30 are handled by 1.2.3.4, 50 by 1.2.3.5, and 20 by 1.2.3.6.

upstream serverList {
    server 1.2.3.4 weight=30;
    server 1.2.3.5 weight=50;
    server 1.2.3.6 weight=20;
}
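nginx implements weighted selection with a "smooth" weighted round-robin algorithm, which the following Python sketch reproduces. With the 30/50/20 weights above, 100 requests split exactly 30/50/20:

```python
def smooth_wrr(weights, n):
    """Smooth weighted round-robin: each turn every server's score
    grows by its weight, the highest score wins, and the winner's
    score is reduced by the total weight."""
    current = {s: 0 for s in weights}
    total = sum(weights.values())
    picks = []
    for _ in range(n):
        for s, w in weights.items():
            current[s] += w
        best = max(current, key=current.get)
        current[best] -= total
        picks.append(best)
    return picks

weights = {"1.2.3.4": 30, "1.2.3.5": 50, "1.2.3.6": 20}
picks = smooth_wrr(weights, 100)
# Exactly proportional over 100 requests.
print([picks.count(s) for s in weights])  # -> [30, 50, 20]
```

Smooth weighted round-robin also interleaves the picks (rather than sending 50 requests to one server in a row), which keeps load even over short windows.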

For example, the following configuration means: the pool holds three servers; 1.2.3.4 has a failure timeout of 60s, 1.2.3.5 temporarily does not participate in the load, and 1.2.3.6 serves only as a backup machine.

upstream serverList { 
    server 1.2.3.4 fail_timeout=60s; 
    server 1.2.3.5 down; 
    server 1.2.3.6 backup; 
} 
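The down/backup semantics of this pool can be sketched in Python (the dict layout and helper function are invented for the demo):

```python
def pick_server(pool):
    """Sketch of down/backup semantics: 'down' servers are skipped
    entirely, primaries are used while any is alive, and 'backup'
    servers are tried only when every primary is unavailable."""
    def usable(s):
        return not s.get("down") and s.get("alive", True)
    primaries = [s for s in pool if usable(s) and not s.get("backup")]
    if primaries:
        return primaries[0]["addr"]
    backups = [s for s in pool if usable(s) and s.get("backup")]
    return backups[0]["addr"] if backups else None

pool = [
    {"addr": "1.2.3.4"},                  # ordinary primary
    {"addr": "1.2.3.5", "down": True},    # temporarily out of the pool
    {"addr": "1.2.3.6", "backup": True},  # fallback only
]
print(pick_server(pool))   # -> 1.2.3.4
pool[0]["alive"] = False   # the primary fails
print(pick_server(pool))   # -> 1.2.3.6
```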

The following is a complete load-balancing example (only the key configuration is shown), where:

upstream: the load-balancing pool definition; serverList is its name, and the name is up to you

server_name: the domain name the client requests

proxy_pass: points at the named upstream pool, here serverList

upstream serverList {
    server 1.2.3.4 weight=30;
    server 1.2.3.5 down;
    server 1.2.3.6 backup;
}

server {
    listen 80;
    server_name www.xxx.com;
    root html;
    index index.html index.htm index.php;
    location / {
        proxy_pass http://serverList;
        proxy_redirect off;
        proxy_set_header Host $host;
    }
}

5. Static Server

Many projects today favor front-end/back-end separation: the front end and back end are deployed on separate servers, letting front-end and back-end developers work independently of each other. With this split, the front-end project does not need a Tomcat or Apache server environment to run, so nginx can serve it directly as a static server.

The static server configuration is as follows; the key settings are:

  1. root: the absolute path of the static project's root directory.
  2. server_name: the domain name of the static site.

server {
    listen 80;
    server_name www.xxx.com;
    client_max_body_size 1024M;
    location / {
        root /var/www/xxx_static;
        index index.html;
    }
}
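To see the static-server idea end to end without installing anything, Python's built-in http.server can serve a document root in the same spirit (the directory and file contents below are made up for the demo; this is a stand-in, not nginx):

```python
import http.server
import os
import tempfile
import threading
import urllib.request

# A throwaway document root with an index.html, like nginx's root + index.
root = tempfile.mkdtemp()
with open(os.path.join(root, "index.html"), "w") as f:
    f.write("<h1>hello static</h1>")

def handler(*args, **kwargs):
    return http.server.SimpleHTTPRequestHandler(*args, directory=root, **kwargs)

srv = http.server.ThreadingHTTPServer(("127.0.0.1", 0), handler)
threading.Thread(target=srv.serve_forever, daemon=True).start()

# Requesting "/" serves index.html from the document root.
url = "http://127.0.0.1:%d/" % srv.server_address[1]
body = urllib.request.urlopen(url).read().decode()
print(body)  # -> <h1>hello static</h1>
srv.shutdown()
```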

6. Installation of nginx

After learning all these configuration techniques, you should test each one to really make it stick, and for that you need nginx installed. Taking the Linux environment as an example, here is a brief outline of installing nginx with yum:

Installation dependencies:

# install the required dependencies (gcc, zlib, pcre, openssl) in one command
yum -y install gcc zlib zlib-devel pcre-devel openssl openssl-devel

Install nginx:

yum install nginx 

Check whether the installation is successful:

nginx -v 

Start/stop nginx:

/etc/init.d/nginx start 
/etc/init.d/nginx stop 

Edit the configuration file:

/etc/nginx/nginx.conf 

After these steps are completed, we can enter the nginx configuration file nginx.conf to configure and test the above knowledge points.

Origin blog.csdn.net/qq_30960647/article/details/107360077