8 knowledge points to take you from getting started with Nginx to mastering it

Preface

In development there are many environments — development, test, pre-production, and so on — and Nginx is often used to configure proxies between them. But so far I only knew how to use it, so I wanted to sort out my Nginx knowledge carefully.

1. What is Nginx

Nginx (engine x) is a high-performance HTTP and reverse proxy web server.

Nginx is written in an event-driven manner, so it has very good performance and is also a very efficient reverse proxy and load-balancing server. In terms of performance, Nginx occupies few system resources, supports many concurrent connections, and achieves high access efficiency; in terms of functionality, it is an excellent proxy and load-balancing server; in terms of installation and configuration, it is easy to install and flexible to configure.

Nginx supports hot deployment: it starts extremely fast, and the software version or configuration can be upgraded without interrupting service, even after it has been running for months without a restart.

In microservice architectures, Nginx is used as a gateway by more and more projects, cooperating with Lua for rate limiting and circuit breaking.

A reverse proxy was mentioned above. So what is a reverse proxy?

Nginx forwards each incoming request to a different machine and port (or returns a result directly) according to the request's port, domain name, and URL, and then returns the response to the client.

In Java design patterns, the proxy pattern is defined as follows: a proxy object is provided for an object, and the proxy object controls references to the original object.

Reverse proxy: client --> proxy --> server (the proxy sits on the server side)

An example of a reverse proxy, using renting a house:

A (client) wants to rent a house, and B (proxy) rents a house to him.
In fact, C (server) is the landlord.
B (proxy) is the intermediary who rents C's house to A (client).

Throughout this process, A (client) does not know who the landlord of the house is; he may even think the house belongs to B (proxy).

Features of a reverse proxy

  • Nginx does not expose its own address; externally its address is the address of the service, such as www.baidu.com. To the outside world, it is the producer of the data.
  • Nginx knows exactly which server to fetch the data from (before forwarding the request, it has already determined which server to connect to).

If there is a reverse proxy, there should also be a forward proxy.

A so-called forward proxy is a proxy that follows the direction of the request: the proxy server is configured by you, serves you, and requests the target server's address on your behalf. The biggest characteristic of a forward proxy is that the client knows exactly which server it wants to access; the server only knows which proxy server the request came from, not which specific client. The forward proxy mode thus shields or hides the real client's information.

Forward proxy: client --> proxy --> server (the proxy sits on the client side)

A house-renting example for the forward proxy as well:

A (client) wants to rent C's (server's) house, but A (client) does not know C (server) and cannot rent it directly.
B (proxy) knows that C (server) has this house for rent, so A asks B (proxy) to help rent it.

In this process, C (server) does not know A (client), only B (proxy).
C (server) does not know that A (client) is renting the house; it only knows that the house is rented to B (proxy).

2. Nginx application scenarios

1. HTTP server. Nginx can independently provide HTTP service and act as a static web server.

2. Virtual hosting. Multiple websites can be hosted virtually on one server, for example the virtual hosts used by personal websites.

  • Port-based: different ports
  • Domain-based: different domain names

3. Reverse proxy and load balancing. When the number of visits to a website reaches a level that a single server cannot handle, a cluster of multiple servers is required, with nginx acting as the reverse proxy. Multiple servers then share the load evenly, so that no server sits idle while another is overloaded.

3. Install Nginx

blog.s135.com/nginx_php_v…
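The linked guide covers installation in detail; for reference, a typical source build looks roughly like the following (the version number is illustrative, and the install prefix matches the path used later in this article):

```
# Download and unpack the nginx source (version is illustrative)
wget http://nginx.org/download/nginx-1.24.0.tar.gz
tar -xzf nginx-1.24.0.tar.gz
cd nginx-1.24.0

# Configure the install prefix, then build and install
./configure --prefix=/usr/local/software/nginx
make && make install

# Start nginx from the install prefix
/usr/local/software/nginx/sbin/nginx
```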

4. Command

Dumping: data held in memory, CPU registers, I/O devices, and so on is dynamic (volatile), which means it is lost when the process finishes or an exception occurs. If you want the data as it was at a certain moment (perhaps to debug a bug or to collect information), you have to dump it into a static form (such as a file); otherwise you will never get it back.
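For reference, the common commands for controlling a running nginx (using the install path from the sections below) are:

```
/usr/local/software/nginx/sbin/nginx            # start
/usr/local/software/nginx/sbin/nginx -t         # test the configuration file
/usr/local/software/nginx/sbin/nginx -s reload  # re-read the configuration (hot reload)
/usr/local/software/nginx/sbin/nginx -s quit    # graceful shutdown
/usr/local/software/nginx/sbin/nginx -s stop    # immediate shutdown
/usr/local/software/nginx/sbin/nginx -v         # print the version
```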

5. Nginx configuration

The main configuration file of Nginx is: nginx.conf.


We can clearly divide the nginx.conf configuration file into three parts:

Global block: from the beginning of the configuration file to the events block. It holds directives that affect the overall operation of the nginx server, mainly including the user (and group) that runs the Nginx server, the number of worker processes allowed, the path where the process PID is stored, the log storage path and type, the inclusion of other configuration files, and so on.

For example, the worker_processes directive in the global block is the key setting for the Nginx server's concurrent processing: the larger the worker_processes value, the more concurrency can be supported, though it is constrained by hardware, software, and other factors.

**events block: **The directives here mainly affect the network connections between the Nginx server and users. Common settings include whether to serialize the accepting of network connections across multiple worker processes, whether a worker may accept multiple connections at the same time, which event-driven model to use for handling connection requests, the maximum number of connections each worker process can support simultaneously, and so on.

http block: The most frequently configured part of the Nginx server. Most functions such as proxying, caching, and log definition, as well as the configuration of third-party modules, live here.
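Putting the three parts together, a minimal nginx.conf skeleton (values are illustrative) looks roughly like this:

```
# ---- global block ----
worker_processes  1;           # number of worker processes

events {
    # ---- events block ----
    worker_connections  1024;  # max connections per worker process
}

http {
    # ---- http block ----
    server {
        listen       80;
        server_name  localhost;
        location / {
            root   html;
            index  index.html;
        }
    }
}
```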

6. Reverse proxy

The reverse proxy has been explained above, so let's configure one now. When I configure a reverse proxy at work, I add a server block inside http:
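A minimal sketch of such a server block, using the directives explained below (the domain name and upstream address are illustrative assumptions):

```
server {
    listen       80;
    server_name  dev.example.com;    # illustrative domain

    gzip on;                         # enable compressed transmission

    location / {
        # address of the proxied back-end server (illustrative)
        proxy_pass http://192.168.197.142:8080;
        # pass the original Host header and real client IP upstream
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```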

Then add a mapping in the local hosts file:
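For example (the IP is taken from the virtual-host example later in this article; the domain is illustrative):

```
192.168.197.142 dev.example.com
```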

Next, let's go over the directives used:

The server_name directive is mainly used to configure name-based virtual hosts.

The gzip directive controls whether compressed transmission is enabled.

The location directive is used to match URLs.

The proxy_pass directive sets the address of the proxied server.

proxy_set_header sets the header information (request headers) that the proxied server receives.

Basically, once we understand server_name, location, and proxy_pass, we can configure a reverse proxy.

7. Nginx management virtual host

I haven't touched this part at work, but while searching for information the name sounded impressive, so I deliberately researched it. When we want to host multiple websites virtually on one server, we can use virtual hosts.

Virtual hosting uses special software and hardware technology to divide one server host running on the Internet into multiple "virtual" hosts. Each virtual host can be an independent website with its own domain name and complete Internet server functions (WWW, FTP, Email, etc.). Virtual hosts on the same machine are completely independent of one another; from a visitor's point of view, each virtual host is exactly the same as a standalone host.

7.1 Virtual hosting based on domain name

1. Add the following code segment in the http braces:

server {
        # listen on port 80
        listen 80;

        # domain name to match: feng.com
        server_name feng.com;

        location / {
            # relative path (relative to the nginx root directory); an absolute path also works
            root    feng;

            # default page: index.html
            index index.html;
        }
    }

2. Switch to the installation directory: cd /usr/local/software/nginx

3. Create a directory: mkdir feng

4. Create a new index.html file: vi /usr/local/software/nginx/feng/index.html, with the following content:

<html>
        <head>
            <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
        </head>
        <body>
            <h2>枫</h2>
        </body>
    </html>

5. Re-read the configuration file:

/usr/local/software/nginx/sbin/nginx -s reload

or send a signal to the master process: kill -HUP <process id>

6. Configure the hosts file on the local Windows machine:

192.168.197.142 feng.com # Linux server IP address

7. Visit: http://feng.com:80/

7.2 Port-based virtual host configuration

server {
        listen  2022;
        server_name     feng.com;
        location / {
           root    /home;
           index index.html;
        }
    }

7.3 Virtual host configuration based on IP address

    server {
      listen  80;
      server_name  192.168.197.142;
      location / {
              root    ip;
              index index.html;
      }
    }

8. Load Balancing

What we hear most about using Nginx is load balancing, so what is load balancing?

**Load balancing: **As business volume grows on today's networks, access and data traffic increase rapidly, and the processing power and computing intensity required grow correspondingly, to the point where a single server simply cannot bear the load.

Out of this situation came a cheap, effective, and transparent way to expand the bandwidth of existing network equipment and servers, increase throughput, strengthen network data-processing capability, and improve network flexibility and availability: the technology called Load Balance.

There are several schemes for Nginx to achieve load balancing.

8.1 Round robin

Round robin is the default strategy: it distributes the client's web requests to the back-end servers in turn, in the order they appear in the Nginx configuration file.

upstream backserver {
    server 192.168.0.14;
    server 192.168.0.15;
}

8.2 weight

Weight-based load balancing is weighted round robin. With it, we can configure Nginx to distribute more requests to higher-spec back-end servers and relatively fewer to lower-spec ones.

upstream backserver {
    server 192.168.0.14 weight=3;
    server 192.168.0.15 weight=7;
}

The higher the weight, the greater the probability of being selected; in the example above the shares are 30% and 70% respectively.
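As a sanity check on those percentages, here is a small Python sketch of the smooth weighted round-robin selection that nginx uses for weighted upstreams (the server names are taken from the example above; the function name and structure are mine):

```python
from collections import Counter

def smooth_weighted_rr(servers, n):
    """Return the sequence of n picks for a dict of server -> weight."""
    current = {s: 0 for s in servers}
    total = sum(servers.values())
    picks = []
    for _ in range(n):
        # every round, each server's current weight grows by its configured weight
        for s, w in servers.items():
            current[s] += w
        # the server with the highest current weight is picked ...
        chosen = max(current, key=current.get)
        # ... and pays back the total weight, so the others catch up
        current[chosen] -= total
        picks.append(chosen)
    return picks

# 10 requests against the weights from the example above
picks = smooth_weighted_rr({"192.168.0.14": 3, "192.168.0.15": 7}, 10)
print(Counter(picks))  # 192.168.0.14 is picked 3 times, 192.168.0.15 is picked 7 times
```

Over any window of 10 requests the two servers receive exactly 3 and 7 of them, and the "smooth" variant interleaves the picks instead of sending 7 consecutive requests to one server.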

8.3 ip_hash

With the two load-balancing schemes above, consecutive web requests from the same client may be distributed to different back-end servers, which complicates anything involving sessions; a common remedy is database-backed session persistence. To avoid this, you can use a load-balancing scheme based on hashing the client IP address: consecutive web requests from the same client are then distributed to the same server.

upstream backserver {
    ip_hash;
    server 192.168.0.14:88;
    server 192.168.0.15:80;
}

8.4 fair

Requests are allocated according to the response time of the back-end servers; servers with shorter response times are preferred. (Note: fair is provided by a third-party module, not by stock nginx.)

upstream backserver {
    server server1;
    server server2;
    fair;
}

8.5 url_hash

Requests are distributed according to the hash of the visited URL, so that each URL is directed to the same back-end server; this is more effective when the back-end servers do caching. (The hash_method directive below comes from a third-party module; newer nginx versions support hash $request_uri; natively.)

upstream backserver {
    server squid1:3128;
    server squid2:3128;
    hash $request_uri;
    hash_method crc32;
}

Additional parameters for the servers in an upstream used for load balancing:

proxy_pass http://backserver/;
upstream backserver {
    ip_hash;
    server 127.0.0.1:9090 down;      # down: this server temporarily does not take part in the load
    server 127.0.0.1:8080 weight=2;  # weight defaults to 1; the larger the weight, the larger the share of load
    server 127.0.0.1:6060;
    server 127.0.0.1:7070 backup;    # when all non-backup machines are down or busy, requests go to the backup machine
}

max_fails: the number of allowed failed requests, 1 by default. When the maximum is exceeded, the error defined by the proxy_next_upstream directive is returned.

fail_timeout: the time to pause the server after max_fails failures.
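A sketch combining the two parameters (addresses reused from the examples above):

```
upstream backserver {
    # after 3 failures, take the server out of rotation for 30 seconds
    server 192.168.0.14 max_fails=3 fail_timeout=30s;
    server 192.168.0.15;
}
```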

Configuration example:

#user  nobody;

worker_processes  4;
events {
    # maximum number of concurrent connections per worker
    worker_connections  1024;
}
http {
    # list of candidate servers
    upstream myproject {
        # the ip_hash directive routes the same user to the same server
        ip_hash;
        server 125.219.42.4 fail_timeout=60s;
        server 172.31.2.183;
    }

    server {
        # listening port
        listen 80;
        # under the root path
        location / {
            # which upstream server list to proxy to
            proxy_pass http://myproject;
        }
    }
}

8.6 In-depth practice

The above are all load-balancing schemes. For concrete implementations, there are excellent write-ups on Zhihu.

9. Summary

Nginx is really powerful, and it is used more and more widely. Although I don't use it much at work, sorting out this knowledge has given me more ideas for project construction and optimization. A deep understanding may not be necessary; knowing it well enough to meet our daily needs means that, when we are building a project or solving a problem, it is available to us as a solution. Very nice.



Origin blog.csdn.net/QLCZ0809/article/details/112281343