Nginx interview questions in one place

1. What is Nginx?

Nginx is a lightweight, high-performance web server and reverse proxy for the HTTP, HTTPS, SMTP, POP3, and IMAP protocols. It implements very efficient reverse proxying and load balancing. A single instance can typically handle 20,000 to 30,000 concurrent connections, and official tests report support for up to 50,000 concurrent connections.

2. What are the advantages of Nginx?  

  • Cross-platform and easy to configure.
  • Non-blocking, highly concurrent: a single instance can handle 20,000 to 30,000 concurrent connections, and official tests report up to 50,000.
  • Small memory footprint: 10 Nginx processes take only about 150 MB of memory.
  • Inexpensive and open source.
  • High stability; the probability of downtime is very small.
  • Built-in health checks: if a backend server goes down, the health check detects it and requests are no longer sent to that server; those requests are resubmitted to other healthy nodes.
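The passive health checks mentioned above can be configured in open-source Nginx with the `max_fails` and `fail_timeout` parameters. A minimal sketch (the upstream name and backend addresses are placeholders):

```nginx
# A backend that fails 3 times within 30s is marked down for 30s;
# failed requests are retried on the next server in the pool.
upstream backend {
    server 10.0.0.1:8080 max_fails=3 fail_timeout=30s;
    server 10.0.0.2:8080 max_fails=3 fail_timeout=30s;
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;
        # Retry another node on errors, timeouts, and 5xx responses
        proxy_next_upstream error timeout http_500 http_502 http_503;
    }
}
```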

3. What are Nginx's application scenarios?

  • HTTP server. Nginx can provide HTTP service on its own and works well as a static web server.
  • Virtual hosting. Multiple websites can be hosted on a single server, such as the shared hosts used by personal websites.
  • Reverse proxy and load balancing. When traffic grows beyond what a single server can satisfy, a cluster of servers is needed, with Nginx acting as the reverse proxy. Nginx spreads the load evenly across the servers, so no server crashes from overload while another sits idle.
  • Security management can also be configured in Nginx. For example, Nginx can be used to build an API gateway that intercepts each interface request.
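The virtual hosting scenario can be sketched with two `server` blocks selected by the `Host` header (the domain names and paths here are placeholders):

```nginx
# Two virtual hosts on one machine, one IP and port
server {
    listen 80;
    server_name www.site-a.example;
    root /var/www/site-a;   # requests with Host: www.site-a.example land here
}

server {
    listen 80;
    server_name www.site-b.example;
    root /var/www/site-b;   # requests with Host: www.site-b.example land here
}
```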

4. How does Nginx handle requests? 

server {                      # start of the first server block: one independent virtual host
   listen       80;           # port to serve on, default 80
   server_name  localhost;    # domain / host name to serve
   location / {               # start of the first location block
     root   html;             # site root, relative to the Nginx installation directory
     index  index.html index.htm;  # default index files, multiple entries separated by spaces
   }                          # end of the first location block
}                             # end of the first server block
  • First, when Nginx starts, it parses the configuration file to obtain the ports and IP addresses to listen on, then initializes the listening sockets in the Nginx master process (create the socket, set options such as address reuse, bind to the specified IP and port, and call listen).
  • Then it forks multiple child processes. (fork lets an existing process create a new process; the new process it creates is called a child process.)
  • After that, the child (worker) processes compete to accept new connections. At this point a client can connect to Nginx. When the client completes the three-way handshake and establishes a connection, one worker accepts successfully, obtains the socket of the established connection, and creates Nginx's wrapper for the connection, the ngx_connection_t structure.
  • Next, the worker sets the read and write event handlers and registers read/write events to exchange data with the client.
  • Finally, either Nginx or the client actively closes the connection; at that point the connection's life cycle ends.
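The number of worker processes the master forks, and how many connections each worker's event loop handles, are controlled in nginx.conf. A minimal sketch with typical values:

```nginx
# One worker process per CPU core
worker_processes auto;

events {
    # Each worker multiplexes up to 1024 concurrent connections
    worker_connections 1024;
}
```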

5. What is a forward proxy? 

A forward proxy is a server that sits between the client and the origin server. To obtain content from the origin server, the client sends a request to the proxy and names the target (the origin server); the proxy then forwards the request to the origin server and returns the content it obtains to the client.

Only clients use a forward proxy. A forward proxy can be summed up in one sentence: the proxy stands in for the client. For example, this is how tools such as OpenVPN work.
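A minimal forward-proxy sketch is possible with stock Nginx for plain HTTP only (the DNS resolver address is illustrative; HTTPS CONNECT tunneling is not supported without third-party patches):

```nginx
# Plain-HTTP forward proxy: the client sends the full target URL,
# and Nginx resolves the target host and forwards the request to it.
server {
    listen 8080;
    resolver 8.8.8.8;                       # DNS server used to resolve target hosts
    location / {
        proxy_pass http://$host$request_uri;
    }
}
```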

6. What is a reverse proxy? 

A reverse proxy (Reverse Proxy) is a proxy server that accepts connection requests from the Internet, forwards each request to a server on the internal network, and returns the result obtained from that server to the client that made the request. To the outside world, the proxy server itself appears to be the server.

The reverse proxy can also be summed up in one sentence: the proxy stands in for the server.
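A minimal reverse-proxy sketch (the domain name and internal address are placeholders): clients talk only to Nginx, and the internal application server stays hidden behind it.

```nginx
server {
    listen 80;
    server_name www.example.com;

    location / {
        proxy_pass http://192.168.1.10:8080;       # internal backend, not reachable directly
        # Pass the original host and client address through to the backend
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```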

7. What are the advantages of a reverse proxy server? 

A reverse proxy server can hide the existence and characteristics of the origin server. It acts as an intermediate layer between the Internet and the web servers, which is valuable for security, especially when you use a web hosting service.

8. How is the Nginx load balancing algorithm implemented? What are the strategies? 

To avoid overwhelming any single server, load balancing spreads the pressure across a cluster of servers. When a user makes a request, it first reaches a forwarding server, which then distributes it to a less-loaded server in the cluster.

Nginx implements five load-balancing strategies:

1. Round robin (default)

Each request is assigned to the backend servers in turn, in chronological order. If a backend server goes down, it is automatically removed from the rotation.
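Round robin needs no extra directives; listing servers in an `upstream` block is enough (addresses are placeholders):

```nginx
# Default strategy: requests alternate across the listed servers
upstream backend {
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
}
```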

2. weight

The larger the weight value, the higher the probability of being assigned a request. This is mainly used when the backend servers have uneven performance, or to set different weights for master and slave servers so that host resources are used effectively.
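A weighted sketch (addresses and weights are illustrative):

```nginx
# The second server receives roughly twice as many requests as the first
upstream backend {
    server 10.0.0.1:8080 weight=1;
    server 10.0.0.2:8080 weight=2;
}
```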

3. ip_hash (IP binding)

Each request is allocated according to a hash of the client IP, so visitors from the same IP always reach the same backend server. This effectively solves the session-sharing problem of dynamic web pages.
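Enabling it takes one directive inside the `upstream` block (addresses are placeholders):

```nginx
# Requests from the same client IP always reach the same backend,
# so server-local sessions keep working without shared session storage
upstream backend {
    ip_hash;
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
}
```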

4. fair (third-party plug-in)

The third-party upstream_fair module must be installed.

Compared with weight and ip_hash, fair is a more intelligent load-balancing algorithm: it balances load according to page size and loading time, preferring backends with short response times.
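A sketch of the configuration, assuming the third-party nginx-upstream-fair module has been compiled in (addresses are placeholders):

```nginx
# Requires the third-party nginx-upstream-fair module
upstream backend {
    fair;                   # prefer the backend with the shortest response time
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
}
```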

5. url_hash (third-party plug-in)

The third-party Nginx hash package must be installed (recent Nginx versions provide a built-in hash directive with the same effect).

Requests are allocated according to a hash of the requested URL, so each URL is always directed to the same backend server, which further improves the hit rate of backend cache servers.
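With modern Nginx, the built-in `hash` directive achieves the url_hash effect without a third-party module (addresses are placeholders):

```nginx
# Each URL consistently maps to the same backend,
# improving the backend cache hit rate
upstream backend {
    hash $request_uri consistent;
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
}
```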

9. How does Nginx implement rate limiting?

Nginx rate limiting restricts the speed of user requests to prevent the server from being overwhelmed.

There are three types of rate limiting:

  • Limiting the normal access frequency (normal traffic)
  • Limiting the access frequency under bursts (burst traffic)
  • Limiting the number of concurrent connections

Nginx's rate limiting is based on the leaky bucket algorithm.

The three rate-limiting approaches are implemented as follows:

1. Limiting the normal access frequency (normal traffic):

Limit how frequently Nginx accepts requests from a single user.

Nginx uses the ngx_http_limit_req_module module to limit access frequency. The limiting principle is essentially the leaky bucket algorithm. In the nginx.conf configuration file, the limit_req_zone and limit_req directives can be used to limit the request-processing rate of a single IP.

Here 1r/s means one request per second, and 1r/m means one request per minute. If Nginx still has unprocessed requests from that client beyond the limit, it refuses to process the new request.
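A minimal sketch of such a limit_req configuration (the zone name, size, and rate are illustrative):

```nginx
# One shared-memory zone (10 MB) keyed by client IP,
# allowing at most 1 request per second per IP
limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s;

server {
    listen 80;
    location / {
        limit_req zone=one;   # requests beyond 1r/s are rejected
    }
}
```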

2. Limiting the access frequency under bursts (burst traffic):

Allow a user's short bursts of requests while still limiting the sustained rate at which Nginx accepts them.

The configuration above limits the access frequency to some extent, but there is a problem: when burst traffic arrives, the requests beyond the limit are rejected, and Nginx cannot serve the burst. How should this be handled?

Nginx provides the burst parameter, combined with the nodelay parameter, to solve the problem of traffic bursts. burst sets how many requests beyond the configured rate may additionally be accepted. We can add the burst and nodelay parameters to the previous example:
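A sketch of the same configuration with a burst allowance (values are illustrative):

```nginx
limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s;

server {
    listen 80;
    location / {
        # Up to 5 requests beyond the rate are accepted and served
        # immediately; anything beyond rate + burst is rejected
        limit_req zone=one burst=5 nodelay;
    }
}
```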

Why add burst=5 nodelay? With nodelay, Nginx immediately processes up to five of a user's requests beyond the configured rate, and rejects any further excess requests outright. Without nodelay, the burst requests would instead be queued and drained at the configured rate.

3. Limiting the number of concurrent connections

Nginx's ngx_http_limit_conn_module provides the ability to limit the number of concurrent connections, configured with the limit_conn_zone and limit_conn directives. Let's look at a simple example:
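A minimal sketch matching the limits described below (zone names and sizes are illustrative):

```nginx
# Two shared-memory zones: one keyed by client IP, one by server name
limit_conn_zone $binary_remote_addr zone=perip:10m;
limit_conn_zone $server_name zone=perserver:10m;

server {
    listen 80;
    limit_conn perip 10;       # at most 10 concurrent connections per IP
    limit_conn perserver 100;  # at most 100 for the whole virtual server
}
```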

 

This configures a single IP to hold at most 10 concurrent connections at a time, while the whole virtual server allows at most 100 concurrent connections. Note that a connection is counted against the virtual server only after its request headers have been processed. As mentioned above, Nginx's rate limiting is based on the leaky bucket algorithm; in practice, rate limiting is generally implemented with either the leaky bucket algorithm or the token bucket algorithm.


Origin blog.csdn.net/m0_62436868/article/details/129158096