Nginx -- The Fundamentals You'll Need in Interviews

This article was first published on my WeChat public account: Program Ape Zhou Xiansen. I don't update this platform regularly, so if you like my articles, you're welcome to follow the public account.

I've recently been publishing a series of articles on Nginx, and the key knowledge points have now mostly been covered. This post wraps up the series by answering the Nginx questions that come up frequently in interviews. In fact, interviewers don't have many Nginx questions to draw on; what gets asked tends to be much the same. Let's take a look at the basic Nginx interview questions.

What is Nginx used for?

This is an entry-level question about what Nginx is for. As long as you cover the key points, an answer along these lines is fine: Nginx is a high-performance web server and reverse proxy server, as well as an IMAP/POP3/SMTP proxy server. It can not only do load balancing, but also rate-limit interfaces, cache content, and more.

Advantages of Nginx

  • Nginx uses the epoll and kqueue network I/O models and can support around 30,000 concurrent connections in real production environments.

  • Nginx has low memory consumption.

  • Nginx is cross-platform, and its configuration is relatively simple.

  • Nginx has built-in health checks: when load balancing, if one server goes down, subsequent requests are sent to another server for processing.

  • It supports gzip compression and can add response headers that enable local caching in the browser.

  • Nginx supports hot deployment: configuration changes can be applied smoothly without interrupting service.

  • Nginx accepts user requests asynchronously, reducing the pressure on the web servers behind it.

How does Nginx achieve high concurrency?

Friends who know even a little about Nginx will probably blurt out five words here: asynchronous and non-blocking. Nginx is indeed asynchronous and non-blocking: it uses the epoll model, and its low-level code is heavily optimized. As discussed earlier, Nginx uses a model of one master process and multiple worker processes. Whenever a request arrives, the master distributes it to a worker process according to a certain policy, and that worker handles it. The number of worker processes is generally set to match the number of CPU cores. Because the workers are asynchronous and non-blocking, a worker that is waiting on a request's callback can use that idle time to accept new requests; when the callback arrives, it goes back and continues processing the old request. This is how just a few worker processes achieve high concurrency.
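
To make this concrete, here is a minimal sketch of the relevant configuration; the values are illustrative assumptions, not tuned recommendations:

```nginx
# Minimal sketch of the master/worker and event model described above.
worker_processes auto;         # one worker process per CPU core

events {
    use epoll;                 # event-driven, non-blocking I/O on Linux
    worker_connections 10240;  # connections each worker can hold open
}
```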

Why doesn't Nginx use multithreading?

As we all know, creating a new thread requires allocating CPU time and memory to it. The same is true of creating a process, of course, but too many threads lead to excessive memory consumption. Nginx therefore uses a single thread to handle user requests asynchronously: it doesn't need to keep allocating CPU and memory for new threads, which lowers the server's memory consumption and makes Nginx perform more efficiently.

How does Nginx handle a request?

After Nginx starts, it first parses the configuration file; from it Nginx obtains the IP addresses and ports its virtual servers should listen on. In the master process it creates the sockets, sets the SO_REUSEADDR option on them, binds them to the corresponding IP addresses and ports, and listens. It then creates the worker child processes. Once a client completes the three-way handshake with Nginx, a connection is established. When a new connection arrives, the idle worker processes compete for it; the worker that wins gets the established socket connection, creates an ngx_connection_t structure for it, sets the read and write event handlers, and registers the read/write events so it can exchange data with the client. When the request finishes, or the client or Nginx actively closes the connection, the request is complete.
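
As an illustration, the hypothetical virtual server below is the kind of thing the master parses at startup; the listen directive supplies the port the socket is bound to before the workers start accepting connections:

```nginx
server {
    listen 80;                  # master creates, binds, and listens on this socket
    server_name example.com;

    location / {
        root /var/www/html;     # workers accept connections and serve requests
    }
}
```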

Why separate static and dynamic content?

In day-to-day development, requests from the front end for static resource files such as images don't need to go through the backend server, whereas API calls do require backend processing. So, to improve the response speed of resource files, we should adopt a static/dynamic separation strategy in the architecture: let Nginx serve static files directly, and forward requests for dynamic resources to the backend server for further processing.
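
A minimal sketch of that separation, assuming hypothetical paths and a backend listening on 127.0.0.1:8080:

```nginx
server {
    listen 80;

    # Static resources: served by Nginx directly from disk
    location /static/ {
        root /var/www;
        expires 7d;                        # let browsers cache static assets
    }

    # Dynamic requests: forwarded to the backend server
    location /api/ {
        proxy_pass http://127.0.0.1:8080;
    }
}
```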

What are the common Nginx load-balancing methods?

  • Round robin: this is Nginx's default load-balancing mode; each incoming request is assigned in turn to a different backend server. If a backend server goes down, Nginx's health check removes it from the rotation. Round robin's drawbacks are obvious, though: reliability is low and the load can be distributed unevenly, so it is better suited to static resources such as image servers. (A configuration sketch follows this list.)

  • weight: different weights can be assigned to different backend servers, changing the proportion of requests each one handles. You can give higher weights to backend servers with better performance.

  • ip_hash: requests are assigned to backend servers based on a hash of the client's IP address, so every request from a given user is handled by the same backend server. This solves the session problem.

  • fair: somewhat similar to round robin, but requests are allocated mainly according to the backend servers' response times, and servers with shorter response times are given priority (this method is provided by a third-party module).

  • url_hash: requests are distributed according to a hash of the request URL, so all requests for a given URL are handled by the same backend server; this is most effective when the backend servers use caching.
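
The sketch below shows how several of these strategies look in configuration; the server addresses are placeholders:

```nginx
upstream backend_rr {                 # round robin: the default, no extra directive
    server 192.168.0.1:8080;
    server 192.168.0.2:8080;
}

upstream backend_weighted {           # weight: higher weight, more requests
    server 192.168.0.1:8080 weight=3;
    server 192.168.0.2:8080 weight=1;
}

upstream backend_iphash {             # ip_hash: same client IP, same backend
    ip_hash;
    server 192.168.0.1:8080;
    server 192.168.0.2:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://backend_rr; # point at whichever upstream fits
    }
}
```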

How do you handle unsynchronized sessions?

Actually, a solution was already mentioned above: use the ip_hash load-balancing method. The requesting IP address is run through a hash algorithm, so if a user has already been served by one backend server, later visits are automatically routed to the same server. Alternatively, you can cache user sessions in Redis, which also solves the session-synchronization problem.

Common Nginx optimization settings

  1. Adjust worker_processes, which specifies how many worker processes Nginx should create. As just mentioned, this is generally set to match the number of CPU cores.

  2. Adjust worker_connections, which sets how many clients each worker process can serve simultaneously. Multiplied by worker_processes, it gives the maximum number of clients Nginx can serve at once.

  3. Enable gzip compression. Compressing files before sending them reduces the bandwidth consumed by HTTP clients and can greatly improve page load speed.

  4. Enable caching. For requests for static resources, enabling caching can greatly improve performance. For more on Nginx caching, see the earlier article in this series: Nginx caching principles and mechanisms. (A configuration sketch for items 3 and 4 follows this list.)
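
Items 1 and 2 appear in the worker/events sketch earlier in this article; below is a minimal sketch of items 3 and 4, with illustrative paths and sizes:

```nginx
http {
    # 3. gzip compression
    gzip on;
    gzip_types text/plain text/css application/json application/javascript;

    # 4. a proxy cache for responses fetched from the backend
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app_cache:10m max_size=1g;

    server {
        listen 80;
        location / {
            proxy_cache app_cache;
            proxy_cache_valid 200 10m;        # cache successful responses for 10 minutes
            proxy_pass http://127.0.0.1:8080;
        }
    }
}
```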

Nginx as a forward proxy

A forward proxy is the proxy model everyone encounters most often, so what exactly is it? As we all know, Google cannot normally be accessed from inside China, but sometimes we need to reach it to research a technical problem. In that case we first find a proxy server that can access Google and send our request to it; the proxy server accesses Google on our behalf and returns the retrieved data to us. That whole process is a forward proxy. Its defining feature is that the client must know exactly which server it wants to reach; Google's servers only know which proxy server the request came from, not which specific client, so a forward proxy hides the real client's information.

The client must configure the forward proxy explicitly, and it needs to know the forward proxy server's IP address and port. In short, a forward proxy is a proxy for the client: a server sitting between the client and Google's servers. To get data from Google, the client sends a request naming the target (Google's servers) to the proxy server, which forwards the original request and returns the obtained data to the client. The roles of a forward proxy can be summarized as follows (a configuration sketch follows the list):

  • Access otherwise unreachable resources such as foreign sites, and cache content to speed up access
  • Authorize client access and perform authentication for getting online
  • Record user access (access management) and hide the client's information from external servers
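
For completeness, here is a rough sketch of a plain-HTTP forward proxy in Nginx. Treat it as an assumption-laden illustration: Nginx has no built-in CONNECT support, so proxying HTTPS this way would require third-party modules.

```nginx
server {
    listen 8080;
    resolver 8.8.8.8;                          # needed to resolve arbitrary hostnames

    location / {
        # Forward the request to whatever host the client asked for (HTTP only)
        proxy_pass http://$http_host$request_uri;
    }
}
```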

Nginx as a reverse proxy

Multiple clients send requests to the server; after receiving them, Nginx forwards them to different backend servers for business-logic processing according to certain rules, i.e., the load-balancing strategies just discussed. Here it is clear which client a request came from, but not which server will ultimately process it: Nginx is playing the role of a reverse proxy. Put another way, a reverse proxy is transparent to the outside; visitors don't know they are talking to a proxy. A reverse proxy is mainly used with distributed server cluster deployments, where the reverse proxy server hides the servers' information. Its two roles can be summarized as follows (a configuration sketch follows the list):

  • It keeps the internal network secure: usually the reverse proxy is what the public network address points to, while the web servers remain on the internal network

  • Load balancing: the reverse proxy server optimizes how load is spread across the website
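
A minimal reverse-proxy sketch with placeholder backend addresses; clients talk only to Nginx and never learn which backend served them:

```nginx
upstream app_servers {
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
}

server {
    listen 80;

    location / {
        proxy_pass http://app_servers;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;  # pass the real client IP to the backend
    }
}
```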

The difference between a forward proxy and a reverse proxy in Nginx

  1. A forward proxy hides the information of the client that originated the request;

  2. A reverse proxy hides the information of the server that processed the request.

Welcome to follow my personal WeChat public account: Program Ape Zhou Xiansen.
