Interview questions: How does Nginx achieve high concurrency? What are the common optimizations?

Interview questions:

How does Nginx achieve high concurrency? Why doesn't Nginx use multithreading? What are the common Nginx optimizations? What are the possible causes of a 502 error?

Interviewer's perspective

The goal is mainly to see whether the candidate is familiar with the basic principles of Nginx. Most operations staff know Nginx to some degree, but very few truly understand its principles. Only by understanding the principles can you do real optimization; otherwise you can only copy configurations blindly, and when a problem occurs you have no idea where to start.

Someone with only a superficial understanding can set up a web server and host a site; a junior ops engineer can configure HTTPS and set up a reverse proxy; a mid-level ops engineer can define an upstream and write regex-based conditionals; a veteran can do performance tuning, write ACLs, and may even modify the source code (the author admits to having no ability to modify the source code).

Question analysis

1. How does Nginx achieve high concurrency?

Asynchronous, non-blocking I/O built on epoll, plus a great deal of low-level code optimization.

If a server handles requests with one process per request, then the number of concurrent requests equals the number of processes. Under normal circumstances, many of those processes would sit idle, waiting.

Nginx instead uses a model of one master process and multiple worker processes.

  • The master process is mainly responsible for management: it reads the configuration, binds the listening sockets, and forks the worker processes that actually accept and handle requests.
  • The master process also monitors worker state and respawns crashed workers, ensuring high reliability.
  • The number of worker processes is usually set to match the number of CPU cores. The number of requests a worker can handle concurrently is limited mainly by memory, so each worker can process many requests at the same time.

Nginx's asynchronous, non-blocking mode of operation puts the waiting time to use: whenever a process would otherwise block, it yields and serves other connections, so a small number of processes can solve a large number of concurrency problems.

Each incoming request is handled by a worker process, but not from beginning to end in one uninterrupted pass. Where might it block? For example, when the worker forwards the request to the upstream (back-end) server and waits for the response. Rather than blocking, the worker is smart: after sending the request it registers an event, effectively saying "notify me when the upstream returns, and I will continue from there," and then moves on. If another request arrives in the meantime, it is handled in the same way. Once the upstream server responds, the event fires, the worker picks the request back up, and processing continues.
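The process model described above corresponds to a few core directives; a minimal sketch (directive values are illustrative, not tuned recommendations):

```nginx
worker_processes auto;   # one worker per CPU core

events {
    use epoll;           # the Linux event mechanism behind the non-blocking model
    multi_accept on;     # let a worker accept multiple new connections at once
}
```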

2. Why doesn't Nginx use multithreading?

Apache: creates a process or thread per connection, and each thread or process is allocated CPU and memory (threads are cheaper than processes, so the worker MPM supports higher concurrency than prefork). High concurrency can therefore exhaust server resources.

Nginx: handles requests with single-threaded, asynchronous, non-blocking worker processes (the administrator configures how many workers the master process spawns) using epoll. It does not allocate CPU and memory per request, which saves a great deal of resources and also avoids a large amount of CPU context switching. That is what lets Nginx support higher concurrency.

3. What are the common Nginx configuration optimizations?

(1) Adjust worker_processes

This is the number of worker processes Nginx spawns; the best practice is to run one worker process per CPU core.

To find the number of CPU cores on your system, run:

$ grep processor /proc/cpuinfo | wc -l
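On modern Linux systems, `nproc` from GNU coreutils reports the same count more directly:

```shell
# Both print the number of CPU cores on a Linux host
# (nproc counts available processing units, which usually matches cpuinfo)
nproc
grep -c ^processor /proc/cpuinfo
```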

(2) Maximize worker_connections

This is the number of clients each Nginx worker can serve simultaneously. Combined with worker_processes, it determines the maximum number of clients that can be served:

max clients = worker_processes × worker_connections

To realize the full potential of Nginx, worker_connections should be set to the maximum number of connections a core can reasonably sustain; 1024 is the common starting point.
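As a worked example of the formula above (values are illustrative): with 4 worker processes and 1024 connections each, Nginx can hold roughly 4 × 1024 = 4096 concurrent client connections:

```nginx
worker_processes 4;            # e.g. a 4-core machine

events {
    worker_connections 1024;   # per worker; total ≈ 4 × 1024 = 4096 clients
}
```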

(3) Enable gzip compression

Compressing responses shrinks the files sent to HTTP clients, reducing bandwidth and improving page load speed.

The recommended gzip configuration goes in the http block of nginx.conf.
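A minimal gzip sketch, assuming typical text-based content types (directive values are illustrative starting points, not tuned recommendations):

```nginx
http {
    gzip on;
    gzip_min_length 1024;   # skip tiny responses; compression overhead outweighs savings
    gzip_comp_level 5;      # 1-9; higher saves more bandwidth but costs more CPU
    gzip_types text/plain text/css text/xml application/json application/javascript application/xml;
}
```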

(4) Enable static file caching

Enabling static file caching reduces bandwidth and improves performance. You can add the following directive to have clients cache static files:

location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
    expires 365d;
}

(5) Adjust timeouts

keepalive connections reduce the CPU and network overhead of opening and closing connections. The following variables may need to be adjusted for optimal performance:
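A sketch of the timeout directives usually involved (values are illustrative, not recommendations):

```nginx
keepalive_timeout 65;       # how long an idle keepalive connection stays open
client_header_timeout 12;   # time allowed for the client to send request headers
client_body_timeout 12;     # time allowed for the client to send the request body
send_timeout 10;            # timeout between two successive writes to the client
```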

(6) Disable access_logs

Access logging records every request Nginx serves, consuming significant CPU resources and thereby reducing Nginx's performance.

To completely disable access logging:

access_log off; 

If you must keep access logs, enable access-log buffering:

access_log /var/log/nginx/access.log main buffer=16k;

4. What are the possible causes of a 502 error?

(1) The FastCGI process has not started

(2) The number of FastCGI worker processes is insufficient

(3) FastCGI execution time is too long

(4) The FastCGI buffer is insufficient

Like Apache, Nginx buffers responses from the back end, and the buffer parameters can be adjusted:

fastcgi_buffer_size 32k;  
fastcgi_buffers 8 32k; 

(5) The proxy buffer is insufficient

If you use proxying, adjust:

proxy_buffer_size 16k;  
proxy_buffers 4 16k; 

(6) PHP script execution time is too long

In php-fpm.conf, change

<value name="request_terminate_timeout">0s</value> 

changing 0s to an appropriate time limit.
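Note that the XML format above belongs to old php-fpm releases; modern php-fpm uses an ini-style pool configuration, where the equivalent setting (timeout value illustrative) is:

```ini
; e.g. in the pool file www.conf
request_terminate_timeout = 30s
```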



Origin blog.csdn.net/weixin_33826268/article/details/91399333