Nginx configuration file optimization in detail

Part I, "Compile and install nginx", covered installing nginx; this part goes through the nginx configuration file and its optimization parameters.

To find the path of the nginx configuration file, you can use the configuration check command nginx -t:

[root@node4 ~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok    # configuration file path of the compiled-and-installed nginx
nginx: configuration file /etc/nginx/nginx.conf test is successful

nginx documentation: http://nginx.org/en/docs/http/ngx_http_core_module.html#server_tokens

The nginx configuration file has four parts:

  main: global settings, which affect all of the other parts
  server: virtual host settings, mainly used to specify a virtual host's domain name, IP and port
  location: URL matching, used to locate and proxy specific content
  upstream: upstream server cluster, the load-balancing cluster configuration

Of these four parts, server inherits from main and location inherits from server; the upstream block neither inherits from nor is inherited by the others.
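As a rough sketch of how the four parts nest (the upstream name, backend addresses and domain below are invented for illustration and are not part of the original post):

user nginx nginx;                        # main: global settings
worker_processes auto;

events {
    worker_connections 1024;
}

http {
    upstream backend_pool {              # upstream: a hypothetical load-balanced cluster
        server 192.0.2.10:8080;
        server 192.0.2.11:8080;
    }

    server {                             # server: a virtual host; inherits settings from main/http
        listen 80;
        server_name example.com;         # placeholder domain

        location / {                     # location: URL matching; inherits settings from server
            proxy_pass http://backend_pool;
        }
    }
}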

The nginx configuration file:

vim /etc/nginx/nginx.conf

user nginx nginx;          # nginx user and group; the default is nobody, and changing it to nginx is recommended
worker_processes auto;     # number of nginx worker processes; it is recommended to set this according to the number of CPU cores, usually the core count or a multiple of it (for example two quad-core CPUs would be 8). It can be set to auto and used together with worker_cpu_affinity auto
#worker_cpu_affinity auto;
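# As an aside (not in the original post): on an assumed 4-core machine the affinity can also
# be spelled out manually instead of auto, binding each worker to its own core:
#     worker_processes 4;
#     worker_cpu_affinity 0001 0010 0100 1000;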
worker_rlimit_nofile 65535;     # maximum number of file descriptors an nginx process may open. In theory this should be the maximum number of open files (ulimit -n) divided by the number of nginx processes, but nginx does not distribute requests evenly across workers, so if it is set to 10240 a process may exceed 10240 open files once total concurrency reaches 30,000-40,000. It is therefore usually kept consistent with the value of ulimit -n

# log settings; the error log levels are debug | info | notice | warn | error | crit
#access_log off;
error_log /var/log/nginx/error.log warn;
#error_log /var/log/nginx/error.log crit;
pid /var/run/nginx.pid;

# event model and maximum number of connections
events {
    use epoll;     # the I/O event model to use: [kqueue | rtsig | epoll | /dev/poll | select | poll]. nginx provides different event models for different operating systems; the standard models are select and poll, which nginx falls back to when the current system has nothing more efficient, while the efficient models include kqueue, epoll and so on
    worker_connections 20480;     # maximum number of connections allowed per worker process. The theoretical maximum number of connections for nginx is worker_processes × worker_connections (a worked example follows right after this block)
    #multi_accept on;    # if multi_accept is disabled, a worker process will accept one new connection at a time; otherwise, a worker process will accept all new connections at a time. The directive is ignored if the kqueue connection processing method is used, because kqueue reports the number of new connections waiting to be accepted
}
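# A rough worked example (not from the original post; assumes worker_processes resolves to 4
# on a 4-core machine):
#     theoretical maximum connections = worker_processes × worker_connections = 4 × 20480 = 81920
# When nginx is used as a reverse proxy, each client request also holds a connection to the
# backend, so the practical client ceiling is roughly half of that, about 40960.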

# http server settings
http {
    # mapping of file extensions to file types; include is a main-module directive that lets you split out and reference other configuration files, reducing the complexity of the configuration file
    include /etc/nginx/mime.types;
    default_type application/octet-stream;         # default file type
    server_tokens off;    # enable or disable emitting the nginx version number on error pages and in the Server response header; values are on | off | build, the default is on
    sendfile on;     # whether to use the sendfile system call to transfer files. The default is off. sendfile transfers data directly between two file descriptors (the whole operation stays in the kernel), avoiding copies between user-space and kernel buffers, so it is very efficient; this is known as zero copy
    tcp_nopush on;   # enable or disable the TCP_NOPUSH socket option on FreeBSD or the TCP_CORK socket option on Linux. The option is enabled only when sendfile is used. Enabling it allows sending the response header and the beginning of a file in one packet, and sending a file in full packets
    tcp_nodelay on;  # enable or disable the TCP_NODELAY option. It is enabled when a connection transitions into the keep-alive state, and is also enabled on SSL connections, for unbuffered proxying and for WebSocket proxying
    #charset utf-8;  # character set


    resolver 223.5.5.5 valid=100s;     # name servers used to resolve names of upstream servers into addresses; parameters are [valid=time] [ipv6=on|off] [status_zone=zone], where valid sets how long the DNS answer is cached. An address can be specified as a domain name or IP address, with an optional port (1.3.1, 1.2.2); if the port is not specified, port 53 is used. Name servers are queried in a round-robin fashion
    resolver_timeout 30s;  # name resolution timeout

    server_names_hash_bucket_size 128;     # hash table bucket size; the default depends on the processor. To quickly process static sets of data such as server names, values of the map directive, MIME types and names of request header strings, nginx uses hash tables. During startup and each reconfiguration nginx selects the smallest possible hash table size such that a bucket storing keys with identical hash values does not exceed the configured hash bucket size. The size of the table is expressed in buckets, and adjustment continues until the table size exceeds the hash max size parameter. Most hashes have corresponding directives for changing these parameters; for the server names hash they are server_names_hash_max_size and server_names_hash_bucket_size. The hash bucket size parameter is aligned to a multiple of the processor's cache line size, which speeds up key searches on modern processors by reducing the number of memory accesses: if hash bucket size equals one cache line size, the worst-case number of memory accesses during a key search is two, one to compute the bucket address and one during the key search inside the bucket. Accordingly, if nginx emits a message asking to increase either hash max size or hash bucket size, the first parameter should be increased first
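    # Note (not from the original post): if many or unusually long server_name values are defined,
    # the nginx docs suggest tuning the hash max size together with the bucket size, for example:
    #     server_names_hash_max_size 1024;
    #     server_names_hash_bucket_size 128;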
    client_header_buffer_size 32k;  # buffer size for reading the client request header; this can be set according to your system's page size. A request header normally does not exceed 1k, but since system pages are usually larger than 1k, it is set to the page size here
    large_client_header_buffers 4 512k;  # maximum number and size of buffers used for reading a large client request header. A request line cannot exceed the size of one buffer, or a 414 (Request-URI Too Large) error is returned to the client. A request header field cannot exceed the size of one buffer either, or a 400 (Bad Request) error is returned. Buffers are allocated only on demand; by default a buffer is 8k bytes. If the connection transitions into the keep-alive state after the request is processed, these buffers are released
    client_max_body_size 300m;      # maximum size of a file the client is allowed to upload
    client_body_buffer_size 512k;   # buffer size for the client request body

    keepalive_timeout 30;  # how long a keep-alive client connection stays open on the server side; 0 disables keep-alive client connections

    proxy_connect_timeout 180;  # in seconds; timeout for establishing a connection with the backend server, i.e. waiting for the handshake to complete
    proxy_read_timeout 180;     # after a successful connection, how long to wait for a response from the backend server; in effect the request is queued on the backend waiting to be processed (it can be regarded as the time the backend takes to process the request)
    proxy_send_timeout 180;     # timeout for transmitting the request to the backend server; note it applies between two successive write operations rather than to the whole transfer
    proxy_buffer_size 256k;             # buffer for the first part of the backend response, which usually contains the response header
    proxy_buffers 8 128k;               # number and size of buffers, per connection, used for reading the backend response
    proxy_busy_buffers_size 256k;       # limit on the buffers that may be busy sending the response to the client while it is not yet fully read
    proxy_temp_file_write_size 256k;    # amount of data written to a temporary file at a time when responses are buffered to disk
    proxy_max_temp_file_size 600m;      # maximum size of the temporary file used to buffer a backend response
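    # Example (not part of the original post): a minimal, hypothetical upstream/server/location
    # showing where the proxy_* timeouts and buffers above take effect; the names, addresses
    # and domain are made up.
    upstream app_backend {
        server 192.0.2.10:8080;
        server 192.0.2.11:8080;
    }

    server {
        listen 80;
        server_name example.com;

        location / {
            proxy_pass http://app_backend;   # proxied traffic governed by the proxy_* settings above
        }
    }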

I'll stop here for now; work came up.

 
