Nginx: The nginx.conf Configuration File Explained in Detail

The Nginx HTTP server has a performance-oriented design; compared with Apache and lighttpd, it uses less memory and offers higher stability.

Below is a detailed, annotated Nginx configuration:

###### Nginx configuration file nginx.conf, annotated in detail ######

#Define the user and group that the Nginx worker processes run as
user www www;

#Number of nginx worker processes; recommended to equal the total number of CPU cores.
worker_processes 8;
 
# Global error log level: [ debug | info | notice | warn | error | crit ]
error_log /usr/local/nginx/logs/error.log info;

# Process pid file
pid /usr/local/nginx/logs/nginx.pid;

# Maximum number of file descriptors an nginx process may open.
# In theory this should be the system's open-file limit (ulimit -n) divided by the number of nginx processes, but nginx does not distribute requests that evenly, so it is best to keep this value consistent with ulimit -n.
# On Linux 2.6 kernels the open-file limit is 65535, so worker_rlimit_nofile should likewise be set to 65535.
# If you set it to 10240 instead, then once total concurrency reaches 30,000-40,000 some processes may exceed 10240 descriptors and a 502 error is returned.
worker_rlimit_nofile 65535;


events
{
    # Event model. Options: [ kqueue | rtsig | epoll | /dev/poll | select | poll ]
    # epoll is the high-performance network I/O model of Linux kernels 2.6 and later. On Linux, epoll is recommended; on FreeBSD, use the kqueue model.
    # Additional information:
    # Like Apache, nginx provides different event models for different operating systems:
    # A) Standard event models
    #    select and poll are the standard event models. If the current system has no more efficient method, nginx falls back to select or poll.
    # B) Efficient event models
    #    kqueue: used on FreeBSD 4.1+, OpenBSD 2.9+, NetBSD 2.0, and Mac OS X. On dual-processor Mac OS X systems, using kqueue may cause a kernel panic.
    #    epoll: used on systems with Linux kernel 2.6 and later.
    #    /dev/poll: used on Solaris 7 11/99+, HP/UX 11.22+ (eventport), IRIX 6.5.15+, and Tru64 UNIX 5.1A+.
    #    eventport: used on Solaris 10. To prevent kernel panics, the necessary security patches must be installed.
    use epoll;

    # Maximum number of connections per worker process (total maximum connections = worker_connections * worker_processes)
    # Adjust according to the hardware, together with the number of worker processes: as large as possible, but do not drive the CPU to 100%. In theory, the maximum number of connections each nginx server allows is worker_connections * worker_processes.
    worker_connections 65535;

    #keepalive timeout.
    keepalive_timeout 60;

    # Buffer size for the client request header. This can be set according to your system's page size: an ordinary request header does not exceed 1k, but since system page sizes are generally at least 1k, the page size is used here.
    # The page size can be obtained with the command getconf PAGESIZE:
    #[root@web001 ~]# getconf PAGESIZE
    #4096
    # client_header_buffer_size can exceed 4k, but the value must be an integer multiple of the system page size.
    client_header_buffer_size 4k;

    # Enables the open-file cache; off by default. max specifies the number of cache entries (recommended to match the number of open files); inactive specifies after how long without a request a cached file is removed.
    open_file_cache max=65535 inactive=60s;

    # How often to check the validity of the cached information.
    # Syntax: open_file_cache_valid time; Default: open_file_cache_valid 60s; Context: http, server, location. This directive specifies when to check the validity of items cached by open_file_cache.
    open_file_cache_valid 80s;

    # Minimum number of uses of a file within the inactive period of the open_file_cache directive. If that number is exceeded, the file descriptor stays open in the cache; as in the example above, a file not used even once within the inactive time is removed.
    # Syntax: open_file_cache_min_uses number; Default: open_file_cache_min_uses 1; Context: http, server, location. This directive specifies the minimum number of times a file must be used within a certain time frame; with a larger value, file descriptors stay open in the cache permanently.
    open_file_cache_min_uses 1;
    
    # Syntax: open_file_cache_errors on | off; Default: open_file_cache_errors off; Context: http, server, location. This directive specifies whether file-lookup errors are cached as well.
    open_file_cache_errors on;
}
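As a sanity check on the numbers above, the theoretical connection capacity and the descriptor budget can be worked out in a short shell sketch (the values 8 and 65535 are taken from this configuration; `ulimit -n` simply reports the current shell's limit, which the text recommends matching):

```shell
# Rough capacity math for the settings above (values assumed from this config).
worker_processes=8
worker_connections=65535

# Theoretical maximum simultaneous connections for the whole server:
max_clients=$((worker_processes * worker_connections))
echo "max clients: $max_clients"    # 8 * 65535 = 524280

# The per-process open-file limit the comments recommend matching:
echo "current open-file limit: $(ulimit -n)"
```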
 
 
 
# Set up the HTTP server, using its reverse-proxy features to provide load balancing
http
{
    # File extension and file type map
    include mime.types;

    # Default file type
    default_type application/octet-stream;

    # Default encoding
    #charset utf-8;

    # Server name hash table size
    # The hash tables that store server names are controlled by the server_names_hash_max_size and server_names_hash_bucket_size directives. The hash bucket size is always equal to the hash table size and is a multiple of the processor cache line size, which makes key lookups faster by reducing memory accesses. If the hash bucket size equals one processor cache line, a key lookup needs at most two memory accesses in the worst case: the first determines the address of the storage unit, the second finds the key within it. Therefore, when nginx reports that hash max size or hash bucket size needs to be increased, increase the former first.
    server_names_hash_bucket_size 128;

    # Buffer size for the client request header. This can be set according to your system's page size: an ordinary request header does not exceed 1k, but since system page sizes are generally at least 1k, the page size is used here. The page size can be obtained with getconf PAGESIZE.
    client_header_buffer_size 32k;

    # Buffers for large client request headers. By default nginx reads the header into the client_header_buffer_size buffer; if the header is too large, it uses large_client_header_buffers instead.
    large_client_header_buffers 4 64k;

    # Maximum request body (upload) size accepted by nginx
    client_max_body_size 8m;

    # Enable efficient file transfer. The sendfile directive specifies whether nginx should use the sendfile() system call (zero copy) to output files; set it to on for ordinary applications. For disk-I/O-heavy download applications, set it to off to balance disk and network I/O processing speed and reduce system load. Note: if images are not displayed properly, change this to off.
    sendfile on;

    # Enable directory listings; suitable for download servers, off by default.
    autoindex on;

    # Enables or disables the socket TCP_CORK option; only used together with sendfile
    tcp_nopush on;
     
    tcp_nodelay on;

    # Keep-alive connection timeout, in seconds
    keepalive_timeout 120;

    #FastCGI parameters, intended to improve site performance: reduce resource consumption and increase access speed. The parameters below are largely self-explanatory.
    fastcgi_connect_timeout 300;
    fastcgi_send_timeout 300;
    fastcgi_read_timeout 300;
    fastcgi_buffer_size 64k;
    fastcgi_buffers 4 64k;
    fastcgi_busy_buffers_size 128k;
    fastcgi_temp_file_write_size 128k;

    #gzip module settings
    gzip on; # enable gzip-compressed output
    gzip_min_length 1k; # minimum response size to compress
    gzip_buffers 4 16k; # compression buffers
    gzip_http_version 1.0; # HTTP version for compression (default 1.1; use 1.0 if the frontend is squid 2.5)
    gzip_comp_level 2; # compression level
    gzip_types text/plain application/x-javascript text/css application/xml; # MIME types to compress. text/html is always included by default, so it need not be listed; listing it is not an error but produces a warning.
    gzip_vary on;
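The gzip_min_length setting above exists because compressing very small responses can be counterproductive. A quick illustration with the command-line gzip tool (level 2 chosen to mirror gzip_comp_level; the exact byte counts vary by gzip version):

```shell
# A 1-byte payload grows when gzipped, because header overhead dominates...
small=$(printf 'a' | gzip -2 -c | wc -c | tr -d ' ')
# ...while a 4 KB repetitive payload shrinks dramatically.
big=$(head -c 4096 /dev/zero | tr '\0' 'a' | gzip -2 -c | wc -c | tr -d ' ')
echo "1-byte body gzips to ${small} bytes; 4096-byte body gzips to ${big} bytes"
```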

    # Needed when enabling per-IP connection limits
    #limit_zone crawler $binary_remote_addr 10m;



    # Load balancing configuration
    upstream piao.jd.com {
     
        #upstream load balancing: weight is a per-server weighting that can be set according to each machine's capacity. The higher the weight parameter, the greater the probability that requests are assigned to that server.
        server 192.168.80.121:80 weight=3;
        server 192.168.80.122:80 weight=2;
        server 192.168.80.123:80 weight=3;

        #nginx's upstream currently supports the following distribution methods:
        # 1. Round-robin (default)
        #    Each request is assigned to a different backend server in chronological order; if a backend server goes down, it is removed automatically.
        # 2. weight
        #    Round-robin with a probability proportional to the weight; used when backend server performance is uneven.
        #    E.g.:
        #upstream bakend {
        #    server 192.168.0.14 weight=10;
        #    server 192.168.0.15 weight=10;
        #}
        # 3. ip_hash
        #    Each request is assigned according to a hash of the client IP, so every visitor consistently reaches the same backend server; this solves the session problem.
        #    E.g.:
        #upstream bakend {
        #    ip_hash;
        #    server 192.168.0.14:88;
        #    server 192.168.0.15:80;
        #}
        # 4. fair (third-party)
        #    Requests are assigned according to the backend server's response time; shorter response times get priority.
        #upstream backend {
        #    server server1;
        #    server server2;
        #    fair;
        #}
        # 5. url_hash (third-party)
        #    Requests are assigned according to a hash of the requested URL, so each URL is directed to the same backend server; this is effective when the backend servers are caches.
        #    E.g.: add a hash statement in the upstream block; the server lines must not carry other parameters such as weight. hash_method selects the hash algorithm used.
        #upstream backend {
        #    server squid1:3128;
        #    server squid2:3128;
        #    hash $request_uri;
        #    hash_method crc32;
        #}

        #tips:
        #upstream bakend { # define the load-balancing devices' IPs and states
        #    ip_hash;
        #    server 127.0.0.1:9090 down;
        #    server 127.0.0.1:8080 weight=2;
        #    server 127.0.0.1:6060;
        #    server 127.0.0.1:7070 backup;
        #}
        # In the server block that needs load balancing, add: proxy_pass http://bakend/;

        # The state of each device can be set as:
        # 1. down: the server temporarily does not participate in the load
        # 2. weight: the larger the weight, the larger the share of the load
        # 3. max_fails: the number of failed requests allowed, 1 by default. When the maximum is exceeded, the error defined by the proxy_next_upstream module is returned
        # 4. fail_timeout: the pause time after max_fails failures
        # 5. backup: when all other non-backup machines are down or busy, requests go to the backup machine, so this machine carries the lightest load

        #nginx supports configuring several load-balancing groups at the same time, for use by different servers.
        #client_body_in_file_only: set to on to write the data the client POSTs to a file, useful for debugging
        #client_body_temp_path: sets the directory for those files; up to 3 levels of subdirectories can be used
        #location: matches URLs; a match can be redirected or handed to a new proxy / load-balancing rule
    }
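Tying the pieces together: an upstream block only takes effect once a server block points proxy_pass at it. A minimal sketch under that assumption (the listen port and header lines here are illustrative, not part of the original configuration):

```nginx
# Hypothetical minimal front end for the upstream defined above.
server {
    listen 80;
    server_name piao.jd.com;

    location / {
        # The name after http:// must match the upstream block's name.
        proxy_pass http://piao.jd.com;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```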
     
     
     
    # Virtual host configuration
    server
    {
        # Listening port
        listen 80;

        # Can have multiple domain names, separated by spaces
        server_name www.jd.com jd.com;
        index index.html index.htm index.php;
        root /data/www/jd;

        # Load balancing for ******
        location ~ .*\.(php|php5)?$
        {
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            include fastcgi.conf;
        }
         
        # Cache time settings for images
        location ~ .*\.(gif|jpg|jpeg|png|bmp|swf)$
        {
            expires 10d;
        }
         
        #JS and CSS cache time settings
        location ~ .*\.(js|css)?$
        {
            expires 1h;
        }
         
        # Log format settings
        # $remote_addr and $http_x_forwarded_for: record the client's IP address;
        # $remote_user: records the client user name;
        # $time_local: records the access time and time zone;
        # $request: records the request URL and HTTP protocol;
        # $status: records the request status; 200 means success;
        # $body_bytes_sent: records the size of the response body sent to the client;
        # $http_referer: records the page the visit came from;
        # $http_user_agent: records information about the client's browser;
        # A web server placed behind a reverse proxy cannot see the client's IP address: $remote_addr yields the IP of the reverse proxy server instead. The reverse proxy can add an X-Forwarded-For header to the forwarded request to record the original client's IP address and the addresses of the proxies it passed through.
        log_format access '$remote_addr - $remote_user [$time_local] "$request" '
        '$status $body_bytes_sent "$http_referer" '
        '"$http_user_agent" $http_x_forwarded_for';
         
        # Define the access logs for this virtual host
        access_log  /usr/local/nginx/logs/host.access.log  main;
        access_log  /usr/local/nginx/logs/host.access.404.log  log404;
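Note that the two access_log lines above reference formats named main and log404, which this file never defines; nginx refuses to start if a named log format is missing. A hypothetical sketch of what their definitions could look like (log_format is only valid at the http level):

```nginx
# Illustrative only -- these formats are assumed, not taken from the original.
http {
    log_format main   '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" "$http_user_agent"';
    log_format log404 '$remote_addr [$time_local] "$request" $status "$http_referer"';
}
```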
         
        # Enable reverse proxying for "/"
        location / {
            proxy_pass http://127.0.0.1:88;
            proxy_redirect off;
            proxy_set_header X-Real-IP $remote_addr;
             
            # The backend web server can obtain the user's real IP through X-Forwarded-For
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
             
            # Some optional reverse-proxy settings follow.
            proxy_set_header Host $host;

            # Maximum single-file size, in bytes, allowed in client requests
            client_max_body_size 10m;

            # Maximum number of bytes of the client request body buffered by the proxy.
            # If you set it to a reasonably high value such as 256k, then submitting any image under 256k works normally whether the browser is Firefox or IE. If you comment this directive out and use the default client_body_buffer_size (twice the operating-system page size, i.e. 8k or 16k), problems appear:
            # with either Firefox 4.0 or IE 8.0, submitting a relatively large image of about 200k returns a 500 Internal Server Error.
            client_body_buffer_size 128k;

            # Makes nginx intercept responses with an HTTP status code of 400 or higher.
            proxy_intercept_errors on;

            # Timeout for connecting to the backend server: initiating the handshake and waiting for a response
            # nginx's connection timeout with the backend server (proxy connect timeout)
            proxy_connect_timeout 90;

            # Backend server data transmission time (proxy send timeout)
            # The time within which the backend server must finish transmitting all of its data
            proxy_send_timeout 90;

            # Response time of the backend server after a successful connection (proxy read timeout)
            # How long to wait for the backend server to respond after a successful connection; in effect the request has entered the backend's queue and is waiting to be processed (or is being handled by the backend server)
            proxy_read_timeout 90;

            # Size of the buffer in which the proxy server (nginx) stores the user's header information
            # Sets the buffer size for the first part of the response read from the proxied server; it normally contains a small response header. By default this is the size of one buffer as specified by the proxy_buffers directive, but it can be set smaller.
            proxy_buffer_size 4k;

            #proxy_buffers: on average, pages of 32k or less are buffered here
            # Sets the number and size of the buffers used to read the response from the proxied server; the size also defaults to the page size, 4k or 8k depending on the operating system
            proxy_buffers 4 32k;

            # Buffer size under high load (proxy_buffers * 2)
            proxy_busy_buffers_size 64k;

            # Sets the size of data written to proxy_temp_path per write, to prevent a worker process from blocking too long while passing a file
            # Sets the threshold for the cache folder: responses larger than this value are streamed from the upstream server
            proxy_temp_file_write_size 64k;
        }
         
         
        # Set View Nginx state address
        location /NginxStatus {
            stub_status on;
            access_log on;
            auth_basic "NginxStatus";
            auth_basic_user_file confpasswd;
            #The password file's contents can be generated with the htpasswd tool that ships with Apache.
        }
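The auth_basic_user_file above expects an Apache-style htpasswd file. One way to produce an entry without the htpasswd tool is openssl's apr1 mode, sketched below; the user name, salt, and password are placeholders, and this assumes the openssl CLI is installed (htpasswd from apache2-utils works just as well):

```shell
# Generate one "user:hash" line in Apache MD5 (apr1) format.
hash=$(openssl passwd -apr1 -salt abcdefgh 's3cret')
printf 'admin:%s\n' "$hash"   # append this line to the auth_basic_user_file
```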
         
        # Local reverse-proxy configuration with static/dynamic separation
        # All JSP pages are handed to tomcat or resin for processing
        location ~ \.(jsp|jspx|do)?$ {
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://127.0.0.1:8080;
        }
         
        # All static files are read directly by nginx, without going through tomcat or resin
        location ~ .*\.(htm|html|gif|jpg|jpeg|png|bmp|swf|ico|rar|zip|txt|flv|mid|doc|ppt|pdf|xls|mp3|wma)$
        {
            expires 15d; 
        }
         
        location ~ .*\.(js|css)?$
        {
            expires 1h;
        }
    }
}
###### Nginx configuration file nginx.conf, annotated in detail ######

Article reprinted from: https://www.cnblogs.com/hunttown/p/5759959.html
