Several polling (scheduling) strategies for Nginx load balancing

To solve the problem that a single server cannot handle high concurrency, a cluster uses multiple servers to provide the service. In such a cluster, Nginx is commonly used to forward client requests to the backend servers.

Nginx load balancing can improve the throughput of a site and relieve the pressure on any single server.

I. What is Nginx

Nginx is an open-source, high-performance, reliable HTTP middleware and proxy service.

1. IO multiplexing with epoll

How to understand this? Here is an analogy.
Three teachers, A, B, and C, face the same problem: helping a class of students finish their in-class assignment.
Teacher A starts from the first row and helps the students one by one, in order. He wastes a lot of time, because some students have not finished their work when he arrives, so he has to come back again and again; efficiency is extremely low.
Teacher B is a ninja. Seeing that teacher A's method does not work, he uses a shadow-clone technique to create several copies of himself and help several students at the same time; before the last question is answered, teacher B has run out of energy and collapses from exhaustion. Teacher C is smarter: he tells the students to raise their hands when they finish their work, and only walks over to the students who have raised their hands. By letting the students take the initiative to speak up, he handles the "concurrent" requests one at a time. Teacher C is Nginx.

2. Lightweight

  • Fewer functional modules - Nginx keeps only the modules required for HTTP; other functionality is added afterwards through plug-in modules
  • Modular code - well suited to secondary development, for example Alibaba's Tengine

3. CPU affinity

Binding Nginx worker processes to CPU cores lets each worker process always run on a fixed CPU, which reduces the cache misses caused by switching between CPUs and thereby improves performance.
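For example, on a 4-core machine (the core count here is an assumption) the binding can be written like this:

    # One worker process per core; each bitmask pins one worker to one CPU core
    worker_processes 4;
    worker_cpu_affinity 0001 0010 0100 1000;
    # Alternatively, worker_processes auto; simply matches the core count without pinning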

II. Nginx installation and startup

 

Windows installation; download from http://nginx.org/en/download.html

 

Use cmd to navigate to the directory where Nginx is installed.

Start: start nginx.exe

Quick stop: nginx.exe -s stop

Reload the configuration: nginx.exe -s reload

Graceful (complete) stop: nginx.exe -s quit
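Put together, the sequence looks like this (the install path C:\nginx below is an assumption; use the directory the archive was actually unpacked into):

    cd C:\nginx
    start nginx.exe
    rem check that the master and worker processes are running
    tasklist /fi "imagename eq nginx.exe"
    rem pick up configuration changes without interrupting service
    nginx.exe -s reload
    rem shut down gracefully when finished
    nginx.exe -s quit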

III. The five strategies provided by Nginx

  • Round robin (the default: each request is assigned to a different backend server in order of arrival; if a backend server goes down, it is removed automatically)
  • Weight (the larger the weight, the higher the probability that a server receives a request; used when the performance of the backend servers is uneven)
  • ip_hash (each request is assigned according to the hash of the client IP, so each visitor always reaches the same backend server; this can solve the session problem - a configuration sketch follows the parameter notes below)
  • fair (third party): requests are allocated according to the response time of the backend servers, and servers with shorter response times are served first.
    upstream backend {
        server server1.linuxany.com;
        server server2.linuxany.com;
        fair;
    }
  • url_hash (third party): requests are distributed according to the hash of the requested URL, so the same URL always goes to the same backend server; this is useful when the backends are caches.
    upstream backend {
        server squid1:3128;
        server squid2:3128;
        hash $request_uri;
        hash_method crc32;
    }
    A combined example, using the per-server parameters explained below:

    upstream web {
        server 192.168.1.116:80 down;
        server 192.168.1.118:80 weight=2;
        server 192.168.1.113:80;
        server 192.168.1.112:80 backup;
    }

    Detailed configuration:

        down: the server temporarily does not take part in the load balancing.
        weight: defaults to 1; the larger the weight, the larger the share of the load.
        backup: requests are sent to the backup machine only when all the other non-backup servers are down or busy, so this machine carries the lightest load.
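    As mentioned in the ip_hash item above, a minimal ip_hash sketch looks like this (the pool name web_iphash is chosen for illustration, and the addresses are reused from the example above):

    upstream web_iphash {
        # The hash of the client IP decides which backend handles the request,
        # so each visitor keeps hitting the same server and sessions are preserved
        ip_hash;
        server 192.168.1.116:80;
        server 192.168.1.118:80;
    }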

    The configuration file nginx.conf:
    # Define the user and group the nginx worker processes run as
    user www www;

    # Number of nginx worker processes; usually set to the number of CPU cores, or auto to detect it automatically
    worker_processes auto;

    # Global error log and its level: [debug | info | notice | warn | error | crit]
    error_log logs/error.log;
    #error_log logs/error.log warn;
    #error_log logs/error.log info;
    #error_log logs/error.log debug;
    #error_log logs/error.log notice;
    #error_log logs/error.log crit;

    # File that stores the PID of the daemon process
    pid logs/nginx.pid;

    # The events block configures how nginx handles connections
    events {
        # Maximum number of connections per worker process (total connections = worker_connections * number of worker processes)
        worker_connections 1024;
        # After being notified of a new connection, accept as many pending connections as possible
        multi_accept on;
        # Event-multiplexing method used to serve clients
        use epoll;
    }
     
     
    # The http block controls all of nginx's core HTTP features
    http {

        # Map of file extensions to MIME types
        include mime.types;

        # Default MIME type
        default_type application/octet-stream;

        # Show or hide the nginx version number on error pages and in response headers
        server_tokens on;

        # Buffers for large client request headers
        large_client_header_buffers 4 64k;

        # Maximum size of a client request body (i.e. of an uploaded file)
        client_max_body_size 1024m;

        # Buffer size for reading client request bodies
        client_body_buffer_size 2048k;

        # Enable sendfile for efficient file transfer (disk IO optimization)
        sendfile on;

        # Enable gzip compression of responses
        gzip on;

        # Compress responses to proxied requests; "any" compresses them for all requests
        gzip_proxied any;

        # Compression level 1-9; 9 is the slowest but gives the highest compression ratio
        gzip_comp_level 9;

        # MIME types to compress
        gzip_types text/plain text/css text/xml text/javascript application/json application/x-javascript application/xml application/xml+rss;

        # Upstream server pool (list each backend as IP:port; the pool name is arbitrary and is referenced by proxy_pass below)
        upstream web {
            server 192.168.2.6:8085 weight=1;   # server 1; the larger the weight, the more requests it receives
            server 192.168.2.6:8086 weight=5;   # server 2
        }
        
        # Virtual host that receives client requests and distributes them across the pool
        server {
            listen 80;
            # Multiple domain names are separated by spaces, e.g. baidu.com baidu2.com baidu3.com
            server_name 192.168.2.6;

            # Default index pages
            index index.html index.php;

            location / {
                # Reverse-proxy every request to the upstream pool defined above; the pool is named web, so the target is http://web
                proxy_pass http://web;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
            }
        }

        # The upstream pool does not have to be written in nginx.conf itself; it can be kept in a separate .conf file and pulled in like this:
        include /etc/nginx/conf.d/*.conf;
    }
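    If the pool were moved out of nginx.conf as the comment above suggests, the separate file might look like this (the filename is hypothetical):

    # /etc/nginx/conf.d/upstream_web.conf - defines the pool in place of the inline definition in nginx.conf
    upstream web {
        server 192.168.2.6:8085 weight=1;
        server 192.168.2.6:8086 weight=5;
    }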

     

    nginx supports configuring several load-balanced pools at the same time, so that the servers that need them can each use their own pool.
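    A sketch of what that can look like, with two pools each serving its own virtual host (the pool names, domain names, and addresses here are illustrative assumptions):

    upstream app_pool {
        server 192.168.2.10:8080;
        server 192.168.2.11:8080;
    }
    upstream static_pool {
        server 192.168.2.20:80;
        server 192.168.2.21:80;
    }

    server {
        listen 80;
        server_name app.example.com;
        location / { proxy_pass http://app_pool; }
    }
    server {
        listen 80;
        server_name static.example.com;
        location / { proxy_pass http://static_pool; }
    }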

    client_body_in_file_only: set to on to write the data the client POSTs into a file, which is useful for debugging (a short sketch follows below)
    client_body_temp_path: sets the directory for those temporary files; up to three levels of sub-directories can be configured
    location: matches URLs; a matched request can be redirected or proxied to a new load-balanced pool
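    For example, a debugging sketch (the path /tmp/nginx_client_body is an assumption):

    # Keep each client request body in its own file for inspection
    client_body_in_file_only on;
    # Store the files here, spread over two levels of sub-directories (up to three are allowed)
    client_body_temp_path /tmp/nginx_client_body 1 2;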

 
