Nginx Introduction (Part 1)

Nginx ("engine x") is a high-performance HTTP server and reverse proxy, and it also provides IMAP/POP3/SMTP proxy services.

Nginx's biggest strengths are its support for high concurrency and efficient load balancing; in highly concurrent scenarios it is a good alternative to the Apache server.

One, Nginx advantages and disadvantages

1. Advantages

(1) High concurrency: according to official figures, Nginx can support up to 50,000 concurrent connections.

(2) Low memory consumption: serving the same static files, Nginx uses less memory and fewer resources than Apache; it is altogether more lightweight.

(3) Simple and stable: configuration is simple, the basic configuration lives in a single conf file, and performance is stable enough for continuous 7×24 operation.

(4) Highly modular: Nginx has a highly modular design, and writing modules is relatively simple. It ships with filters such as gzip, byte ranges, chunked responses, and SSI, and supports SSL, TLS, and SNI.

(5) Support for rewrite rules: requests can be distributed to different groups of back-end servers according to the domain name, URL, and other attributes of the HTTP request.

(6) Low cost: Nginx can do load balancing under high concurrency, and it is free and open source; doing load balancing with hardware such as F5 is far more expensive.

(7) Multi-platform support: Nginx is written from scratch in C and has been ported to many architectures and operating systems, including Linux, FreeBSD, Solaris, Mac OS X, AIX, and Microsoft Windows. Because it is free and open source, it can be compiled and used on a wide range of systems.

2. Disadvantages

(1) Weaker dynamic-content handling: Nginx consumes little memory serving static files, but it is poor at processing dynamic pages. A common setup is therefore to use Nginx as a reverse proxy on the front end to absorb the load, with Apache handling the dynamic requests on the back end.

(2) Weaker rewrite: although Nginx supports rewrite, Apache's rewrite engine is more powerful than Nginx's.

Two, Nginx application scenarios

  • Forward Proxy
  • Reverse Proxy
  • HTTP server (including static and dynamic separation)
  • Load Balancing

1. Forward Proxy

  With a forward proxy, the client sends a request to the proxy and specifies the target (the origin server); the proxy then forwards the request to the origin server and returns the content it obtains back to the client.

resolver 114.114.114.114 8.8.8.8;
    server {
        resolver_timeout 5s;

        listen 81;

        access_log  e:/wwwroot/proxy.access.log;
        error_log   e:/wwwroot/proxy.error.log;

        location / {
            proxy_pass http://$host$request_uri;
        }
    }
Here `resolver` configures the DNS servers the forward proxy uses, and `listen` is the port the forward proxy listens on. Once this is configured, a client can set the server's IP plus this port as the proxy in a browser or other proxy plug-in. Nginx also supports hot reload: we can modify the configuration file and make it take effect without shutting Nginx down (for example, with `nginx -s reload`).
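As a quick client-side illustration (a sketch only; the proxy address 127.0.0.1:81 is an assumption matching the `listen 81` above), Python's standard library can route plain-HTTP requests through such a forward proxy:

```python
import urllib.request

# Assumed address of the Nginx forward proxy configured above (listen 81)
proxy = urllib.request.ProxyHandler({"http": "http://127.0.0.1:81"})
opener = urllib.request.build_opener(proxy)

# opener.open("http://example.com/") would now send the request to the
# proxy, which resolves the host via the configured resolver and forwards it.
```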

2. Reverse Proxy

  Reverse proxying is probably what Nginx is used for most. A reverse proxy accepts connection requests from the internet, forwards them to servers on the internal network, and returns the results obtained from those servers to the client that requested the connection; to the outside, the proxy itself appears to be the server. Put simply, the real server cannot be accessed directly from the external network, so a proxy server is needed: one that the external network can reach and that sits in the same network environment as the real server (it may even be the same machine, just on a different port).

 A simple reverse proxy configuration:

server {  
        listen       80;                                                        
        server_name  localhost;                                              
        client_max_body_size 1024M;

        location / {
            proxy_pass http://localhost:8080;
            proxy_set_header Host $host:$server_port;
        }
}

After saving the configuration file and starting Nginx, visiting localhost is equivalent to visiting localhost:8080.

3. HTTP server (including static and dynamic separation)

(1) Nginx is itself a static resource server. When you have only static resources, Nginx alone can serve them; the now-popular static/dynamic separation can also be implemented with Nginx.

server {
        listen       80;                                                        
        server_name  localhost;                                              
        client_max_body_size 1024M;


        location / {
               root   E:/wwwroot;
               index  index.html;
           }
    }

With this configuration, visiting http://localhost will by default serve index.html from the wwwroot directory on the E: drive. If a site consists only of static pages, it can be deployed this way.

(2) Static and dynamic separation

  Static and dynamic separation means splitting a dynamic website's resources by certain rules, distinguishing dynamic pages from resources that rarely change. Once the static resources are split out, we can cache them according to their characteristics; this is the core idea behind making a site static.

upstream test {
        server localhost:8080;
        server localhost:8081;
    }

    server {
        listen       80;
        server_name  localhost;

        location / {
            root   E:/wwwroot;
            index  index.html;
        }

        # all static requests are handled by nginx
        location ~ \.(gif|jpg|jpeg|png|bmp|swf|css|js)$ {
            root   E:/wwwroot;
        }

        # all dynamic requests are forwarded to tomcat for processing
        location ~ \.(jsp|do)$ {
            proxy_pass http://test;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root   e:/wwwroot;
        }
    }

This way we can put images, HTML, CSS, and JS under the wwwroot directory, while Tomcat handles only jsp and do requests. For example, when a request's suffix is .gif, Nginx by default returns the image file from wwwroot. Of course, the static files could also sit on a different server from Nginx, reached through a reverse proxy and load balancing configured the same way. Once you work out the basic flow, much of the configuration is simple; note also that what follows `location` is a regular expression, which makes it very flexible.
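The routing performed by the three `location` blocks can be sketched in Python (illustrative only; the return strings are descriptive placeholders, not Nginx behavior):

```python
import re

# Regexes mirroring the two `location ~` blocks in the configuration above
STATIC_RE = re.compile(r"\.(gif|jpg|jpeg|png|bmp|swf|css|js)$")
DYNAMIC_RE = re.compile(r"\.(jsp|do)$")

def route(uri):
    """Decide how a request URI would be handled under the config above."""
    if STATIC_RE.search(uri):
        return "static: served from E:/wwwroot"
    if DYNAMIC_RE.search(uri):
        return "dynamic: proxied to upstream test"
    return "default: index.html from E:/wwwroot"
```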

4. Load Balancing

  Load balancing is one of Nginx's most commonly used features. Load balancing means distributing work across multiple operating units, such as web servers, FTP servers, or business-critical application servers, so that they complete the task together. Simply put, when there are two or more servers, requests are distributed to the designated servers according to some rule; load balancing usually also requires configuring a reverse proxy, since requests reach the back ends through it. Nginx currently supports three built-in load balancing strategies, plus two popular third-party ones.
(1) RR (round robin, the default)

Each request is assigned to a different back-end server in chronological order; if a back-end server goes down, it is removed automatically.

upstream test {
        server localhost:8080;
        server localhost:8081;
    }
    server {
        listen       81;                                                        
        server_name  localhost;                                              
        client_max_body_size 1024M;

        location / {
            proxy_pass http://test;
            proxy_set_header Host $host:$server_port;
        }
    }
# the core load-balancing configuration
 upstream test {
        server localhost:8080;
        server localhost:8081;
    }

Two servers are configured here; in fact it is one machine with two different ports, and the 8081 server does not exist, i.e. it cannot be reached. Yet when we visit http://localhost there is no problem: requests fall through to http://localhost:8080, because Nginx automatically determines each server's state. If a server is unreachable (it has gone down), Nginx will not route to it, so one dead server does not affect use. And since RR is Nginx's default policy, no further settings are needed.
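The default RR behavior, including skipping a server that is down, can be sketched in Python (a simplified model; real Nginx failure detection involves the `max_fails` and `fail_timeout` server parameters):

```python
class RoundRobin:
    """A toy model of Nginx's default RR upstream selection."""

    def __init__(self, servers):
        self.servers = servers
        self.i = 0

    def pick(self, down=()):
        """Return the next server in order, skipping any marked as down."""
        for _ in range(len(self.servers)):
            server = self.servers[self.i]
            self.i = (self.i + 1) % len(self.servers)
            if server not in down:
                return server
        raise RuntimeError("no backend available")

pool = RoundRobin(["localhost:8080", "localhost:8081"])
# With both backends up, requests alternate; if 8081 is down, every
# request falls through to 8080, just as described above.
```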

(2) Weight

Requests are polled with a probability proportional to the configured weights; this suits cases where the back-end servers' performance is uneven. For example:

upstream test {
        server localhost:8080 weight=9;
        server localhost:8081 weight=1;
    }

So out of every 10 requests, usually only 1 will go to 8081, while 9 will go to 8080.
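The effect of `weight=` can be sketched in Python (a simplification; Nginx actually uses a smooth weighted round-robin algorithm that interleaves the servers more evenly within the cycle):

```python
def weighted_order(servers):
    """Expand (server, weight) pairs into one polling cycle.

    Captures the intent of `weight=`, not Nginx's exact interleaving.
    """
    return [server for server, weight in servers for _ in range(weight)]

cycle_order = weighted_order([("localhost:8080", 9), ("localhost:8081", 1)])
# 9 of every 10 slots in the cycle belong to 8080 and 1 to 8081.
```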

(3) ip_hash

  Both methods above share a problem: the next request may be distributed to a different server. When the application is not stateless (for example, it stores data in the session), this causes serious trouble: if login information is kept in the session, then landing on another server forces the user to log in again. So often we need each client to stick to one server, and for that we use ip_hash. With ip_hash, each request is assigned according to the hash of the visitor's IP, so each visitor always reaches the same back-end server, which solves the session problem.

upstream test {
        ip_hash;
        server localhost:8080;
        server localhost:8081;
    }
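A minimal Python sketch of the idea behind `ip_hash` (for illustration; real Nginx `ip_hash` hashes only the first three octets of an IPv4 address, while this sketch hashes the whole address string with crc32):

```python
import zlib

def ip_hash(client_ip, servers):
    """Map a client IP to a fixed backend by hashing it."""
    return servers[zlib.crc32(client_ip.encode()) % len(servers)]

servers = ["localhost:8080", "localhost:8081"]
# The same client IP always maps to the same backend, so session data
# stored on that backend keeps working across requests.
```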

(4) fair (third party)

Requests are distributed according to the back-end servers' response times, with shorter response times given priority.

upstream backend {
        fair;
        server localhost:8080;
        server localhost:8081;
    }

(5) url_hash (third party)

Requests are distributed by the hash of the accessed URL, so each URL is directed to the same back-end server; this is effective when the back-end servers cache. The hash statement is added in the upstream block; the server statements must not carry other parameters such as weight, and hash_method specifies the hash algorithm to use.

upstream backend {
        hash $request_uri;
        hash_method crc32;
        server localhost:8080;
        server localhost:8081;
    }

Each of the five load-balancing strategies above suits a different situation, so you can choose which one to use according to the actual scenario; note that fair and url_hash require installing third-party modules before they can be used.

Three, Apache, Nginx, Lighttpd comparison


Origin www.cnblogs.com/myitnews/p/11531624.html