Introducing the key components of Nginx

    This article introduces the functionality of Nginx itself, mostly without relying on third-party modules. It describes the common component features: reverse proxy, load balancing, HTTP server, and forward proxy.

Reverse Proxy:

    What is a reverse proxy? My most direct understanding: external requests are steered by Nginx to internal servers, with Nginx mediating request and response. Nginx receives an external connection request, forwards that request to an internal server, and returns the result to the requesting client; Nginx here is acting as a reverse proxy server. The purpose is that the real internal servers cannot be accessed directly from the external network. The prerequisite is that the proxy server is a machine that can accept connections from the external network while sitting in the same network environment as the real servers.

 

A simple reverse proxy configuration:

server {
  listen 80;
  server_name localhost;
  client_max_body_size 1024M;

  location / {
    proxy_pass http://localhost:8080;
    proxy_set_header Host $host:$server_port;
  }
}

After saving the configuration file, restart the Nginx service for it to take effect.
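The flow described above can be sketched in a few lines of Python: a toy "internal" backend that the client never reaches directly, and a toy reverse proxy in front of it that forwards the request and relays the response. This is only a minimal standard-library illustration of the pattern; the ports, hostnames, and response body are made up, and a real deployment would of course use Nginx itself.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Backend(BaseHTTPRequestHandler):
    """The 'real' internal server; clients never talk to it directly."""
    def do_GET(self):
        body = b"hello from backend"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):
        pass  # keep the demo quiet

class ReverseProxy(BaseHTTPRequestHandler):
    """Receives the external request, forwards it to the internal
    backend, and relays the backend's response to the client."""
    backend_port = None  # filled in once the backend is running
    def do_GET(self):
        upstream = f"http://127.0.0.1:{self.backend_port}{self.path}"
        with urllib.request.urlopen(upstream) as resp:
            body = resp.read()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):
        pass

def start(handler_cls):
    # Port 0 asks the OS for any free port.
    srv = HTTPServer(("127.0.0.1", 0), handler_cls)
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    return srv

backend = start(Backend)
ReverseProxy.backend_port = backend.server_address[1]
proxy = start(ReverseProxy)
```

A client that talks only to the proxy's port still gets the backend's answer, which is exactly the reverse-proxy relationship the article describes.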

 

Load Balancing:

    Nginx load balancing mainly targets HTTP, at layer seven (the application layer) of the seven-layer network model; HTTPS is also supported. Briefly, when there are two or more application servers, requests are distributed to the servers according to some rule. Load balancing is generally configured together with a reverse proxy: the reverse proxy forwards requests to the load-balanced upstream. (Current load-balancing technologies: at the hardware level there are F5 load balancers; at the network layer there is LVS (Linux Virtual Server); at the application layer there are Nginx, HAProxy, and so on.)

    Here's an overview of the commonly used Nginx load-balancing strategies: RR (round-robin, the default), weight, ip_hash, fair, and url_hash.

The RR (default) strategy:

 Each request is assigned to each back-end server in turn; if one of the back ends goes down, Nginx automatically takes it out of rotation.

A simple configuration:

upstream test {
  server 192.168.1.22:8080;
  server 192.168.1.23:8080;
}

server {
  listen 80;
  server_name localhost;
  client_max_body_size 1024M;

  location / {
    proxy_pass http://test;
    proxy_set_header Host $host:$server_port;
  }
}

Note: here I configured two servers, though in reality it was a single machine, just on different ports. When we visit http://localhost, requests are forwarded to http://localhost:8080 by default, because Nginx automatically determines each server's status: if a server is unreachable (it has gone down), requests will not be forwarded to it, so one dead server cannot break the whole service. Since RR is Nginx's default policy, no further settings are needed.
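The "automatic" detection mentioned here is passive: each upstream server entry can carry max_fails and fail_timeout parameters that control when Nginx considers a server down and for how long it stops sending requests there. A sketch (the thresholds below are illustrative values, not from the article):

```nginx
upstream test {
  # After 3 failed attempts within 30s, mark the server unavailable
  # for the next 30s, then probe it again with live traffic.
  server 192.168.1.22:8080 max_fails=3 fail_timeout=30s;
  server 192.168.1.23:8080 max_fails=3 fail_timeout=30s;
}
```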

 

Round-robin strategies:

  RR (default): upstream load-balances in round-robin fashion by default, assigning each request to a different back-end server in order of arrival; if a back-end server goes down, it is automatically removed. This approach is simple and low-cost, but its drawbacks are lower reliability and uneven load distribution.

  weight (weighted round-robin): each server is picked with probability proportional to its weight; use this when the back-end servers have uneven performance.

Simple configuration:

upstream test {
  server 192.168.1.22 weight=6;  # 60% of requests are allocated to this server
  server 192.168.1.23 weight=4;  # 40% of requests are allocated to this server
}
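Nginx's weighted round-robin is deterministic rather than random: it uses a "smooth" weighted round-robin that spreads picks out evenly over time instead of sending six requests in a row to the heavier server. A Python sketch of the idea (the function and server labels are illustrative; Nginx's real implementation is in C):

```python
def smooth_weighted_rr(weights, n):
    """Pick n servers using smooth weighted round-robin.

    weights: dict mapping server name -> integer weight.
    Each round, every server's running score grows by its weight;
    the highest-scoring server is picked and its score is reduced
    by the total weight, which interleaves picks smoothly.
    """
    current = {s: 0 for s in weights}
    total = sum(weights.values())
    picks = []
    for _ in range(n):
        for s, w in weights.items():
            current[s] += w
        best = max(current, key=current.get)
        current[best] -= total
        picks.append(best)
    return picks

# With weights 6 and 4, ten requests split exactly 6:4.
picks = smooth_weighted_rr({"192.168.1.22": 6, "192.168.1.23": 4}, 10)
```

Over any window of ten requests the 6:4 ratio holds exactly, which is the property the weight comments in the config describe.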

 

The ip_hash strategy:

Each request is assigned according to the hash of the visiting IP, so that each visitor always reaches the same back-end server; this can solve session problems. To configure it, just add "ip_hash;" inside the upstream block.

Simple configuration:

upstream test {
  ip_hash;
  server 192.168.1.22:8080;
  server 192.168.1.23:8080;
}
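The idea behind ip_hash can be sketched in Python: hash the client address and take it modulo the number of servers, so the same visitor always lands on the same back end and its session survives. This is an illustration, not Nginx's exact hash function; for IPv4, Nginx hashes only the first three octets of the address, which the sketch imitates.

```python
import zlib

servers = ["192.168.1.22:8080", "192.168.1.23:8080"]

def pick_server(client_ip):
    # Hash only the first three octets (as nginx's ip_hash does for
    # IPv4), so a whole /24 network maps to the same back end.
    key = ".".join(client_ip.split(".")[:3])
    return servers[zlib.crc32(key.encode()) % len(servers)]
```

Two clients from the same /24 network always get the same server, which is what makes the session stick.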

 

fair (third party):

    Requests are allocated according to the back-end servers' response times, with shorter response times given priority. It is similar in spirit to distribution by weight.

upstream test {
  fair;
  server 192.168.1.22:8080;
  server 192.168.1.23:8080;
}

 

url_hash (third party):

    Requests are distributed according to the hash of the accessed URL, so that each URL is directed to the same back-end server; this is effective when the back-end servers cache content. Add the hash statement in the upstream block; the server lines must not carry other parameters such as weight. hash_method specifies the hash algorithm to use.

upstream test {
  hash $request_uri;
  hash_method crc32;
  server 192.168.1.22:8080;
  server 192.168.1.23:8080;
}
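A quick sketch of why url_hash helps a per-server cache: the same URL always hashes to the same back end, so a cached copy of that URL is always reused. The crc32 below matches the hash_method in the config; the back-end addresses come from the config, but the routing function itself is illustrative.

```python
import zlib

backends = ["192.168.1.22:8080", "192.168.1.23:8080"]

def route(url):
    # crc32 of the URL, modulo the number of back ends: every request
    # for the same URL hits the same server, so its cache stays warm.
    return backends[zlib.crc32(url.encode()) % len(backends)]

first = route("/static/app.css")
second = route("/static/app.css")
```

The flip side of this scheme is that adding or removing a back end changes the modulus and remaps most URLs, invalidating caches; that trade-off is inherent to simple modulo hashing.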

 

HTTP server:

     Nginx itself is also a server for static resources: when you have only static resources to serve, you can use Nginx as the server. It can also be used to separate static and dynamic content.

Static configuration:

server {
  listen 80;
  server_name localhost;
  client_max_body_size 1024M;

  location / {
    root E:\wwwroot;
    index index.html;
  }
}

Save the configuration file and restart the Nginx service for it to take effect. Visiting http://localhost will then by default serve the index.html file from the wwwroot directory on the E drive.
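What Nginx does here, mapping a URL path onto a document root and serving index.html by default, can be imitated with Python's standard library. This is only a toy stand-in for trying the idea out: the docroot is a temporary directory rather than E:\wwwroot, and the page contents are made up.

```python
import tempfile
import threading
from functools import partial
from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer

# Create a throwaway docroot containing an index.html.
docroot = tempfile.mkdtemp()
with open(f"{docroot}/index.html", "w") as f:
    f.write("<h1>hello static</h1>")

# Serve the docroot. Like nginx's `root` plus `index index.html`,
# SimpleHTTPRequestHandler returns index.html for a request to "/".
handler = partial(SimpleHTTPRequestHandler, directory=docroot)
server = ThreadingHTTPServer(("127.0.0.1", 0), handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]
```

Requesting / on the chosen port returns the index.html from the docroot, mirroring the nginx behavior described above.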

Static and dynamic separation:

    Static/dynamic separation means using certain rules to distinguish a dynamic website's dynamic pages from its constant, rarely-changing resources. Once dynamic and static resources are split apart, we can cache the static resources according to their characteristics; this is the core idea of making a site static.

Simple configuration:

upstream test {
  server 192.168.1.22:8080;
  server 192.168.1.23:8080;
}

server {
  listen 80;
  server_name localhost;
  client_max_body_size 1024M;

  location / {
    root E:\wwwroot;
    index index.html;
  }

  # All static requests are handled by nginx itself, out of E:\wwwroot
  location ~ \.(gif|jpg|jpeg|png|bmp|swf|css|js)$ {
    root E:\wwwroot;
  }

  # All dynamic requests are proxied to tomcat
  location ~ \.(jsp|do)$ {
    proxy_pass http://test;
  }
}

This way we can put images, HTML, CSS and JS under the wwwroot directory, while Tomcat handles only jsp and do requests. For example, when the request's suffix is gif, Nginx will by default fetch the requested static file from wwwroot and return it. Of course, the static files could also live on a different server than Nginx and be wired up via a reverse proxy and load balancing as above. Once you understand the most basic flow, many configurations become very simple. Note also that what follows location is actually a regular expression, which makes it very flexible.
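The two regex location blocks above amount to routing requests by file extension. A Python sketch of that dispatch, using the same regular expressions as the config (the classify function is illustrative, not part of Nginx):

```python
import re

# Same patterns as the two regex location blocks in the config.
STATIC = re.compile(r"\.(gif|jpg|jpeg|png|bmp|swf|css|js)$")
DYNAMIC = re.compile(r"\.(jsp|do)$")

def classify(uri):
    """Decide who serves a request, as the location blocks do."""
    if STATIC.search(uri):
        return "nginx"    # served straight from the docroot
    if DYNAMIC.search(uri):
        return "tomcat"   # proxied to the upstream
    return "default"      # falls through to location /
```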

 

Forward proxy:

 When a client cannot access the internet directly, it can reach it through an intermediate server; this is a forward proxy. In a forward proxy, the client initiates a request for the internet, the proxy server forwards the request to the internet, and the proxy then feeds the result back to the client.

    If you picture the internet outside the LAN as a huge resource library, then a client on the LAN must go through a proxy server to access the internet, and that proxy service is called a forward proxy.

Simple configuration:

server {
  resolver 10.10.10.10;   # DNS server used to resolve $host
  resolver_timeout 3s;
  listen 80;
  access_log e:\wwwroot\proxy_access.log;
  error_log e:\wwwroot\proxy_error.log;

  location / {
    proxy_pass http://$host$request_uri;
  }
}

resolver configures the DNS server used by the forward proxy, and listen sets the forward proxy's port. Once configured, you can use the server's IP + port as the proxy address in a browser or other proxy plug-in.

Origin www.cnblogs.com/GDZM/p/11320591.html