Turing College Road - VIP Java Architecture (6): Nginx Reverse Proxy Configuration and Optimization

  Picking up where the last post left off: we have already covered Nginx's basic configuration, so today I'll talk about some of its optimization settings (I don't really understand the advanced configuration, so I won't dare cover it). First, let's look at Nginx's benefits and at forward vs. reverse proxies.

Nginx benefits

1. High concurrency: official tests show Nginx supporting 50,000 concurrent connections, and real production environments can support 20,000 to 40,000. Its NIO model was covered in a previous post, so I won't repeat it here.

2. Low memory consumption: with Nginx + PHP (FastCGI) at 30,000 concurrent connections, 10 Nginx processes consume about 150MB of memory (15MB × 10) and 64 PHP-CGI processes consume about 1280MB (20MB × 64); adding the system's own usage, total consumption stays under 2GB.

If the server has relatively little memory, you can run only 25 PHP-CGI processes, in which case PHP-CGI consumes just 500MB in total.

3. Low cost: it is open-source software and costs nothing.

4. The configuration file is very simple and user-friendly; even a non-dedicated system administrator can understand it.

5. Supports Rewrite rules: HTTP requests can be distributed to different back-end server groups based on domain name and URL.

6. Built-in health checks: if a back-end web server behind the Nginx proxy goes down, front-end access is unaffected. (A detailed description and configuration follow later.)

7. Saves bandwidth: supports GZIP compression and can add browser-side cache headers.

8. High stability: as a reverse proxy, its probability of going down is minimal.

9. Supports hot deployment: Nginx starts extremely quickly and can run 24/7, even for months without a restart, and it can also upgrade the software version without interrupting service.

Having covered the benefits, let's talk about the difference between a forward proxy and a reverse proxy.

 Honestly, this is hard to explain, but a few points deserve attention: the Nginx configuration is the same in both cases, and it is not true that "Nginx being on a different machine from the destination server makes it a reverse proxy."

Let's illustrate the difference with a middleman example.

We go to the market to buy meat (the client). There is a meat processing plant (the real server). We, the buyers, know perfectly well that the market's meat dealer buys from the processing plant (the dealer is the proxy).

We buy the meat from the dealer. The key point is that only the dealer is known to the processing plant; the plant does not know who actually bought its meat. That is a forward proxy.

Forward proxies are typically used for web crawlers and VPNs.

Another meat example.

We go to the market to buy meat (the client). There is a meat processing plant (the real server). We buy the meat from the dealer (the dealer is the proxy). The key difference is that we do not know which processing plant the dealer bought the meat from.

This is a reverse proxy: all three parties may know of each other's existence, but the buyer and the real seller never deal with each other directly.

Reverse proxies are typically used for load balancing.

Let's look at how to set up load balancing.

In any case, the proxy is responsible for balancing. The common algorithms are round-robin (the default), weight (weighted round-robin), ip_hash, url_hash, and least connections.

Reverse proxy-related parameters:

  proxy_pass: # the back-end service name (address)

  proxy_redirect on/off: # whether to rewrite redirects from the back end

  proxy_set_header Host $host: # pass the Host header through to the back-end service

  proxy_set_header X-Forwarded-For $remote_addr: # pass the client address to the back-end service

  proxy_connect_timeout 90 # timeout for establishing a connection to the proxied server

  proxy_send_timeout 90 # maximum time for sending a request, in seconds by default

  proxy_read_timeout 90 # maximum time for reading a response, in seconds by default

For more parameters, refer to the official site at http://nginx.org/en/docs/http/ngx_http_proxy_module.html . It is extremely detailed.

Let's configure a simple reverse proxy.
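The original screenshot is missing, so here is a minimal sketch of what such a configuration might look like; the upstream name `myapp` and the ports 8001/8002 are placeholders, not values from the original post:

```nginx
http {
    # upstream defines the group of back-end servers
    upstream myapp {
        server 127.0.0.1:8001;
        server 127.0.0.1:8002;
    }

    server {
        listen 80;
        server_name example.com;

        location / {
            proxy_pass http://myapp;                       # forward to the upstream group
            proxy_set_header Host $host;                   # pass the original Host header
            proxy_set_header X-Forwarded-For $remote_addr; # pass the client address
            proxy_connect_timeout 90;
            proxy_send_timeout 90;
            proxy_read_timeout 90;
        }
    }
}
```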


Note that upstream and server blocks are at the same level; upstream is not placed inside the server block.

If you add the weight parameter inside the upstream, it becomes weighted addressing.
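A sketch of a weighted upstream (the ports are placeholders, standing in for the missing screenshot):

```nginx
upstream myapp {
    server 127.0.0.1:8001 weight=1;  # receives 1 of every 3 requests
    server 127.0.0.1:8002 weight=2;  # receives 2 of every 3 requests
}
```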


 With a weight of 2 on 8002, for every two requests sent to 8002, one is sent to 8001, and the cycle repeats.

If a server goes down, Nginx stops distributing requests to it; when the server recovers, Nginx automatically detects it and begins sending requests to it again.

Two settings matter here: fail_timeout and slow_start. fail_timeout specifies how long requests must keep failing before the server is considered down (and how long it stays marked down); slow_start specifies how long Nginx eases a recovered server back in before treating it as fully normal.

Example configuration:
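The original figure is gone; here is a sketch of how these two settings might be applied (ports and durations are illustrative placeholders):

```nginx
upstream myapp {
    # after 3 failures within 30s, the server is marked down for 30s
    server 127.0.0.1:8001 max_fails=3 fail_timeout=30s;
    # slow_start ramps a recovered server's traffic back up over 30s
    server 127.0.0.1:8002 slow_start=30s;
}
```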


 Note that the slow_start parameter cannot be used together with the hash, ip_hash, and random load-balancing methods; the official site says so, though in my own tests slow_start never seemed to take effect (it is a commercial NGINX Plus feature, which may explain why it does nothing in open-source Nginx).

   backup marks a backup server: as long as any primary server is in a normal state, no requests are distributed to the backup. Only when all primary servers are down are requests distributed to it.

   max_conns: the maximum number of connections to the server.
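A sketch combining these two options (ports and the limit of 100 are placeholders):

```nginx
upstream myapp {
    server 127.0.0.1:8001 max_conns=100;  # cap concurrent connections to this server
    server 127.0.0.1:8002;
    server 127.0.0.1:8003 backup;         # used only when all primaries are down
}
```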

There are many more tunable parameters; you can look them up on the official site. Below I focus on simple optimization configuration.

When we deal with heavy concurrency, our first thought should be caching: we should cache static files. The architecture I have in mind looks like this:


 This way we can distribute requests and split static JS and CSS files out so those static files do not eat into our bandwidth. Let's look at how Nginx caching is done.
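Since the original configuration screenshot is missing, here is a minimal sketch of an Nginx cache setup; the path `/data/nginx/cache`, the zone name `mycache`, and the back-end port are all placeholders:

```nginx
http {
    # cache path with a 2-level directory hierarchy; a 10MB key zone named "mycache";
    # entries unused for 1 day are evicted; total cache size capped at 10GB
    proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=mycache:10m
                     inactive=1d max_size=10g;

    server {
        listen 80;

        location ~* \.(js|css|png|jpg)$ {
            proxy_cache mycache;                     # must match the keys_zone name
            proxy_cache_key $host$uri$is_args$args;  # MD5-hashed to form the cache file name
            proxy_cache_valid 200 304 12h;           # cache 200/304 responses for 12 hours
            proxy_pass http://127.0.0.1:8001;        # placeholder back-end
        }
    }
}
```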


Start Nginx as an administrator, and you will see that subsequent visits leave files in the cache folder; each one is a cache file. Now let me explain what the configuration above means.

proxy_cache_path is declared in the http configuration block. The first parameter is the cache path; pay attention to permissions, because the account you start Nginx with must be able to write to it. levels sets the directory hierarchy under which cache files are saved: the file names are MD5 hashes, and with levels=1:2 the first-level directory is named after the last character of the hash and the second-level directory after the next two characters.

keys_zone=name:size sets the shared-memory zone, and the name must be consistent with the proxy_cache directive below. inactive is how long unaccessed entries are kept. max_size is the maximum size of the cache. Inside the location, proxy_cache names the zone and proxy_cache_key defines the key (the URL), which is MD5-hashed; if a request matches, the cache is served directly.

proxy_cache_valid 200 304 12h means responses with status codes 200 and 304 are cached for 12 hours.

Optimization extensions:

We said earlier that we can start multiple worker processes, each running on a single CPU; but by default their CPU access is completely random, and contention for CPUs can waste time. We can solve this by binding workers to CPUs.

The directive worker_cpu_affinity 0001 0010 0100 1000; lets us start four workers and uses bitmask placeholders to bind each one to its own CPU.
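Put together in the main configuration context, this binding might look like the following sketch (it assumes a 4-core machine):

```nginx
worker_processes 4;
# each bitmask binds one worker process to one CPU core
worker_cpu_affinity 0001 0010 0100 1000;
```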

Try to avoid configuring ip_hash: it can only distribute requests by external network address, and with many clients sharing one public IP behind routers and NAT, ip_hash can push a huge number of requests to a single server, defeating the purpose of balancing.

 

That's it for Nginx today. There is much deeper optimization, and the official site documents it in great detail; go give it a try. For the scope of Java development, I think this is about enough; dig deeper if you want thorough mastery.

 


Origin www.cnblogs.com/cxiaocai/p/11442756.html