Nginx load balancing configuration

Like HAProxy, nginx can act as a front-end load balancer that distributes incoming requests. If a single Tomcat service receives too much concurrent traffic, processing becomes very slow and new requests queue up; past a certain point they may return errors or be refused outright. Spreading requests across several back-end servers with a load balancer is an effective way to improve performance, and once a single server has been tuned as far as it will go, a load-balanced cluster is usually the next step. The configuration is simple; the process is as follows:

First you need an nginx server; it is already installed in my case. As an example, suppose there are three Tomcat servers at the following addresses:

192.168.1.23:8080

192.168.1.24:8080

192.168.1.25:8080

Nginx itself is installed on 192.168.1.23. If you only have a single test server, you can instead run several Tomcat instances on different ports of that one machine, which also improves throughput.
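In that single-machine case, the upstream block shown later would simply point at the local ports; a minimal sketch, assuming Tomcat instances were started on ports 8080, 8081, and 8082:

    upstream my_service {
        server 127.0.0.1:8080;    # first local Tomcat instance
        server 127.0.0.1:8081;    # second instance (assumed port)
        server 127.0.0.1:8082;    # third instance (assumed port)
    }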

Now look at the nginx configuration. Add the following to nginx.conf inside the http {} block but outside any server {} block:

upstream my_service {
    server 127.0.0.1:8080 weight=2;
    server 192.168.1.24:8080 weight=1;
    server 192.168.1.25:8080 weight=1;
}

Here my_service is the name of the cluster (upstream) and can be anything you like; each server line specifies a back-end server, and weight sets its weight: the larger the weight, the more likely a request is to be distributed to that server. The local machine is given a weight of 2 above, so it will receive a larger share of the incoming requests.
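For reference, a server entry can also take failure-handling parameters such as max_fails, fail_timeout, and backup; a small sketch with purely illustrative values:

    upstream my_service {
        server 127.0.0.1:8080    weight=2 max_fails=3 fail_timeout=30s;  # mark down for 30s after 3 failures
        server 192.168.1.24:8080 weight=1;
        server 192.168.1.25:8080 backup;    # used only when the other servers are unavailable
    }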

After that, add a location block inside the server {} block to intercept requests and forward them to the back-end cluster. The simplest configuration looks like this:

location / {
    proxy_pass http://my_service;
    proxy_redirect default;
}

After saving this configuration and reloading, all requests are forwarded to the specified cluster for processing. Of course, you can also intercept only specific requests, such as those ending in .do or .action, as needed; a sketch of that follows. The location block can also carry additional settings such as the client body size, buffer sizes, and timeouts; refer to the fuller configuration further below:
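For example, a minimal sketch that proxies only .do and .action requests to the cluster (these URL suffixes are just an assumption; adjust the pattern to your own application):

    location ~ \.(do|action)$ {
        proxy_pass http://my_service;
    }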

location / {
    proxy_pass http://my_service;
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    client_max_body_size 10m;
    client_body_buffer_size 128k;
    proxy_connect_timeout 90;
    proxy_send_timeout 90;
    proxy_read_timeout 90;
    proxy_buffer_size 4k;
    proxy_buffers 4 32k;
    proxy_busy_buffers_size 64k;
    proxy_temp_file_write_size 64k;
}

You can use the configuration above as a reference and adjust it according to your specific business needs.
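Putting the pieces together, a minimal sketch of how these fragments might nest inside nginx.conf (the listen port and server_name are assumptions for illustration):

    http {
        upstream my_service {
            server 127.0.0.1:8080 weight=2;
            server 192.168.1.24:8080 weight=1;
            server 192.168.1.25:8080 weight=1;
        }

        server {
            listen 80;                  # port nginx accepts requests on (assumed)
            server_name 192.168.1.23;   # this host (assumed)

            location / {
                proxy_pass http://my_service;
                proxy_redirect default;
            }
        }
    }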


Origin blog.csdn.net/sakura379/article/details/93469028