Nginx installation and configuration in a Windows environment, explained in detail (Part 2)

Forward proxy: proxy software used on the client side, for example the tools used to "get over the wall" (reach blocked sites).

Reverse proxy: proxy software used on the server side, for example Nginx and IIS.

Nginx is cross-platform, simple to configure, non-blocking, and supports very high concurrency: roughly 50,000 (5W) concurrent connections under the epoll model.
The general processing flow for a request is: establish the connection - receive data - send data.
1. Blocking call: if the read event is not ready, the call enters the kernel and waits, the process is blocked, and the CPU is handed over to someone else.
2. Non-blocking call: the event status is checked constantly to decide whether the read/write operation can go ahead, so the polling overhead is relatively large.
3. Non-blocking, asynchronous event handling (select / poll / epoll / kqueue): when an event is not ready it is put into a not-ready queue, and the event loop only processes the events that are ready (see the sketch after this list).
4. Compared with multi-threading: no threads need to be created, there is no context switching, and higher concurrency does not bring extra overhead.
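
As a minimal sketch of point 3, the event mechanism is chosen in the events block of nginx.conf. Note that epoll is Linux-only; the Windows build of Nginx that this article targets falls back to select, so the use line below is shown purely for comparison:

    events {
        use epoll;    # Linux only; kqueue on BSD; the Windows build uses select instead
    }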

 

Nginx supports the following load-balancing scheduling algorithms:

  1. Round robin / weight (the default): incoming requests are assigned to the back-end servers one by one in order. If a back-end server goes down while in use, Nginx automatically removes it from the queue and request handling is not affected. In this mode you can also give each back-end server a weight value (weight) to adjust the proportion of requests it receives; the larger the weight, the higher the chance a request is assigned to that server. Weights are mainly tuned to match the different hardware of the back-end servers in the actual working environment (see the configuration sketch after this list).

  2. ip_hash: each request is assigned according to the hash of the client's originating IP, so a client with a fixed IP always reaches the same back-end server. To some extent this solves the session-sharing problem in a clustered deployment.

  3. fair: a smarter scheduling algorithm that balances requests dynamically according to each back-end server's response time: servers that respond quickly and process efficiently have a higher probability of receiving requests, while servers with long response times and low efficiency receive fewer. It combines the advantages of the first two algorithms. Note, however, that Nginx does not support the fair algorithm by default; to use it you must install the upstream_fair module.

  4. url_hash: requests are assigned according to the hash of the visited URL, so each URL is directed to a fixed back-end server, which improves back-end cache efficiency; this is useful when Nginx fronts static or cache servers. Again, Nginx does not support this scheduling algorithm by default; to use it you need to install Nginx's hash module.
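
A minimal sketch of how the first two algorithms are written in an upstream block; the back-end addresses below are hypothetical examples, not taken from the original post:

    # 1. weighted round robin (the default algorithm; weight is optional)
    upstream backend_weighted {
        server 192.168.1.10:8080 weight=2;   # receives roughly twice as many requests
        server 192.168.1.11:8080 weight=1;
    }

    # 2. ip_hash: the same client IP always reaches the same back-end server
    upstream backend_iphash {
        ip_hash;
        server 192.168.1.10:8080;
        server 192.168.1.11:8080;
    }

fair and url_hash are switched on in the same way, with a fair; or hash line inside the upstream block, once the module support described above is in place.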

One master process spawns one or more worker processes; memory consumption is small; a simple built-in health check means that if one back-end server hangs, access is not affected; stability is high; compression (gzip) is supported.
Master process: works in multi-process mode - reads the configuration file - starts new worker processes - tells the old worker processes to finish their current requests and then exit (this is how a reload works).
Worker processes: a lock is used so that at any given moment only one worker process accepts a new connection (see the events sketch below).

worker_processes  1;         # the number of worker processes
worker_connections  1024;    # the maximum number of connections per worker (set inside the events block)
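
A minimal sketch of where these directives sit in nginx.conf, together with the accept lock mentioned above (the default for accept_mutex varies between Nginx versions, so this is illustrative only):

    worker_processes  1;

    events {
        worker_connections  1024;   # per-worker connection limit
        accept_mutex        on;     # only one worker accepts a new connection at a time
    }

The theoretical maximum number of clients is roughly worker_processes x worker_connections.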

Scaling out further: a hardware load balancer such as F5 can sit in front of multiple Nginx instances, each of which fronts its own web server cluster (F5 -- Nginx -- web server clusters).
Stop the service: nginx -s stop; reload the configuration: nginx -s reload; start as a background process (Windows): start nginx.exe.
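
On Windows the usual command sequence, run from the Nginx install directory (C:\nginx here is an assumed example path), looks roughly like this:

    cd C:\nginx            (assumed install directory)
    start nginx.exe        (start Nginx in the background)
    nginx -s reload        (reload the configuration without stopping the service)
    nginx -s stop          (stop the service)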

Configure the virtual server: server { listen 80; server_name ...; } sets the port and the domain name of the current server.
Forward requests to the server cluster: location / { proxy_pass http://netitcast.com; }
Declare the server cluster: upstream netitcast.com { server 172.168.1.1:8081 weight=1; } - the name after upstream is the cluster name, and the larger the weight, the larger the share of requests assigned to that server (a fuller sketch follows below).
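
Putting the three pieces together, a minimal reverse-proxy sketch might look like the following; the upstream name netitcast.com and the address 172.168.1.1:8081 come from the post above, while the second back-end server and the listen/server_name values are hypothetical examples:

    upstream netitcast.com {
        server 172.168.1.1:8081 weight=1;
        server 172.168.1.2:8081 weight=2;      # hypothetical second back-end, receives a larger share
    }

    server {
        listen       80;
        server_name  localhost;                # replace with the real domain of this server

        location / {
            proxy_pass http://netitcast.com;   # forward to the upstream cluster declared above
        }
    }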

 

 


Origin www.cnblogs.com/yuyangbk/p/12204663.html