Joking about Nginx (4)

Case: Using Nginx as a load-balancing server

        Nginx's load balancing is implemented through the upstream directive, so the mechanism is relatively simple to set up, and it works as Layer 7 (content- and application-based) load balancing. Nginx can monitor the health of back-end servers by default, but this detection is weak, essentially limited to port-level checks. Its load balancing performs very well when the number of back-end servers is relatively small (fewer than about 10). With a large number of back-end nodes, however, all requests pass in and out through the single load-balancing server, so requests can pile up and connections can fail, and the capacity of the back-end servers cannot be fully utilized.
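        As a preview, the sketch below shows the two pieces involved: an upstream block that names a group of back-end servers, and a proxy_pass directive that forwards requests to that group. The back-end addresses are hypothetical placeholders, and both snippets belong inside the http block of nginx.conf; a complete configuration appears in section 2.

                upstream myserver {
                    server 10.0.0.35:8080;            # back-end node 1 (placeholder address)
                    server 10.0.0.36:8080;            # back-end node 2 (placeholder address)
                }

                server {
                    listen 80;
                    location / {
                        proxy_pass http://myserver;   # forward requests to the upstream group
                    }
                }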

1. Nginx's load balancing algorithms:

        Nginx's load balancing module currently supports the following scheduling algorithms (a short configuration sketch follows the list):

                ■ Round robin (the default): each request is assigned to a different back-end server in turn, in chronological order. If a back-end server goes down, it is automatically removed from rotation so that user access is not affected.

                ■ weight: specifies the round-robin weight. The larger the weight value, the higher the probability that the server receives requests. It is mainly used when the back-end servers have unequal performance.

                ■ ip_hash: each request is assigned according to a hash of the client IP, so requests from the same IP always reach the same back-end server, which effectively addresses session sharing for dynamic pages.

                ■ fair: a more intelligent load-balancing algorithm than the ones above. It balances load according to page size and load time, that is, it assigns requests based on the response time of each back-end server and gives priority to servers with short response times. Nginx does not support fair out of the box; to use this algorithm you must download the third-party upstream_fair module for Nginx.

                ■ url_hash: assigns requests according to a hash of the requested URL, so each URL is always directed to the same back-end server, which further improves the hit rate of back-end cache servers. Nginx does not support url_hash out of the box; to use this algorithm you must install the Nginx hash package.
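        As a quick illustration of ip_hash, which is built into Nginx, here is a minimal sketch with hypothetical back-end addresses; it could replace the upstream block of the example in section 2.

                upstream myserver {
                    ip_hash;                          # requests from the same client IP always reach the same backend
                    server 10.0.0.35:8080;
                    server 10.0.0.36:8080;
                }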

         In the HTTP Upstream module, the server directive specifies the IP address and port of a back-end server, and it can also set the status of that server in load-balancing scheduling.

                The available status parameters are as follows (a short example appears after the note below):

                        ■ down: the server temporarily does not participate in load balancing.

                        ■ backup: a reserved backup machine. It receives requests only when all other non-backup machines are down or busy, so it carries the lightest load.

                        ■ max_fails: the number of failed requests allowed, 1 by default. When this number is exceeded, the error defined by the proxy_next_upstream directive is returned.

                        ■ fail_timeout: the time for which the server is suspended after max_fails failures. max_fails and fail_timeout are used together.

         Note:

                When the scheduling algorithm is ip_hash, the back-end server status in load-balancing scheduling cannot be weight or backup.
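        The sketch below, again with hypothetical addresses, shows how these status parameters are attached to server lines inside an upstream block.

                upstream myserver {
                    server 10.0.0.35:8080 weight=2 max_fails=3 fail_timeout=20s;   # preferred backend
                    server 10.0.0.36:8080 max_fails=3 fail_timeout=20s;
                    server 10.0.0.37:8080 backup;                                  # used only when the others are down or busy
                    server 10.0.0.38:8080 down;                                    # temporarily excluded from rotation
                }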

2. Nginx load balancing configuration example:

        Configuration parameters:                

                worker_processes  1;

                events {
                    worker_connections  1024;
                }

                http {
                    include       mime.types;
                    default_type  application/octet-stream;
                    sendfile        on;
                    keepalive_timeout  65;

                    upstream myserver {
                        server 10.0.0.35:8080 weight=3 max_fails=3 fail_timeout=20s;
                        server 10.0.0.35:8081 weight=1 max_fails=3 fail_timeout=20s;
                    }

                    server {
                        listen       80;
                        server_name  localhost;

                        location / {
                            #root   html;
                            #index  index.html index.htm;
                            proxy_pass http://myserver;
                        }
                    }
                }


        In this configuration, the upstream keyword marks the beginning of the load-balancing configuration. It is provided by Nginx's HTTP Upstream module, which uses simple scheduling algorithms to distribute requests from clients to the back-end servers. The upstream directive above defines a load-balancer group named myserver; the name can be chosen freely and is referenced later wherever it is needed (here, in proxy_pass). With the weights shown, roughly three out of every four requests go to the back end on port 8080 and the rest to port 8081.
