Nginx reverse proxy and load balancing

One, reverse proxy

Forward proxy: 

          A forward proxy is what we usually mean when we simply say "proxy". It works like this: I cannot reach a certain website, but I can reach a proxy server, and that proxy server can reach the site I cannot. So I connect to the proxy server and tell it I need content from the unreachable website; the proxy fetches it and returns it to me. From the website's point of view there is only one access record: the proxy server fetching the content. The site sometimes cannot even tell that the request originated from a user, and the user's information stays hidden unless the proxy chooses to reveal it.

         In short, a forward proxy is a server that sits between the client and the origin server. To obtain content from the origin server, the client sends a request to the proxy and names the target; the proxy forwards the request to the origin server and returns the content it obtains to the client. The client must be explicitly configured to use a forward proxy.

The concept of reverse proxy:

         With a reverse proxy, the client is unaware that the content it wants does not live on the proxy server. To the client, the proxy server is the origin server, and no special client-side configuration is needed. The client sends an ordinary request for content in the reverse proxy's namespace; the reverse proxy decides where to forward the request and returns the content it obtains as if that content were its own.
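As a minimal sketch, a reverse proxy in nginx needs nothing more than a proxy_pass inside a location block (the back-end address here is taken from the environment described below; substitute your own):

```nginx
server {
    listen 80;          # clients connect here and see only this server
    location / {
        # nginx fetches the content from the back end and returns it
        # to the client as if it were its own
        proxy_pass http://192.168.253.110;
    }
}
```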

 The difference between forward proxy and reverse proxy:

        The typical use of a forward proxy is to give LAN clients behind a firewall access to the Internet; a forward proxy can also use caching to reduce network usage. The typical use of a reverse proxy is to expose servers behind a firewall to Internet users. A reverse proxy can also provide load balancing across multiple back-end servers, or act as a cache in front of slower back ends. In addition, a reverse proxy enables advanced URL policies and management techniques, so that pages served by different web server systems can coexist in the same URL space.

       In terms of security, a forward proxy lets clients access arbitrary websites through it while hiding the clients themselves, so you must take security measures to ensure that it serves only authorized clients. A reverse proxy is transparent to the outside: visitors do not know they are talking to a proxy at all.

Environment:

    Proxy + load balancing server: centos7:192.168.253.130

    web server 1: centos7:192.168.253.110

    web server 2: centos7:192.168.253.120

Run the following on all three servers:
[root@localhost ~]# setenforce 0
[root@localhost ~]# systemctl stop firewalld   # stop the firewall
[root@localhost ~]# yum -y install ntpdate
[root@localhost ~]# ntpdate pool.ntp.org   # sync the system time

1. Install nginx and modify the nginx.conf configuration file

[root@localhost src]# yum -y install gcc gcc-c++ zlib-devel pcre-devel   # dependencies for building nginx from source
[root@localhost src]# tar xzf nginx-1.12.2.tar.gz 
[root@localhost src]# cd nginx-1.12.2
[root@localhost nginx-1.12.2]# ./configure
[root@localhost nginx-1.12.2]# make && make install

#### In location / {} (inside server {} inside http {}), add: proxy_pass http://ip, where ip is the address of the proxied server
[root@localhost ~]# vim /usr/local/nginx/conf/nginx.conf   # add:
            proxy_pass http://192.168.253.110;

2. Install a web service on the proxied back-end server and test

[root@localhost ~]# yum -y install httpd
[root@localhost ~]# echo "welcome to hya" > /var/www/html/index.html
[root@localhost ~]# systemctl  start httpd

Start nginx on the proxy server:
[root@localhost ~]# /usr/local/nginx/sbin/nginx 
[root@localhost ~]# ss -tnl
State       Recv-Q Send-Q       Local Address:Port                      Peer Address:Port              
LISTEN      0      128                      *:80                                   *:* 

Two, Nginx load balancing

Description of upstream load balancing module:

         upstream is nginx's http upstream module. It uses simple scheduling algorithms to balance load from client IPs across back-end servers. The upstream directive defines a named load-balancing group; the name is arbitrary, and it is referenced directly wherever it is needed later.

upstream supports load balancing algorithms:

The nginx load-balancing module currently supports four scheduling algorithms:

Round robin (the default): requests are distributed to the back-end servers one by one in order of arrival. If a back-end server goes down, the failed server is removed automatically, so user access is unaffected. weight specifies a weight value: the larger the value, the higher the chance of being selected; set it according to the capacity of each back-end server.

ip_hash: each request is assigned according to a hash of the client IP, so visitors from the same IP consistently reach the same back-end server. This effectively solves the session-sharing problem of dynamic pages.

fair: a smarter algorithm than the previous two. It balances load based on page size and load time, i.e. it assigns requests according to the response time of the back-end servers, giving priority to servers that respond quickly. (This requires the third-party upstream_fair module.)

url_hash: requests are distributed according to a hash of the requested URL, so each URL is directed to the same back-end server, which further improves the hit rate of back-end cache servers.
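The behavior of the first two algorithms can be sketched in a few lines of Python. This is a simplified illustration, not nginx's actual implementation (nginx's real ip_hash, for instance, hashes only the first three octets of an IPv4 address):

```python
import hashlib
from itertools import cycle

# The two back ends from this article's environment.
BACKENDS = ["192.168.253.110:80", "192.168.253.120:80"]

def round_robin(backends):
    """Default scheduling: hand out the back ends in turn."""
    return cycle(backends)

def ip_hash(client_ip, backends):
    """Pin a client IP to one back end, as ip_hash does in spirit."""
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return backends[int(digest, 16) % len(backends)]

picker = round_robin(BACKENDS)
print(next(picker))   # 192.168.253.110:80
print(next(picker))   # 192.168.253.120:80
print(next(picker))   # 192.168.253.110:80  (wraps around)

# The same client IP always lands on the same back end:
assert ip_hash("10.0.0.7", BACKENDS) == ip_hash("10.0.0.7", BACKENDS)
```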

Status parameters supported by upstream:

In the http upstream module, the server directive specifies the IP address and port of a back-end server, and can also set the state of each node in load-balancing scheduling:

down: the current server does not participate in load balancing.

backup: a reserved backup machine. It receives requests only when all other non-backup machines fail or are busy, so it carries the least load.

max_fails: the number of failed requests allowed, 1 by default. When the maximum is exceeded, the error defined by the proxy_next_upstream module is returned.

fail_timeout: how long to suspend the server after max_fails failures. max_fails and fail_timeout are used together.

Note: when the scheduling algorithm is ip_hash, the back-end server states weight and backup cannot be used.
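Putting these parameters together, a sketch of an upstream block (the first two addresses come from this article's environment; the backup server and the parameter values are illustrative assumptions):

```nginx
upstream webservers {
    # After 2 failures, suspend the node for 30s before retrying it
    server 192.168.253.110:80 weight=1 max_fails=2 fail_timeout=30s;
    server 192.168.253.120:80 weight=1 max_fails=2 fail_timeout=30s;
    # Hypothetical spare: requested only when all non-backup nodes
    # are failed or busy
    server 192.168.253.121:80 backup;
    # A node can be taken out of rotation without deleting its line:
    # server 192.168.253.122:80 down;
}
```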

1. Configure the nginx.conf file (based on reverse proxy environment operation)

###########################################
Inside http {} but outside server {}, add:
upstream    webservers {
        server  ip:80    weight=1;
        server  ip:80   weight=1;
        }
and in location / {}, change the name after http:// to the name given after upstream:
location / {
            root   html;
            index  index.html index.htm;
            proxy_pass  http://webservers;
            proxy_set_header X-Real-IP $remote_addr;
        }
####################################################
[root@localhost ~]# vim /usr/local/nginx/conf/nginx.conf
    upstream webservers {
      server 192.168.253.110:80 weight=1;
      server 192.168.253.120:80 weight=1;
    }

        location / {
            root   html;
            index  index.html index.htm;
            proxy_pass http://webservers;
        }

2. Install the web service on the second back-end server and test

[root@localhost ~]# yum -y install httpd
[root@localhost ~]# echo "welcome to hya222" > /var/www/html/index.html
[root@localhost ~]# systemctl  start httpd

Reload nginx on the load balancer:
[root@localhost ~]# /usr/local/nginx/sbin/nginx -s reload


Origin blog.csdn.net/yeyslspi59/article/details/108071355