Building a Web Cluster with HAProxy: Detailed Theory + Experiment

1. Common web cluster schedulers

  • Common web cluster schedulers are divided into software and hardware
  • The software schedulers in common use are LVS, HAProxy, and Nginx
  • The most commonly used hardware scheduler is F5; many sites also use domestic products such as Barracuda and NSFOCUS

2. HAProxy application analysis

■ LVS has strong load-handling capability in enterprise applications, but it has shortcomings

  • LVS does not support regular-expression processing, so it cannot separate dynamic and static content
  • For large websites, implementing and configuring LVS is complicated, and the maintenance cost is relatively high

■ HAProxy is software that provides high availability, load balancing, and proxying for TCP- and HTTP-based applications

  • It is especially suitable for heavily loaded web sites
  • Running on current hardware, it can support tens of thousands of concurrent connection requests

3. HAProxy scheduling algorithm principles

3.1 RR (Round Robin)

■ HAProxy supports multiple scheduling algorithms; the three most commonly used are described below

  • RR (Round Robin)
    ◆ RR is the simplest and most commonly used algorithm: round-robin scheduling

  • Understanding example (a config sketch follows this list)
    ◆ There are three nodes: A, B, and C
    ◆ The first user request is assigned to node A
    ◆ The second user request is assigned to node B
    ◆ The third user request is assigned to node C
    ◆ The fourth user request is assigned to node A again; requests are distributed in rotation, achieving load balancing
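As a minimal sketch of how this maps to HAProxy configuration (the listen block, server names, and IPs here are placeholders, not the final experiment config), round-robin scheduling is selected with the balance roundrobin directive:

listen  webcluster 0.0.0.0:80
        balance roundrobin                          ## rotate new requests across the servers below
        server nodeA 192.168.100.22:80 check        ## node A
        server nodeB 192.168.100.23:80 check        ## node B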

3.2 LC (Least Connections)

■ LC (Least Connections)

  • The least-connections algorithm dynamically distributes front-end requests according to the number of connections on each back-end node

■ Understanding example (a config sketch follows this list)

  • There are three nodes A, B, and C, with current connection counts of A: 4, B: 5, C: 6
  • The first new user connection is assigned to A, and the counts become A: 5, B: 5, C: 6
  • The second request is also assigned to A, making the counts A: 6, B: 5, C: 6; the next new request then goes to B, because each new request is assigned to the node with the fewest connections
  • In practice the connections on A, B, and C are released dynamically, so it is rare for the counts to be identical
  • Compared with the RR algorithm, this is a significant improvement and is one of the more widely used algorithms today
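In the placeholder sketch from 3.1, switching to least-connections scheduling only requires changing the balance line (leastconn is a standard HAProxy balance algorithm):

        balance leastconn                           ## send each new request to the server with the fewest active connections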

3.3 SH (Source Hashing)

■ SH (Source Hashing)

  • A source-based scheduling algorithm, used in scenarios where session state is kept on the server side; cluster scheduling can be based on the source IP, a cookie, and so on

■ Understanding example (a config sketch follows this list)

  • There are three nodes A, B, and C. On their first visits, user 1 is assigned to A and user 2 is assigned to B
  • On their second visits, user 1 is again assigned to A and user 2 is again assigned to B. As long as the load-balancing scheduler is not restarted, user 1's requests always go to A and user 2's always go to B, achieving cluster scheduling
  • The advantage of this algorithm is session persistence; the drawback is that when some source IPs generate very heavy traffic, the load becomes unbalanced and some nodes receive too many requests, which can affect the service
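Again using the placeholder sketch from 3.1, source-based scheduling is selected with balance source, which hashes the client's source IP so the same client keeps reaching the same node (cookie-based persistence is configured separately with HAProxy's cookie directive):

        balance source                              ## hash the source IP so a given client always lands on the same node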

4. Using HAProxy to build a web cluster

4.1 Environment configuration

■ Host requirements

  • One HAProxy server, two Nginx servers, and one shared-storage (NFS) server

Host              Operating system      IP address        Main software
Haproxy server    CentOS 7.6 x86_64     192.168.100.21    haproxy-1.4.24.tar.gz
Nginx server 1    CentOS 7.6 x86_64     192.168.100.22    nginx-1.12.2.tar.gz
Nginx server 2    CentOS 7.6 x86_64     192.168.100.23    nginx-1.12.2.tar.gz
NFS server        CentOS 7.6 x86_64     192.168.100.24    nfs-utils, rpcbind

  • Before starting the experiment, turn off the firewall and SELinux (core protection) on all servers!
[root@localhost examples]# systemctl stop firewalld
[root@localhost examples]# systemctl disable firewalld
[root@localhost examples]# vi /etc/selinux/config 
SELINUX=disabled   ## disable SELinux (core protection)
:wq to save
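If you prefer not to reboot immediately, SELinux can also be switched off for the current session (an optional extra step; the config change above takes care of later boots):

[root@localhost ~]# setenforce 0    ## turn off SELinux enforcement right away
[root@localhost ~]# getenforce      ## should now report Permissive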

4.2 Storage (NFS) server: 192.168.100.24

## For the shared-directory experiment, it is best not to use CentOS 7.4, which has quite a few bugs
[root@localhost ~]# yum -y install rpcbind nfs-utils
[root@localhost ~]# systemctl start rpcbind   ## start rpcbind first
[root@localhost ~]# systemctl start nfs

[root@localhost ~]# vi /etc/exports  ## export the shared directories to the local network segment
/opt/Tom 192.168.100.0/24(rw,sync)
/opt/Jack 192.168.100.0/24(rw,sync)

## Restart rpcbind and nfs, and enable them at boot
[root@localhost ~]# systemctl restart rpcbind
[root@localhost ~]# systemctl restart nfs
[root@localhost ~]# systemctl enable rpcbind
[root@localhost ~]# systemctl enable nfs

## Create the shared home pages with different content, so the round-robin result is easy to see later
[root@localhost ~]# mkdir /opt/Tom /opt/Jack
[root@localhost ~]# echo "this is www.Tom.com" >/opt/Tom/index.html
[root@localhost ~]# echo "this is www.Jack.com" >/opt/Jack/index.html

4.3 Nginx server 1: 192.168.100.22

  • To compile and install Nginx, first upload the nginx source package to /opt; you can use Xftp or another transfer tool (the package is in the resources I uploaded)
[root@localhost ~]# yum -y install pcre-devel zlib-devel gcc-c++
[root@localhost ~]# useradd -M -s /sbin/nologin nginx
[root@localhost ~]# cd /opt
[root@localhost opt]# tar zxvf nginx-1.12.2.tar.gz
[root@localhost opt]# cd nginx-1.12.2
[root@localhost nginx-1.12.2]# 
./configure \
--prefix=/usr/local/nginx \    ## specify the installation path
--user=nginx \   ## specify the run-as user
--group=nginx    ## specify the run-as group

[root@localhost nginx-1.12.2]# make && make install   ## compile and install
[root@localhost nginx-1.12.2]# ln -s /usr/local/nginx/sbin/nginx /usr/local/sbin/   ## create a symlink and check it
[root@localhost nginx-1.12.2]# ls -l /usr/local/sbin/nginx
lrwxrwxrwx 1 root root 27 May 16 16:50 /usr/local/sbin/nginx -> /usr/local/nginx/sbin/nginx
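As an optional check, nginx -V prints the version together with the configure arguments, confirming the build picked up the options above:

[root@localhost nginx-1.12.2]# nginx -V    ## show the version and the configure arguments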
  • Nginx operation control

■ Check the configuration file
Like the Apache main program httpd, the Nginx main program provides a "-t" option to check the configuration file and report improper or incorrect settings. By default the configuration file nginx.conf is located in the conf/ subdirectory of the installation directory. To check a configuration file in another location, use the "-c" option to specify its path.

[root@localhost ~]# nginx -t   ## check the configuration syntax for errors
nginx: the configuration file /usr/local/nginx/conf/nginx.conf syntax is ok
nginx: configuration file /usr/local/nginx/conf/nginx.conf test is successful

■ Start and stop Nginx
killall -1 nginx                                                     #### graceful restart (reload)
killall -3 nginx                                                     ### stop the service

If you see: -bash: killall: command not found
[root@localhost ~]# yum -y install psmisc    ## installing this package provides the killall command
[root@localhost ~]# nginx                                #### start
[root@localhost ~]# yum install net-tools
[root@localhost ~]# netstat -anpt | grep nginx
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 7180/nginx: master
  • Add Nginx as a systemd service
[root@localhost ~]# vi /lib/systemd/system/nginx.service
[Unit]
Description=nginx                                                 #### description
After=network.target                                            #### start after the network is up
[Service]
Type=forking                                                          #### the service runs in the background (forking)
PIDFile=/usr/local/nginx/logs/nginx.pid                #### location of the PID file
ExecStart=/usr/local/nginx/sbin/nginx                  #### command that starts the service
ExecReload=/usr/bin/kill -s HUP $MAINPID         #### reload the configuration via the PID
ExecStop=/usr/bin/kill -s QUIT $MAINPID           #### terminate the process via the PID
PrivateTmp=true
[Install]
WantedBy=multi-user.target
:wq to save

[root@localhost ~]# chmod 754 /lib/systemd/system/nginx.service
[root@localhost ~]# systemctl enable nginx.service
Created symlink from /etc/systemd/system/multi-user.target.wants/nginx.service to
/usr/lib/systemd/system/nginx.service.
Now the systemctl command can be used to start, stop, restart, and reload the Nginx server by
adding the corresponding start, stop, restart, or reload argument.
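For example (these are just the standard systemctl verbs applied to the unit defined above):

[root@localhost ~]# systemctl restart nginx.service    ## full restart
[root@localhost ~]# systemctl reload nginx.service     ## re-read the configuration without stopping the service
[root@localhost ~]# systemctl status nginx.service     ## confirm the service is active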

####### Run the following commands and the service works normally #########
[root@localhost ~]# killall -3 nginx                                                     ### stop the old nginx process
[root@localhost ~]# systemctl start nginx.service
[root@localhost ~]# systemctl enable nginx.service
  • Install nfs-utils and mount the test page
[root@localhost ~]# yum -y install nfs-utils
[root@localhost ~]# showmount -e 192.168.100.24     #### if the shares are not listed yet, run exportfs -rv on the storage server first
Export list for 192.168.100.24:
/opt/Tom (everyone)
/opt/Jack  (everyone)

[root@localhost ~]# vi /etc/fstab 
192.168.100.24:/opt/Tom/ /usr/local/nginx/html/ nfs     defaults,_netdev     0 0        ### mount automatically at boot; keep the fields aligned

[root@localhost nginx-1.12.2]# systemctl start nfs  ## start nfs
[root@localhost nginx-1.12.2]# systemctl enable nfs  ## enable nfs at boot
[root@localhost nginx-1.12.2]# init 6   ## reboot and test the page
[root@localhost ~]# curl 192.168.100.22   ## verify with curl
this is www.Tom.com
## You can also test by entering 192.168.100.22 in a browser
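If you would rather not reboot, the new fstab entry can also be applied on the spot (an optional shortcut; the init 6 above verifies the same thing):

[root@localhost ~]# mount -a               ## mount everything in /etc/fstab, including the new NFS entry
[root@localhost ~]# df -hT | grep nfs      ## confirm the share is mounted on /usr/local/nginx/html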

4.4 Nginx server 2: 192.168.100.23

The earlier steps (compile and install Nginx, add the systemd service) are the same as for Nginx server 1

  • Install nfs-utils and mount the test page
[root@localhost ~]# yum -y install nfs-utils
[root@localhost ~]# showmount -e 192.168.100.24     #### if the shares are not listed yet, run exportfs -rv on the storage server first
Export list for 192.168.100.24:
/opt/Tom  (everyone)
/opt/Jack (everyone)
 
[root@localhost ~]# vi /etc/fstab 
192.168.100.24:/opt/Jack/ /usr/local/nginx/html/ nfs     defaults,_netdev     0 0        ### mount automatically at boot; keep the fields aligned

[root@localhost ~]# systemctl start nfs
[root@localhost ~]# systemctl enable nfs
[root@localhost nginx-1.12.2]# init 6

[root@localhost ~]# curl 192.168.100.23
this is www.Jack.com
## You can also test by entering 192.168.100.23 in a browser

4.5 HAProxy server: 192.168.100.21

  • Compile and install HAProxy; upload haproxy-1.4.24.tar.gz to the /opt directory
[root@localhost ~]# yum -y install pcre-devel bzip2-devel gcc gcc-c++   ## install the build dependencies
[root@localhost ~]# cd /opt
[root@localhost opt]# tar xzvf haproxy-1.4.24.tar.gz 
[root@localhost opt]# cd haproxy-1.4.24/
[root@localhost haproxy-1.4.24]# make TARGET=linux26
[root@localhost haproxy-1.4.24]# make install
  • Configure Haproxy service
[root@localhost haproxy-1.4.24]# mkdir /etc/haproxy
[root@localhost haproxy-1.4.24]# cp examples/haproxy.cfg /etc/haproxy/
[root@localhost haproxy-1.4.24]# vi /etc/haproxy/haproxy.cfg 
Use 100dd to delete the existing contents of the file, then enter the configuration below:
global
        log 127.0.0.1   local0
        log 127.0.0.1   local1 notice
        #log loghost    local0 info
        maxconn 4096
        #chroot /usr/share/haproxy
        uid 99
        gid 99
        daemon
        #debug
        #quiet

defaults
        log     global
        mode    http
        option  httplog
        option  dontlognull
        retries 3
        #redispatch
        maxconn 2000
        contimeout      5000
        clitimeout      50000
        srvtimeout      50000

listen  webcluster 0.0.0.0:80
        option httpchk GET /index.html
        balance roundrobin
        server inst1 192.168.100.22:80 check inter 2000 fall 3        ## change this IP to node server 1's IP
        server inst2 192.168.100.23:80 check inter 2000 fall 3        ## change this IP to node server 2's IP
        
[root@localhost haproxy-1.4.24]# cp examples/haproxy.init /etc/init.d/haproxy 
[root@localhost haproxy-1.4.24]# chmod 755 /etc/init.d/haproxy
[root@localhost haproxy-1.4.24]# chkconfig --add haproxy
[root@localhost haproxy-1.4.24]# ln -s /usr/local/sbin/haproxy /usr/sbin/haproxy
[root@localhost haproxy-1.4.24]# service haproxy start 

[root@localhost haproxy-1.4.24]# systemctl start haproxy.service 
[root@localhost haproxy-1.4.24]# systemctl enable haproxy.service 
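If HAProxy fails to start, or after any later change to the configuration, the file can be checked directly (-c asks haproxy to parse the configuration and exit without starting):

[root@localhost haproxy-1.4.24]# haproxy -c -f /etc/haproxy/haproxy.cfg   ## check the configuration syntax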

4.6 Verify the HAProxy web cluster

  • Enter 192.168.100.21 in a browser and refresh to verify that the two back-end pages are returned in rotation (a command-line check is shown below)
  • The verification succeeds!
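As a minimal command-line version of the same check (run from any host that can reach 192.168.100.21), repeated requests should alternate between the two back-end pages:

[root@localhost ~]# for i in 1 2 3 4; do curl -s 192.168.100.21; done
this is www.Tom.com
this is www.Jack.com
this is www.Tom.com
this is www.Jack.com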
