LVS load balancing cluster ---------- NAT (Network Address Translation) mode

Overview of the cluster

  • Cluster: literally means a group or collection. In the server field, it refers to a collection of many servers working as one, as distinct from a single server.

  • Clusters can be divided into three categories according to the different functions provided and the target differences:
    1. Load balancing cluster
    2. High-availability cluster
    3. High-performance computing cluster
    (Whichever type, a cluster includes at least two node servers, presents itself externally as a single whole, and provides only one access entry (a domain name or IP address).)

(This article mainly introduces load balancing clusters!)

  • Load balancing cluster (Load Balance Cluster) features:
    1. The goals are to improve the responsiveness of the application system, handle as many access requests as possible, and reduce latency, achieving high concurrency and high overall load (LB) performance.
    2. Load distribution in LB depends on the scheduling algorithm of the master node

  • The layered structure of load balancing
    At least one load scheduler at the front end is responsible for responding to concurrent access requests from clients; the
    back end consists of a large number of real servers forming a server pool that provides the actual application services. The scalability of the whole cluster is achieved by adding or removing server nodes, and these changes are transparent to the client; to keep the service content consistent, the nodes use shared storage devices.

    1. The first layer: load scheduler
    2. The second layer: server pool
    3. The third layer: shared storage

  • The working mode of load balancing:
    1. Network Address Translation, referred to as NAT mode;
    a private network structure similar to a firewall's. The scheduler acts as the gateway for all server nodes, i.e. as both the clients' access entrance and each node's exit for responses to clients.
    The server nodes use private IP addresses and sit on the same physical network as the load scheduler. Security is better than in the other two modes.

    2. IP tunnel (IP Tunnel), referred to as TUN mode; adopts an open network structure. The load scheduler serves only as the clients' access entrance, and each node responds to clients directly through its own Internet connection, without passing through the load scheduler. The server nodes are scattered across different locations on the Internet, have independent public IP addresses, and communicate with the load scheduler through dedicated IP tunnels.

    3. Direct routing (Direct Routing), referred to as DR mode; adopts a semi-open network structure similar to that of TUN mode, but the nodes are not scattered everywhere; they sit on the same physical network as the scheduler. The load scheduler connects to each node server over the local network, so no dedicated IP tunnel is needed. (A sketch of the corresponding ipvsadm flags follows.)
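
    For reference, the forwarding mode is chosen per real server through an ipvsadm flag. A minimal sketch, assuming a virtual server at the hypothetical address 172.16.16.172 already exists (the flags are explained again in the ipvsadm usage section below):

    ipvsadm -a -t 172.16.16.172:80 -r 192.168.7.21:80 -m    #### NAT mode (masquerading)
    ipvsadm -a -t 172.16.16.172:80 -r 192.168.7.21:80 -g    #### DR mode (direct routing)
    ipvsadm -a -t 172.16.16.172:80 -r 192.168.7.21:80 -i    #### TUN mode (IP tunneling)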

LVS virtual server

  • LVS is a load balancing project developed for the Linux kernel. It is essentially an IP-address-based virtualization application and proposes an efficient solution for load balancing based on IP addresses and content request distribution. LVS has now become part of the Linux kernel; it is compiled as the ip_vs module by default and can be loaded automatically when needed.

  • For different network services and configuration requirements, the LVS scheduler provides a variety of different load scheduling algorithms, among which are the four most common algorithms:
    1. Round Robin: received requests are allocated in turn to each node (real server) in the cluster; every server is treated equally, regardless of its actual connection count and system load.

    2. Weighted Round Robin: requests are distributed according to the weight values set by the scheduler. Nodes with higher weights receive tasks first and are allocated more requests, ensuring that servers with stronger performance take on more of the access traffic.

    3. Least Connections: allocation is based on the number of connections each real server has established; received access requests are assigned preferentially to the node with the fewest connections.

    4. Weighted Least Connections: when server nodes differ greatly in performance, the weights of the real servers can be adjusted automatically; nodes with higher performance bear a larger share of the active connection load.

  • Use the ipvsadm tool
    ipvsadm is the LVS cluster management tool used on the load scheduler. It calls the ip_vs module to add and remove server nodes and to view the running status of the cluster.

  • Usage of the ipvsadm tool command, by example:
    1. Create a virtual server
    Example: ipvsadm -A -t 172.16.16.172:80 -s rr
    (The VIP address of the cluster is 172.16.16.172, providing load distribution for TCP port 80 with the round-robin scheduling algorithm; option -A adds a virtual server, -t specifies the VIP address and TCP port, -s specifies the load scheduling algorithm (round robin rr; weighted round robin wrr; least connections lc; weighted least connections wlc).)

    2. Add server node
    Example: ipvsadm -a -t 172.16.16.172:80 -r 192.168.7.21:80 -m -w 1
    (adds a node to virtual server 172.16.16.172; option -a adds a real server, -t specifies the VIP address and TCP port, -r specifies the RIP address and TCP port, -m selects NAT mode (-g DR mode, -i TUN mode), -w sets the weight (a weight of 0 suspends the node).)
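
    A node's weight can also be changed in place with option -e (edit a real server entry). A hedged example using the same hypothetical addresses, setting the weight to 0 to suspend the node:
    Example: ipvsadm -e -t 172.16.16.172:80 -r 192.168.7.21:80 -m -w 0
    (the node stays in the table but receives no new requests while its weight is 0)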

    3. View the status of cluster nodes
    Example: ipvsadm -ln
    (option -l lists the LVS virtual servers; a specific VIP address can be given (all are shown by default); combined with option -n, addresses, ports and other information are displayed in numeric form.)
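
    With the example virtual server and node above, the listing looks roughly like this (illustrative output; the connection counters start at zero):

    IP Virtual Server version 1.2.1 (size=4096)
    Prot LocalAddress:Port Scheduler Flags
      -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
    TCP  172.16.16.172:80 rr
      -> 192.168.7.21:80              Masq    1      0          0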

    4. Delete the server node
    Example: ipvsadm -d -r 192.168.7.24:80 -t 172.16.16.172:80
    (deletes node 192.168.7.24 from the LVS cluster 172.16.16.172; to delete a node from the server pool, use option -d. The delete operation must specify the target object, including the node address and the virtual IP address. To delete an entire virtual server, use option -D and specify only the virtual IP address, with no node.)

    Example: ipvsadm -D -t 172.16.16.172:80
    (delete this virtual server)

    5. Save the load distribution strategy
    Example: ipvsadm-save > /etc/sysconfig/ipvsadm: save the strategy
    (ipvsadm-restore is the corresponding restore command)
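
    The matching restore call reads the saved rules back in from the file:
    Example: ipvsadm-restore < /etc/sysconfig/ipvsadm: restore the strategy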

    Example: service ipvsadm save: save strategy

    Example: service ipvsadm stop: stop the service (clear strategy)
    [OK]
    [OK]

    Example: service ipvsadm start: start service (rebuild rules)
    [OK]
    [OK]

    Example: cat /etc/sysconfig/ipvsadm: Confirm save results

Build an LVS load balancing cluster ----- NAT mode

1. Setup plan (four virtual machines are used for the demonstration)

1. One load scheduler (2 network cards)
NAT connection: 20.0.0.181 ------ used as the external network address
Custom vm1: 192.168.100.181 ------ used as the internal network address (no gateway required)

2. Server pool 1
Custom vm1: 192.168.100.182 ------ used as the internal network address (the gateway is the load scheduler's internal IP)

3. Server pool 2
Custom vm1: 192.168.100.183 ------ used as the internal network address (the gateway is the load scheduler's internal IP)

4. One NFS shared storage server
Custom vm1: 192.168.100.184 ------ used as the internal network address (the gateway is the load scheduler's internal IP, or may be left unset)

2. Start to build

Configuration prerequisites: turn off the firewall and SELinux; set up the yum repository

Configure the load scheduler (192.168.100.181)

1. Network environment
The original network card is set to NAT connection mode; the new network card uses the custom vm1 connection mode

[root@localhost ~]# yum -y install net-tools                   #### optional tool: route -n shows gateway information; without it, bash reports command not found
[root@localhost ~]# vi /etc/sysconfig/network-scripts/ifcfg-ens33
........
IPADDR=20.0.0.181
NETMASK=255.255.255.0
GATEWAY=20.0.0.2
DNS1=8.8.8.8
DNS2=114.114.114.114
 
[root@localhost ~]# systemctl restart network
[root@localhost ~]# nmcli connection
NAME   UUID                                  TYPE      DEVICE 
ens33  0749124f-65e5-4be7-ae4d-d6e34350a1bc  ethernet  ens33  
ens36  147c26e4-8373-3454-bc50-9b0964d0e929  ethernet  ens36              #### copy this UUID

[root@localhost ~]# cd /etc/sysconfig/network-scripts/
[root@localhost network-scripts]# ll
[root@localhost network-scripts]# cp ifcfg-ens33 ifcfg-ens36
[root@localhost network-scripts]# vi ifcfg-ens36
..........
NAME=ens36
UUID=147c26e4-8373-3454-bc50-9b0964d0e929                #### paste the UUID copied above
DEVICE=ens36
ONBOOT=yes
IPADDR=192.168.100.181
NETMASK=255.255.255.0

[root@localhost network-scripts]# systemctl restart network
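
A quick sanity check that both network cards came up as intended (route -n relies on the net-tools package installed earlier):

[root@localhost network-scripts]# ip addr show ens36                   #### 192.168.100.181/24 should be listed
[root@localhost network-scripts]# route -n                   #### the default gateway should be 20.0.0.2 on ens33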

2. Load the ip_vs module and install the ipvsadm tool

[root@localhost ~]# yum -y install ipvsadm

.......

[root@localhost ~]# ipvsadm -v                    #### view version information
[root@localhost ~]# modprobe ip_vs                 ##### confirm the kernel's support for LVS
[root@localhost ~]# cat /proc/net/ip_vs
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port Forward Weight ActiveConn InActConn
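
The loaded module can also be confirmed with lsmod:

[root@localhost ~]# lsmod | grep ip_vs                 #### the ip_vs module should appear in the output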

3. Create a virtual server
The VIP address of the cluster is 20.0.0.181, which provides load distribution for TCP port 80 using the round-robin scheduling algorithm. For the load balancing scheduler, the VIP must be an actual IP address of the machine.

[root@localhost ~]# ipvsadm -A -t 20.0.0.181:80 -s rr

4. Add server node

[root@localhost ~]# ipvsadm -a -t 20.0.0.181:80 -r 192.168.100.182:80 -m
[root@localhost ~]# ipvsadm -a -t 20.0.0.181:80 -r 192.168.100.183:80 -m
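
The table can now be verified; the output should look roughly like this (illustrative, with the connection counters still at zero):

[root@localhost ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  20.0.0.181:80 rr
  -> 192.168.100.182:80           Masq    1      0          0
  -> 192.168.100.183:80           Masq    1      0          0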

5. Save LVS strategy

[root@localhost ~]# ipvsadm-save > /opt/ipvsadm
[root@localhost ~]# cat /opt/ipvsadm
-A -t localhost.localdomain:http -s rr
-a -t localhost.localdomain:http -r 192.168.100.182:http -m -w 1
-a -t localhost.localdomain:http -r 192.168.100.183:http -m -w 1
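
The saved rules show localhost.localdomain:http because ipvsadm-save resolves addresses and ports to names by default; its -n option keeps everything numeric:

[root@localhost ~]# ipvsadm-save -n > /opt/ipvsadm                    #### save with numeric addresses and ports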

6. Enable routing and forwarding on the scheduler

[root@localhost ~]# vi /etc/sysctl.conf

net.ipv4.ip_forward = 1                    ##### add this line
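
Apply the change without rebooting; sysctl -p echoes each setting it loads:

[root@localhost ~]# sysctl -p
net.ipv4.ip_forward = 1
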
Configure NFS shared storage service (192.168.100.184)

The two packages nfs-utils and rpcbind need to be installed

[root@localhost ~]# rpm -q nfs-utils                     #### check whether these two packages are installed; if they already are, the yum install below reports Nothing to do
[root@localhost ~]# rpm -q rpcbind
[root@localhost ~]# yum -y install nfs-utils rpcbind
[root@localhost ~]# systemctl start nfs rpcbind
 
[root@localhost ~]# mkdir /opt/51xue /opt/52xue                  #### create two directories to hold the test pages
[root@localhost ~]# vi /etc/exports                      ##### edit the share configuration file

/opt/51xue 192.168.100.0/24(rw,sync)                ##### add these lines
/opt/52xue 192.168.100.0/24(rw,sync)
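
If the nfs service is already running, the new shares can also be published without a full restart:

[root@localhost ~]# exportfs -rv                 #### re-export everything in /etc/exports and list the result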


[root@localhost ~]# systemctl restart nfs rpcbind
[root@localhost ~]# systemctl enable nfs rpcbind

[root@localhost ~]# echo "this is www.51xue.top" >/opt/51xue/index.html                ####导入测试网页至文件中
[root@localhost ~]# echo "this is www.52xue.top" >/opt/52xue/index.html

[root@localhost ~]# showmount -e                 ##### view the export list
Export list for localhost.localdomain:
/opt/52xue 192.168.100.0/24
/opt/51xue 192.168.100.0/24

Configure server pool 1 (192.168.100.182)
[root@localhost ~]# yum -y install nfs-utils                ##### the nfs-utils package must be installed, otherwise mount does not recognize the nfs type; required on a minimal system install, already present with a graphical install

[root@localhost ~]# showmount -e 192.168.100.184          
Export list for 192.168.100.184:
/opt/52xue 192.168.100.0/24
/opt/51xue 192.168.100.0/24

[root@localhost ~]# yum -y install httpd                   #### install the Apache service
[root@localhost ~]# mount 192.168.100.184:/opt/51xue /var/www/html                #### manually mount the NFS shared directory

[root@localhost ~]# vi /etc/fstab                      #### mount the NFS shared directory permanently
192.168.100.184:/opt/51xue /var/www/html nfs defaults,_netdev 0 0
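
The fstab entry can be verified in place before relying on it at boot (mount -a skips filesystems that are already mounted):

[root@localhost ~]# mount -a                    #### any fstab syntax error would surface here
[root@localhost ~]# df -h /var/www/html                    #### should show 192.168.100.184:/opt/51xue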

[root@localhost ~]# systemctl start nfs httpd
[root@localhost ~]# systemctl restart nfs httpd
[root@localhost ~]# systemctl enable nfs httpd

You can test the web page first: browse to 192.168.100.182 and "this is www.51xue.top" should appear

Configure server pool 2 (192.168.100.183)
[root@localhost ~]# yum -y install nfs-utils                   ##### the nfs-utils package must be installed, otherwise mount does not recognize the nfs type; required on a minimal system install, already present with a graphical install

[root@localhost ~]# showmount -e 192.168.100.184          
Export list for 192.168.100.184:
/opt/52xue 192.168.100.0/24
/opt/51xue 192.168.100.0/24

[root@localhost ~]# yum -y install httpd                  #### install the Apache service
[root@localhost ~]# mount 192.168.100.184:/opt/52xue /var/www/html              #### manually mount the NFS shared directory

[root@localhost ~]# vi /etc/fstab                 #### mount the NFS shared directory permanently
192.168.100.184:/opt/52xue /var/www/html nfs defaults,_netdev 0 0

[root@localhost ~]# systemctl start nfs httpd
[root@localhost ~]# systemctl restart nfs httpd
[root@localhost ~]# systemctl enable nfs httpd

You can test the web page first: browse to 192.168.100.183 and "this is www.52xue.top" should appear

Real machine test

From a client machine, enter the public address (20.0.0.181) in a browser; refreshing the page should alternate between the two test pages, confirming the round-robin distribution.
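From any Linux client on the 20.0.0.0/24 network, the distribution can also be observed with a short curl loop (a sketch; the client prompt is hypothetical and the exact alternation order may vary):

[root@client ~]# for i in 1 2 3 4; do curl -s http://20.0.0.181/; done
this is www.51xue.top
this is www.52xue.top
this is www.51xue.top
this is www.52xue.top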
