LVS load balancing: NAT mode (principle and case study)

Preface

One: Principle of Load Balancing Cluster

1.1: Overview of enterprise cluster applications

The meaning of a cluster

A cluster is composed of multiple hosts, but externally it appears as a single whole.
In Internet applications, as sites place ever higher demands on hardware performance, response speed, service stability, and data reliability, a single server can no longer cope.

Solutions

Use expensive minicomputers and mainframes
Use ordinary servers to build a service cluster
In Alibaba Cloud, SLB is a typical load-balancing scheduler and ECS is a cloud host (virtual machine)

SLB schedules the ECS instances; multiple ECS instances form a resource pool, which forms the basis of cloud computing

1.2: Enterprise cluster classification

According to the target of the cluster, it can be divided into three types:
Load balance cluster
High availability cluster
High performance computing cluster
1.2.1: Load Balance Cluster
The goal is to improve the responsiveness of the application system, handle as many access requests as possible, and reduce latency, obtaining high-concurrency, high-load (LB) overall performance.
The load distribution of LB depends on the offloading algorithm of the master node, that is, the scheduling algorithm.
1.2.2: High Availability Cluster
The goal is to improve the reliability of the application system, reduce interruption time as much as possible, ensure the continuity of service, and achieve the fault-tolerance effect of high availability (HA).
HA has two working modes: duplex and master-slave.
In duplex mode, two nodes work in parallel and can take over for each other at any time.
In master-slave mode, there is one master and multiple slaves; this is called a centralized cluster.
Decentralized mechanism: there is no real master (or only a nominal one), and all nodes do the work (the Redis cluster is a typical decentralized mechanism).
1.2.3: High Performance Computing Cluster
The goal is to improve the CPU computing speed of the application system, expand its hardware resources and analysis capabilities, and obtain high-performance computing (HPC) capability equivalent to that of mainframes and supercomputers.
The high performance of an HPC cluster depends on "distributed computing" and "parallel computing": the CPU, memory, and other resources of multiple servers are integrated through dedicated hardware and software to achieve computing power that otherwise only mainframes and supercomputers have.
1.3: Analysis of the working modes of load balancing clusters

Load balancing clusters are currently the most commonly used type of cluster in enterprises. Cluster load scheduling technology has three working modes:
Address translation (NAT)
IP tunnel (TUN)
Direct routing (DR)

1.3.1: NAT mode
Network Address Translation, abbreviated NAT mode, is similar to a firewall's private-network structure. The load scheduler serves as the gateway of all server nodes, that is, as both the access entrance for clients and the egress through which each node responds to clients.
The server nodes use private IP addresses and are located on the same physical network as the load scheduler; security is better than in the other two modes.
(figure: NAT mode topology)

1.3.2: TUN mode
IP Tunnel, abbreviated TUN mode, adopts an open network structure. The load scheduler serves only as the client's access entrance; each node responds to the client directly through its own Internet connection without passing back through the load scheduler.
The server nodes are scattered across different locations on the Internet, have independent public IP addresses, and communicate with the load scheduler through dedicated IP tunnels.
(figure: TUN mode topology)

1.3.3: DR mode
Direct Routing, abbreviated DR mode, adopts a semi-open network structure similar to that of TUN mode, but the nodes are not scattered in various places; they are located on the same physical network as the scheduler.
The scheduler connects to each node server through the local network, with no need to establish dedicated IP tunnels.
(figure: DR mode topology)

1.3.4: Differences between the three working modes
(figure: comparison table of the three working modes)

Two: cluster architecture and virtual server

2.1: Architecture of a load balancing cluster

Load balancing architecture:
The first layer is the load scheduler (Load Balancer or Director);
the second layer is the server pool (Server Pool);
the third layer is the shared storage (Share Storage).
(figure: three-layer load balancing architecture)

2.2: Overview of LVS Virtual Server

Linux Virtual Server

Load balancing solution for Linux kernel

Created by Dr. Zhang Wensong of China in May 1998

[root@localhost~]# modprobe ip_vs    # confirm kernel support for LVS
[root@localhost~]# cat /proc/net/ip_vs

LVS load scheduling algorithms

Round Robin (rr)
Distributes the received access requests to each node (real server) in the cluster in turn, treating every server equally regardless of its actual number of connections and system load.
Weighted Round Robin (wrr)
Distributes requests in turn according to the processing capacity of each real server. The scheduler can automatically query the load status of each node and dynamically adjust its weight, so that servers with stronger processing capability carry more of the access traffic.
Least Connections (lc)
Distributes requests based on the number of connections established on each real server, preferring the node with the fewest current connections.
Weighted Least Connections (wlc)
When the performance of the server nodes differs greatly, the weight of each real server can be adjusted automatically; nodes with higher weights bear a larger share of the active connections. A short sketch of selecting these algorithms with ipvsadm follows.
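The scheduling algorithm is selected with the -s option when the virtual service is created. A minimal sketch, assuming the virtual IP 12.0.0.1:80 used later in the case and hypothetical real-server addresses 192.168.100.3/192.168.100.4 on the private subnet (adjust to your own topology):

ipvsadm -A -t 12.0.0.1:80 -s rr                         # round-robin virtual service
ipvsadm -a -t 12.0.0.1:80 -r 192.168.100.3:80 -m        # add a real server, NAT forwarding
ipvsadm -a -t 12.0.0.1:80 -r 192.168.100.4:80 -m
ipvsadm -E -t 12.0.0.1:80 -s wrr                        # switch the service to weighted round robin
ipvsadm -e -t 12.0.0.1:80 -r 192.168.100.3:80 -m -w 2   # give this node twice the weight of the other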
2.3: LVS cluster creation and management tool
The ipvsadm tool is used to:
Create virtual servers
Add and delete server nodes
View cluster and node status
Save and reload the load distribution policy (see the sketch after this list)
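A hedged sketch of these routine tasks, reusing the 12.0.0.1:80 virtual service from the sketch above (the save-file path is the conventional one used by the ipvsadm service on CentOS):

ipvsadm -Ln                                      # view the cluster and node status (numeric output)
ipvsadm -Ln --stats                              # include packet/byte counters
ipvsadm -d -t 12.0.0.1:80 -r 192.168.100.4:80    # remove a server node from the virtual service
ipvsadm -S -n > /etc/sysconfig/ipvsadm           # save the load-distribution rules
ipvsadm -R < /etc/sysconfig/ipvsadm              # reload them later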

3. Case LVS-NAT deployment

(figure: LVS-NAT deployment topology)
Five virtual machines are prepared: one CentOS 7 server acts as the LVS director (two network cards), two CentOS 7 servers run Apache as web servers, one CentOS 7 server provides NFS storage, and one test machine runs Windows.
All hosts are set to host-only mode;
the firewall is turned off on all servers except the LVS server.
The LVS server has two network cards, one with a private network address and one with a public network address, used for NAT address mapping.

3.1 Setting up the environment

3.1.1 Load and verify the LVS module on pc-2

Check the LVS load balancing module; the Linux kernel ships with it.

[root@pc-2 ~]# modprobe ip_vs
[root@pc-2 ~]# cat /proc/net/ip_
ip_mr_cache         ip_tables_names     ip_vs_app           ip_vs_stats
ip_mr_vif           ip_tables_targets   ip_vs_conn          ip_vs_stats_percpu
ip_tables_matches   ip_vs               ip_vs_conn_sync
[root@pc-2 ~]# cat /proc/net/ip_vs
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port Forward Weight ActiveConn InActConn
[root@pc-2 ~]#

Install related software:

[root@pc-2 ~]# yum install ipvsadm -y   # tool for managing the LVS module

3.1.2 Install NFS software on PC-5 and set up folder sharing

[root@pc-5 ~]# setenforce 0
setenforce: SELinux is disabled
[root@pc-5 ~]#
[root@pc-5 ~]# yum install rpcbind -y
[root@pc-5 ~]# yum install nfs-utils -y

Create the shared folders

[root@pc-5 ~]# cd /opt
[root@pc-5 opt]# mkdir dog
[root@pc-5 opt]# mkdir pig
[root@pc-5 opt]# ls
dog  pig  rh
[root@pc-5 opt]# 
[root@pc-5 opt]# chmod 777 dog
[root@pc-5 opt]# chmod 777 pig
[root@pc-5 opt]# ll
total 0
drwxrwxrwx  2 root root 6 Aug 31 16:33 dog
drwxrwxrwx  2 root root 6 Aug 31 16:33 pig
drwxr-xr-x. 2 root root 6 Oct 31  2018 rh

Configure the shared directories

[root@pc-5 ~]# vi /etc/exports
/opt/dog 192.168.100.0/24(rw,sync)

/opt/pig 192.168.100.0/24(rw,sync)
Then start the services and publish the shares:

[root@pc-5 ~]# systemctl start rpcbind
[root@pc-5 ~]# systemctl start nfs
[root@pc-5 ~]# exportfs -rv
exporting 192.168.100.0/24:/opt/pig
exporting 192.168.100.0/24:/opt/dog


3.1.3 Install Apache on pc-3 and pc-4, check the shared directories, and mount them for use

Install httpd

[root@pc-3 ~]# yum install httpd -y
[root@pc-4 ~]# yum install httpd -y
            

View shared directory

[root@pc-3 ~]# showmount -e 192.168.100.5
Export list for 192.168.100.5:
/opt/pig 192.168.100.0/24
/opt/dog 192.168.100.0/24
Set the mount parameters
vim /etc/fstab

(screenshot: the /etc/fstab entry)
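The screenshot above showed the fstab entry; a plausible sketch is given below. The NFS source and mount point match the df output that follows (the first df output, apparently pc-3, mounts /opt/dog and pc-4 mounts /opt/pig onto the Apache document root); the mount options are an assumption.

# /etc/fstab on pc-3 (assumed options)
192.168.100.5:/opt/dog  /var/www/html  nfs  defaults,_netdev  0 0
# /etc/fstab on pc-4 (assumed options)
192.168.100.5:/opt/pig  /var/www/html  nfs  defaults,_netdev  0 0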

[root@pc-3 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/sda5                91G  4.1G   87G    5% /
devtmpfs                895M     0  895M    0% /dev
tmpfs                   910M     0  910M    0% /dev/shm
tmpfs                   910M   11M  900M    2% /run
tmpfs                   910M     0  910M    0% /sys/fs/cgroup
/dev/sda2               5.9G   33M  5.9G    1% /home
/dev/sda1              1014M  174M  841M   18% /boot
tmpfs                   182M   12K  182M    1% /run/user/42
tmpfs                   182M     0  182M    0% /run/user/0
192.168.100.5:/opt/dog   91G  4.1G   87G    5% /var/www/html


[root@pc-4 ~]# mount -a  # verify the fstab entries mount
[root@pc-4 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/sda5                91G  4.1G   87G    5% /
devtmpfs                895M     0  895M    0% /dev
tmpfs                   910M     0  910M    0% /dev/shm
tmpfs                   910M   11M  900M    2% /run
tmpfs                   910M     0  910M    0% /sys/fs/cgroup
/dev/sda2               5.9G   33M  5.9G    1% /home
/dev/sda1              1014M  174M  841M   18% /boot
tmpfs                   182M   12K  182M    1% /run/user/42
tmpfs                   182M     0  182M    0% /run/user/0
192.168.100.5:/opt/pig   91G  4.1G   87G    5% /var/www/html

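To make the round-robin effect visible from the test machine, a distinct test page can be written into each web server's NFS-backed document root and httpd started on both nodes. A minimal sketch (the page text is arbitrary):

[root@pc-3 ~]# echo "web server 1 (dog)" > /var/www/html/index.html
[root@pc-3 ~]# systemctl start httpd
[root@pc-4 ~]# echo "web server 2 (pig)" > /var/www/html/index.html
[root@pc-4 ~]# systemctl start httpd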

3.1.4 On pc-2, create a script that enables routing and configures the NAT parameters

vim nat.sh
(screenshot: contents of nat.sh)
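The script contents appear only as a screenshot in the original post, so the following is a plausible reconstruction rather than the author's exact script. It assumes the public NIC (named ens33 here) carries the VIP 12.0.0.1 tested later, the private NIC carries 192.168.100.1, the real servers are 192.168.100.3 and 192.168.100.4, and their default gateway points at 192.168.100.1, as NAT mode requires.

#!/bin/bash
# nat.sh - hedged sketch of the director-side NAT configuration
echo 1 > /proc/sys/net/ipv4/ip_forward        # enable packet forwarding between the two NICs
iptables -t nat -F                            # flush existing NAT rules
# SNAT outbound traffic from the private subnet to the public VIP (assumed NIC name ens33)
iptables -t nat -A POSTROUTING -s 192.168.100.0/24 -o ens33 -j SNAT --to-source 12.0.0.1
ipvsadm -C                                    # clear any existing LVS rules
ipvsadm -A -t 12.0.0.1:80 -s rr               # virtual service on the VIP, round-robin scheduling
ipvsadm -a -t 12.0.0.1:80 -r 192.168.100.3:80 -m   # real server pc-3, NAT (masquerading) forwarding
ipvsadm -a -t 12.0.0.1:80 -r 192.168.100.4:80 -m   # real server pc-4
ipvsadm -Ln                                   # show the resulting rule table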
Grant the script execute permission (chmod +x nat.sh) and run it.

3.1.5 Visit 12.0.0.1 in the Win7 browser and refresh repeatedly; the pages from the two web servers alternate, showing that requests are being polled round-robin


4. Introduction to ipvsadm command parameters (extended)

Introduction:
ipvsadm is the management command for LVS at the application layer; it is used to manage the LVS configuration. In the Fedora 14 system used by the author, the LVS-related modules are already integrated, but the ipvsadm command still has to be installed separately with yum.

Basic usage:

ipvsadm COMMAND [protocol] service-address
               [scheduling-method] [persistence options]
ipvsadm command [protocol] service-address
               server-address [packet-forwarding-method]
               [weight options]

    The first command adds a virtual server (VS) used for load balancing to the LVS system; the second command modifies the configuration of an existing VS. service-address specifies the virtual service (virtual address) involved, and server-address specifies the real server address involved.

Commands:
    -A, --add-service: add a virtual service to the IPVS virtual server, i.e. add a virtual address to be load balanced. The virtual address must be given as an IP address, port, and protocol.
    -E, --edit-service: edit a virtual service.
    -D, --delete-service: delete a virtual service.
    -C, --clear: clear all virtual services.
    -R, --restore: read ipvsadm commands from standard input. Usually used together with -S below.
    -S, --save: dump the virtual server rules to standard output. The rules can be saved and later read back with -R to automate configuration.
    -a, --add-server: add a real server (RS) to a virtual service.
    -e, --edit-server: edit an RS.
    -d, --delete-server: delete an RS.
    -L, -l, --list: list all virtual services in the virtual service table. An address can be specified. Add -c to show the connection table.
    -Z, --zero: zero all data-related counters. These counters are generally used by the scheduling policies.
    --set tcp tcpfin udp: change the protocol timeout values.
    --start-daemon state: start the sync daemon and set up a backup server for the virtual server, providing master/backup redundancy. (Note: this feature only supports IPv4.)
    --stop-daemon: stop the backup (sync) daemon.
    -h, --help: help.

Parameters:
    The following parameters can follow the commands above (a short worked example follows this list).
    -t, --tcp-service service-address: specify a TCP virtual service. service-address must be of the form host[:port]. Port 0 means any port; to set the port to 0 you must also add the -p option (persistent connections).
    -u, --udp-service service-address: use a UDP service; otherwise the same as above.
    -f, --fwmark-service integer: use a firewall mark instead of a virtual address to specify the packets to be load balanced. This makes it possible to combine virtual addresses with different addresses and ports into one virtual service, so the virtual server can intercept and handle packets destined for multiple different addresses at the same time. The fwmark can be set with iptables. For IPv6, add -6.
    -s, --scheduler scheduling-method: specify the scheduling algorithm. One of the following 10 algorithms can be chosen: rr (round robin), wrr (weighted round robin), lc (least connections), wlc (weighted least connections), lblc (locality-based least connections), lblcr (locality-based least connections with replication), dh (destination hashing), sh (source hashing), sed (shortest expected delay), nq (never queue).
    -p, --persistent [timeout]: enable persistent connections. In this mode multiple requests from the same client are sent to the same real server; it is usually used for FTP or SSL.
    -M, --netmask netmask: specify the netmask for client addresses, used to forward requests from clients in the same subnet to the same server.
    -r, --real-server server-address: specify the real server address that traffic for the virtual service can be forwarded to. A port number may be appended; if no port is given, the port of the virtual address is used.
    [packet-forwarding-method]: specifies the packet-forwarding mode used by a real server. The mode must be specified separately for each real server.
        -g, --gatewaying: use gateway mode (direct routing); this is the default.
        -i, --ipip: use the IPIP tunnel mode.
        -m, --masquerading: use NAT mode.
    -w, --weight weight: set the weight, an integer from 0 to 65535. If a real server's weight is set to 0, it will not receive new connections, but existing connections are still maintained (unlike simply deleting the real server).
    -x, --u-threshold uthreshold: set the upper limit of connections a server may hold, 0 to 65535. 0 means no limit.
    -y, --l-threshold lthreshold: set a server's lower connection threshold. Only when the server's connection count drops below this value may it receive connections again. If this value is not set, the server may receive new connections only after its connection count has been below uthreshold three times in a row. (Note: the author suspects this setting exists to keep a server from flapping between accepting and refusing connections.)
    --mcast-interface interface: specify the multicast interface used for master/backup synchronization.
    --syncid syncid: specify the syncid, likewise used for master/backup synchronization.
    The following options are used with the list command:
    -c, --connection: list the current IPVS connections.
    --timeout: list the timeout values.
    --daemon: show sync daemon status.
    --stats: statistics.
    --rate: traffic rate information.
    --thresholds: list the thresholds.
    --persistent-conn: list persistent connections.
    --sort: sort the listing.
    --nosort: do not sort.
    -n, --numeric: do not perform DNS lookups on IP addresses.
    --exact: display exact values instead of rounding to units.
    -6: required if the fwmark uses an IPv6 address.
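A short worked example combining several of the options above (the addresses and the save path are illustrative):

ipvsadm -A -t 12.0.0.1:80 -s wlc -p 300                          # TCP virtual service, weighted least connections, 300 s persistence
ipvsadm -a -t 12.0.0.1:80 -r 192.168.100.3:80 -m -w 3 -x 500     # NAT forwarding, weight 3, at most 500 connections
ipvsadm -a -t 12.0.0.1:80 -r 192.168.100.4:80 -m -w 1
ipvsadm -Ln --stats                                              # numeric listing with statistics
ipvsadm -S -n > /etc/sysconfig/ipvsadm                           # save the rules; reload later with ipvsadm -R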
    


Origin blog.csdn.net/BIGmustang/article/details/108327306