lvs_nat

LVS-NAT model

(Figure: diagram of the LVS-NAT model)

1. Working principle:

LVS-NAT is implemented on top of the NAT mechanism. When a user request arrives at the director, the director rewrites the destination address of the request packet (the VIP) to the address of the selected realserver, and rewrites the destination port to the corresponding port on that realserver, then forwards the packet to it. After processing the request, the realserver returns the response to the director, which rewrites the source address and source port back to the VIP and the corresponding port and then sends the response to the user, completing the whole load-scheduling cycle.
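To make the address rewriting concrete, here is a rough trace of a single HTTP request, assuming a hypothetical client IP (CIP) of 10.0.0.5 with source port 54321 and the VIP/RIP values used later in this article (VIP 172.25.254.118, RIP 172.25.18.2):

Client -> Director:     src 10.0.0.5:54321     dst 172.25.254.118:80   (CIP -> VIP)
Director -> RealServer: src 10.0.0.5:54321     dst 172.25.18.2:80      (destination rewritten to RIP)
RealServer -> Director: src 172.25.18.2:80     dst 10.0.0.5:54321      (response routed back via its gateway, the DIP)
Director -> Client:     src 172.25.254.118:80  dst 10.0.0.5:54321      (source rewritten back to VIP)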

2. Features:

All realservers and the director must be on the same network segment
The RIP is a private address, used only for communication between cluster nodes
The director processes both request and response packets
The realserver's gateway must point to the DIP
Port mapping is supported
The realserver can run any operating system
The director is likely to become the system's performance bottleneck

3. Hardware environment

One Director:
Version: Red Hat Enterprise Linux Server release 6.5
Dual NICs:
eth1: VIP: 172.25.254.118/24 (in a real production environment, the gateway should point to the ISP's public IP)
eth0: DIP: 172.25.18.1/24 (this IP must be on the same network segment as the backend RealServers)

Two RealServers:
Version: Red Hat Enterprise Linux Server release 6.4
Single NIC:
RealServer1: RIP1: 172.25.18.2/24 (gateway must point to the Director's DIP)
RealServer2: RIP2: 172.25.18.3/24 (gateway must point to the Director's DIP)
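The RealServers' network configuration is not shown in the later steps; the following is a minimal sketch of RS1's ifcfg file under the assumptions above (the GATEWAY line pointing at the Director's DIP is what makes the NAT return path work):

[root@server2 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE="eth0"
BOOTPROTO="static"
ONBOOT="yes"
IPADDR=172.25.18.2
PREFIX=24
GATEWAY=172.25.18.1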

4. Preparation before the experiment

1. LVS is now part of the standard Linux kernel. Before the Linux 2.4 kernel, the kernel had to be recompiled to add the LVS function modules; since Linux 2.4, the LVS modules are built in, so the various LVS functions can be used directly without patching the kernel.

[root@server1 ~]# uname -r
2.6.32-431.el6.x86_64
Our kernel version here is 2.6.32-431.el6.x86_64, so there is no need to patch the kernel.
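As an optional sanity check (not part of the original steps), you can confirm that the IPVS modules are available in this kernel:

[root@server1 ~]# modprobe ip_vs ## load the core IPVS module (running ipvsadm will also load it automatically)
[root@server1 ~]# lsmod | grep ip_vs ## verify that the module is loaded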

2. The LVS-NAT model works much like iptables DNAT, so the Director node cannot run its own iptables rules at the same time or they will conflict; this is one of the reasons we flush the iptables rules later.
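On the Director this means flushing both the filter table and the nat table before adding ipvs rules; a small sketch (the original text only flushes iptables on the RealServers):

[root@server1 ~]# iptables -F ## flush filter table rules
[root@server1 ~]# iptables -t nat -F ## flush nat table rules
[root@server1 ~]# iptables -L -n ## confirm that no rules remain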

5. Experimental steps

Configure the Director
[root@server1 ~]# yum install ipvsadm -y ## install the ipvsadm tool
Configure the two network cards:
Internal NIC (eth0, DIP):
[root@server1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE="eth0"
BOOTPROTO="static"
ONBOOT="yes"
IPADDR=172.25.18.1
PREFIX=24
External NIC (eth1, VIP):
[root@server1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE="eth1"
BOOTPROTO="static"
ONBOOT="yes"
IPADDR=172.25.254.118
PREFIX=24
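After editing the ifcfg files, apply the configuration and verify the addresses, for example with the standard RHEL 6 network service:

[root@server1 ~]# service network restart ## apply the new NIC configuration
[root@server1 ~]# ip addr show eth0 | grep inet ## should show 172.25.18.1/24
[root@server1 ~]# ip addr show eth1 | grep inet ## should show 172.25.254.118/24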

[root@server1 ~]# cat /proc/sys/net/ipv4/ip_forward ## check whether IP forwarding is enabled (1 = enabled, 0 = disabled)
[root@server1 ~]# vim /etc/sysctl.conf
net.ipv4.ip_forward = 1 ## change the value to 1 to enable forwarding
[root@server1 ~]# sysctl -p ## reload the configuration file
net.ipv4.ip_forward=1
net.ipv4.conf.default.rp_filter=1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
error: "net.bridge.bridge-nf-call-ip6tables" is an unknown key
error: "net.bridge.bridge-nf -call-iptables” is an unknown key
error: “net.bridge.bridge-nf-call-arptables” is an unknown key
kernel.msgmnb=65536
kernel.msgmax=65536
kernel.shmmax=68719476736
kernel.shmall = 4294967296
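If you prefer not to rely only on sysctl.conf, forwarding can also be toggled for the running kernel directly through /proc (equivalent to what sysctl -p just applied):

[root@server1 ~]# echo 1 > /proc/sys/net/ipv4/ip_forward ## enable forwarding immediately
[root@server1 ~]# cat /proc/sys/net/ipv4/ip_forward ## should now print 1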

Configure the RealServers
Install httpd on RS1 and RS2, start the service,
and write a test page to each document root.
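The httpd installation commands are not shown in the original text; a minimal sketch for RS1 (repeat the same on server3), using standard RHEL 6 commands:

[root@server2 ~]# yum install httpd -y ## install the Apache web server
[root@server2 ~]# service httpd start ## start the service
[root@server2 ~]# chkconfig httpd on ## have it start on boot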
[root@server2 ~]# echo 'server2.RS' > /var/www/html/index.html
[root@server3 ~]# echo 'server3.RS' > /var/www/html/index.html

Clear local firewall policy
[root@server2 ~]# iptables -F
[root@server3 ~]# iptables -F

Test access:
[root@server1 ~]# curl http://172.25.18.2
server2.RS
[root@server1 ~]# curl http://172.25.18.3
server3.RS

Configure the Director and add the RealServers to the cluster service
Usage of the ipvsadm command
Manage cluster services

Add:
    -A -t|u|f service-address [-s scheduler]
    -t: TCP cluster service
    -u: UDP cluster service
        service-address: IP:PORT
    -f: FWM: firewall mark
        service-address: mark number
Modify:
    -E
Delete:
    -D -t|u|f service-address

Manage RealServers in Cluster Services

Add:
    -a -t|u|f service-address -r server-address [-g|i|m] [-w weight]
    -t|u|f service-address: a previously defined cluster service
    -r server-address: the address of a RealServer; in the NAT model, IP:PORT can be used to implement port mapping
    [-g|i|m]: LVS forwarding type
    -g: DR
    -i: TUN
    -m: NAT
    [-w weight]: the server's weight
Modify:
    -e
Delete:
    -d -t|u|f service-address -r server-address

Follow-up management of cluster services

View:
    -L|l
    -n: show host addresses and ports in numeric form
    --stats: statistics
    --rate: rates
    --timeout: show the tcp, tcpfin and udp session timeouts
    -c: show the current ipvs connections
    Example: ipvsadm -L -n --stats
Delete all cluster services:
    -C: clear all ipvs rules
    Example: ipvsadm -C
Save the rules:
    -S
    Example: ipvsadm -S > /etc/sysconfig/ipvsadm
Reload previously saved rules:
    -R
    Example: ipvsadm -R < /etc/sysconfig/ipvsadm
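On RHEL 6 the ipvsadm package also ships an init script that wraps -S/-R, so the rules can be made persistent across reboots; a hedged example, assuming the standard service script is installed:

[root@server1 ~]# service ipvsadm save ## save the current rules to /etc/sysconfig/ipvsadm
[root@server1 ~]# chkconfig ipvsadm on ## reload the saved rules at boot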

Test whether the Director can access the Realserver service normally

Note the following issue:

So far we have been accessing the backend RealServer services through the Director's DIP. Once the RealServers have been added to the cluster service, the client must access the RealServer services through the Director's VIP; make sure this point is clear.

Use the rr (round-robin) scheduling algorithm:
[root@server1 ~]# ipvsadm -A -t 172.25.254.118:80 -s rr
[root@server1 ~]# ipvsadm -a -t 172.25.254.118:80 -r 172.25.18.2 -m -w 1
[root@server1 ~]# ipvsadm -a -t 172.25.254.118:80 -r 172.25.18.3 -m -w 1
[root@server1 ~]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.25.254.118:80 rr
-> 172.25.18.2:80 Masq 1 0 0
-> 172.25.18.3:80 Masq 1 0 0
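Before the browser test, round-robin scheduling can also be checked from a client host on the 172.25.254.0/24 network (a hypothetical client prompt is shown; with two equal-weight RealServers the responses should alternate, though which server answers first may vary):

[client ~]$ for i in 1 2 3 4; do curl -s http://172.25.254.118; done
server2.RS
server3.RS
server2.RS
server3.RS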

Test with browser:
(screenshot of the browser test results)
