I. LVS DR Mode
- Typical traffic path: router -> F5 -> LVS (layer 4) -> nginx (layer 7) / haproxy -> web servers
server1 host (VS: Virtual Server)
1. Install ipvsadm
- Note: on RHEL 6 the yum repositories below must be configured first; on RHEL 7 this is not necessary
[rhel-source]
name=Red Hat Enterprise Linux $releasever - $basearch - Source
baseurl=http://172.25.12.250/rhel6.5
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
[LoadBalancer]
name=LoadBalancer
gpgcheck=0
baseurl=http://172.25.12.250/rhel6.5/LoadBalancer
[HighAvailability]
name=HighAvailability
gpgcheck=0
baseurl=http://172.25.12.250/rhel6.5/HighAvailability
[ResilientStorage]
name=ResilientStorage
gpgcheck=0
baseurl=http://172.25.12.250/rhel6.5/ResilientStorage
[ScalableFileSystem]
name=ScalableFileSystem
gpgcheck=0
baseurl=http://172.25.12.250/rhel6.5/ScalableFileSystem
2. Configure the virtual IP (VIP)
[root@server1 html]# ip addr add 172.25.12.100/32 dev eth0
[root@server1 html]# ipvsadm -A -t 172.25.12.100:80 -s rr (rr: round-robin scheduling algorithm)
[root@server1 html]# ipvsadm -a -t 172.25.12.100:80 -r 172.25.12.2:80 -g (-g: DR forwarding mode)
[root@server1 html]# ipvsadm -a -t 172.25.12.100:80 -r 172.25.12.3:80 -g
[root@server1 html]# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.25.12.100:http rr
-> server2:http Route 1 0 0
-> server3:http Route 1 0 0
[root@server1 html]# /etc/init.d/ipvsadm save
ipvsadm: Saving IPVS table to /etc/sysconfig/ipvsadm: [ OK ]
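The `-s rr` scheduler hands each new connection to the next real server in strict rotation. A minimal shell sketch of that behavior (a hypothetical `pick` helper for illustration, not the kernel's actual code):

```shell
# Hypothetical illustration of round-robin ("-s rr") selection.
# Each call to pick returns the next real server in fixed rotation.
n=0
pick() {
  case $((n % 2)) in
    0) echo 172.25.12.2 ;;   # first real server
    1) echo 172.25.12.3 ;;   # second real server
  esac
  n=$((n + 1))
}
pick   # -> 172.25.12.2
pick   # -> 172.25.12.3
pick   # -> 172.25.12.2
```

This mirrors the strict alternation visible in the curl output later in this section.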
server2 host (RS: Real Server)
1. Install httpd and arptables_jf
2. Create the default HTTP document /var/www/html/index.html
[root@server2 ~]# curl localhost
<h1>server-2</h1>
3. Configure the virtual IP (VIP)
[root@server2 ~]# ip addr add 172.25.12.100/32 dev eth0
[root@server2 ~]# arptables -A IN -d 172.25.12.100 -j DROP
[root@server2 ~]# arptables -A OUT -s 172.25.12.100 -j mangle --mangle-ip-s 172.25.12.2
[root@server2 ~]# /etc/init.d/arptables_jf save
Saving current rules to /etc/sysconfig/arptables: [ OK ]
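The three RS-side steps (hold the VIP, suppress ARP for it, rewrite the ARP source) can be collected into one script. A sketch only, summarizing the commands above; it assumes the interface is eth0 and takes this host's real IP as an argument:

```shell
#!/bin/sh
# Sketch: DR-mode real-server setup, condensing the steps above.
# Assumption: VIP and interface are fixed; pass this host's RIP as $1.
VIP=172.25.12.100
RIP=$1                                    # e.g. 172.25.12.2 on server2
ip addr add $VIP/32 dev eth0              # hold the VIP locally
arptables -A IN -d $VIP -j DROP           # never answer ARP for the VIP
arptables -A OUT -s $VIP -j mangle --mangle-ip-s $RIP   # advertise the RIP instead
/etc/init.d/arptables_jf save             # persist the rules
```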
server3 host (RS: Real Server)
1. Install httpd and arptables_jf
2. Create the default HTTP document /var/www/html/index.html
[root@server3 ~]# curl localhost
<h1>server-3</h1>
3. Configure the virtual IP (VIP)
[root@server3 ~]# ip addr add 172.25.12.100/32 dev eth0
[root@server3 ~]# arptables -A IN -d 172.25.12.100 -j DROP
[root@server3 ~]# arptables -A OUT -s 172.25.12.100 -j mangle --mangle-ip-s 172.25.12.3
[root@server3 ~]# /etc/init.d/arptables_jf save
Saving current rules to /etc/sysconfig/arptables: [ OK ]
Physical host (client)
1. Requests to 172.25.12.100 are load-balanced
[kiosk@foundation12 rhel6.5]$ curl 172.25.12.100
<h1>server-3</h1>
[kiosk@foundation12 rhel6.5]$ curl 172.25.12.100
<h1>server-2</h1>
[kiosk@foundation12 rhel6.5]$ curl 172.25.12.100
<h1>server-3</h1>
[kiosk@foundation12 rhel6.5]$ curl 172.25.12.100
<h1>server-2</h1>
2. Check the MAC address of 172.25.12.100 (it is server1's)
- This confirms LVS is working: the floating IP load-balances, and traffic flows client -> VS -> RS -> client
[kiosk@foundation12 rhel6.5]$ arp -an | grep 100
? (172.25.12.100) at 52:54:00:8d:99:6b [ether] on br0
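When re-testing after a topology change, the client may still hold a stale ARP entry for the VIP. Clearing it forces a fresh resolution (client-side commands; deletion needs root):

```shell
arp -d 172.25.12.100            # drop the cached entry (or: ip neigh flush dev br0)
ping -c1 172.25.12.100          # trigger a new ARP resolution
arp -an | grep 172.25.12.100    # confirm which MAC now answers for the VIP
```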
II. LVS Health Checking
1. Install the management software: ldirectord-3.9.5-3.1.x86_64.rpm
2. Edit the configuration file (locate it with rpm -ql ldirectord)
[root@server1 ~]# cd /etc/ha.d/
[root@server1 ha.d]# ls
resource.d shellfuncs
[root@server1 ha.d]# cp /usr/share/doc/ldirectord-3.9.5/ldirectord.cf .
[root@server1 ha.d]# ls
ldirectord.cf resource.d shellfuncs
[root@server1 ha.d]# vim ldirectord.cf
# Sample for an http virtual service
virtual=172.25.12.100:80
real=172.25.12.2:80 gate
real=172.25.12.3:80 gate
fallback=127.0.0.1:80 gate
service=http
scheduler=rr
#persistent=600
#netmask=255.255.255.255
protocol=tcp
checktype=negotiate
checkport=80
request="index.html"
# receive="Test Page"
# virtualhost=www.x.y.z
3. Clear the ipvsadm rules and start the ldirectord service
[root@server1 ha.d]# ipvsadm -C
[root@server1 ha.d]# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
[root@server1 ha.d]# /etc/init.d/ldirectord start
Starting ldirectord... success
[root@server1 ha.d]# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.25.12.100:http rr
-> server2:http Route 1 0 0
-> server3:http Route 1 0 0
4. Stop httpd on the Real Server hosts; on the Virtual Server, ldirectord switches to the fallback
[root@server1 ha.d]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.25.12.100:80 rr
-> 127.0.0.1:80 Local 1 0 1
[root@server1 ha.d]# curl localhost
This site is under maintenance, please try again later.......
5. Access 172.25.12.100 from the physical host
- Note: once the PHP module is installed, index.php is served in preference to index.html
[kiosk@foundation12 html]$ curl 172.25.12.100
This site is under maintenance, please try again later.......
III. High Availability ## two hosts acting as the Virtual Server
1. Stop the ldirectord service on server1
[root@server1 ha.d]# /etc/init.d/ldirectord stop
Stopping ldirectord... success
[root@server1 ha.d]# chkconfig ldirectord off
2. Install the scp client on server1 and server4
- yum install openssh-clients.x86_64 -y
- configure the yum repositories on server4 and install ipvsadm
3. Build keepalived from source on server1
- Note: if configure/compile fails, resolve the dependencies; install openssl-devel
[root@server1 ~]# tar zxf keepalived-1.4.3.tar.gz
[root@server1 ~]# cd keepalived-1.4.3
[root@server1 keepalived-1.4.3]# ./configure --prefix=/usr/local/keepalived --with-init=SYSV
......
[root@server1 keepalived-1.4.3]# make && make install
......
4. Configure the keepalived service
## the init script is at /usr/local/keepalived/etc/rc.d/init.d/keepalived
[root@server1 init.d]# chmod +x keepalived
## create symlinks so the service runs from the standard paths
[root@server1 local]# ln -s /usr/local/keepalived/etc/keepalived/ /etc
[root@server1 local]# ln -s /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
[root@server1 local]# ln -s /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d/
[root@server1 local]# ln -s /usr/local/keepalived/sbin/keepalived /sbin/
[root@server1 init.d]# vim /etc/keepalived/keepalived.conf
## edits to the configuration file
global_defs {
notification_email {
root@localhost
}
notification_email_from keepalived@localhost
smtp_server 127.0.0.1 ## loopback address
smtp_connect_timeout 30 ## SMTP connection timeout
router_id LVS_DEVEL
vrrp_skip_check_adv_addr
# vrrp_strict ## strict mode installs firewall rules; leave it commented out
vrrp_garp_interval 0
vrrp_gna_interval 0
}
vrrp_instance VI_1 {
state MASTER ## hot-standby HA; MASTER is the primary node
interface eth0 ## interface carrying the VIP
virtual_router_id 12 ## VRRP virtual router ID (must match on both nodes)
priority 100 ## priority; the higher value wins the election
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress { ## virtual IP
172.25.12.100
}
}
virtual_server 172.25.12.100 80 {
delay_loop 3
lb_algo rr ## load-balancing algorithm
lb_kind DR ## DR forwarding mode
# persistence_timeout 50 ## session persistence timeout (disabled here)
protocol TCP
real_server 172.25.12.2 80 {
weight 1 ## weight
SSL_GET { ## note: HTTP_GET or TCP_CHECK is the more natural checker for plain HTTP on port 80
connect_timeout 3
retry 3
delay_before_retry 3
}
}
real_server 172.25.12.3 80 {
weight 1
SSL_GET {
connect_timeout 3
retry 3
delay_before_retry 3
}
}
}
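After keepalived starts, only the node currently in MASTER state should own the VIP. A quick check on either node (assuming syslog writes to /var/log/messages, the RHEL 6 default):

```shell
# The VIP appears only on the node currently holding MASTER state
ip addr show dev eth0 | grep 172.25.12.100
# VRRP state transitions are logged via syslog
grep -i keepalived /var/log/messages | tail
```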
5. Configure keepalived on server4
- From server1: scp -r /etc/keepalived/keepalived.conf server4:/etc/keepalived/keepalived.conf
- Mainly change the following section:
vrrp_instance VI_1 {
state BACKUP ## hot standby; this is the backup node
interface eth0
virtual_router_id 12
priority 50 ## lower priority than the master
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
172.25.12.100
}
}
6. Bring up high availability and health checking
On server1 and server4:
- install mailx
- start ipvsadm
- load keepalived (reload)
On server2 and server3:
- httpd running normally
- default document in place
7. Testing from the physical host
- To test failover, you can delete the VIP, stop the keepalived service, stop the network service, or crash the kernel
- Note: if the VIP is deleted by hand, keepalived does not restore it and the service fails
[kiosk@foundation12 Desktop]$ curl 172.25.12.100
<h1>server-2</h1>
[kiosk@foundation12 Desktop]$ curl 172.25.12.100
<h1>server-3</h1>
[kiosk@foundation12 Desktop]$ curl 172.25.12.100
<h1>server-2</h1>
[kiosk@foundation12 Desktop]$ curl 172.25.12.100
<h1>server-3</h1>
- With keepalived running on server1: (server1 has the higher priority, so server1's MAC address is shown)
- With keepalived stopped on server1: (the VIP fails over to server4, and server4's MAC address is shown)
- With keepalived running on server1 again: (server1's higher priority takes the VIP back)
## with keepalived running on server1:
[kiosk@foundation12 Desktop]$ arp -an | grep 100
? (172.25.12.100) at 52:54:00:8d:99:6b [ether] on br0
## with keepalived stopped on server1:
[kiosk@foundation12 Desktop]$ arp -an | grep 100
? (172.25.12.100) at 52:54:00:ee:c4:fb [ether] on br0
## with keepalived started again on server1:
[kiosk@foundation12 Desktop]$ arp -an | grep 100
? (172.25.12.100) at 52:54:00:8d:99:6b [ether] on br0
[kiosk@foundation12 Desktop]$ curl 172.25.12.100
<h1>server-2</h1>
[kiosk@foundation12 Desktop]$ curl 172.25.12.100
<h1>server-3</h1>
[kiosk@foundation12 Desktop]$ curl 172.25.12.100
<h1>server-2</h1>
[kiosk@foundation12 Desktop]$ curl 172.25.12.100
<h1>server-3</h1>
IV. NAT Mode
1. server1 host (VS)
## Note: load the NAT module first: modprobe iptable_nat
1. Set the virtual IP: ## note: the VIP must be in the same VLAN as the clients
[root@server1 ~]# ip addr add 172.25.0.12/24 dev eth0
2. Set the ipvsadm rules:
[root@server1 ~]# ipvsadm -A -t 172.25.0.12:80 -s rr
[root@server1 ~]# ipvsadm -a -t 172.25.0.12:80 -r 172.25.12.2:80 -m
[root@server1 ~]# ipvsadm -a -t 172.25.0.12:80 -r 172.25.12.3:80 -m
[root@server1 ~]# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.25.0.12:http rr
-> server2:http Masq 1 0 0
-> server3:http Masq 1 0 0
[root@server1 ~]# service ipvsadm save
ipvsadm: Saving IPVS table to /etc/sysconfig/ipvsadm: [ OK ]
Note: set the default gateway on the real servers and enable kernel IP forwarding on the VS
[root@server1 ~]# route -v
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
172.25.12.0 * 255.255.255.0 U 0 0 0 eth0
172.25.254.0 * 255.255.255.0 U 0 0 0 eth0
link-local * 255.255.0.0 U 1002 0 0 eth0
default server1 0.0.0.0 UG 0 0 0 eth0
[root@server1 ~]# sysctl -a | grep ip_forward
net.ipv4.ip_forward = 1
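If ip_forward still shows 0, it can be enabled at runtime and persisted. A sketch assuming the stock /etc/sysctl.conf already contains a net.ipv4.ip_forward line, as RHEL 6's does:

```shell
sysctl -w net.ipv4.ip_forward=1          # enable forwarding immediately
# persist across reboots (assumes the key already exists in sysctl.conf)
sed -i 's/^net.ipv4.ip_forward.*/net.ipv4.ip_forward = 1/' /etc/sysctl.conf
sysctl -p                                # reload and confirm the value
```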
2. server2 host (RS1)
Set the default gateway (to server1)
[root@server2 ~]# route -v
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
172.25.12.0 * 255.255.255.0 U 0 0 0 eth0
link-local * 255.255.0.0 U 1002 0 0 eth0
default server1 0.0.0.0 UG 0 0 0 eth0
3. server3 host (RS2)
Set the default gateway (to server1)
[root@server3 ~]# route -v
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
172.25.12.0 * 255.255.255.0 U 0 0 0 eth0
link-local * 255.255.0.0 U 1002 0 0 eth0
default server1 0.0.0.0 UG 0 0 0 eth0
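If the default route is missing on a real server, it must point at the director's inner address so replies flow back through the VS to be un-NATted. A sketch; the address 172.25.12.1 for server1 is an assumption from the host numbering above:

```shell
route add default gw 172.25.12.1   # assumption: server1's inner IP is 172.25.12.1
route -n | grep '^0.0.0.0'         # verify the default route took effect
```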
4. Test from the Test host
## Note: the Test host must be in the same VLAN as 172.25.0.12
## httpd is not started at boot on the real servers; start the httpd service first
[root@server4 ~]# curl 172.25.0.12
<h1>server-3</h1>
[root@server4 ~]# curl 172.25.0.12
<h1>server-2</h1>
[root@server4 ~]# curl 172.25.0.12
<h1>server-3</h1>
[root@server4 ~]# curl 172.25.0.12
<h1>server-2</h1>
V. TUN Mode
1. server1 host (VS)
1. Load the IPIP module (creates the tunl0 tunnel device)
[root@server1 ~]# modprobe ipip
2. Add the VIP and bring tunl0 up
[root@server1 ~]# ip addr add 172.25.12.100/24 dev tunl0
[root@server1 ~]# ip link set up tunl0
3. Set the ipvsadm rules
[root@server1 ~]# ipvsadm -C
[root@server1 ~]# ipvsadm -A -t 172.25.12.100:80 -s rr
[root@server1 ~]# ipvsadm -a -t 172.25.12.100:80 -r 172.25.12.2:80 -i
[root@server1 ~]# ipvsadm -a -t 172.25.12.100:80 -r 172.25.12.3:80 -i
[root@server1 ~]# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.25.12.100:http rr
-> server2:http Tunnel 1 0 0
-> server3:http Tunnel 1 0 0
4. TUN characteristic: reply packets return directly to the client; route the VIP through tunl0
[root@server1 ~]# route add -host 172.25.12.100 dev tunl0
2. server2 host (RS1)
1. Load the tunnel module, bring up tunl0, add the VIP, and route it for the direct return path
[root@server2 ~]# modprobe ipip
[root@server2 ~]# ip link set up tunl0
[root@server2 ~]# ip addr add 172.25.12.100/24 dev tunl0
[root@server2 ~]# route add -host 172.25.12.100 dev tunl0
2. Disable rp_filter for tunl0
[root@server2 ~]# sysctl -a | grep rp_filter
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.all.arp_filter = 0
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.default.arp_filter = 0
net.ipv4.conf.tunl0.rp_filter = 0 ## must be 0
net.ipv4.conf.tunl0.arp_filter = 0
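If any of these values show 1, switch them off at runtime. Note the kernel applies the stricter of the `all` and per-interface settings, so both must be 0:

```shell
sysctl -w net.ipv4.conf.all.rp_filter=0       # global setting; the max of this
sysctl -w net.ipv4.conf.tunl0.rp_filter=0     # and the per-interface value applies
sysctl -w net.ipv4.conf.default.rp_filter=0   # covers interfaces created later
```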
3. Set the arptables rules
[root@server2 ~]# arptables -L
Chain IN (policy ACCEPT)
target source-ip destination-ip source-hw destination-hw hlen op hrd pro
DROP anywhere 172.25.12.100 anywhere anywhere any any any any
Chain OUT (policy ACCEPT)
target source-ip destination-ip source-hw destination-hw hlen op hrd pro
mangle 172.25.12.100 anywhere anywhere anywhere any any any any --mangle-ip-s server2
Chain FORWARD (policy ACCEPT)
target source-ip destination-ip source-hw destination-hw hlen op hrd pro
3. server3 host (RS2)
- configured identically to server2
4. Access 172.25.12.100 from the Test host:
[root@foundation12 images]# curl 172.25.12.100
<h1>server-2</h1>
[root@foundation12 images]# curl 172.25.12.100
<h1>server-3</h1>
[root@foundation12 images]# curl 172.25.12.100
<h1>server-2</h1>
[root@foundation12 images]# curl 172.25.12.100
<h1>server-3</h1>
VI. FULLNAT Mode
Kernel build
- Note: FULLNAT mode requires patching and recompiling the kernel
- Give the VM plenty of memory; 2048 MB is enough for this experiment
1. Configure the LoadBalancer yum repository (to install ipvsadm)
2. Download kernel-2.6.32-220.23.1.el6.src.rpm and Lvs-fullnat-synproxy.tar.gz
3. Install rpm-build
4. Install the source RPM: rpm -ivh kernel-2.6.32-220.23.1.el6.src.rpm
5. Change to ~/rpmbuild/SPECS and run the prep stage
- rpmbuild -bp kernel.spec
6. Resolve all the build dependencies with yum install
- download three extra packages and install them with yum: asciidoc-8.4.5-4.1.el6.noarch.rpm newt-devel-0.52.11-3.el6.x86_64.rpm slang-devel-2.2.1-1.el6.x86_64.rpm
7. The build waits on random data for key generation, which is very slow; rngd speeds it up:
[kiosk@foundation12 Desktop]$ ssh [email protected]
root@172.25.12.5's password:
Last login: Fri Jun 22 09:56:19 2018 from 172.25.12.250
[root@server5 ~]# yum provides */rngd
......
[root@server5 ~]# yum install -y rng-tools-2-13.el6_2.x86_64
......
[root@server5 ~]# rngd -r /dev/urandom
8. Unpack Lvs-fullnat-synproxy.tar.gz and apply the tools patch
cd ~/lvs-fullnat-synproxy
cp ~/lvs-fullnat-synproxy/lvs-tools-2.6.32-220.23.1.el6.patch ~/rpmbuild/BUILD/kernel-2.6.32-220.23.1.el6/linux-2.6.32-220.23.1.el6.x86_64/
cd ~/rpmbuild/BUILD/kernel-2.6.32-220.23.1.el6/linux-2.6.32-220.23.1.el6.x86_64/
patch -p1 < lvs-tools-2.6.32-220.23.1.el6.patch
vim Makefile (append the version suffix to EXTRAVERSION: -220.23.1.el6.x86_64)
9. Build and install
- make ## compile the kernel
- make modules_install ## install the modules
- make install ## install the kernel
10. Set the new kernel as the default boot entry, then reboot
- vim /boot/grub/grub.conf
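A sketch of the grub.conf change; it assumes `make install` added the new kernel as the first menu entry (index 0), which should be verified before editing:

```shell
grep ^title /boot/grub/grub.conf                         # list entries; find the new kernel
sed -i 's/^default=.*/default=0/' /boot/grub/grub.conf   # assumption: new kernel is entry 0
reboot
```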
VII. keepalived and ipvsadm for FULLNAT
1. Unpack the lvs-tools archive (note: the paths below are relative to /root)
- cd lvs-fullnat-synproxy/
- tar zxf lvs-tools.tar.gz
2. Configure keepalived against the patched kernel
- cd tools/
- cd keepalived/
- ./configure --with-kernel-dir="/lib/modules/2.6.32-220.23.1.el6.x86_64/build/"
3. Resolve dependencies
- openssl-devel, popt-devel
4. Build and install (the same for ipvsadm)
- make && make install
5. After installing ipvsadm, check the available LVS forwarding modes
- ipvsadm --help
- forwarding modes: -b (FULLNAT) | -m (NAT) | -g (DR) | -i (TUN)
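With the patched tools installed, a FULLNAT virtual service is declared like the NAT/DR examples but with -b. Note the FULLNAT patch also requires configuring a pool of local addresses for the director-to-RS leg; the exact laddr options are patch-specific, so consult `ipvsadm --help`. A sketch:

```shell
ipvsadm -A -t 172.25.12.100:80 -s rr                   # virtual service, round-robin
ipvsadm -a -t 172.25.12.100:80 -r 172.25.12.2:80 -b   # -b: FULLNAT forwarding
ipvsadm -a -t 172.25.12.100:80 -r 172.25.12.3:80 -b
ipvsadm -ln                                            # verify the FullNat entries
```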