LVS + OSPF IPv6 Deployment

 【Background】

        This year IPv6 is back on the agenda, with the Ministry of Industry and Information Technology pushing hard for adoption. The requirement we received is roughly this: at the start of Q4 it was only that the main business functions run smoothly in an IPv6 test environment, but it suddenly became a gray release of at least 10% of traffic to the IPv6 environment.

        Deploying for IPv6 turned out to be frustrating. There is very little information online about how the various environments should be set up, and we stepped into quite a few pits along the way, so here is a brief record.


【Deployment】



          lb-01                  lb-02                  rs-01                  rs-02                  vip
ipv4      10.1.1.111             10.1.1.112             10.1.1.113             10.1.1.114             10.21.5.7
ipv6      240e:45e:1111:1ff::1   240e:45e:1111:1ff::2   240e:45e:1111:1ff::3   240e:45e:1111:1ff::4   240e:97d:1111:2ff::2



I. OSPF

1. Enable the ospf6d daemon

Compared with the original IPv4-only setup, the IPv6 environment additionally requires the ospf6d daemon to be enabled:

root@node-01:/etc/quagga# cat  daemons
zebra=yes
bgpd=no
ospfd=yes
ospf6d=yes
ripd=no
ripngd=no
isisd=no


2. Configure ospf6d

root@node-01:/etc/quagga#  cat /etc/quagga/ospf6d.conf
!
! Zebra configuration saved from vty
!   2019/11/21 11:55:20
!
hostname ospf6d
password zebra
log stdout
log file /var/log/quagga/ospf6d.log
!
!

interface eth0
ipv6 ospf6 hello-interval 1
ipv6 ospf6 dead-interval 3
ipv6 ospf6 priority 0
ipv6 ospf6 instance-id 0

interface lo
ipv6 ospf6 hello-interval 1
ipv6 ospf6 dead-interval 3
ipv6 ospf6 priority 0
ipv6 ospf6 instance-id 0

router ospf6
 router-id 10.1.1.111
 interface eth0 area 0.0.0.0
 interface lo area 0.0.0.0
!
line vty


Below is the IPv4 OSPF configuration for comparison, so you can see the differences between the two:

!
! Zebra configuration saved from vty
!   2019/10/15 16:51:09
!
hostname ospfd
password zebra
log stdout
log file /var/log/quagga/ospf.log
!
!

interface eth0
!
ip ospf hello-interval 1
ip ospf dead-interval 3
ip ospf priority 0

interface eth1
!
ip ospf hello-interval 1
ip ospf dead-interval 3
ip ospf priority 0

!
router ospf
ospf router-id 10.1.1.111

network 10.21.5.7/32 area 0.0.0.0
network 10.1.1.0/24 area 0.0.0.0
!
line vty


Notes:

  • For "log file", it is recommended to use a separate log path, distinct from the original IPv4 ospfd log, to make troubleshooting easier later.

  • The "router ospf" stanza becomes "router ospf6".

  • The OSPF router-id normally uses the machine's IP address (anything unique is fine).

  • The biggest difference is that neither the machine's subnet nor the VIP needs to be announced under "router ospf6"; with IPv6 you only specify which interfaces participate in the announcement.


3. Start quagga

root@node-01:/etc/quagga# /etc/init.d/quagga restart
[ ok ] Restarting quagga (via systemctl): quagga.service.


After the restart you will see an additional ospf6d process being watched:

root@node-01:/etc/quagga# ps aux|grep quagga
quagga   25820  0.0  0.0  24496   616 ?        Ss   15:15   0:00 /usr/lib/quagga/zebra --daemon -A 127.0.0.1
quagga   25824  0.0  0.0  26980  2732 ?        Ss   15:15   0:00 /usr/lib/quagga/ospfd --daemon -A 127.0.0.1
quagga   25828  0.0  0.0  24556   628 ?        Ss   15:15   0:00 /usr/lib/quagga/ospf6d --daemon -A ::1
root     25833  0.0  0.0  15428   168 ?        Ss   15:15   0:00 /usr/lib/quagga/watchquagga --daemon zebra ospfd ospf6d
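
Before telnetting in, you can also confirm that ospf6d is listening on its vty port (a quick sanity check; 2606 is the default ospf6d vty port, and with "-A ::1" it binds to the IPv6 loopback):

root@node-01:/etc/quagga# ss -6lntp | grep 2606        # ospf6d should be listening on [::1]:2606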


Telnet to the local ospf6d vty on port 2606:

root@node-01:/etc/quagga# telnet ::1 2606
Trying ::1...
Connected to ::1.
Escape character is '^]'.
Hello, this is Quagga (version 0.99.24.1).
Copyright 1996-2005 Kunihiro Ishiguro, et al.
User Access Verification
ospf6d> show ipv6 ospf6 neighbor
Neighbor ID     Pri    DeadTime  State/IfState         Duration I/F[State]
10.1.1.1   255    00:00:02   Full/DR              00:00:09 eth0[DROther]


Note: we hit a pit here. Because we run LVS in TUNNEL mode, the MTU has to be lowered. After OSPF came up, we could not establish a neighbor relationship with the switch; the switch-side logs showed that the switch interface MTU had to be set to the same value (1440 in our environment).

In the IPv4 environment, no MTU setting was needed on the switch side.
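
OSPFv3 exchanges the interface MTU in its database description packets, which is why a mismatch blocks the adjacency. A minimal sketch of lowering the LB interface MTU and checking what ospf6d sees (1440 is specific to our tunnel setup; adjust for your environment):

root@node-01:/etc/quagga# ip link set dev eth0 mtu 1440     # match the switch/tunnel-side MTU
root@node-01:/etc/quagga# telnet ::1 2606                   # then, from the ospf6d vty:
ospf6d> show ipv6 ospf6 interface eth0                      # the reported interface MTU should now be 1440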


4. Configure the VIP

Bring up the IPv6 VIP on the LB.

There are two ways to do it (see the IPv6 command references at the end of this article):

Method 1:

root@node01:/etc/quagga#  ip addr add 240E:97D:1111:2FF::2/64 dev lo:vip1 label lo:vip1


Method 2:

root@node01:/etc/quagga#  /sbin/ifconfig lo:vip3 inet6 add 240E:97D:1111:2FF::2/64
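
Either way, you can then confirm that the VIP is present on the loopback (a quick check, assuming the address was added to lo as above):

root@node01:/etc/quagga# ip -6 addr show dev lo             # the new inet6 240e:97d:1111:2ff::2/64 entry should be listed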



5. Test IPv6 VIP connectivity

root@ubuntu:/usr/local/named/etc# ping6 240e:97d:1111:2ff::2 -c 3
PING 240e:97d:1111:2ff::2(240e:97d:1111:2ff::2) 56 data bytes
64 bytes from 240e:97d:1111:2ff::2: icmp_seq=1 ttl=51 time=28.4 ms
64 bytes from 240e:97d:1111:2ff::2: icmp_seq=2 ttl=51 time=28.4 ms
64 bytes from 240e:97d:1111:2ff::2: icmp_seq=3 ttl=51 time=28.3 ms


Note:

The machine used for testing must itself have an IPv6 address, otherwise ping will return "Network is unreachable".
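
A quick way to check whether the test machine has a routable IPv6 address (a link-local fe80:: address alone is not enough):

root@ubuntu:~# ip -6 addr show scope global                 # should list at least one global IPv6 address
root@ubuntu:~# ip -6 route show default                     # and an IPv6 default route should exist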


II. Configuring LVS

1. Build and install a recent keepalived (here I built keepalived-2.0.18)

Note: it is recommended to build on Ubuntu 16.04 or later; on older releases such as 12.04 and 14.04 some of the required packages could not be found.

root@ubuntu:/usr/local/src/keepalived-2.0.18# apt-get install libnftnl-dev libmnl-dev

root@ubuntu:/usr/local/src/keepalived-2.0.18# apt-get install iptables-dev libipset-dev libnl-3-dev libnl-genl-3-dev libssl-dev

root@ubuntu:/usr/local/src/keepalived-2.0.18# ./configure --prefix=/usr/local/keepalived

root@ubuntu:/usr/local/src/keepalived-2.0.18#  make && make install
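
A quick sanity check after the install (with --prefix=/usr/local/keepalived the binaries end up under that prefix):

root@ubuntu:/usr/local/src/keepalived-2.0.18# /usr/local/keepalived/sbin/keepalived --version    # should report v2.0.18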



2. keepalived configuration

Method 1: command line (ipvsadm)

root@node-01:/etc/quagga# ipvsadm -A -t [240e:97d:2014:1ff::2]:80 -s rr
root@node-01:/etc/quagga# ipvsadm -a -t [240e:97d:2014:1ff::2]:80 -r 10.21.41.43:80 -i
root@node-01:/etc/quagga# ipvsadm -a -t [240e:97d:2014:1ff::2]:80 -r 10.21.41.44:80 -i



Method 2: keepalived configuration file

virtual_server 240e:97d:1111:2ff::2 80 {
    delay_loop 6
    lb_algo  wrr
    lb_kind TUN
    persistence_timeout 0
    protocol TCP

    real_server 240e:45e:1111:1ff::3 80 {
        weight 10
        TCP_CHECK {
            connect_port 80
            connect_timeout 8
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 240e:45e:1111:1ff::4 80 {
        weight 10
        TCP_CHECK {
            connect_port 80
            connect_timeout 8
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
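
A minimal sketch of loading this configuration, assuming the block above lives in the default config file under the install prefix (the exact path depends on your build options; adjust as needed):

root@node-01:/etc/quagga# /usr/local/keepalived/sbin/keepalived \
    -f /usr/local/keepalived/etc/keepalived/keepalived.conf \
    -D                                                      # -D logs extra detail, handy while debugging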


root@node-01:/etc/quagga# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=1048576)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  [240e:97d:1111:2ff::2]:80 rr
  -> [240e:45e:1111:1ff::3]:80    Tunnel  1      0          0         
  -> [240e:45e:1111:1ff::4]:80    Tunnel  1      0          0


3. Configure the VIP on the RS

In LVS tunnel mode, two things need to be done on each RS: bring up the VIP, and establish a tunnel to the LB.

root@node-03:~ # ip -6 tunnel add lvs6tun0 mode ip6ip6 local 240e:45e:1111:1ff::3 remote 240e:45e:1111:1ff::2 dev eth0
root@node-03:~ # ip link set dev lvs6tun0 up
root@node-03:~ # ip -6 addr add 240e:97d:1111:2ff::2/64 dev lvs6tun0


An explanation of the IPv6 tunnel creation command: since it is a point-to-point tunnel, you must specify local (the RS's own IP) and remote (the peer's IP, i.e. the LB's IP):

ip -6 tunnel add lvs6tun0 mode ip6ip6 local $rs-ip remote $lb-ip dev $interface


Point-to-point means, as the name suggests, that every pair of nodes needs its own peer: with n LB nodes and m RS nodes, you end up creating n * m tunnels in total (see the sketch after the IPv4 comparison below).


The big difference from IPv4 here is that IPv6 has no broadcast address, so a one-to-many tunnel is not possible; you can only build point-to-point tunnels over the ip6ip6 protocol. For comparison, this is how the tunnel is created in the IPv4 environment:

/sbin/ifconfig tunl0 $vip broadcast $vip netmask 255.255.255.255 up
/sbin/route add -host $vip dev tunl0
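
Since each RS needs one tunnel per LB (the n * m tunnels mentioned above), a small helper script on the RS side saves some typing. A minimal sketch for rs-01, using the LB addresses from the table at the top (adjust local/remote addresses per host; the example above binds the VIP to the tunnel device, so this sketch does the same on the first tunnel):

#!/bin/bash
# Hypothetical helper for rs-01: one ip6ip6 point-to-point tunnel per LB.
RS_IP=240e:45e:1111:1ff::3
VIP=240e:97d:1111:2ff::2
LB_IPS="240e:45e:1111:1ff::1 240e:45e:1111:1ff::2"

i=0
for lb in $LB_IPS; do
    dev=lvs6tun$i
    ip -6 tunnel add $dev mode ip6ip6 local $RS_IP remote $lb dev eth0
    ip link set dev $dev up
    i=$((i + 1))
done

# Bind the VIP as in the single-LB example above (here on the first tunnel device).
ip -6 addr add $VIP/64 dev lvs6tun0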


4. Service test

From a machine with IPv6 connectivity (not one of the cluster machines above), try accessing the service through the IPv6 VIP:

root@ubuntu:~ # for i in {0..999};do nc -6 -v -w 1 240e:97d:1111:2ff::2 80;done
Connection to 240e:97d:1111:2ff::2 80 port [tcp/http] succeeded!
Connection to 240e:97d:1111:2ff::2 80 port [tcp/http] succeeded!
Connection to 240e:97d:1111:2ff::2 80 port [tcp/http] succeeded!
... ...

root@ubuntu:~ # curl http://[240e:97d:1111:2ff::2]/ -H"Host:ipv6-test.aaa.com"
Test Page
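
Back on the LB, you can confirm that connections are actually being spread across both real servers (counters are cumulative since the virtual service was created):

root@node-01:/etc/quagga# ipvsadm -ln --stats               # Conns/InPkts should grow on both RS entries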


At this point, the test environment communicates normally.


One more note: during testing we also verified LVS NAT and DR modes; both support IPv6 natively and are nowhere near as troublesome as TUNNEL mode. In DR mode, for example, configuring the VIP on the RS is all it takes. We did not choose DR because it requires the LB and the RS to be on the same subnet, and in our production environment we cannot guarantee that. NAT performance is relatively poor; FullNAT, which some of our services use, is not particularly fast either and depends on the nf_conntrack table. So in the end we chose TUNNEL mode.



Note:

During deployment, configuring the IPv6 tunnel was the most painful part; we tried all kinds of approaches without success. The symptom was that the LB forwarded the SYN packet to the RS but never received the ACK back, and the connection on the LB stayed in SYN_RECV state (visible via ipvsadm -lnc).

It was finally solved with the point-to-point tunnel approach described above; for now we have not found a better way, and will update this if we do.

The above only gets a basic test environment working; before formally going live, various performance parameters still need to be tuned.


In fact, IPv6 involves a great deal more than what is covered above: DNS, GSLB, the business applications, the network, the CDN and so on all need corresponding changes at every layer. It is fine to experiment with in a test environment, but a formal launch still calls for caution. As far as I know, IPv6 support from many domestic carriers is still not particularly good, and according to a recent exchange with engineers at Alibaba Cloud facing the same scenario, performance drops 20%-40% when switching from IPv4 to IPv6, which will have to be recovered through continuous optimization later. There is still a long way to go.



Attachment:

1. IPv6 test sites

IPv6-only environment: http://ipv6.test-ipv6.com

Dual-stack environment: http://www.test-ipv6.com/


2. IPv6 command references

http://tldp.org/HOWTO/Linux+IPv6-HOWTO/ch06s02.html

http://tldp.org/HOWTO/Linux+IPv6-HOWTO/ch07s02.html

http://tldp.org/HOWTO/Linux+IPv6-HOWTO/ch04s03.html




Source: blog.51cto.com/pmghong/2452773