Demystifying the New Generation of Cloud Networking -- VPC Features and Their Implementation with OpenStack Neutron (Part 3): Routing and Tunnels

https://www.cnblogs.com/opsec/p/7016631.html

In the previous article of this series,

Demystifying the New Generation of Cloud Networking -- VPC Features and Their Implementation with OpenStack Neutron (Part 2),

http://www.cnblogs.com/opsec/p/6895746.html

the main topic was an important network function in multi-tenant scenarios: the principle and implementation of bandwidth QoS.

As before, an AIO-style REST API code example is given first; it can likewise be used standalone, without modifying any Neutron code.

opencloudshare/Op_vmqos

https://github.com/opencloudshare/Op_vmqos

The logic was covered in some detail in the previous article: the OpenStack API is used to find the host on which a cloud host's virtual NIC device lives, and paramiko is then used to log in to that host and write the TC rules.

When using it you can freely adjust parameters such as region and domain; the code ships with defaults.

Locating the virtual NIC device does not have to go through the VM id either; that part can be changed as needed. For example, an earlier project involved an "automatically expand bandwidth based on current network traffic" feature; in short, "when a cloud asset's bandwidth utilization reaches X, the controller automatically expands the bandwidth that asset is entitled to." That environment had no Ceilometer, so the scheme was to install the Zabbix agent on the hosts, auto-discover all NICs, monitor them, and call the QoS REST API once a threshold was triggered. The value passed at that point was the specific NIC device name rather than the VM id.

The principle for implementing Layer-3 shared bandwidth control is the same; the only difference is that the NIC device name is extracted from the L3 namespace, using the router-port related methods.
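
For a concrete picture, here is a minimal sketch of the kind of tc rules such an API might push to the host over SSH; the tap device name and the rate values are placeholders, not output of the actual code:

# hypothetical tap device of the target cloud host, located via the OpenStack API
DEV=tapXXXXXXXX
# traffic leaving the tap goes towards the VM, so this shapes the VM's download direction
tc qdisc add dev $DEV root handle 1: htb default 10
tc class add dev $DEV parent 1: classid 1:10 htb rate 100mbit ceil 100mbit
# the VM's upload direction enters the tap, so it is policed on ingress
tc qdisc add dev $DEV ingress
tc filter add dev $DEV parent ffff: protocol ip prio 1 u32 match ip src 0.0.0.0/0 police rate 100mbit burst 100k drop flowid :1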

On to today's topic: routing, tunnels, and real cases. In actual cloud scenarios, hybrid cloud makes up a large share. The usual definition of hybrid cloud is a cloud environment that combines private cloud content with parts running on a public cloud, built and operated together.

The classic example, cited again and again at the big summits, is Sina Weibo's hybrid cloud architecture practice. It involves business-level challenges in networking, storage, Docker and so on, as well as control-plane considerations such as operations management and elastic scheduling.

The "private cloud content" above can in fact be broadened to "any private IT environment." Every cloud engineer has met plenty of demands to connect IT systems built before "cloud" was even a concept with a new cloud computing environment. In private cloud projects, customers routinely require access to their existing IT facilities, rather than a siloed deployment or a wholesale migration. Attaching tenant networks directly to the external network with VLANs is not necessarily a general-purpose choice: compared with a VXLAN Layer-2 + VPC network, it is more invasive and needs more complex routing control at the gateway level. NAT, on the other hand, raises the connection one level up and is clearly perceived by users; with a large number of cloud assets and full-port requirements, managing the NAT rules and the number of IPs available on the upper network become the bottleneck. Tunneling therefore becomes the better choice.

Next, taking several cases I was involved in as examples, I will go through the options worth considering when facing complex interconnection requirements, and how OpenStack Neutron can be used to deliver them.

  • case 1
The customer has an existing traditional IT environment and now requires mutual access with the cloud assets inside one particular VPC of a newly deployed OpenStack environment.

OpenStack's neutron-vpn-agent can provide IPsec VPN service and establish a tunnel between the two routers.

In an enterprise intranet environment, the easier-to-configure GRE and IPIP tunnels can also be used.
Which protocol to use in a given case depends mainly on how well the routing facilities support it. The soft router that the l3-agent builds inside a network namespace is based on the Linux kernel, so in scenarios without extreme performance requirements it supports a wide variety of protocols. When the far end is a physical device, you have to confirm what it supports.
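
Note that when the cloud side of such a tunnel terminates on the l3-agent's soft router, the tunnel commands in the examples below are run inside that router's namespace on the network node. A minimal sketch, with the router UUID as a placeholder:

# kernel modules are loaded on the network node itself
modprobe ip_gre
# find the namespace of the VPC router and create the tunnel inside it
ip netns | grep qrouter
ip netns exec qrouter-<router-uuid> ip tunnel add mytunnel mode gre remote 10.200.11.22 local 10.100.1.11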

I will not walk through an IPsec VPN example here; the official documentation covers it.

A simple example of connecting the two sides with a software GRE tunnel:

# load the GRE module
modprobe ip_gre
# create the tunnel: name, mode, remote address, local address
ip tunnel add mytunnel mode gre remote 10.200.11.22 local 10.100.1.11
# give the tunnel interface an address; anything that does not clash with the networks being connected is fine
ifconfig mytunnel 10.199.199.1 netmask 255.255.255.0
# add the route to the peer network
ip route add 192.168.3.0/24 dev mytunnel

 

On the peer side:

# load the GRE module
modprobe ip_gre
# create the tunnel: name, mode, remote address, local address
ip tunnel add mytunnel mode gre remote 10.100.1.11 local 10.200.11.22
# give the tunnel interface an address; anything that does not clash with the networks being connected is fine
ifconfig mytunnel 10.199.199.2 netmask 255.255.255.0
# add the route to the peer network
ip route add 192.168.1.0/24 dev mytunnel

 

After that, you can verify that the network inside the OpenStack VPC and the pre-existing network on the other side can now reach each other.
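
A quick way to check, assuming (hypothetically) a cloud host in 192.168.1.0/24 and a machine at 192.168.3.10 in the existing network:

# from a cloud host inside the VPC
ping -c 3 192.168.3.10
# on the router carrying the tunnel, confirm the peer network is reached via mytunnel
ip route get 192.168.3.10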

  • case 2
The customer has an existing traditional IT environment and now requires mutual access with cloud assets inside multiple VPCs of a newly deployed OpenStack environment.

This case is an enhanced version of the previous one. If the interconnection were purely inside the OpenStack environment and under the same project, we would simply attach the several networks to the same router and, for any network whose default gateway is not that router's interface, add the corresponding route entries.

In this case, the traditional IT environment and the external network of the OpenStack environment are actually connected by a physical leased line, and the users of the multiple VPCs inside OpenStack are the enterprise's different project teams. In short, several project teams must share the same physical leased line, yet each must be able to reach its own VPC.

With each VPC's original router left unchanged (default gateway x.x.x.1), a dedicated router is created and every network of every VPC is attached to it (leased-line gateway x.x.x.99); this dedicated router is then connected to the traditional environment over the physical leased line.
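
As a hedged sketch, building that dedicated router with the legacy neutron CLI could look roughly like this; the router name, subnet IDs and port IDs are placeholders:

# create the dedicated leased-line router
neutron router-create line-router
# give it a port fixed at x.x.x.99 in each VPC network, then attach that port
neutron port-create --fixed-ip subnet_id=<vpc1-subnet-id>,ip_address=192.168.1.99 <vpc1-net>
neutron router-interface-add line-router port=<port-id>
# repeat with 192.168.2.99 and 192.168.0.99 for the other VPC networks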

A simple example of connecting the two sides with a software IPIP tunnel:

# load the IPIP module
modprobe ipip
# create the tunnel: name, mode, remote address, local address
ip tunnel add mytunnel mode ipip remote 10.200.11.22 local 10.100.1.11
# give the tunnel interface an address; anything that does not clash with the networks being connected is fine
ifconfig mytunnel 10.199.199.1 netmask 255.255.255.0
# add the route to the peer network
ip route add 192.168.3.0/24 dev mytunnel

 

On the peer side:

# load the IPIP module
modprobe ipip
# create the tunnel: name, mode, remote address, local address
ip tunnel add mytunnel mode ipip remote 10.100.1.11 local 10.200.11.22
# give the tunnel interface an address; anything that does not clash with the networks being connected is fine
ifconfig mytunnel 10.199.199.2 netmask 255.255.255.0
# add the routes to the peer networks
ip route add 192.168.1.0/24 dev mytunnel
ip route add 192.168.2.0/24 dev mytunnel
ip route add 192.168.0.0/24 dev mytunnel

 

That is not all. For the cloud assets inside the VPCs, the default gateway is still x.x.x.1, which means that, without touching the route tables inside the cloud assets themselves, packets destined for 192.168.3.0/24 will still be sent to x.x.x.1. So we need to add route entries on the routers of the three VPCs to forward those packets to x.x.x.99.

vpc1_routes: {"destination": "192.168.3.0/24", "nexthop": "192.168.1.99"}
vpc2_routes: {"destination": "192.168.3.0/24", "nexthop": "192.168.2.99"}
vpc3_routes: {"destination": "192.168.3.0/24", "nexthop": "192.168.0.99"}
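
On Neutron these extra routes can be written with router-update; one possible form with the legacy neutron client, where the router names are placeholders:

neutron router-update <vpc1-router> --routes type=dict list=true destination=192.168.3.0/24,nexthop=192.168.1.99
neutron router-update <vpc2-router> --routes type=dict list=true destination=192.168.3.0/24,nexthop=192.168.2.99
neutron router-update <vpc3-router> --routes type=dict list=true destination=192.168.3.0/24,nexthop=192.168.0.99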

 

With that, mutual access from the traditional environment to the multiple VPCs in the cloud is achieved, while the VPCs themselves still cannot reach one another.

In this case, I actually recommend not creating the leased-line router through the default L3-agent (i.e. a router in the neutron API), but building it as a cloud host instead, for the following reasons:

  1. All the internal interfaces of the original router become NICs of the cloud host, so configuration changes go through normal control of the cloud host, avoiding the risk of operating directly on the L3 node;
  2. Software/agents that satisfy complex network-monitoring requirements can be installed inside the cloud host rather than on the L3 node;
  3. Forwarding capacity can be tuned by adjusting the CPU allocation;
  4. By controlling its ports, you can manage each VPC's share of the leased-line bandwidth and protect against emergencies by cutting the connection;
  5. The cloud host can run active/standby, which makes automatic failover, recovery and migration much easier.

When building a cloud host for forwarding in an OpenStack environment, besides enabling the ip_forward kernel parameter in the guest OS, you also need to set port_security_enabled = False on that cloud host's port through Neutron. The same applies when deploying a VM-based NAT gateway.
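
A rough sketch of those two steps; the port UUID is a placeholder, and the exact option spelling may differ between client versions:

# inside the forwarding cloud host
sysctl -w net.ipv4.ip_forward=1
# on the Neutron side: clear the security groups first, then disable port security
neutron port-update <port-uuid> --no-security-groups
neutron port-update <port-uuid> --port-security-enabled=False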

  • case 3
The customer requires that a high-performance physical machine be attached to a VPC in the cloud so that resources on both sides can access each other conveniently.

Going through other zones in the north-south direction is the simplest approach.


The difficulty is that the customer does not want to expose the other north-south IP addresses of the zone where the OpenStack environment sits. In other words, they want tenants inside the VPC to reach the physical machine through an address belonging to the VPC's own network. NAT is a decent choice, and the routing-plus-tunnel schemes of the previous two cases can likewise avoid exposing the north-south IP plan of the underlying environment.
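
For the NAT option, a minimal sketch on a VM-based NAT gateway inside the VPC; 192.168.1.50 (the VPC-side address lent to the physical machine) and 10.100.1.200 (the physical machine itself) are hypothetical:

sysctl -w net.ipv4.ip_forward=1
# tenants reach the physical machine through the VPC-internal address
iptables -t nat -A PREROUTING -d 192.168.1.50 -j DNAT --to-destination 10.100.1.200
# masquerade so that replies come back through this gateway
iptables -t nat -A POSTROUTING -d 10.100.1.200 -j MASQUERADE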


But we can also exploit the capabilities of OVS: build the VPC networks that have this requirement as VLAN networks, giving a cross-server Layer-2 virtual network based on the physical VLAN switches; or keep VXLAN networking but use VXLAN-capable physical switches to extend the original architecture.
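
For the VLAN variant, a hedged sketch of creating such a network as a provider VLAN network so the physical machine can be patched into the same VLAN on the switch; the physnet name, VLAN ID and CIDR are placeholders:

neutron net-create vpc-vlan-net --provider:network_type vlan --provider:physical_network physnet1 --provider:segmentation_id 100
neutron subnet-create vpc-vlan-net 192.168.1.0/24 --name vpc-vlan-subnet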

As we know, the VTEP and the VXLAN gateway are the core components for extending a VXLAN network east-west. A hybrid overlay scheme implements the VTEP and related components with a mix of software and hardware, and so supports both virtualized environments and physical machines well.

This does, of course, require support from the physical hardware. If a software implementation such as Open vSwitch were installed on the machine itself, the east-west VXLAN gateway would inevitably be exposed in the physical machine's shell, which is certainly out of the question.

For more on Neutron's VXLAN implementation, refer to the very detailed series written by Liu (sammyliu), CTO of 云极星创 (now HNA Cloud):

Understanding Neutron (1): The Virtualized Networks Implemented by Neutron [How Neutron Virtualizes Network]

http://www.cnblogs.com/sammyliu/p/4622563.html

In scenarios where another SDN controller is present, the same effect can be achieved by loading a different driver in Neutron so that Neutron drives that controller.

In an earlier technical exchange with VMware NSX, the reference advice NSX gave for the same requirement was likewise to have NSX directly control the switch used for physical machine access.

On public clouds, the more common way of introducing cloud assets into a VPC is a combination of private DNS resolution and NAT; I will not expand on that here, as the earlier articles should let you work out roughly how it is implemented.

In enterprise intranet and leased-line environments, tunneling technologies with smaller payload overhead and better performance are the natural choice for interconnection. Over the public Internet, however, IPsec- and SSL-based tunnels must be considered for the sake of transport security.

