"Cloud Data Center Network Architecture and Technology" Reading Notes | Chapter 5 Building the Physical Network of the Data Center (Underlay Network)

5.1 Physical Networking and Basic Network

  • Key roles: Fabric, Spine, Leaf, Service Leaf, Server Leaf, and Border Leaf
  • The Fabric provides uniform any-to-any connectivity between access nodes
  • Server Leaf, Service Leaf, and Border Leaf are identical at the network forwarding level; they differ only in the devices attached to them
  • The flat Spine-Leaf structure shortens east-west forwarding paths and improves forwarding efficiency
  • The Fabric architecture scales elastically: add Leaf nodes when the number of servers grows, and add Spine nodes when Spine bandwidth is insufficient
  • Recommendations for Spine and Leaf devices:
    • Spine devices: standalone deployment is recommended, with a Leaf-to-Spine convergence (oversubscription) ratio between 1:9 and 1:2
    • Leaf devices: use M-LAG active-active deployment, or the iStack virtual-chassis (stacking) technology
  • Leaf and Spine are interconnected through Layer 3 routed interfaces, with routes exchanged via a dynamic routing protocol (OSPF or BGP)
  • ECMP is recommended for load sharing, with a hash algorithm that includes the L4 source port (the VXLAN UDP destination port is fixed at 4789, so only the source port varies per flow)
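The point about hashing on the L4 source port can be sketched in a few lines of Python. This is an illustrative model, not the actual hash algorithm of any switch: the hash function and field layout are assumptions, but it shows why VXLAN traffic needs the source port in the key — the destination port is always 4789, and the VTEP encodes inner-flow entropy into the source port.

```python
import hashlib

def ecmp_next_hop(src_ip, dst_ip, proto, l4_src, l4_dst, n_paths):
    """Pick an ECMP member by hashing the 5-tuple (illustrative hash).

    For VXLAN traffic the UDP destination port is always 4789, so the
    only varying L4 field is the source port, which the VTEP derives
    from the inner flow. A hash that ignored the L4 source port would
    polarize all VXLAN traffic between two VTEPs onto one path.
    """
    key = f"{src_ip}|{dst_ip}|{proto}|{l4_src}|{l4_dst}".encode()
    digest = hashlib.md5(key).digest()
    return int.from_bytes(digest[:4], "big") % n_paths

# Two VXLAN flows between the same VTEP pair: identical IPs and
# destination port 4789, but different UDP source ports, so they
# can hash to different Spine uplinks.
a = ecmp_next_hop("10.0.0.1", "10.0.0.2", 17, 49152, 4789, 4)
b = ecmp_next_hop("10.0.0.1", "10.0.0.2", 17, 53211, 4789, 4)
```

The same flow always hashes to the same path (no packet reordering), while distinct flows spread across the ECMP group.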

5.2 Designing the Physical Network of the Data Center

5.2.1 Routing Protocol Selection

  • OSPF is recommended by default; EBGP is recommended only when the network is large and the flexible routing-control capabilities of BGP are required
  • 1. Underlay routing with OSPF
    • Suitable for networks with fewer than 100 switches
    • Within a single Fabric:
      • All devices are placed in Area 0; OSPF neighbors are established on the addresses of the Layer 3 routed ports, and the P2P network type is recommended
    • Across multiple Fabrics:
      • If they form a single VXLAN domain, deploy the inter-Fabric interconnection devices in OSPF Area 0 and Fabric 1 and Fabric 2 in Area 1 and Area 2 respectively; the whole network runs as one OSPF process
      • If there are two VXLAN domains, deploy a separate OSPF process per Fabric, and let the interconnection devices exchange routes through BGP
    • Because BGP EVPN is the Overlay control plane, choosing OSPF for the Underlay isolates the fault domains of the Underlay and the Overlay
  • 2. Underlay routing with EBGP
    • Suitable for networks with more than 100 switches
    • For a single Fabric:
      • The Spine nodes form one AS, each group of Leaf nodes forms its own AS, and EBGP peerings are established between Leaf and Spine
    • For multiple Fabrics:
      • As within a single Fabric, the Spines form one AS and each Leaf group its own AS, with EBGP peerings between Leaf and Spine. Fabrics are interconnected through DC-interconnect Leaf nodes, which are placed in a separate AS and establish EBGP peerings with the Spine
    • The obvious disadvantage of EBGP is configuration complexity; in return, its fault domain is smaller than OSPF's and it offers rich routing-control methods
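The AS layout above (one AS for all Spines, one AS per Leaf group) can be written down as a small planning helper. This is a sketch under assumed numbering: the ASN values are illustrative private 16-bit ASNs, not a recommendation from the book.

```python
def plan_ebgp_as(n_leaf_groups, spine_as=65001, leaf_as_base=65101):
    """Per-Fabric EBGP AS plan: all Spines share one AS, and each Leaf
    group (e.g. an M-LAG pair) gets its own AS.

    ASN values are illustrative private 16-bit numbers, chosen only
    for this sketch.
    """
    plan = {"spine": spine_as}
    for i in range(n_leaf_groups):
        plan[f"leaf-group-{i + 1}"] = leaf_as_base + i
    # Every Leaf group establishes an EBGP peering with the Spine AS.
    sessions = [(plan[f"leaf-group-{i + 1}"], spine_as)
                for i in range(n_leaf_groups)]
    return plan, sessions

plan, sessions = plan_ebgp_as(3)
# plan: {'spine': 65001, 'leaf-group-1': 65101, 'leaf-group-2': 65102,
#        'leaf-group-3': 65103}
```

Because each Leaf group sits in its own AS, a misbehaving Leaf pair is contained by EBGP's AS boundary — the smaller fault domain the section mentions.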

5.2.2 Server Access Scheme Selection

  • Leaf nodes should support iStack or M-LAG; M-LAG is the stronger recommendation
  • Servers attach at 10GE or 25GE; the ratio of Leaf downstream bandwidth to upstream bandwidth is generally 1:9~1:2
  • Servers should connect via Eth-Trunk (link aggregation) to a Leaf M-LAG group
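The bandwidth ratio above is simple arithmetic over port counts and speeds. A minimal sketch, with illustrative port counts (the 48x25GE / 6x100GE figures are assumptions, not from the book):

```python
from fractions import Fraction

def leaf_ratio(down_ports, down_gbps, up_ports, up_gbps):
    """Downstream:upstream bandwidth ratio of a Leaf switch,
    as an exact fraction."""
    return Fraction(down_ports * down_gbps, up_ports * up_gbps)

# Illustrative Leaf: 48 x 25GE server-facing ports, 6 x 100GE uplinks.
# 1200 Gbit/s down vs 600 Gbit/s up -> ratio 2:1.
r = leaf_ratio(48, 25, 6, 100)  # Fraction(2, 1)
```

Plugging in the planned port counts this way is a quick check that a candidate Leaf model lands inside the recommended ratio band.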

5.2.3 Border Leaf and Service Leaf Node Design Principles

  • The Border Leaf mainly serves as the north-south gateway: it sends north-south traffic to the peer PE and receives traffic from the PE into the data center
  • The Service Leaf mainly attaches VAS (value-added service) devices such as firewalls and load balancers
  • Border Leaf and Service Leaf design needs to consider:
    • Border Leaf active-active deployment plus static routing or BGP ECMP
    • Whether to co-locate or separate the Border Leaf and Service Leaf, based on service volume and route scale
    • The number of Service Leaf groups, determined by the VAS device access requirements
  • 1. Co-location of Service Leaf and Border Leaf
    • L4~L7 devices are dual-homed to the two switches (Leafs) of the co-located node
    • After traffic reaches the Spine, it is redirected directly to the VAS device and sent back to the Spine after processing
    • Poor scalability; suitable for small and medium-sized data centers
  • 2. Separate deployment of Service Leaf and Border Leaf
    • The two kinds of Leaf can be scaled independently
    • After traffic reaches the Spine, it is sent to the independently deployed Service Leaf, then to the VAS device, and back to the Spine after processing
    • Network cost is higher, but capacity can be adjusted flexibly with business scale; suitable for enterprises with strong technical capabilities

5.2.4 Egress Network Design

  • Design considerations:
    • PEs can be deployed standalone, stacked, or made reliable through E-Trunk or VRRP
    • PE and Border Leaf are interconnected with crossed links (each PE connects to both Border Leafs)
    • Border Leafs should be deployed active-active with M-LAG, interconnecting with the PEs through Layer 3 routed interfaces or virtual interfaces (such as VBDIF or VLANIF)
    • For each VRF, the PEs and Border Leafs are connected by four L3 interfaces
  • Egress routing planning
    • The escape link and the M-LAG peer-link are deployed on separate links
    • Configure a static default route on each Border Leaf, with the next hop set to the L3 interface address of the peer PE
    • Configure specific static routes on each PE, with the destination set to the intranet service network segments and the next hop set to the L3 interface address of the opposite Border Leaf device; route aggregation is recommended
    • For the escape path, each Border Leaf configures a low-priority static default route whose next hop is the L3 interconnection interface of the opposite Border Leaf; when the static route toward the PE fails, this low-priority default route takes effect
    • Even when dynamic routing is used, the Border Leaf still configures the static default route and imports it into OSPF
    • It is also recommended to set up an L3 link between the two PEs and establish OSPF neighbors over it, serving as the escape link for the PE devices

Origin blog.csdn.net/guolianggsta/article/details/115281644