A Detailed Explanation of Cloud Network Technology in Cloud Computing

This article is shared from the Huawei Cloud Community post "Re-understanding Cloud Native Series (4): Cloud Networking, Hardware Chewed Soft", by Huang Jun, research lead of the China Merchants Securities cloud-native transformation project.

In traditional IT architecture, "the network" referred almost exclusively to physical network equipment, tangible and within reach. Communication between servers was realized over network cables or optical fiber, and most traffic control and access control policies were implemented on routers and switches. In the era of cloud computing, the network includes not only traditional physical hardware but also a large number of virtualized network devices implemented in software and running on ordinary servers. Connecting these virtual network devices requires not only the support of the underlying physical hardware, but also attention to the forwarding policies and traffic monitoring of the various overlay layers that exist purely in software. This poses an unprecedented challenge for network administrators.

Cloud network technology associated with cloud computing

In the cloud computing era, the virtualization of resources and the automation of their allocation provide users with elastically scalable, flexibly configurable computing and storage resources, supporting fast and convenient release of business systems. For the network, as the interconnecting infrastructure, the core problem urgently to be solved is how to virtualize network equipment so as to support the rapid scaling of business systems and the adaptive deployment of network communication policies, providing end-to-end, fast-responding network solutions for the business. In essence, cloud network technology abstracts the underlying physical network equipment of the data center so that network resources can be re-partitioned, consolidated, and flexibly managed at the software layer, quickly and flexibly meeting the network requirements of all kinds of business scenarios. This was the original intention of software-defined networking (SDN).

Since its inception, cloud network technology has developed around the cloud along two lines. On one hand, it has developed the virtualization of network hardware to support comprehensive virtualization of data center infrastructure; this capability is characterized by standardized, rapid networking, on-demand expansion of network resource capacity, flexible allocation of network security policies, and support for metering, billing, and full-stack monitoring. On the other hand, driven by the vigorous growth of public clouds, cloud network products and services for enterprise tenants have emerged: general-purpose network devices and functions at the machine-room level are simulated in software and offered to users on the cloud as product services.

Evolution of Cloud Network Basic Technology

As far as basic cloud network technology is concerned, much like computing virtualization, it has traveled a remarkable road of "softening" hardware.

Everything starts with the Spine-Leaf architecture

Since the telephone was invented in 1876, the telephone switching network went through several stages: manual switchboards, step-by-step switches, and crossbar switches. As the number of telephone users surged and the network scale expanded rapidly, the crossbar model could no longer meet the requirements in either capacity or cost. In 1953, Charles Clos, a researcher at Bell Labs, published a paper titled "A Study of Non-blocking Switching Networks", which introduced a method of realizing non-blocking telephone switching with multiple stages of equipment; thus the famous Clos network model was born. Its core idea is to build a complex, large-scale network out of many small-scale, low-cost forwarding units (the rectangles in the figure below): as the number of inputs and outputs grows, the number of crosspoints in the middle does not need to grow nearly as fast.

In the 1980s, with the widespread adoption of computers, computer networking also advanced rapidly, and various topologies appeared: star, chain, ring, and tree. The tree network eventually became the industry's mainstream architecture, although in early tree structures the network bandwidth converged toward the root. After 2000, as the Internet recovered from the dot-com crash, Internet giants represented by Google and Amazon rose rapidly. They began to promote cloud computing and to build large numbers of data centers (IDCs), even super-scale data centers. Facing ever-larger computing clusters, the traditional tree network could no longer cope, and an improved tree network, the Fat-Tree architecture, was born. Compared with a traditional tree, a fat tree is more like a real one: the closer to the root, the thicker the branches (that is, the higher the bandwidth). From the leaves to the root, network bandwidth does not converge. After the fat-tree architecture was introduced into the data center, data center networks developed into the traditional three-tier structure (as shown in the figure below).
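To make that crosspoint economy concrete, here is a minimal sketch (my own illustration with assumed port counts, not from the original article) comparing a single large crossbar against a three-stage Clos fabric, using Clos's strict-sense non-blocking condition m >= 2n - 1:

```python
# A toy comparison (assumed sizes): one big crossbar vs. a 3-stage Clos fabric.
def crossbar_crosspoints(n_ports: int) -> int:
    # A single crossbar needs one crosspoint per input/output pair.
    return n_ports * n_ports

def clos_crosspoints(n: int, r: int) -> int:
    """Symmetric 3-stage Clos: r edge switches, n external ports each (N = n * r).

    Strict-sense non-blocking requires m >= 2n - 1 middle switches (Clos, 1953).
    """
    m = 2 * n - 1
    ingress = r * (n * m)    # r ingress switches, each of size n x m
    middle = m * (r * r)     # m middle switches, each of size r x r
    egress = r * (m * n)     # the egress stage mirrors the ingress stage
    return ingress + middle + egress

N, n = 1024, 32
r = N // n
print(crossbar_crosspoints(N))  # 1048576
print(clos_crosspoints(n, r))   # 193536 -- far fewer crosspoints for the same N
```

With 1,024 ports, the single crossbar needs over a million crosspoints while the Clos fabric needs fewer than 200,000, and the gap keeps widening as the port count grows.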

This architecture consists of core routers, aggregation routers (sometimes called distribution routers), and access switches. For a long time, the three-tier structure was very popular in data centers. However, it has several drawbacks:

» Bandwidth is wasted: only one uplink of each access switch actually carries traffic, while the other uplinks are blocked (shown as dotted lines in the figure);

» The failure domain is large: the STP protocol must re-converge whenever the topology changes, which is failure-prone and affects the entire VLAN;

» The architecture cannot build a large Layer 2 network, so it struggles to meet the network requirements of ultra-large-scale cloud platforms and cannot support dynamic migration of virtual machines.

In response to these problems, a new data center network design was born: the Clos network-based Spine-and-Leaf architecture, commonly called a leaf-spine network. Practice has proved that this architecture can deliver high-bandwidth, low-latency, non-blocking data-center-scale connectivity. Compared with the three-tier architecture, the spine-leaf architecture is flattened into two tiers, as shown on the right side of the figure above.

The advantages of the spine-leaf architecture are obvious:

1. High bandwidth utilization: every uplink of each leaf switch carries traffic in a load-balanced manner;

2. Predictable network latency: the path between any two leaf switches crosses only one spine switch, so east-west latency is predictable;

3. Good scalability: when uplink bandwidth is insufficient, it can be increased quickly by adding spine switches; when the number of servers grows, the data center can be expanded quickly by adding leaf switches (see the sketch after this list);

4. Lower demands on individual switches: east-west traffic is spread evenly across many links, reducing the need to purchase expensive high-performance, high-bandwidth switches;

5. High availability and security: when a device fails, no re-convergence is required; traffic continues along the remaining normal paths, connectivity is unaffected, and total bandwidth drops only by the bandwidth of one path, so the performance impact is minimal.
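As a quick, hedged illustration of point 3 (port counts and speeds below are assumptions, purely illustrative): a leaf's oversubscription ratio, server-facing bandwidth divided by spine-facing bandwidth, falls as spine uplinks are added:

```python
# Illustrative leaf-switch oversubscription: server-facing vs. spine-facing bandwidth.
def oversubscription(server_ports: int, server_gbps: float,
                     uplinks: int, uplink_gbps: float) -> float:
    return (server_ports * server_gbps) / (uplinks * uplink_gbps)

# Assumed leaf: 48 x 25G server ports, one 100G uplink per spine switch.
for spines in (2, 4, 6):
    print(spines, "spines ->", oversubscription(48, 25, spines, 100), ": 1")
# 2 spines -> 6.0 : 1, 4 spines -> 3.0 : 1, 6 spines -> 2.0 : 1
```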

Thanks to these advantages, after the spine-leaf architecture emerged around 2013 it quickly displaced the traditional three-tier architecture and became the new standard for modern data centers.

Large Layer 2 Networks: The Call of the Times

In the previous section, the story of the spine-leaf architecture's emergence already pointed to the reason why large Layer 2 networks appeared.

With the rise of cloud computing, server virtualization not only increased the number of hosts in the data center dramatically; it also brought a new requirement: dynamic migration of virtual machines. When a virtual machine migrates, its services must not be interrupted, which requires not only that its IP address stay the same but also that its running state (such as TCP session state) be preserved. Dynamic migration is therefore only possible within a single Layer 2 domain and cannot cross Layer 2 domains. The traditional three-tier architecture, however, confines such migration to a small VLAN, greatly restricting its use.

To break this limitation and migrate virtual machines at large scale, even across regions, all servers that might be involved in VM migration must be placed in the same Layer 2 network domain, enabling large-scale, unobstructed migration. This is the "large Layer 2 network".

Over the past decade or so, three typical classes of solutions for building large Layer 2 networks have appeared in the industry: network device virtualization, L2-over-L3, and Overlay. Network device virtualization combines two or more redundant physical network devices into one logical device that appears as a single node in the network. Combined with link aggregation, the original multi-node, multi-link structure becomes logically a single node with a single link, and the loop problem is solved along the way. A Layer 2 network built with device virtualization plus link aggregation naturally has no loops, and its scale is limited only by the access capacity the virtual device can support: as long as the virtual device allows it, the Layer 2 network can grow as large as desired.

However, most of these device virtualization protocols are proprietary to network equipment manufacturers, so a network can only be built from one vendor's devices. Moreover, the network scale is ultimately bounded by the stacking system itself: the largest stacking/clustering systems support roughly 10,000 to 20,000 hosts, which is still far from enough for a super-large data center at the 100,000-host level.

The focus of the L2-over-L3 solutions is not to eliminate or block loops, but to avoid loops in the logical forwarding paths even when physical loops exist. By inserting an extra header in front of the Layer 2 frame and using routing computation to control forwarding across the whole network, these schemes both prevent broadcast storms over redundant links and implement ECMP (equal-cost multi-path routing), so the Layer 2 network can extend across the entire fabric without being limited by the number of core switches. Of course, this requires switches to abandon traditional MAC-based Layer 2 forwarding and adopt new protocol mechanisms, such as TRILL, FabricPath, and SPB. Cisco, Huawei, Broadcom, and Juniper backed TRILL; Avaya and ALU were in the SPB camp; vendors such as HP supported both families. Broadly speaking, TRILL and SPB were the large Layer 2 solutions promoted by CT (telecom equipment) vendors, which left the IT vendors working on server virtualization on the sidelines, their influence over the network greatly weakened.
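As a minimal illustration of the ECMP idea mentioned above (my own sketch, not any vendor's algorithm): hashing a flow's 5-tuple onto one of several equal-cost links keeps each flow on a single path, preserving packet order, while different flows spread across all links:

```python
# Toy ECMP path selection: hash the 5-tuple, pick one of n equal-cost uplinks.
import zlib

def ecmp_pick(src_ip, dst_ip, proto, src_port, dst_port, n_paths: int) -> int:
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    # Real switches use vendor-specific hardware hash functions; CRC32 stands in here.
    return zlib.crc32(key) % n_paths

print(ecmp_pick("10.0.0.1", "10.0.1.9", "tcp", 40312, 443, n_paths=4))
```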

However, with the rapid development of cloud computing, IT vendors were not going to sit still. The Overlay solution is the large Layer 2 approach promoted mainly by IT vendors working on server virtualization. Its principle is tunnel encapsulation: the original Layer 2 frame sent by the source host is encapsulated, transmitted transparently across the existing network, then decapsulated back into the original frame at the destination and forwarded to the target host, thereby achieving Layer 2 communication between hosts. Encapsulation and decapsulation are performed by the virtual switch (vSwitch) inside the server, while the external network only performs ordinary Layer 2 switching and Layer 3 forwarding on the encapsulated packets, so control over the network technology returned to the hands of IT vendors. Typical implementations include VXLAN, NVGRE, and STT. Of course, CT vendors now also participate actively in the Overlay approach; cloud computing is, after all, the general trend. Hence today's VXLAN and NVGRE implementations can also place the Overlay network's access points on network devices such as TOR switches, letting the access device perform the VXLAN/NVGRE encapsulation. The benefits are:

» On one hand, hardware network devices far outperform software vSwitches, so performing encapsulation/decapsulation on TOR-class devices naturally yields better overall cloud network performance.

» On the other hand, placing Overlay access points on the TOR also makes it easy to bring non-virtualized servers into the Overlay network, which is essentially how mainstream cloud vendors implement their bare-metal offerings.

As a result, CT vendors and IT vendors reached a harmonious, win-win position in the large Layer 2 arena, and the Overlay solution has become today's mainstream approach to large Layer 2 networks.

Underlay and Overlay hold up the sky of SDN

As noted in the previous section, Overlay technology is essentially tunnel encapsulation, with VXLAN and NVGRE as the mainstream protocols. The basic principle is to encapsulate Layer 2 frames in tunnels so that they are carried transparently across the existing network and decapsulated back into the original frames at the destination, which amounts to overlaying a large Layer 2 network on top of the existing one. The underlying layer becomes the Underlay, a bearer network made up of physical devices such as TOR switches, aggregation switches, core switches, load balancers, and firewalls. This separates logical networking from physical networking. In the Huawei Cloud and Alibaba Cloud solutions, for example, VXLAN is used to build the Overlay: service packets run on the VXLAN Overlay network, layered on top of, and decoupled from, the physical bearer network.
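As a minimal sketch of this encapsulation (assuming a Python environment with scapy installed; all addresses and the VNI are made up), the tenant's original Layer 2 frame simply becomes the payload of an outer UDP packet on the Underlay:

```python
# Build a VXLAN-encapsulated packet with scapy to show the layering.
from scapy.all import Ether, IP, UDP
from scapy.layers.vxlan import VXLAN

inner = (Ether(src="02:00:00:00:00:01", dst="02:00:00:00:00:02") /
         IP(src="192.168.1.10", dst="192.168.1.20"))   # tenant's original frame

outer = (Ether() /
         IP(src="10.0.0.1", dst="10.0.0.2") /          # VTEP addresses on the Underlay
         UDP(sport=49152, dport=4789) /                # 4789 = IANA-assigned VXLAN port
         VXLAN(vni=5001) /                             # 24-bit tenant segment ID
         inner)

outer.show()  # the Underlay only ever routes the outer Ether/IP/UDP headers
```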

As for today's popular SDN technology, Underlay and Overlay have become indispensable parts of it, from concept all the way to practice.

In data center or data center interconnection (DCI) scenarios, the operator first builds the underlying Underlay skeleton, interconnecting all hardware units (servers, storage devices, monitoring devices, and so on) through switches and routers. Within this Underlay, IP routing protocols such as IS-IS or BGP guarantee reachability between all network hardware units. Once the Underlay is complete, administrators can deploy SDN controllers to regulate and orchestrate all hardware resources across the network, and then establish Overlay connections between tenants to carry real user traffic.


SDN technology "softly" opens up closed hardware

Software-Defined Networking (SDN) is a reconstruction of the traditional network architecture. Its core idea is to separate the control plane from the data plane: management authority over the network is handed to controller software at the control layer, which issues instructions to the forwarding devices of the data layer through a unified channel such as the OpenFlow protocol. Network control and data forwarding are thus fully decoupled, breaking the closed nature of traditional network equipment in one stroke.

Thanks to the openness of the OpenFlow protocol, third parties can also develop their own applications and place them at the control layer, which makes the allocation of network resources far more flexible and better suited to each enterprise's particular scenarios. Network administrators need only issue instructions to data-layer devices through the controller, instead of logging into devices one by one, greatly improving management efficiency and saving labor costs. It is fair to say that SDN has greatly accelerated network virtualization, and its architectural ideas have been fully realized in today's major cloud platforms. Three mainstream implementations of SDN have appeared: the open-source line led by the OpenFlow organization (backed by companies such as Google, IBM, and Citrix), Cisco's Application Centric Infrastructure (ACI), and VMware's NSX. Of these, open-source OpenFlow has clearly become the de facto industry standard.
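To illustrate the match/action model that OpenFlow standardizes, here is a minimal, self-contained sketch (my own simplification, not the OpenFlow wire protocol): the controller installs prioritized match/action entries, and the switch's only job is to look packets up against them:

```python
# Toy model of SDN's control/data plane split: controller installs flow
# entries; the switch does pure table lookups and never computes paths itself.
from dataclasses import dataclass, field

@dataclass
class FlowEntry:
    match: dict              # e.g. {"dst_ip": "10.0.1.5"}
    action: str              # e.g. "output:3" or "drop"
    priority: int = 0

@dataclass
class Switch:
    table: list = field(default_factory=list)

    def install(self, entry: FlowEntry):          # pushed down by the controller
        self.table.append(entry)
        self.table.sort(key=lambda e: -e.priority)

    def forward(self, packet: dict) -> str:       # pure data-plane lookup
        for e in self.table:
            if all(packet.get(k) == v for k, v in e.match.items()):
                return e.action
        return "send-to-controller"               # table miss

sw = Switch()
sw.install(FlowEntry({"dst_ip": "10.0.1.5"}, "output:3", priority=10))
print(sw.forward({"dst_ip": "10.0.1.5"}))   # output:3
print(sw.forward({"dst_ip": "10.0.9.9"}))   # send-to-controller (table miss)
```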


The value of SDN

Supporting Rapid Innovation of Network Services

The programmability and openness of SDN make it possible to develop new network services quickly and accelerate service innovation. To deploy a new service on the network, one can simply modify the SDN software, enabling fast network programming and fast service rollout. Under the SDN architecture, the underlying hardware need only focus on forwarding and storage capacity, fully decoupled from service features; the types and functions of network devices are determined by software configuration, and the operation and control of the network are completed by control-layer servers. The network therefore responds to the business much faster, and parameters such as routing, security policies, and QoS can be customized and pushed to the network in real time. In this way, the rollout cycle of a new service can shrink from the several years typical of traditional networks to a few months or even less.

Simplify network protocol structure

The SDN architecture simplifies the network and can eliminate many IETF protocols. Fewer protocols means a lower learning threshold for operations staff, lower operational complexity, and faster service deployment. Because the network is centrally controlled, many protocols inside an SDN-controlled network become largely unnecessary, for example RSVP, LDP, MBGP, and PIM multicast. Path computation and setup inside the network are performed entirely by the controller, which calculates the flow tables and pushes them directly down to the forwarders; no other protocols are needed.

White-boxing of network equipment

Under the SDN architecture, as the interface protocol between controller and forwarder becomes standardized (for example, OpenFlow), white-box network equipment becomes possible, giving rise to dedicated OpenFlow forwarding-chip suppliers, controller vendors, and so on; the industry moves from vertically integrated development to horizontal integration. Vertical integration means one vendor delivers the whole product, from software to hardware to services. Horizontal integration means division of labor: each vendor builds one component of the product, and an integrator assembles them for sale. Horizontal division of labor lets each part of the system evolve independently, iterate and optimize rapidly, and promotes healthy competition, driving down the unit price of each component and ultimately greatly reducing the purchase cost of the whole product.

Business automation

Under the SDN architecture, since the entire network is controlled by the SDN controller, the controller can complete network service provisioning on its own and offer all kinds of customized network services, such as L2VPN and L3VPN, shielding users from the network's internal details and providing automated network services.

Intelligent optimization of network traffic paths

In traditional networks, path selection is based on the "optimal" paths computed by the various routing protocols, but this can congest the "optimal" path while the bandwidth of "sub-optimal" paths sits idle. With the SDN architecture, the controller can intelligently adjust traffic paths according to the state of each link in the network and improve overall transmission efficiency from a whole-network perspective.
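A minimal sketch of this idea (my own illustration, not a production traffic-engineering algorithm): give each link a cost that rises with its current utilization, then let the controller's global shortest-path computation steer new flows around hot links:

```python
# Utilization-aware path selection, as a controller with a global view could do it.
import heapq

def best_path(links: dict, src: str, dst: str):
    """links: {(u, v): utilization in [0, 1)}; congested links cost more."""
    graph = {}
    for (u, v), util in links.items():
        cost = 1.0 / max(1e-6, 1.0 - util)
        graph.setdefault(u, []).append((v, cost))
        graph.setdefault(v, []).append((u, cost))
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:                                   # plain Dijkstra
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, c in graph.get(u, []):
            if d + c < dist.get(v, float("inf")):
                dist[v], prev[v] = d + c, u
                heapq.heappush(heap, (d + c, v))
    path, node = [dst], dst
    while node != src:                            # walk predecessors back to src
        node = prev[node]
        path.append(node)
    return path[::-1]

links = {("leaf1", "spine1"): 0.9,   # hot link: 90% utilized
         ("leaf1", "spine2"): 0.2,
         ("spine1", "leaf2"): 0.1,
         ("spine2", "leaf2"): 0.3}
print(best_path(links, "leaf1", "leaf2"))   # ['leaf1', 'spine2', 'leaf2']
```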

SDN catalyzed a whirlwind of network virtualization technology

Next, let us look at the rapid development of network virtualization technology at three levels: network device virtualization, link virtualization, and virtual networks.

Network Device Virtualization

Network device virtualization is the fastest-growing and most influential area of cloud network technology. It mainly covers two directions: NIC (network interface card) virtualization and the virtualization of conventional network devices.

1. NIC virtualization

The evolution of NIC virtualization has essentially followed the same route as the I/O virtualization described in the article "Re-understanding Cloud Native (3): Computing, Hardware Chewed Soft": from pure software virtualization (the fully emulated NICs of QEMU and VMware Workstation), to paravirtualization (the Virtio, Vhost-net, and Vhost-user family of solutions), then to hardware passthrough (SR-IOV), and finally to hardware offloading (vDPA, DPU, and the like).

The product form of NIC virtualization is the virtual network card, ubiquitous in public clouds. It is realized mainly by software that lets multiple virtual machines share the same physical NIC; each virtual NIC can have its own MAC address and IP address. In the early structure shown in the figure below, the virtual NICs of all virtual machines connect through a virtual switch and the physical NIC to the physical switch. The virtual switch is responsible for forwarding the virtual machines' packets out of the physical port and, as needed, can also provide functions such as security controls.

Virtual NIC implementations include e1000, Virtio, and others. Virtio is currently the most common framework: it provides a general mechanism for exchanging data between virtual machines and the physical server, is supported by most hypervisors, and has become a de facto standard.

Virtio is a paravirtualization solution: the Guest OS knows it is a virtual machine and realizes I/O virtualization through the cooperation of a front-end driver and a back-end emulated device. Compared with full virtualization, this approach greatly improves the virtual machine's I/O performance.

In the original Virtio communication mechanism, the Guest communicates with hypervisor components in user space, incurring multiple data copies and CPU privilege-level switches. When the Guest sends a packet to the external network, control first traps into KVM in kernel mode, and KVM then notifies QEMU in user space to handle the Guest's network request. This communication path is clearly inefficient, and Virtio soon evolved a kernel-mode offloading solution: Vhost-net.

Vhost-net is a back-end implementation of Virtio, paired with a new Vhost protocol. The Vhost protocol allows the VMM to offload Virtio's data plane to another component, and that component is Vhost-net: implemented in the OS kernel, it communicates with the Guest directly, bypassing KVM and QEMU. QEMU and Vhost-net exchange Vhost messages via ioctl, and eventfd is used for event notification between front end and back end. When the Vhost-net kernel driver is loaded, it exposes a character device at /dev/vhost-net; QEMU opens and initializes this device and uses ioctl calls for control-plane communication with Vhost-net, covering Virtio feature negotiation, passing the virtual machine's memory mapping to Vhost-net, and so on. Compared with the original Virtio network implementation, the control plane becomes the ioctl operations defined by the Vhost protocol (to the front end it is still an interface exposed through the PCI transport layer), the shared-memory vring data path is now shared between Virtio-net and Vhost-net, and front-end/back-end notifications are likewise implemented on top of eventfd.
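As a concrete, hedged example (the image path, tap device, and MAC address are lab assumptions): when launching a KVM guest with QEMU, the difference between plain user-space Virtio and kernel offloading is essentially the vhost=on flag on the tap backend:

```python
# Launch a KVM guest whose virtio NIC data plane is handled by kernel vhost-net.
import subprocess

qemu_cmd = [
    "qemu-system-x86_64",
    "-enable-kvm", "-m", "2048",
    "-drive", "file=guest.img,format=qcow2",                   # assumed disk image
    "-netdev", "tap,id=net0,ifname=tap0,script=no,vhost=on",   # vhost=on -> kernel data path
    "-device", "virtio-net-pci,netdev=net0,mac=52:54:00:12:34:56",
]
# Blocks until the VM exits; requires KVM, /dev/vhost-net, and an existing tap0.
subprocess.run(qemu_cmd, check=True)
```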


For a single send or receive, the data path thus saves two context switches between user mode and kernel mode; the data channel is offloaded into the kernel, greatly improving transmission efficiency.

However, for communication between user-space processes, such as user-space data-plane solutions (Open vSwitch and similar SDN datapaths), the Guest still needs to exchange data with a vSwitch running in user space on the Host. With the Vhost-net solution, multiple context switches and data copies remain between the Guest and the Host. To avoid this, the industry moved Vhost-net's function from kernel space into user space; this is the idea behind the Vhost-user solution.


On top of the Vhost protocol, DPDK designed a new user-space protocol, Vhost-user, which offloads network packet processing (the data plane) to a user-space DPDK application while QEMU still performs the control-plane configuration of the Vhost-user device. In addition, DPDK introduced optimizations such as processor-affinity management, NUMA-aware scheduling, huge pages in place of ordinary memory, lock-free techniques to reduce resource contention, and poll-mode drivers. With all this, the famous DPDK network offloading solution took shape.
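A minimal sketch of how this looks operationally (assuming an OVS build compiled with DPDK support; the bridge, port, and socket names are made up): the VM's virtio device is backed by a vhost-user socket served by the user-space OVS-DPDK datapath:

```python
# Wire a VM into a user-space OVS-DPDK datapath via a vhost-user port.
import subprocess

def sh(cmd: str):
    subprocess.run(cmd.split(), check=True)

# datapath_type=netdev selects the user-space (DPDK) datapath instead of the kernel one.
sh("ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev")
sh("ovs-vsctl add-port br0 vhost-user0 -- set Interface vhost-user0 "
   "type=dpdkvhostuserclient "
   "options:vhost-server-path=/tmp/vhost-user0.sock")  # QEMU attaches to this socket
```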

Even so, technology keeps iterating. Although the solutions above markedly improve network performance, the Virtio lineage had always been implemented in software, and on the Host the extra overhead of software virtualization can never be entirely avoided. To push cloud network performance further, Red Hat's engineers proposed offloading the Virtio functions to dedicated hardware, so that work unrelated to the business bypasses the operating system and CPU and is handed directly to dedicated hardware for execution. This is the original intent of the vDPA hardware offloading solution, and it was also the early implementation idea of semi-hardware Virtio virtualization.

The vDPA framework, proposed by Red Hat, offloads the Virtio data plane to hardware while the control plane keeps the original protocol. Once the control information has been delivered to the hardware and the hardware has configured the data plane, data communication is handled entirely by the SmartNIC, and the Guest virtual machine talks to the NIC directly. Interrupts, too, are delivered by the NIC straight to the Guest without the Host's involvement. In performance this approach comes close to SR-IOV hardware passthrough, while continuing to use the standard Virtio interface, preserving the compatibility and flexibility of cloud services. Its control-plane logic, however, is relatively complex: the first packet of a flow forwarded by OVS must still be processed by the OVS forwarding plane on the host, and only subsequent packets of that flow can be forwarded directly by the hardware NIC; this logic is difficult to implement in hardware.

Nevertheless, as more and more hardware vendors began to support the Virtio protocol natively, offloading network virtualization functions into hardware and embedding general-purpose CPUs into the SmartNIC, the NIC itself became able to handle all network data while the embedded CPU takes care of control-path initialization and exception handling. This kind of SmartNIC with full hardware offload capability is the DPU that is growing ever more popular today.

2. Network device virtualization

There are two main directions in network device virtualization: one is to install a dedicated operating system on a standard x86 machine so that it performs routing functions in software; the other is to virtualize physical network devices themselves, splitting one device into several logical devices or combining several into one.

An early typical product of the former is RouterOS from MikroTik: built on the Linux kernel and installable on standard x86 machines, it lets an ordinary Linux server serve as a router. Thanks to its low price and freedom from any particular hardware platform, this kind of equipment captured much of the low-end router market. Today, mainstream cloud vendors at home and abroad generally follow the same idea in implementing their SDN control planes.

The latter arose mainly in response to the market demands of first-generation cloud computing. After computing was virtualized, the communication scale of the data center grew sharply, and the single routing table of a traditional router could no longer cope. Virtual Routing and Forwarding (VRF) technology virtualizes the routing and forwarding table (the Forwarding Information Base, FIB) into multiple independent routing/forwarding tables. The background of this technology was mainly the need to raise port utilization on large communication equipment and reduce equipment investment by virtualizing one physical device into several virtual devices, each maintaining only its own routing and forwarding table. Cisco's Nexus 7000 (N7K) series switches, for example, can be virtualized into multiple VDCs; all VDCs share the physical chassis's computing resources yet work independently without affecting one another. In addition, to ease maintenance and management, the converse approach, converged virtualization that makes multiple physical devices appear as one virtual device, also has a certain market, for example H3C's IRF technology.
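For a feel of the same idea in software (a lab sketch using standard iproute2 commands; the interface names, table ID, and addresses are assumptions), Linux itself supports VRF devices that bind interfaces to separate routing tables:

```python
# One Linux box, multiple routing tables: a minimal VRF setup via iproute2.
import subprocess

def sh(cmd: str):
    subprocess.run(cmd.split(), check=True)

sh("ip link add vrf-tenant-a type vrf table 100")   # VRF device bound to table 100
sh("ip link set vrf-tenant-a up")
sh("ip link set eth1 master vrf-tenant-a")          # enslave a port to this VRF
sh("ip route add 10.1.0.0/16 via 10.1.0.1 table 100")  # routes live only in table 100
```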

Link virtualization

Link virtualization is one of the most widely used network virtualization technologies; it enhances the reliability and convenience of the network. Common link virtualization techniques include link aggregation and tunneling protocols. These were described in detail in "Part 4: Hard and Soft Network (Part 1)", so they are only touched on briefly here.

Link aggregation (port channel) is the most common Layer 2 virtualization technique. It bundles multiple physical ports together into one logical port. When the switch detects that one of the physical links has failed, it stops sending packets on that port and, according to its load-sharing policy, selects one of the remaining physical links to carry the traffic. Link aggregation both increases link bandwidth and provides high availability at the link layer.
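A minimal lab sketch (standard iproute2 commands; the NIC names are assumptions) of link aggregation on Linux, bundling two ports into one LACP (802.3ad) bond:

```python
# Bundle two physical NICs into one logical LACP bond device.
import subprocess

def sh(cmd: str):
    subprocess.run(cmd.split(), check=True)

sh("ip link add bond0 type bond mode 802.3ad miimon 100")  # LACP, link checked every 100ms
sh("ip link set eth0 down")   # member ports must be down before enslaving
sh("ip link set eth1 down")
sh("ip link set eth0 master bond0")
sh("ip link set eth1 master bond0")
sh("ip link set bond0 up")    # bond0 now carries traffic over both links
```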

A tunneling protocol interconnects two or more subnets of one technology or protocol across a network running another technology or protocol. The tunneling protocol re-encapsulates the frames or packets of the other protocol and sends them through the tunnel; the new header provides the routing information needed to carry the encapsulated payload across the network. A tunnel can steer traffic to a specific address, hide the network addresses of intermediate nodes, and encrypt the data as needed. Typical tunneling protocols today are VXLAN, GRE, and IPsec; the network implementations of today's major cloud platforms are all built on them.
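And a minimal sketch of a tunnel (iproute2 again; all addresses are made up): a point-to-point GRE tunnel whose inner traffic is carried inside an outer IP header across any routed network in between:

```python
# Point-to-point GRE tunnel: inner traffic rides inside an outer IP header.
import subprocess

def sh(cmd: str):
    subprocess.run(cmd.split(), check=True)

sh("ip tunnel add gre1 mode gre local 203.0.113.1 remote 203.0.113.2 ttl 64")
sh("ip addr add 172.16.0.1/30 dev gre1")   # inner, private addressing on the tunnel
sh("ip link set gre1 up")                  # anything routed via gre1 is encapsulated
```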

Virtual networks

A virtual network is a network composed of virtual links: its nodes are connected not by physical cables but by specific virtualized links. Typical virtual networks include virtual Layer 2 extension networks, virtual private networks, and the Overlay networks widely used in data centers.

The virtual Layer 2 extension network can actually be regarded as an early form of Overlay network. To meet the dynamic migration requirements of virtual machines, traditional VPLS (MPLS L2VPN) technology, as well as the newer Cisco OTV and H3C EVI technologies, all use tunnels to encapsulate Layer 2 packets inside Layer 3 packets and carry them across an intermediate Layer 3 network, achieving Layer 2 interconnection between the two sites.

The Virtual Private Network (VPN) is a long-established communication method, often used to connect the private networks of medium and large enterprises or organizations. A VPN transmits intranet traffic over a public network infrastructure such as the Internet, using encrypted tunneling protocols to achieve confidentiality, endpoint authentication, and message integrity. The technology thus delivers reliable, secure communication over an insecure network, and its use in enterprises generally predates cloud computing.

Overlay networks have already been explained in detail above, so they are not repeated here.

Development of cloud network products

Overall, cloud network products have gone through three major evolutions: from the initial classic cloud network, to the private network (Virtual Private Cloud), and on to cloud networking with ever wider connectivity. The development of cloud network technology is also a microcosm of enterprise digitalization and globalization.

Classic (Basic) Cloud Networking

AWS launched its S3 and EC2 cloud services in 2006, beginning to provide cloud services to public cloud users. In 2010, Alibaba Cloud officially launched its Classic network solution, providing network support for cloud computing users, and Tencent Cloud and Huawei Cloud subsequently offered similar basic network products.

These were the early cloud network products of the public cloud. At the time, users' main demand on the cloud network was public Internet access. The defining feature of the classic cloud network is that all users on the cloud share a public resource pool: the private IP addresses of all cloud servers are allocated uniformly by the cloud vendor, and neither subnet layout nor IP addressing can be customized.

Private network VPC

From around 2011, with the growth of the mobile Internet, more and more enterprises moved to the cloud, and many new demands emerged: network security isolation on the cloud, controlled interconnection between networks, interconnection between an enterprise's self-built data center and its cloud network to form a hybrid cloud, and multi-region network interconnection for multi-region deployments. The classic network could no longer satisfy these needs, and AWS duly launched its VPC service (Virtual Private Cloud) — a logically isolated network space that an enterprise establishes on the cloud. Within a VPC, users can freely define subnets, freely assign IP addresses within them, and customize routing policies. Products such as the NAT gateway, VPN gateway, and VPC peering followed, greatly enriching the cloud network's connectivity and essentially fulfilling the original design goals of the private network solution.

Today the VPC-centered cloud network product system is still developing and growing, and supporting derivative products such as load balancers, elastic network interfaces, elastic public IPs, and cloud firewalls have appeared one after another.

Cloud networking

In recent years, with rapid economic globalization and the rise of big data and AI applications, cloud platform networks have been required to provide broader and more flexible access and distribution capabilities. This has given rise to cloud network products for scenarios such as private line access, cloud enterprise networks, cloud connect services, and cloud networking, meeting the complex interconnection needs of enterprises operating globally.

At the same time, traditional CDN distribution services, deeply integrated with the cloud network, have found a second spring: today's major cloud vendors all offer global CDN content distribution and application acceleration services. Meanwhile, the continuous improvement of SD-WAN access services is pushing the physical edge of the enterprise's information architecture out toward the whole world.

Prospects for the further evolution of cloud network technology

From the perspective of core technology evolution, the cloud network is a continuous optimization of addressing flexibility and transmission performance. In addressing, both the classic cloud network and the VPC network use Overlay technology, superimposing a multi-tenant virtual network layer on the physical network; this effectively provides secure network isolation between tenants on the cloud and flexible interconnection for applications within a tenant.

In performance, the mainstream direction of evolution is to keep improving data forwarding performance, without losing flexibility of forwarding control, by reconstructing the form of the network device itself. Taking the virtual switch as an example: as server network bandwidth climbed from 10G to 25G and even 100G, vSwitch technology went through Linux kernel-mode data switching, DPDK user-mode data switching, hardware passthrough, and SmartNIC hardware offloading (vDPA, DPU, and so on).

From a product perspective, across the three generations of cloud network products — classic network, VPC, and cloud networking — capabilities have been greatly extended: from virtualizing the network inside the data center, to interconnecting data centers and enterprises, and further to virtualizing enterprise network access itself with SD-WAN so that every access scenario is covered. Cloud network products no longer merely connect computing and storage resources in the cloud; they progressively extend off-cloud, connecting enterprise headquarters and branch networks and all kinds of terminals outside the cloud, with the goal of letting enterprises build a complete IT information service ecosystem on the cloud.

Cloud computing has now developed into its third-generation, cloud-native era. As the various virtualization technologies mature, the cloud capabilities of software-defined hardware can truly be implemented; and with the large-scale commercial use of container technology, the technical gap between business R&D and infrastructure operations will finally be broken down. The demand for full-stack observability of business applications will drive service monitoring down into the underlying network layer, giving network traffic monitoring business attributes, so that full-link business monitoring truly delivers productivity value. All of this will depend on the depth of integration and breadth of application of container network and cloud network technology — a field that will become the strategic high ground on which the major cloud vendors contend for the cloud-native era!

