Network Transmission and Network Routing

Table of contents

1. QoS requirements of different network application services

1) The emergence of QoS technology

2) QoS metrics/indicators

3) QoS requirements of different network application services

4) Three mainstream QoS service models

2. Data packet transmission and encapsulation process

3. Data center network routing transmission

1) Traditional data center network routing

2) SDN-based data center network routing architecture

3) SDN-based data center network routing algorithm

(1) Routing algorithm based on load balancing

(2) Dynamic routing algorithm

(3) Energy-saving routing algorithm


1. QoS requirements of different network application services

1) The emergence of QoS technology

With the rapid development of network technology, the IP network has evolved from a single-service data network into a multi-service network carrying data, voice, video, and games. This change also brings the following problems:

  • The volume of data carried on the network is growing exponentially, and these services place extremely high demands on network bandwidth and delay.
  • Because hardware chip development is difficult, slow, and expensive, bandwidth has gradually become the bottleneck of Internet growth, leading to network congestion, packet loss, and degraded service quality, and in severe cases to service unavailability.

Therefore, to carry these services over an IP network, the problem of network congestion must be solved. Increasing bandwidth is the most direct remedy, but bandwidth resources are limited; QoS technology therefore allocates the limited bandwidth among services in a balanced way according to their different needs, providing each service with end-to-end quality assurance.

2) QoS metrics/indicators

In the traditional sense, the factors affecting network quality include transmission link bandwidth, packet delay and jitter, and packet loss rate, as shown in the following table:

Table 1 QoS metrics

| Metric | Definition | Notes |
| --- | --- | --- |
| Bandwidth/throughput | The maximum number of data bits that can travel from the network sender to the receiver in a fixed time (1 s); or the average rate of a specific data flow between two network nodes, in bit/s | Related concepts: uplink rate and downlink rate |
| Delay | The time required for a message or packet to travel from the sending end of the network to the receiving end; it generally consists of transmission delay and processing delay | |
| Jitter | The variation in delay of packets transmitted over the same link when the network is congested, i.e., the difference between the maximum and minimum delay | Buffering can absorb excessive jitter, but it increases latency |
| Packet loss rate | The percentage of packets lost during transmission out of the total number of packets sent | |
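As a small illustration of the jitter and packet-loss definitions above, the sketch below computes both from a hypothetical list of one-way delay samples (the values are made up; `None` marks a lost packet):

```python
# Hypothetical per-packet one-way delay samples, in milliseconds.
delays_ms = [20.1, 22.4, None, 19.8, 35.0, None, 21.2]

received = [d for d in delays_ms if d is not None]

# Jitter as defined in Table 1: maximum delay minus minimum delay.
jitter_ms = max(received) - min(received)

# Packet loss rate: lost packets as a percentage of all packets sent.
loss_rate = 100.0 * (len(delays_ms) - len(received)) / len(delays_ms)

print(f"jitter = {jitter_ms:.1f} ms")    # 35.0 - 19.8 = 15.2 ms
print(f"loss rate = {loss_rate:.1f} %")  # 2 of 7 lost = 28.6 %
```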

3) QoS requirements of different network application services

Table 2 QoS requirements of different network application services

| Network application service | QoS requirements |
| --- | --- |
| Remote login (Telnet) | Delay, packet loss rate |
| Simple Mail Transfer Protocol (SMTP) | Packet loss rate |
| File Transfer Protocol (FTP) | Bandwidth/throughput |
| Remote data transmission (Telnet) | Packet loss rate |
| Real-time multimedia | Delay, packet loss rate, jitter |
| Control messages | Delay |

4) Three mainstream QoS service models

A QoS model is not a specific feature but an end-to-end QoS design scheme. International organizations such as the IETF and ITU-T have designed QoS models for the services they focus on. The three mainstream QoS models are as follows.

Table 3 Comparison of three mainstream QoS service models

| Service model | Origin | Description | Advantages | Shortcomings | Applicable services/scenarios |
| --- | --- | --- | --- | --- | --- |
| Best-Effort | One of the five QoS service types in IEEE 802.16 (WiMAX) | Devices in the network only need to ensure route reachability between networks; no additional functions are deployed | Simple; an application can send any number of packets at any time without notifying the network | No guarantees for delay, reliability, etc. | Services with low delay and reliability requirements, such as FTP and E-Mail |
| IntServ | Proposed by the IETF in RFC 1633 (1994) | Before sending, the application describes its traffic parameters to the network via Resource Reservation Protocol (RSVP) signaling; the network reserves resources (e.g., bandwidth, priority) within the described scope and promises to meet the request; after receiving confirmation, the application starts sending | Network nodes maintain per-flow state and reserve a dedicated channel for each service, providing end-to-end guarantees | Hard to implement (every node on the end-to-end path must support the model); poor resource utilization (one path serves only one flow); extra bandwidth overhead (to keep the channel reserved, RSVP periodically sends many refresh packets, preventing other flows from multiplexing the path) | |
| DiffServ | Proposed by the IETF in 1998 | Traffic in the network is divided into classes or marked with different priorities according to various conditions; when the network is congested, different classes receive different priority treatment, realizing differentiated services | No signaling; no need to request resources from the network in advance | | |
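DiffServ relies on packets being marked with a class. As an illustrative sketch (not part of the original text), an application can request such marking by setting the IP TOS byte on its socket; 46 is the standard Expedited Forwarding DSCP, shifted past the two ECN bits:

```python
import socket

# DSCP EF (Expedited Forwarding, 46) occupies the top 6 bits of the
# TOS byte, so shift left past the 2 ECN bits: 46 << 2 == 184 (0xB8).
EF_TOS = 46 << 2

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_TOS)
# Packets sent on this socket now carry the EF mark in their IP header.
print(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))  # 184
sock.close()
```

Whether routers actually honor the mark depends on the DiffServ policy configured in the network.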

2. Data packet transmission and encapsulation process

Data packet encapsulation: devices communicate by transmitting data through the network. When a device needs to send data to another device, the data is passed down from the upper layers to the lower layers, and each layer's protocol adds its own header.

Figure 1 Data encapsulation process in TCP/IP model
Table 1 Main functions of each layer of the TCP/IP model

| Layer | Function |
| --- | --- |
| Transport layer | Packages the message into data segments |
| Network layer | Writes the source and destination IP addresses into the data segment |
| Data link layer | Maps IP addresses to hardware (MAC) addresses |
| Physical layer | Processes bit streams (0s and 1s), converting them into electrical, optical, or microwave signals transmitted over coaxial cable, twisted pair, or optical fiber |
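The layer-by-layer nesting described above can be illustrated with a toy sketch; the header strings are placeholders, not real protocol headers:

```python
def encapsulate(payload: bytes) -> bytes:
    """Toy illustration of TCP/IP encapsulation: each layer prepends
    its own (fake, fixed) header. Real TCP/UDP, IP, and Ethernet
    headers are binary structures; this only shows the nesting order."""
    segment = b"TCP|" + payload   # transport layer: message -> segment
    packet = b"IP|" + segment     # network layer: adds IP addresses
    frame = b"ETH|" + packet      # data link layer: adds MAC addresses
    return frame                  # physical layer then sends the bits

print(encapsulate(b"hello"))  # b'ETH|IP|TCP|hello'
```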

1) IP packet format (in 32-bit words)

(1) Version number: 4 bits; for IPv4 the value is 0100 in binary.

(2) Header length: 4 bits, the length of the packet header in 32-bit words (i.e., the number of rows in the figure); the value is 5 when there are no options.

(3) Service type: 8 bits, divided as follows:

Table 2 Fields of the service type byte, their sizes and values

| Field | Size | Value |
| --- | --- | --- |
| Precedence field | 3 bits | Sets the importance of the packet; the larger the value, the more important the data; range 0 (routine) to 7 (network control) |
| Delay field | 1 bit | 0 (normal), 1 (low delay) |
| Throughput field | 1 bit | 0 (normal), 1 (high throughput) |
| Reliability field | 1 bit | 0 (normal), 1 (high reliability) |
| Cost field | 1 bit | 0 (normal), 1 (minimize monetary cost) |
| Reserved bit | 1 bit | Unused |

(4) Total length: 16 bits, the total length of the packet in bytes; the maximum is 65535 bytes (64 KB).
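As a sketch of fields (1)-(4), the following parses them from the first four bytes of a raw IPv4 header with Python's `struct` module; the sample header bytes are made up:

```python
import struct

def parse_ipv4_header(data: bytes) -> dict:
    """Extract the fields described above from a raw IPv4 header."""
    version_ihl, tos, total_length = struct.unpack("!BBH", data[:4])
    return {
        "version": version_ihl >> 4,      # 4 bits: 4 for IPv4
        "ihl_words": version_ihl & 0x0F,  # 4 bits: header length in 32-bit words
        "precedence": tos >> 5,           # top 3 bits of the service type byte
        "total_length": total_length,     # 16 bits: whole packet, in bytes
    }

# Option-less 20-byte header: version 4, IHL 5, precedence 0, 60-byte packet.
hdr = bytes([0x45, 0x00, 0x00, 0x3C]) + bytes(16)
print(parse_ipv4_header(hdr))
# {'version': 4, 'ihl_words': 5, 'precedence': 0, 'total_length': 60}
```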

3. Data center network routing transmission

A routing action comprises two basic parts: path finding and forwarding.

Path finding: determining the best path to the destination, or a path that meets given requirements, implemented by the routing algorithm. The routing algorithm plays a vital role in a routing protocol, and the algorithm used often determines the final routing result.

  • The routing algorithm must build and update the routing table that stores routing information; exactly which information is stored depends on the algorithm used.
  • Routing protocol: routers exchange routing-update information to maintain routing tables that correctly reflect topology changes, and determine the best path based on metrics. Examples include the Routing Information Protocol (RIP, distance-vector based), Open Shortest Path First (OSPF, link-state based), and the Border Gateway Protocol (BGP).

Forwarding: transmitting packets along the best path found by routing, implemented by routing and forwarding protocols.
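Link-state protocols such as OSPF compute least-cost paths with Dijkstra's algorithm. Below is a minimal sketch over a toy four-router topology; the topology and link costs are made up for illustration:

```python
import heapq

def dijkstra(graph, src, dst):
    """Least-cost path, as computed by link-state protocols like OSPF.
    graph: {node: {neighbor: link_cost}} -- a toy topology, not real
    router state."""
    pq = [(0, src, [src])]
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, w in graph[node].items():
            if nbr not in seen:
                heapq.heappush(pq, (cost + w, nbr, path + [nbr]))
    return float("inf"), []

topo = {  # hypothetical 4-router topology with symmetric link costs
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2, "D": 5},
    "C": {"A": 4, "B": 2, "D": 1},
    "D": {"B": 5, "C": 1},
}
print(dijkstra(topo, "A", "D"))  # (4, ['A', 'B', 'C', 'D'])
```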

1) Traditional data center network routing

Network routing in traditional networks has the following characteristics:

  • Each node computes routes in a distributed fashion based on local network state, making it difficult to realize an algorithm's best performance.
  • A node must store information about all network nodes, so routing scales poorly.

2) SDN-based data center network routing architecture

An SDN-based data center network suits environments that require flexible networking and multipath forwarding, and has the following characteristics:

  • SDN is manageable and controllable: network operators can obtain global network state in real time, including topology, links, congestion, quality of service, routing constraints, and failures; they can also centrally configure devices, push policies, and deploy new application services, enabling efficient operation and management of large-scale data center networks.
  • SDN improves network resource utilization: while guaranteeing QoS, traffic engineering can raise link bandwidth utilization and throughput inside the data center, reduce network construction cost, and satisfy fast, frequent resource scheduling and real-time configuration demands.
  • SDN can effectively manage the massive numbers of virtual machines (VMs) used in cloud computing, automating VM management; working with VM servers, it enables automatic VM deployment and rapid migration.

With its global network view, an SDN-based data center routing architecture greatly improves routing-algorithm performance and increases network throughput, mainly because:

  • The centrally controlled SDN control plane holds the full network topology, so routing-algorithm performance can be optimized without worrying about convergence;
  • The control plane maintains global network information (traffic, links, routing constraints, etc.) and computes routes according to network capacity and the service requirements of different flows, effectively managing and scheduling traffic;
  • The forwarding plane reports network information to the control plane in real time for traffic and state analysis; routes can be adjusted promptly and flow tables pushed uniformly to change paths, scheduling traffic flexibly and dynamically to optimize network performance, raise link utilization, and balance load.

3) SDN-based data center network routing algorithm

(1) Routing algorithm based on load balancing

Data center networks are high-bandwidth networks. Adding links raises bandwidth, but it also increases network cost and structural complexity. Moreover, a shortest-path routing algorithm always selects the least-cost path, concentrating traffic on the same links, so the added bandwidth cannot be used effectively.

Real-time network state is the key to effective routing. An SDN-based data center network has a global view and can obtain information such as link utilization and residual capacity, which supports load-balancing-based route computation, thereby improving link utilization and network throughput to meet high-bandwidth demands.

Load-balancing-based routing algorithms mainly exploit the network's multipath property, i.e., multiple usable paths exist between a source-destination node pair. There are two main ways to choose paths:

Table 3 Multipath transmission routing under load-balancing constraints

| Path selection method | Description | Advantages | Disadvantages |
| --- | --- | --- | --- |
| Single-path transmission routing | Among the available paths, select one relatively idle path according to network-state information and performance constraints, or select two as primary/backup, enabling the backup only after the primary fails | Lower path-lookup and installation delay; more stable routing | With a single path, the algorithm cannot react quickly to network-state changes; with primary/backup selection, the backup path may become stale, adding path-maintenance overhead on switches |
| Parallel multipath transmission routing | Select multiple paths between the source-destination pair and spread the data flow across them proportionally | Uses bandwidth effectively, improves end-to-end stability, and avoids congestion when the network state changes | Packets arrive out of order at the destination, which reduces transmission efficiency to some extent |
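The single-path variant in the table above can be sketched as a bottleneck-minimizing selection among candidate paths; the topology, paths, and utilization values below are hypothetical:

```python
def pick_least_loaded_path(paths, link_util):
    """Among candidate paths, choose the one whose busiest link has the
    lowest utilization -- a simple bottleneck-minimizing heuristic for
    single-path, load-balancing routing. Names are illustrative."""
    def bottleneck(path):
        links = zip(path, path[1:])           # consecutive hops as links
        return max(link_util[link] for link in links)
    return min(paths, key=bottleneck)

# Hypothetical utilization (fraction 0..1) per directed link.
util = {("S", "A"): 0.7, ("A", "T"): 0.6,
        ("S", "B"): 0.3, ("B", "T"): 0.4}
paths = [["S", "A", "T"], ["S", "B", "T"]]
print(pick_least_loaded_path(paths, util))  # ['S', 'B', 'T']
```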

(2) Dynamic routing algorithm

Traffic in data center networks is highly dynamic, so the network state must be monitored in real time and traffic on overloaded links switched to other paths or rerouted, so that the network achieves dynamic load balancing. This raises several questions: when to detect and trigger on overload, how to judge that a link is overloaded, and which data flow to reroute once a link is overloaded.

Table 4 Existing dynamic routing solutions

| Problem | Solution |
| --- | --- |
| When to detect and trigger on overload | Periodic triggering, threshold triggering, or a combination of the two |
| How to judge overload | 1. The link load exceeds a set fraction of link capacity (e.g., 75%); 2. A load-balancing parameter is set for the whole network |
| Which flow to reroute after a link overloads | SDN forwards per flow, so flows can be distinguished (e.g., dynamically separating throughput-sensitive large flows from delay-sensitive small flows according to flow size) |
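Combining the threshold trigger and the flow-selection idea in the table above gives a minimal sketch; all names and numbers are illustrative:

```python
def flows_to_reroute(link_load, link_capacity, flows, threshold=0.75):
    """When a link's load exceeds the threshold fraction of its capacity,
    pick the largest flow on it as the reroute candidate (large flows
    being throughput-sensitive, small ones delay-sensitive)."""
    selected = {}
    for link, load in link_load.items():
        if load / link_capacity[link] > threshold:  # threshold trigger
            on_link = [f for f in flows if link in f["links"]]
            if on_link:
                selected[link] = max(on_link, key=lambda f: f["rate"])["id"]
    return selected

load = {"L1": 8.0, "L2": 3.0}   # Gbit/s currently carried per link
cap = {"L1": 10.0, "L2": 10.0}  # Gbit/s capacity per link
flows = [{"id": "f1", "rate": 5.0, "links": {"L1"}},
         {"id": "f2", "rate": 2.0, "links": {"L1", "L2"}}]
print(flows_to_reroute(load, cap, flows))  # {'L1': 'f1'}
```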

(3) Energy-saving routing algorithm

Data center energy consumption can generally be considered at the device level and the network level. The device level focuses only on the energy consumption of individual hardware devices, while the network level requires planning the whole network, including topology selection, routing, and traffic scheduling.

In terms of network routing, the core of energy saving is: on the premise of meeting performance requirements, configure energy-saving paths as the network state changes, concentrate business traffic on a subset of main links, shut down or sleep idle network devices, and thereby reduce the number of switches and links in use.

Table 5 Existing energy-saving routing solutions

| Scheme | Description | Shortcoming |
| --- | --- | --- |
| Route selection based on network-state changes | First compute an initial path for each flow, then iteratively delete qualifying nodes and links from the path set according to some strategy, scheduling their traffic onto other paths (the strategy may consider traffic volume, link utilization, energy consumption, connectivity, etc.) | Frequent route changes harm network stability, and heavy rerouting increases the controller's computational load, the controller-switch signaling overhead, and the complexity of traffic scheduling |
| Direct computation of energy-saving paths from the topology | Compute routes directly over the whole network topology, using energy consumption as the path cost and selecting the least-cost path; routes can also be computed against existing paths so that new routes reuse them as much as possible, concentrating traffic and saving more network resources | Overly concentrated traffic easily degrades network performance; the number of nodes and links kept in service should follow actual performance requirements |
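The first scheme in the table can be sketched as greedily sleeping lightly loaded links while keeping the topology connected; this toy version ignores where the displaced traffic is rescheduled, and all values are hypothetical:

```python
def sleepable_links(nodes, links, load, threshold=0.05):
    """Greedy sketch of energy-saving pruning: put links with utilization
    below `threshold` to sleep, as long as the remaining topology stays
    connected. `links` are undirected node pairs; illustrative only."""
    def connected(active):
        adj = {n: set() for n in nodes}
        for a, b in active:
            adj[a].add(b)
            adj[b].add(a)
        seen, stack = set(), [next(iter(nodes))]
        while stack:               # depth-first reachability check
            n = stack.pop()
            if n not in seen:
                seen.add(n)
                stack.extend(adj[n] - seen)
        return seen == set(nodes)

    active = set(links)
    for link in sorted(links, key=lambda l: load[l]):  # least-loaded first
        if load[link] < threshold and connected(active - {link}):
            active.discard(link)
    return set(links) - active     # links that can be put to sleep

nodes = {"A", "B", "C"}
links = [("A", "B"), ("B", "C"), ("A", "C")]
load = {("A", "B"): 0.40, ("B", "C"): 0.02, ("A", "C"): 0.01}
print(sleepable_links(nodes, links, load))  # {('A', 'C')}
```

Note how ("B", "C") stays awake even though it is lightly loaded: sleeping it would disconnect C.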

Origin blog.csdn.net/smiling_sweety/article/details/116534226