QoS (1) Service Models

Table of Contents

1. Factors affecting network communication

2. QoS service models

2.1 Best-effort service model

2.2 Integrated service model

2.3 Differentiated Services Model

3. Comparison of the three models


With the continuous development of networks, the growth in network scale and traffic types has caused Internet traffic to surge, leading to network congestion, increased forwarding delay, and, in severe cases, packet loss, which degrades service quality or even makes services unavailable. To run real-time services on an IP network, therefore, the congestion problem must be solved. The most direct solution is to increase network bandwidth, but given the cost of network construction this is unrealistic. QoS (Quality of Service) technology was developed against this background: with limited bandwidth, it applies a "guaranteed" policy to manage network traffic so that different traffic can receive different levels of service.

Specifically, QoS means allowing user services to obtain predictable service levels in terms of bandwidth, delay, delay jitter, and packet loss rate during communication.

1. Factors affecting network communication

A traditional IP network treats all packets indiscriminately. Network devices process packets First In, First Out (FIFO), allocating forwarding resources in the order in which packets arrive. All packets share resources such as network and device bandwidth, and how much each obtains depends entirely on when it arrives. FIFO does its best to deliver packets to their destinations, but makes no promises or guarantees about delay, jitter, packet loss rate, or reliability, so the communication quality of key services (such as voice and video) cannot be guaranteed.

Network bandwidth

Network bandwidth refers to the amount of data that can be transmitted per unit of time. As shown in the figure, the maximum bandwidth of a path is determined by the minimum bandwidth along the transmission path, so the link with the smallest bandwidth is the bottleneck that limits the transmission rate.
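The bottleneck rule above can be sketched in a few lines. The link names and bandwidth figures below are hypothetical, chosen only to illustrate the calculation:

```python
# Hypothetical per-link bandwidths (Mbit/s) along one path; the names are made up.
links = {"PC-R1": 100, "R1-R2": 10, "R2-R3": 100, "R3-Server": 1000}

# The achievable end-to-end bandwidth is capped by the slowest link on the path.
bottleneck = min(links.values())
print(bottleneck)  # 10
```

However fast the other links are, traffic on this path cannot exceed the 10 Mbit/s of the R1-R2 link.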

Network delay

Latency refers to the time a packet needs to travel from one end of the network to the other. Real-time applications such as voice and video are especially sensitive to delay. Taking voice transmission as an example, delay is the time from when the speaker starts speaking to when the other party hears the content; if it is too long, the conversation becomes unclear, incoherent, or broken.

Jitter

Because the end-to-end delay of each packet differs, packets cannot reach the destination at equal intervals; this phenomenon is called jitter. Generally, the smaller the delay, the smaller the range of jitter. Certain service types, especially real-time services such as voice and video, are extremely intolerant of jitter: differences in packet arrival times cause discontinuities in the voice or video stream. Jitter also affects the processing of some network protocols that exchange messages at fixed intervals; excessive jitter causes these protocols to oscillate. In fact, every transmission system exhibits jitter, but as long as it stays within the specified tolerance it does not affect service quality. Buffering can absorb excessive jitter, but at the cost of additional delay.

Packet loss rate

The packet loss rate is the percentage of packets lost during network transmission, and it is a measure of the network's reliability. Packet loss can occur at any stage, for example:

  • Processing: when a router receives a packet, the CPU may be too busy to process it, causing the packet to be dropped.
  • Queuing: when a packet is dispatched to a queue, it may be dropped because the queue is full.
  • Transmission: a packet may be lost on the link for various reasons (such as link failure).
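Whatever the stage at which packets are lost, the rate itself is a straightforward percentage. A minimal sketch (the counts are hypothetical):

```python
def packet_loss_rate(sent: int, received: int) -> float:
    """Percentage of packets lost in transit."""
    if sent == 0:
        return 0.0
    return (sent - received) / sent * 100

# Hypothetical counters: 1000 packets sent, 990 arrived.
print(packet_loss_rate(1000, 990))  # 1.0 (i.e. 1 % loss)
```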

2. QoS service models

2.1 Best-effort service model

In the best-effort service model, network communication quality can be improved by increasing network bandwidth and upgrading network equipment.

  • Increase network bandwidth: more data can be transmitted per unit of time under traditional first-in, first-out forwarding, alleviating congestion.
  • Upgrade network equipment: greater processing capacity lets devices handle more data per unit of time under traditional first-in, first-out forwarding, also alleviating congestion.

Traditional first-in, first-out forwarding is the Best-Effort service model:

  • Best-Effort is a single, and the simplest, service model. An application can send any number of packets at any time without first obtaining approval from, or notifying, the network.
  • A network using the Best-Effort service model does its best to deliver packets, but provides no guarantees for delay, reliability, or other performance metrics; it is nevertheless suitable for most network applications, such as FTP and email.
  • Best-Effort is now the default service model of the Internet and is implemented with a first-in, first-out queue.
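The FIFO queue behind the Best-Effort model can be sketched as follows. This is an illustrative toy, not a router implementation; the tail-drop behavior shown is the simplest congestion response:

```python
from collections import deque

class FifoQueue:
    """Best-effort FIFO: enqueue at the tail, serve from the head,
    and drop new arrivals when the queue is full (tail drop)."""

    def __init__(self, capacity: int):
        self.q = deque()
        self.capacity = capacity
        self.dropped = 0

    def enqueue(self, pkt) -> bool:
        if len(self.q) >= self.capacity:
            self.dropped += 1      # congestion: the packet is simply lost
            return False
        self.q.append(pkt)
        return True

    def dequeue(self):
        return self.q.popleft() if self.q else None

fifo = FifoQueue(capacity=3)
for pkt in ["voice", "ftp", "email", "video"]:
    fifo.enqueue(pkt)              # "video" is dropped: the queue is already full
print(fifo.dequeue(), fifo.dropped)  # voice 1
```

Note that the voice packet gets no special treatment: it is served first only because it happened to arrive first, which is exactly why Best-Effort cannot guarantee key services.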

2.2 Integrated service model

Devices run certain protocols to guarantee the communication quality of key services. Advantage: it can provide bandwidth and delay guarantees for specific services. Disadvantages: the implementation is complex; reserved bandwidth remains monopolized even when no traffic is being sent, so utilization is low; and the solution requires every node along the path to support and run the RSVP protocol end to end. For these reasons, this model is rare in real networks.

IntServ (Integrated Services) model:

IntServ is one of the most complex service models. It relies on RSVP (Resource Reservation Protocol), which works as follows: before sending packets, the application requests a specific bandwidth and quality of service from the network, and begins sending only after receiving a confirmation message.

Once the network has acknowledged the application and allocated resources for its packets, the network nodes promise to meet the application's QoS requirements as long as its traffic stays within the range described by the flow parameters. Nodes on the reserved path fulfill this commitment through actions such as packet classification, traffic policing, and low-delay queue scheduling. The IntServ model is often combined with multicast and suits real-time multimedia applications that need guaranteed bandwidth and low latency, such as video conferencing and video on demand.

At present, the IntServ model using the RSVP protocol defines two types of services:

  • Guaranteed service provides assured delay and bandwidth bounds to meet application requirements. For example, a VoIP (Voice over IP) application can reserve 10 Mbit/s of bandwidth and require a delay of no more than 1 second.
  • Controlled-load service guarantees that even when the network is overloaded, certain packets still receive a quality of service similar to that of the Best-Effort model on an unloaded network; that is, under congestion it keeps the delay and packet loss rate of selected application traffic low.

The ability to provide end-to-end QoS is the biggest advantage of the IntServ model; its biggest disadvantage is poor scalability. Each network node must reserve resources and maintain soft-state information for every flow. When combined with multicast, it must also periodically send resource requests and path-update messages to the network to support members joining and leaving. As the network grows, these operations greatly increase maintenance overhead and seriously affect a node's packet-processing performance, so the IntServ model is unsuitable for backbone networks where large volumes of traffic converge.

2.3 Differentiated Services Model

To address the implementation complexity and low bandwidth utilization of the integrated service model, the DiffServ differentiated services model can be deployed in the network to guarantee service communication quality. It is currently the most widely used model.

How DiffServ differentiates services: traffic in the network is first divided into multiple classes, and corresponding handling behaviors are then defined for each class, giving different classes different forwarding priority, packet loss rate, delay, and so on.

Overview of DiffServ service model:
  • Service-flow classification and marking are completed by edge routers. Edge routers can classify packets flexibly using a variety of conditions (such as source and destination addresses, the priority in the ToS field, and protocol type) and then set different marking fields for different classes of packets. Other routers only need to recognize these markings and apply the corresponding resource allocation and traffic control. DiffServ is therefore a class-based QoS model.
  • It uses a limited number of service levels and a small amount of state information to provide differentiated traffic control and forwarding.
  • DS node: a network node that implements the DiffServ function.
  • DS boundary node: a node responsible for connecting to another DS domain, or to a domain without DiffServ capability. The DS boundary node classifies and conditions the traffic entering the DS domain.
  • DS internal nodes connect DS boundary nodes and other internal nodes within the same DS domain. They only need to perform simple traffic classification based on fields such as EXP, 802.1p, and IPP in the packet and apply traffic control to the corresponding flows.
  • DS domain: A group of connected DS nodes that adopt the same service provision strategy and implement the same PHB. A DS domain is composed of one or more networks of the same administrative department. For example, a DS domain can be an ISP or an internal network of an enterprise.
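The marking that edge routers apply lives in the DSCP field, the upper six bits of the IP ToS byte. As a small illustration, a host application on Linux can set this field itself via the standard `IP_TOS` socket option (whether downstream routers honor the marking depends on the network's DiffServ policy; the EF code point used here is the one conventionally assigned to voice):

```python
import socket

# DSCP Expedited Forwarding (EF) = 46, commonly used for voice traffic.
DSCP_EF = 46

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# The DSCP value occupies the upper six bits of the ToS byte, so shift left by 2.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)

# Read the ToS byte back to confirm the marking was accepted.
tos = sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)
print(tos == DSCP_EF << 2)
sock.close()
```

Every UDP datagram sent on this socket now carries DSCP 46, which DS nodes can match to place the traffic into a low-delay queue.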

The DiffServ model takes full advantage of the IP network's own flexibility and scalability, converting complex quality-of-service guarantees into per-hop behaviors driven by information carried in the packet itself, thereby greatly reducing signaling work. It is currently the most widely used service model.

3. Comparison of the three models

  • Best-effort model — Advantage: simple implementation mechanism. Disadvantage: cannot treat different service traffic differently.
  • Integrated service model — Advantage: provides end-to-end QoS with guaranteed bandwidth and delay. Disadvantages: must track and record the state of every data flow, so the implementation is complex, scalability is poor, and bandwidth utilization is low.
  • Differentiated service model — Advantages: no need to track the state of each data flow; low resource usage and strong scalability; different service flows receive different quality of service. Disadvantage: each node must be manually configured end to end, which demands highly skilled personnel.

Origin blog.csdn.net/weixin_43997530/article/details/109253576