QoS: Congestion Management and Congestion Avoidance (Part 2)

FIFO
· In the original figure, the lower queue is the software queue and the upper one is the hardware queue.
· When more data arrives than can be sent out, data accumulates and the router's hardware queue fills up; at that point the software queue comes into play. When no software queue is configured, FIFO is used: urgent, semi-urgent, and non-urgent data are all handled first in, first out. The scheduler sends packets to the hardware queue strictly in arrival order and makes no distinction between them, so since the hardware queue is also FIFO, a FIFO software queue provides no guarantee of transmission quality.

· In fact, all packets in a FIFO share a single queue.
· Packets join the queue in arrival order and, when the queue is full, are discarded according to the tail-drop principle.
· On Huawei devices, FIFO can be seen on the outgoing interface of the firewall; the only FIFO parameter that can be modified is the queue length.
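The behavior above can be sketched in a few lines of Python (the class name and default length are illustrative, not any vendor's API):

```python
from collections import deque

class FIFOQueue:
    """Single FIFO software queue with tail drop (illustrative sketch)."""
    def __init__(self, max_len):
        self.max_len = max_len           # the only tunable FIFO parameter
        self.q = deque()

    def enqueue(self, pkt):
        if len(self.q) >= self.max_len:  # queue full: tail drop
            return False
        self.q.append(pkt)
        return True

    def dequeue(self):
        # first in, first out -- no distinction between packet types
        return self.q.popleft() if self.q else None
```

Note that the scheduler gives urgent traffic no advantage: whatever arrived first leaves first.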

PQ

· PQ is priority queuing: queues are scheduled by strict priority. PQ classifies packets into queues, and a lower-priority queue is dispatched only when all higher-priority queues are empty, so important services are served ahead of other services.
· PQ scheduling mechanism: traffic is divided into 4 queues, namely the high-priority queue (Top), the middle-priority queue (Middle), the normal-priority queue (Normal), and the low-priority queue (Bottom).
· Each queue has a length limit; if a queue is full, arriving packets are discarded.


  • Advantages and disadvantages of PQ:
    1. Advantages:
    ① Delay for high-priority queues is controlled very well.
    ② Simple to implement, and able to distinguish multiple services.
    2. Disadvantages:
    ① Bandwidth cannot be allocated proportionally; when high-priority traffic is heavy, low-priority traffic is "starved".
    ② The delay guarantee for high-priority traffic comes at the cost of sacrificing low-priority delay.
    ③ If TCP traffic is transmitted at high priority and UDP traffic at low priority, TCP will keep increasing its transmission rate, leaving insufficient bandwidth for the UDP traffic.
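As a rough Python sketch of the strict-priority dispatch described above (queue names follow the Top/Middle/Normal/Bottom model; the length limit is illustrative):

```python
from collections import deque

class PriorityQueuing:
    """PQ sketch: four queues served by strict priority."""
    LEVELS = ["top", "middle", "normal", "bottom"]

    def __init__(self, max_len=10):
        self.max_len = max_len
        self.queues = {lvl: deque() for lvl in self.LEVELS}

    def enqueue(self, pkt, level):
        q = self.queues[level]
        if len(q) >= self.max_len:   # each queue has a length; full -> drop
            return False
        q.append(pkt)
        return True

    def dequeue(self):
        # always serve the highest-priority non-empty queue first;
        # lower queues get service only when everything above is empty
        for lvl in self.LEVELS:
            if self.queues[lvl]:
                return self.queues[lvl].popleft()
        return None
```

The starvation disadvantage is visible directly: as long as the top queue keeps receiving packets, the bottom queue is never visited.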

WRR
· To understand WRR, you must first understand RR

  • RR (Round Robin, the round-robin scheduling algorithm). Implementation principle: the scheduler polls multiple queues in a circular manner. If the polled queue is not empty, it takes one packet from that queue;
    if the queue is empty, it skips it without waiting. This is somewhat similar to a completely fair policy.
  • Advantages and disadvantages of RR:
    1. Advantages:
    ① Different flows are isolated, and bandwidth is shared equally among queues.
    ② Remaining bandwidth can be evenly distributed to the other queues.
    2. Disadvantages:
    ① Queue bandwidth weights cannot be set; scheduling is completely fair.
    ② When packet lengths differ between queues, scheduling is inaccurate.
    ③ When the scheduling rate is low, delay and jitter become prominent. For example, if a packet arrives at an empty queue that has just been scheduled, it must wait for all the other queues to be scheduled before it gets a chance at the outgoing interface, which produces relatively large jitter. At high scheduling speeds, however, this delay is negligible, and RR has many applications in high-speed routers.
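A minimal sketch of the polling just described, assuming each queue is a simple deque of packets:

```python
from collections import deque

def round_robin(queues):
    """One full RR schedule: visit each queue in turn, take one packet if
    non-empty, skip it (without waiting) if empty.
    Returns packets in the order they were scheduled."""
    sent = []
    while any(queues):               # until every queue is drained
        for q in queues:
            if q:
                sent.append(q.popleft())
    return sent
```

The inaccuracy disadvantage follows from "one packet per visit": a queue of large packets gets more bytes per round than a queue of small ones.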

· WRR (Weighted Round Robin) extends RR by assigning each queue a weight; in each polling round, a queue may send a number of packets proportional to its weight.
WRR advantages and disadvantages:
1. Advantages:
① Bandwidth can be allocated by weight, and a queue's unused bandwidth can be occupied fairly by the other queues.
② Simple to implement, with low complexity.
③ Can realize per-port scheduling after DiffServ aggregation.
2. Disadvantages:
① As with the RR scheduling algorithm, scheduling is inaccurate when packet lengths differ.
② When the scheduling rate is low, packet delay is poorly controlled and delay jitter cannot be predicted.
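The weight mechanism can be sketched as follows (weights and packet names are illustrative):

```python
from collections import deque

def weighted_round_robin(queues, weights):
    """One WRR schedule: in each round, queue i may send up to weights[i]
    packets before the scheduler moves on to the next queue."""
    sent = []
    while any(queues):
        for q, w in zip(queues, weights):
            for _ in range(w):       # up to `w` packets per visit
                if not q:
                    break            # empty queue: its share goes unused
                sent.append(q.popleft())
    return sent
```

With weights 2:1, the first queue gets roughly two packets out for every one from the second, which is the weighted bandwidth split described above.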

· WRR counts packets, but bandwidth is really consumed in bytes, which is what makes its scheduling inaccurate.
· To address this byte-level inaccuracy, there is a scheduling algorithm called DRR (Deficit Round Robin). It works like buying on credit (the original compares it to Ant Huabei, a consumer-credit service). For example, queue A holds a 2000-byte packet, but only 1500 bytes may be scheduled this round; the queue "borrows" 500 bytes and sends the 2000-byte packet anyway. In the next round it is granted another 1500 bytes, of which 500 repay the debt, so a 1000-byte packet that has since arrived can be sent with exactly the remaining 1000 bytes.
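The "borrowing" behavior described above can be sketched like this, with packets modeled as (name, byte-length) pairs; the quantum value and packet sizes follow the example in the text:

```python
from collections import deque

def deficit_round_robin(queues, quantum):
    """DRR sketch matching the 'borrowing' description: each round a queue
    gains `quantum` bytes of credit; it sends its head packet even if that
    drives the credit negative (the borrowed bytes are repaid next round).
    Packets are (name, length_in_bytes) tuples."""
    credit = [0] * len(queues)
    sent = []
    while any(queues):
        for i, q in enumerate(queues):
            if not q:
                credit[i] = 0            # an empty queue keeps no credit
                continue
            credit[i] += quantum
            while q and credit[i] > 0:
                name, length = q.popleft()
                credit[i] -= length      # may go negative: borrowed bytes
                sent.append(name)
    return sent
```

Running the text's example: a 2000-byte packet is sent against a 1500-byte quantum (credit goes to -500), and next round's 1500-byte grant leaves exactly 1000 bytes for the 1000-byte packet.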

WFQ

· WFQ classifies traffic by five-tuple; when a priority field is added, it is a six-tuple. The priority is taken from DSCP or IP precedence, so WFQ effectively classifies by a six-tuple.
· Each flow is assigned to a queue; this process, called hashing, is completed automatically by a hash algorithm, which tries to place flows with different characteristics into different queues. The number of queues WFQ allows is limited, and users can configure this value as needed.
· When dequeuing, WFQ allocates egress bandwidth to each flow according to the flow's priority: the smaller the priority value, the less bandwidth the flow obtains; the larger the priority value, the more bandwidth it obtains. This ensures fairness among services of the same priority and reflects the weights among services of different priorities.
The advantage of WFQ lies in its simple configuration, but because flows are classified automatically and cannot be adjusted manually, it lacks a certain flexibility; and due to resource constraints, when multiple flows enter the same queue it cannot provide accurate service or guarantee each flow's actual resources. WFQ also equalizes the delay and jitter of every flow, so it is not suitable for delay-sensitive applications either.
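The classification and bandwidth-sharing rules can be sketched as follows. The MD5 hash is an assumption for illustration (real devices use vendor-specific hash functions), and the (precedence + 1) weighting is one common way to express "larger priority value, more bandwidth":

```python
import hashlib

def wfq_hash(flow, num_queues=256):
    """Hash a flow's six-tuple (src IP, dst IP, src port, dst port,
    protocol, precedence) into one of `num_queues` WFQ queues."""
    key = "|".join(str(f) for f in flow).encode()
    return int(hashlib.md5(key).hexdigest(), 16) % num_queues

def wfq_share(precedences):
    """Bandwidth share of each flow, proportional to (precedence + 1):
    a flow with a larger precedence value obtains more bandwidth."""
    total = sum(p + 1 for p in precedences)
    return [(p + 1) / total for p in precedences]
```

The hash is deterministic, so all packets of one flow land in the same queue, while different flows tend to spread across queues.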
The analysis above shows that when every queue uses a single scheduling algorithm, each algorithm has its own strengths and weaknesses and cannot fully meet business needs. It also shows that the strengths and weaknesses of some algorithms are complementary. Imagine, then: could we set up different queues and apply different scheduling algorithms to them, so as to meet business requirements to a much greater extent?

CBQ
CBQ (Class-Based Queueing), class-based weighted fair queuing, is an extension of WFQ that provides support for user-defined classes. CBQ first classifies packets according to rules such as IP precedence or DSCP, incoming interface, and the IP five-tuple, then directs different classes of packets into different queues. Packets that match no user-defined class are placed in the system-defined default class.
CBQ provides three types of queues:
EF queue: serves low-latency services.
The EF queue has absolute priority: packets in other queues are scheduled only after the EF queue has been emptied.
AF queues: serve key data services that require bandwidth guarantees.
Each AF queue corresponds to one class of packets, and the user can set the bandwidth occupied by each class. When the system dequeues packets, it sends them according to the bandwidth the user configured for each class, which achieves fair scheduling among the AF queues.
BE queue: serves best-effort traffic with no strict QoS guarantee.
When a packet matches none of the user-defined classes, it is sent to the system-defined default BE (Best Effort) class. The BE queue uses the interface's remaining bandwidth with WFQ scheduling.

Advantages: provides support for user-defined classes and allows different scheduling policies to be defined for different services.
Disadvantages: because complex flow classification is involved, enabling CBQ consumes a certain amount of system resources.
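A minimal sketch of the EF/AF/BE dispatch order described above (AF weights and class names are illustrative assumptions):

```python
from collections import deque

class CBQ:
    """CBQ sketch: EF has absolute priority; AF queues are served by
    weight; BE gets whatever is left."""
    def __init__(self, af_weights):
        self.ef = deque()
        self.af = [deque() for _ in af_weights]
        self.af_weights = af_weights
        self.be = deque()
        self._wrr = []                  # expanded AF visit order per round

    def dequeue(self):
        if self.ef:                     # EF: low latency, always served first
            return self.ef.popleft()
        if not self._wrr:               # start a new weighted AF round
            self._wrr = [i for i, w in enumerate(self.af_weights)
                         for _ in range(w)]
        while self._wrr:                # AF: bandwidth-guaranteed classes
            q = self.af[self._wrr.pop(0)]
            if q:
                return q.popleft()
        if self.be:                     # BE: best effort, remaining bandwidth
            return self.be.popleft()
        return None
```

This shows the complementary design the text asks for: strict priority for delay-sensitive EF, weighted scheduling for AF, and leftovers for BE.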

Congestion avoidance mechanism:

Disadvantages of tail drop:
① TCP global synchronization
② TCP starvation
③ High delay and high jitter

Solution to the problem of TCP global synchronization:
RED:
To avoid TCP global synchronization, some packets can be dropped randomly before the queue is full, reducing the transmission rate of some TCP connections in advance and delaying the onset of TCP global synchronization as far as possible. This behavior of randomly discarding packets early is called Random Early Detection (RED).
Features: RED sets low and high thresholds on each queue's length and stipulates that:
· When the queue length is below the low threshold, no packets are discarded.
· When the queue length exceeds the high threshold, all arriving packets are discarded.
· When the queue length is between the low and high thresholds, arriving packets begin to be discarded at random: each arriving packet is assigned a random number, which is compared against the queue's current drop probability to decide whether the packet is dropped. The longer the queue, the higher the probability that a packet is dropped.
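The three-threshold rule can be sketched as follows (threshold and probability values in the example are illustrative):

```python
import random

def red_drop(queue_len, low, high, max_p):
    """RED drop decision sketch: no drops below `low`, always drop at or
    above `high`, and in between drop with a probability that rises
    linearly with queue length up to `max_p`."""
    if queue_len < low:
        return False                 # queue short: accept everything
    if queue_len >= high:
        return True                  # queue too long: behave like tail drop
    p = max_p * (queue_len - low) / (high - low)
    return random.random() < p       # random early drop
```

Because different connections lose packets at different random moments, their windows shrink at different times, which breaks the synchronization pattern.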

Problem: RED, however, drops packets without regard to priority, and can still starve TCP flows, so the following technique is used to address the remaining tail-drop problems:

Solution:
WRED:
WRED (Weighted Random Early Detection) builds on RED and can set the high threshold, low threshold, and maximum drop probability independently for each priority level. Dropping begins when a queue reaches its low threshold; once the high threshold is reached, all packets are discarded. Between the two thresholds, the drop rate rises as the queue grows, never exceeding the configured maximum drop probability until the high threshold is reached. In this way, packets are actively discarded from the queue with a certain probability, which to a large extent avoids the disadvantages caused by tail drop.
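A sketch of the per-priority profiles, applying the RED curve separately per class (all threshold values and class names are illustrative assumptions):

```python
import random

# Per-priority (low threshold, high threshold, max drop probability).
# Illustrative values: high-priority traffic is dropped late and rarely,
# low-priority traffic early and more aggressively.
WRED_PROFILES = {
    "high": (30, 40, 0.05),
    "low":  (10, 30, 0.20),
}

def wred_drop(queue_len, priority):
    """WRED sketch: the RED decision, parameterized per priority class."""
    low, high, max_p = WRED_PROFILES[priority]
    if queue_len < low:
        return False
    if queue_len >= high:
        return True
    p = max_p * (queue_len - low) / (high - low)
    return random.random() < p
```

At the same queue length, low-priority packets face a higher drop probability than high-priority ones, which is the "weighted" part of WRED.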

Origin blog.csdn.net/weixin_45948002/article/details/105286161