Delay of an IP network: sending delay (transmission delay), propagation delay, processing delay, queuing delay

1. Definition of IP network delay

  • The delay of an IP network usually refers to the time required for a message or packet to travel from one end of the network to the other.
  • It consists of four main parts: sending delay, propagation delay, processing delay, and queuing delay.
  • Total delay = sending delay + propagation delay + processing delay + queuing delay.

Generally speaking, processing delay and queuing delay depend mainly on the CPU speed, the system load, and the design and implementation of the application software. Sending delay and propagation delay are determined by the transmission distance and the bandwidth of the IP network. So when we talk about IP network delay, we mainly mean sending delay and propagation delay. A small worked example follows below.
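
To make the four components concrete, here is a small, purely illustrative calculation in Python; all the numbers (frame size, link rate, distance, processing and queuing times) are assumed example values, not figures from this article.

```python
# Illustrative total-delay calculation; every number below is an assumed
# example value, not a figure from the article.
frame_bits        = 1500 * 8     # one 1500-byte frame
link_rate_bps     = 100e6        # 100 Mbit/s link
distance_m        = 200e3        # 200 km path
propagation_mps   = 2e8          # roughly 2/3 of the speed of light in fibre
processing_s      = 20e-6        # assumed per-hop processing time
queuing_s         = 500e-6       # assumed queuing time under moderate load

sending_delay     = frame_bits / link_rate_bps     # 120 us
propagation_delay = distance_m / propagation_mps   # 1 ms
total_delay       = sending_delay + propagation_delay + processing_s + queuing_s

print(f"total delay = {total_delay * 1e3:.2f} ms")  # ~1.64 ms
```

With these assumed values, the sending and propagation delays contribute most of the total, which is why the discussion below focuses on them.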

1.1. Sending delay (transmission delay)

Sending delay, also known as transmission delay, is the time required to send the data itself: the time a host or router needs to push all the bits of a data frame from its network interface (or output queue) onto the link. Note that it occurs inside the machine. The biggest difference from the propagation delay described below is that it has nothing to do with the length of the transmission medium. Sending delay = data frame length / sending rate.
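
As a quick sanity check of the formula, the following sketch computes the sending delay for an assumed 1500-byte frame on an assumed 1 Gbit/s link:

```python
# Sending (transmission) delay = data frame length / sending rate.
# Assumed example: a 1500-byte frame on a 1 Gbit/s link.
frame_bits    = 1500 * 8
link_rate_bps = 1e9

sending_delay = frame_bits / link_rate_bps
print(f"sending delay = {sending_delay * 1e6:.1f} us")  # 12.0 us
```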

1.2. Propagation delay

Propagation delay is the time it takes an electromagnetic wave to travel a certain distance through the channel. Propagation delay = length of the transmission medium / propagation speed of the electromagnetic wave in the channel. In other words, the farther a signal travels, the greater its propagation delay.
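
A minimal sketch of the formula, assuming a 1000 km fibre path and a propagation speed of about 2e8 m/s (roughly two-thirds of the speed of light):

```python
# Propagation delay = length of the medium / propagation speed in the medium.
# Assumed example: 1000 km of fibre, signal speed about 2e8 m/s.
distance_m      = 1000e3
propagation_mps = 2e8

propagation_delay = distance_m / propagation_mps
print(f"propagation delay = {propagation_delay * 1e3:.1f} ms")  # 5.0 ms
```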

1.3. Processing delay

After a host or router receives a packet, it needs a certain amount of time to process it, for example to analyze the header, extract the data, check for errors, and select a route. The processing delay of a typical high-speed router is usually on the order of microseconds or less.

1.4. Queuing delay

Queuing delay is the time a packet spends waiting in the queue of a host, router, or switch before it can be transmitted. The queuing delay of a particular packet depends on the number of earlier-arriving packets that are already queued for transmission on the link. If the queue is empty and no other packet is currently being transmitted, the queuing delay is zero; if, on the other hand, traffic is heavy and many other packets are waiting to be transmitted, the queuing delay can be large. In practice, queuing delay is usually on the order of microseconds to milliseconds, and in general it depends on the traffic load of the network. A rough estimate is sketched below.
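
The following back-of-the-envelope estimate is a simplification of my own (it assumes all queued packets have the same length): a packet that finds N packets ahead of it waits roughly N times the per-packet sending delay.

```python
# Back-of-the-envelope queuing-delay estimate (simplification: all queued
# packets are assumed to have the same length).
packets_ahead = 10          # assumed number of earlier-arriving packets in the queue
frame_bits    = 1500 * 8
link_rate_bps = 100e6       # 100 Mbit/s outgoing link

queuing_delay = packets_ahead * frame_bits / link_rate_bps
print(f"queuing delay ~ {queuing_delay * 1e3:.2f} ms")  # 1.20 ms
```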

2. The delays at each stage of a router:

(Figure: the delays incurred at each stage of a router; picture from the Internet, not reproduced here.)

3. Simplification of the delay model between two hosts:

In practice, especially when testing the network with iperf, what we care about is the overall delay. In such a test scenario, we can simplify the delay model as shown in the following figure.

(Figure: simplified end-to-end delay model between two hosts, with the per-stage delays labeled T1 through T7.)

  • T = T2 + T3 + T4 + T5 + T6; for the application, this is the total delay it sees from the system. Because T1, T2, T6, and T7 are relatively fixed values under a specific configuration, the remaining components can be grouped as follows (a numeric sketch is given after this list):
  • T4 = propagation delay + queuing delay in the network equipment
  • T2 + T3 = transmission delay + queuing delay on the sending host side
  • T5 + T6 = transmission delay + queuing delay on the receiving host side
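
A minimal numeric sketch of this grouping, with every Tn value assumed purely for illustration (only the way the terms are grouped follows the model above):

```python
# Minimal sketch of the grouped delay model; every Tn value is an assumed
# example, only the grouping itself follows the model above.
T2, T3 = 5e-6, 120e-6   # sending host: stack/queuing delay + transmission delay
T4     = 1.5e-3         # network: propagation delay + queuing in network devices
T5, T6 = 120e-6, 5e-6   # receiving host: reception + stack/queuing delay

host_tx_side = T2 + T3                  # transmission + queuing, sending host
host_rx_side = T5 + T6                  # transmission + queuing, receiving host
T = host_tx_side + T4 + host_rx_side

print(f"end-to-end delay T = {T * 1e3:.2f} ms")  # 1.75 ms
```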

3.1. From the application point of view:

The delay can be simplified as: delay = transmission delay + propagation delay

  • T2 + T3 is regarded as the transmission delay
  • T4 is regarded as the propagation delay

3.2. From the perspective of the TCP/IP protocol stack:

The delay can be simplified as: delay = transmission delay + propagation delay

  • T3 is regarded as the transmission delay
  • T4 is regarded as the propagation delay

4. How to consider the impact of propagation delay and transmission delay in the network

Under a fixed network (such as the two-host delay model described above), the bandwidth and the propagation delay are fixed, so:

  • The longer the message, the longer the required sending delay and the larger the ratio "sending delay / (sending delay + propagation delay)", so the impact of the sending delay dominates.
  • The shorter the message, the shorter the sending delay and the smaller that same ratio, so the propagation delay accounts for a larger proportion of the total delay and its influence dominates, as the comparison below illustrates.
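
The following sketch compares the share of the sending delay in the total for several assumed message lengths, over an assumed 100 Mbit/s link with a fixed 5 ms propagation delay:

```python
# Share of the sending delay in the total delay for several message lengths.
# Bandwidth and propagation delay are assumed, fixed values.
link_rate_bps     = 100e6   # 100 Mbit/s
propagation_delay = 5e-3    # fixed 5 ms path (e.g. ~1000 km of fibre)

for payload_bytes in (64, 1500, 65535):
    sending_delay = payload_bytes * 8 / link_rate_bps
    share = sending_delay / (sending_delay + propagation_delay)
    print(f"{payload_bytes:>6} B: sending delay {sending_delay * 1e6:8.1f} us, "
          f"share of total {share:7.2%}")
```

With these assumptions, a 64-byte message is dominated almost entirely by propagation delay (its sending delay is about 0.1% of the total), while a 64 KB message spends roughly half of the total in sending delay.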

5. Appendix:

Transmission rate and bandwidth
The rate in network technology refers to the data transmission rate, also called the data rate or bit rate; its unit is bit/s (b/s, bps).
It is the number of bits a host can inject (send) into the attached medium or network per second, i.e., the transfer rate.
The bandwidth of a computer network describes the ability of a channel to transmit data, that is, the "highest data rate" the network can carry per unit of time. The unit of bandwidth is therefore the same as that of the data rate, bit/s.

On a normal host network, or on an idle network, bandwidth = sending rate. Since sending delay = data frame length / sending rate, it follows that sending delay = data frame length / bandwidth, so the bandwidth determines the size of the sending delay (transmission delay), as the sketch below shows.
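
A short sketch of how the sending delay of the same frame shrinks as the bandwidth grows (frame size and bandwidths are assumed example values):

```python
# The same 1500-byte frame sent over links of different bandwidths; the
# sending delay scales inversely with the bandwidth (all values assumed).
frame_bits = 1500 * 8
for bandwidth_bps in (10e6, 100e6, 1e9, 10e9):
    sending_delay = frame_bits / bandwidth_bps
    print(f"{bandwidth_bps / 1e6:>7.0f} Mbit/s -> sending delay "
          f"{sending_delay * 1e6:8.2f} us")
```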
