Calculation of network delay

concept

Delay (also called latency) refers to the time it takes for data (a message, a packet, or even a single bit) to travel from one end of a network (or link) to the other.

Latency in a network consists of several distinct components:

① Send delay

② Propagation delay

③ Processing delay

④ Queuing delay


send delay

Also called transmission delay.

The time required for a node to push a data frame onto the transmission medium when sending data.

That is, the time from when the first bit of the frame begins to be sent until the last bit of the frame has finished being sent.
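As a rough sketch of the calculation (the frame size and link rate below are made-up example values, not from the original text):

```python
def send_delay(frame_bits: float, send_rate_bps: float) -> float:
    """Send (transmission) delay = frame length (bits) / sending rate (bit/s)."""
    return frame_bits / send_rate_bps

# Example: a 1500-byte frame sent over a 100 Mbit/s link
print(send_delay(1500 * 8, 100e6))  # 0.00012 s, i.e. 120 microseconds
```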


propagation delay

The time it takes for an electromagnetic wave to travel a certain distance in a channel.

The send (transmission) delay is fundamentally different from the propagation delay: the rate at which a signal is sent onto the channel and the rate at which the signal propagates along the channel are completely different concepts.
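By contrast, the propagation delay depends only on the distance and on how fast the signal travels in the medium (roughly 2 × 10^8 m/s in copper or fiber). A minimal sketch with illustrative numbers:

```python
def propagation_delay(distance_m: float, prop_speed_mps: float = 2e8) -> float:
    """Propagation delay = channel length (m) / propagation speed (m/s)."""
    return distance_m / prop_speed_mps

# Example: a 1000 km link; the result does not depend on the link's bandwidth
print(propagation_delay(1_000_000))  # 0.005 s, i.e. 5 ms
```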

 


processing delay

The time a host or router spends processing a received packet, for example analyzing the header, extracting the data, checking for errors, or looking up a route.


queuing delay

The time a packet spends waiting in a router's input and output queues before it is processed and forwarded.

How long the queuing delay is often depends on how much traffic the network is carrying at the time.


The total delay experienced by data in the network is the sum of transmission delay, propagation delay, processing delay and queuing delay.

It must be pointed out that which of these delays dominates the total delay depends on the specific situation and must be analyzed case by case.
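For example (a hypothetical sketch with made-up numbers): for a large file pushed over a slow local link the send delay dominates, while for a short packet crossing a long-haul link the propagation delay dominates:

```python
def total_delay(frame_bits, rate_bps, distance_m,
                prop_speed_mps=2e8, processing_s=0.0, queuing_s=0.0):
    """Total delay = send delay + propagation delay + processing delay + queuing delay."""
    send = frame_bits / rate_bps
    prop = distance_m / prop_speed_mps
    return send + prop + processing_s + queuing_s

# 100 MB file over a 1 Mbit/s link that is 100 m long: send delay dominates (~800 s vs 0.5 us)
print(total_delay(100e6 * 8, 1e6, 100))

# 1 KB packet over a 1 Gbit/s link that is 5000 km long: propagation delay dominates (~25 ms vs 8 us)
print(total_delay(1e3 * 8, 1e9, 5e6))
```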


where the four delays occur

Send delay occurs at the sending node (while the frame is being pushed onto the link), propagation delay occurs on the transmission link itself, and processing delay and queuing delay occur inside the hosts and routers along the path.


common misconceptions

For a high-speed network link, what is increased is the rate at which data can be sent, not the speed at which bits travel over the link.

Increasing the link bandwidth therefore reduces the send (transmission) delay of the data, but it does not change the propagation delay.

The statement "on high-speed (high-bandwidth) links, bits travel faster" is false.
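A quick numeric check of this point (a sketch with illustrative values):

```python
# Sending the same 1500-byte frame over a 1000 km link at two different bandwidths.
frame_bits = 1500 * 8
distance_m = 1_000_000
prop_speed = 2e8  # roughly the propagation speed in copper or fiber, in m/s

for rate in (10e6, 1e9):  # 10 Mbit/s vs 1 Gbit/s
    send = frame_bits / rate
    prop = distance_m / prop_speed
    print(f"{rate / 1e6:.0f} Mbit/s: send delay {send * 1e6:.0f} us, propagation delay {prop * 1e3:.0f} ms")
# The send delay shrinks 100-fold on the faster link; the propagation delay stays at 5 ms.
```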


delay-bandwidth product

The delay-bandwidth product of a link is its propagation delay multiplied by its bandwidth. It is also called the length of the link measured in bits, because it equals the number of bits that can be on the link (in flight) at the same time.
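A minimal sketch of the calculation (the 20 ms / 10 Mbit/s figures are illustrative):

```python
def delay_bandwidth_product(prop_delay_s: float, bandwidth_bps: float) -> float:
    """Number of bits that can be 'in flight' on the link at any one moment."""
    return prop_delay_s * bandwidth_bps

# Example: a link with 20 ms propagation delay and 10 Mbit/s bandwidth
print(delay_bandwidth_product(20e-3, 10e6))  # 200000 bits fit on the link at once
```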


round trip time 

Information on the Internet does not flow in only one direction; communication is interactive in both directions. It is therefore sometimes necessary to know how long a two-way interaction takes.

The round-trip time (RTT) is the total time that elapses from when the sender starts sending data until the sender receives an acknowledgment from the receiver.

In the Internet, the round-trip time also includes the processing delay, queuing delay, and send delay at each intermediate node that forwards the data.

When satellite communication is used, the relatively long RTT is an important performance indicator.
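As a rough illustration (assuming the satellite sits at the usual geostationary altitude of about 36,000 km and that the acknowledgment returns over the same satellite link):

```python
# Propagation-only RTT over a geostationary satellite hop.
altitude_m = 36_000_000   # roughly 36,000 km up to a geostationary satellite
speed_mps = 3e8           # radio waves travel at about 3 x 10^8 m/s in free space

one_way = 2 * altitude_m / speed_mps  # up to the satellite and back down to the ground
rtt = 2 * one_way                     # the acknowledgment makes the same trip in reverse
print(rtt)                            # about 0.48 s before any processing or queuing delay
```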


Utilization

It is divided into channel utilization and network utilization.

Channel utilization indicates what percentage of the time a channel is used (with data passing through).

The utilization of a completely idle channel is zero.

Network utilization is the weighted average of the utilizations of all the channels in the network.

Higher channel utilization is not always better: as the utilization of a channel increases, the delay caused by that channel also increases rapidly.


The relationship between latency and network utilization
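A commonly quoted simplified model of this relationship is D = D0 / (1 − U), where D0 is the delay when the network is idle and U is the current utilization; treat the sketch below as an approximation rather than an exact law:

```python
def delay_at_utilization(idle_delay_s: float, utilization: float) -> float:
    """Approximate delay D = D0 / (1 - U) for utilization 0 <= U < 1."""
    return idle_delay_s / (1.0 - utilization)

d0 = 0.010  # assume a 10 ms delay on an otherwise idle network
for u in (0.1, 0.5, 0.9, 0.99):
    print(f"U = {u:.2f}: delay = {delay_at_utilization(d0, u) * 1e3:.0f} ms")
# 11 ms, 20 ms, 100 ms, 1000 ms -- as U approaches 1, the delay grows without bound
```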

 

 


Origin blog.csdn.net/u012632105/article/details/123663751