Computer Network Notes - Transport Layer

The Transport Layer

From the perspective of communication and information processing, the transport layer is the fourth layer in the five-layer reference model. It provides communication services to the application layer above it: it is the highest layer of the communication-oriented part of the stack and the lowest layer of the user-oriented functions.

5.1 Services provided by the transport layer

5.1.1 Functions of the transport layer

The transport layer provides communication between application processes on two hosts, also known as end-to-end communication. Because the network-layer protocol is unreliable and can cause packet loss, out-of-order delivery, and duplication, the transport layer is introduced to provide reliable services for data transmission.


Since UDP at the transport layer is unreliable, why is it said that the transport layer provides reliable services?

It is true that UDP at the transport layer is unreliable: using UDP cannot guarantee that datagrams arrive at the destination correctly, and after UDP detects an error it may simply discard the datagram or report the error to the application layer. The key point is that the choice belongs to the user. If the user chooses TCP (as FTP software does), the transport layer is naturally reliable; if the user chooses UDP (as QQ, video-conferencing software, and similar applications do), the transport layer is unreliable. Whether the transport layer is reliable therefore depends largely on which transport-layer protocol is used, although it is generally described as reliable by default.

Transport layer functions:


  • Provide logical communication between application processes (the network layer provides logical communication between hosts). "Logical communication" means that the two transport layers appear to exchange data horizontally, as shown in Figure 5-2, but in fact there is no horizontal physical connection between them.
  • Error detection: the transport layer checks both the header and the data part of the received message (the network layer checks only the IP datagram header, not the data part).
  • Provide connectionless or connection-oriented services, depending on the application. For example, some data transmission requires real-time delivery (such as live video conferencing), so the transport layer needs two different protocols: connection-oriented TCP and connectionless UDP. TCP provides a highly reliable transmission service, while UDP provides an efficient but unreliable one.
  • Multiplexing and demultiplexing: multiplexing means that different application processes on the sender can use the same transport-layer protocol to transmit data; demultiplexing means that the receiver's transport layer, after stripping off the message header, can deliver the data correctly to the destination application process.

Connection-oriented services also have the following two functions :

  • Connection management: the process of defining and establishing a connection, usually called a handshake, for example TCP's "three-way handshake" mechanism.
  • Flow control and congestion control: send data at a rate that both the receiver and the network can accept, to prevent data loss caused by network congestion.

What principles should be used to determine whether to use connection-oriented services or connectionless services at the transport layer?

The choice depends on the nature of the upper-layer application. For example, file transfer with the File Transfer Protocol (FTP) must be reliable, with no errors or losses, so connection-oriented TCP must be used at the transport layer. If the application transmits packetized voice or video-on-demand data, real-time delivery matters more, so the transport layer should use connectionless UDP.

The application process seems to see an end-to-end logical communication channel between the two transport layer entities. How do you understand this?


TCP is connection-oriented, but the IP used by TCP is connectionless. What are the main differences between these two protocols?


TCP is connection-oriented, but the network beneath TCP can be connection-oriented (such as the now-obsolete X.25 network, mentioned only as an example) or connectionless (such as the IP network widely used today). Choosing a connectionless network makes the whole system very flexible, but of course it also brings some problems.

Obviously, TCP provides many more functions and services than IP can provide. This is because TCP uses mechanisms such as acknowledgments, sliding windows, and timers, so it can detect erroneous messages, duplicate messages, and out-of-sequence messages.

5.1.2 Transport layer addressing and ports

Basic concepts of ports

As mentioned earlier, the data link layer addresses by MAC address and the network layer addresses by IP address; the transport layer addresses by port number.

A port (a software port) is the transport-layer service access point. Ports allow application processes in the application layer to deliver their data down to the transport layer, and they let the transport layer know to which application-layer process the data in its segments should be delivered upward. In this sense, a port identifies an application-layer process: the port is like a dormitory room number, and the room houses the application process.

The port number

Since there are a large number of network application processes running on a host at the same time, a large number of port numbers are needed to identify different processes.

A port is identified by a 16-bit port number, so 2^16 = 65536 port numbers are available. A port number has only local meaning: it identifies a process in the application layer of this computer only. For example, there is no connection between port 8080 on host A and port 8080 on host B.

Ports can be divided into three categories according to the port number range:

  • Well-known ports (reserved ports): the values are generally 0~1023. When a new application appears, it must be assigned a well-known port so that other application processes can interact with it.


  • Registered ports: the values are 1024~49151. They are used by applications that do not have well-known port numbers; use of such port numbers must be registered with IANA to prevent duplication.

  • Client ports (ephemeral ports): the values are 49152~65535. Because a port number of this type is chosen dynamically only while the client process is running, it is called an ephemeral (short-lived) port. After the communication finishes, the port is automatically freed for use by other client processes.

  • Socket: a host with one IP address can provide many services, such as Web, FTP, and SMTP services, all through that single IP address. The IP address alone therefore cannot distinguish them; only the combination of IP address and port number uniquely determines the endpoint of a connection, and this combination is called a socket (a small sketch follows below).

    Socket = (host IP address, port number). It uniquely identifies an application process on a host in the network.
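As a quick illustration, the sketch below (assuming Python's standard socket module, with arbitrary loopback addresses and an arbitrary port 8080) shows that each endpoint of a connection is exactly such an (IP address, port) pair, and that the client's port is an ephemeral one picked by the operating system.

```python
import socket

# The server's socket is (127.0.0.1, 8080); each client connection is distinguished
# by the client's own (IP address, ephemeral port). Addresses and ports are arbitrary.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 8080))        # socket = (host IP address, port number)
server.listen(1)

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.bind(("127.0.0.1", 0))           # port 0 lets the OS pick an ephemeral port
client.connect(("127.0.0.1", 8080))

print("client socket:", client.getsockname())   # e.g. ('127.0.0.1', 51514)
print("server socket:", client.getpeername())   # ('127.0.0.1', 8080)

client.close()
server.close()
```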

5.1.3 Connectionless services and connection-oriented services

The transport layer provides two types of services: connectionless service and connection-oriented service, implemented respectively by the User Datagram Protocol (UDP) and the Transmission Control Protocol (TCP).

When TCP is used, the transport layer provides a full-duplex, reliable logical channel to the layer above.

When UDP is used, the transport layer provides an unreliable logical channel to the layer above.

Main features of UDP

  • There is no need to establish a connection before transmitting data, and no confirmation is required after data arrives.
  • Unreliable delivery.
  • The message header is short, the transmission overhead is small, and the delay is short.

Main features of TCP

  • Connection-oriented, does not provide broadcast or multicast services
  • Reliable delivery.
  • The message segment header is long and the transmission overhead is high.

In the transport layer, the TCP module maintains a TCB (Transmission Control Block), which records the variables used while TCP runs. When TCP handles multiple connections, there is one TCB per connection. The TCB structure includes, for each connection, the source port, destination port, destination IP address, sequence number, acknowledgment sequence number, the other party's window size, its own window size, the TCP state, and so on (a simplified sketch follows).
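As a rough illustration only (not any real kernel's data structure; the field names are my own), the following sketch lists the kind of per-connection variables a TCB records:

```python
from dataclasses import dataclass

# A simplified, hypothetical TCB; real implementations keep many more variables.
@dataclass
class TCB:
    src_port: int        # source port used by this connection
    dst_port: int        # destination port
    dst_ip: str          # destination IP address
    snd_seq: int         # next sequence number to send
    rcv_ack: int         # next acknowledgment sequence number expected
    peer_window: int     # the other party's advertised window size
    own_window: int      # our own window size
    state: str           # TCP state, e.g. "LISTEN", "ESTABLISHED", "TIME_WAIT"
```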

5.2 UDP

5.2.1 UDP datagram

Basic concepts of UDP

The biggest difference between UDP and TCP is that UDP is connectionless. UDP essentially adds only a port function (so that the receiving host can find the right process) and error detection on top of the IP datagram service.

Advantages:

  • There is no need to establish a connection before sending data.
  • UDP hosts do not need to maintain complex connection state tables.
  • UDP user datagrams have only 8 bytes of header overhead.
  • Congestion in the network will not reduce the source host's sending rate (there is no congestion control). This is important for some real-time applications (such as IP telephony, real-time video conferencing).
  • UDP supports one-to-one, one-to-many, many-to-one and many-to-many interactive communications

The composition of UDP datagram

A UDP datagram has two parts: the data field and the header field. The header is 8B long and consists of four fields.


  • Source port: 2B. As mentioned earlier, 16bit is used to represent the port number, so 2B length is required.
  • Destination port: 2B.
  • Length: 2B.
  • Checksum: 2B. It is used to check whether the UDP user datagram has been corrupted in transmission (both the header and the data part are checked); if an error is found, the datagram is simply discarded. This field is optional: when the source host does not want to compute a checksum, it sets the field to all 0s. The checksum covers the pseudo header, the UDP header, and the data; the pseudo header is generated temporarily only while the checksum is being computed.

5.2.2 UDP checksum

The UDP checksum provides error detection only. When the checksum is calculated, a 12B pseudo header is temporarily prepended to the UDP user datagram.

The pseudo header consists of the source IP address field, the destination IP address field, an all-0 field, the protocol field (fixed to 17 for UDP), and the UDP length field (Figure 5-5 assumes the user data is 15B long). It is important to remember that the pseudo header is used only for computing and verifying the checksum; it is neither passed down to the network layer nor passed up to the application layer (a small construction sketch follows the list below).


  • During verification, if the data part of the UDP datagram is not an even number of bytes, an all-0 byte is appended for the calculation.
  • If the checksum shows that the UDP datagram is in error, it can be discarded, or it can be delivered to the upper layer together with an error report telling the upper layer that this datagram is erroneous.
  • Thanks to the pseudo header, the checksum covers not only the source port, destination port, and data of the UDP user datagram, but also the source and destination IP addresses of the IP datagram.
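The sketch below (an illustration assuming Python's standard socket and struct modules, with arbitrary example addresses) builds the 12B pseudo header described above, in the field order given in the text: source IP, destination IP, an all-0 byte, protocol 17, and the UDP length.

```python
import socket
import struct

def udp_pseudo_header(src_ip: str, dst_ip: str, udp_length: int) -> bytes:
    """Build the 12B pseudo header used only while computing the UDP checksum."""
    return (socket.inet_aton(src_ip)                    # source IP address (4B)
            + socket.inet_aton(dst_ip)                  # destination IP address (4B)
            + struct.pack("!BBH", 0, 17, udp_length))   # all-0 byte, protocol 17, UDP length (2B)

# Example: a 23B UDP datagram (8B header + 15B data), matching the Figure 5-5 assumption.
print(udp_pseudo_header("192.0.2.1", "192.0.2.2", 23).hex())
```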

The checksum is computed as a binary one's complement sum. When there are no errors, the result at the receiver should be all 1s; otherwise an error has occurred, and the receiver should discard the UDP message.

The rules for the one's complement sum of two numbers are as follows (a worked sketch follows the list):

  • The addition is performed column by column, from the low-order bit to the high-order bit.
  • 0+0=0, 0+1=1, 1+1=0 (with a carry of 1 added to the next column).
  • If the addition of the highest-order bits produces a carry, 1 is added to the final result (the carry wraps around).
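A worked sketch of these rules (illustrative Python, not a full UDP implementation; in a real datagram the checksum field sits inside the header and the pseudo header is included in the sum):

```python
def ones_complement_sum(data: bytes) -> int:
    """16-bit one's complement sum: add column by column; a carry out of the
    highest bit wraps around and is added back into the result."""
    if len(data) % 2:            # odd length: pad with one all-0 byte for the calculation
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # wrap the carry back in
    return total

def checksum(data: bytes) -> int:
    return ~ones_complement_sum(data) & 0xFFFF     # sender stores the complement of the sum

# The receiver recomputes the sum over the same bytes plus the received checksum;
# with no errors the result is all 1s (0xFFFF), otherwise the message is discarded.
message = b"UDP checksum demo!"                    # even length, to keep the sketch simple
received = message + checksum(message).to_bytes(2, "big")
assert ones_complement_sum(received) == 0xFFFF
```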

5.3 TCP

5.3.1 TCP segment


  • Source port and destination port: 2B each. Like UDP, the TCP header contains a source port and a destination port (a header-parsing sketch follows this list).
  • Sequence number: 4B. Although TCP receives its data from the application layer, TCP is byte-stream oriented (it numbers and transmits data byte by byte), so every byte of the stream carried over a TCP connection is numbered; this is the basis for in-order delivery.
  • Acknowledgment number: 4B. TCP has an acknowledgment mechanism, so the receiver returns an acknowledgment number to the sender. Only one thing needs to be remembered: if the acknowledgment number equals N, then all data up to and including sequence number N-1 have been received correctly.
  • Data offset: 4 bits. The data offset is not the offset of this segment's data within the stream; it indicates the length of the header, so do not confuse the two. The 4 bits can represent 15 values, from 0001 to 1111, and the unit is 4B, so the data offset limits the header to a maximum length of 60B.
  • Reserved field: 6 bits. Reserved for future use; it is currently set to 0 and can be ignored.
  • Urgent bit URG: when URG=1, the urgent pointer field is valid. It tells the system that this segment contains urgent data that should be transmitted as soon as possible (in effect, high-priority data). It is like a long queue of cars waiting at a red light when an ambulance arrives: the ambulance is urgent and may pass all the other cars without waiting. The URG bit must be used together with the urgent pointer. If, say, several ambulances arrive, the urgent pointer points to the last one; once the last ambulance has passed, TCP tells the application to resume normal operation. That is, the urgent data runs from the first byte of the data part up to the byte pointed to by the urgent pointer.
  • Acknowledgment bit ACK: the acknowledgment number field is valid only when ACK=1; when ACK=0 the acknowledgment number is invalid. TCP stipulates that once the connection is established, every transmitted segment must have ACK set to 1.
  • Push bit PSH: when TCP receives a segment with the push bit set to 1, it delivers it to the receiving application process as quickly as possible, instead of waiting until the whole buffer is full before delivering it upward.
  • Reset bit RST: when RST=1, a serious error has occurred in the TCP connection (for example, because of a host crash); the connection must be released and then re-established.
  • Synchronization bit SYN: SYN set to 1 indicates that this is a connection request or connection acceptance segment.
  • Termination bit FIN: used to release a connection. When FIN=1, the sender of this segment has finished sending its data and requests that the connection be released.
  • Window field: 2B. The window field is used to control how much data the other party may send, in bytes (B). Remember one sentence: the window field explicitly states how much data the other party is currently allowed to send. For example, if the acknowledgment number is 701 and the window field is 1000, then starting from byte 701 the party sending this segment still has buffer space for 1000B of data.
  • Checksum field: 2B. The checksum covers both the header and the data. When it is computed, a 12B pseudo header is added, just as for UDP (simply change the 17 in the protocol field of the UDP pseudo header to 6; the rest is the same).
  • Urgent pointer field: 2B.
  • Options field: variable length. TCP originally specified only one option, the maximum segment size MSS. MSS tells the other party's TCP: "the maximum length of the data field of a segment that my buffer can accept is MSS bytes."
  • Padding field: used to make the entire header length an integer multiple of 4B.
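The sketch below (illustrative Python using the standard struct module) unpacks the 20B fixed part of a TCP header according to the layout above; options and the checksum calculation are not handled, and the example segment values are arbitrary.

```python
import struct

def parse_tcp_header(segment: bytes) -> dict:
    """Unpack the 20B fixed TCP header: ports, sequence/acknowledgment numbers,
    data offset, flag bits, window, checksum, and urgent pointer."""
    (src_port, dst_port, seq, ack,
     offset_and_flags, window, cksum, urgent_ptr) = struct.unpack("!HHIIHHHH", segment[:20])
    header_len = (offset_and_flags >> 12) * 4      # data offset is in units of 4B
    flags = offset_and_flags & 0x003F              # URG, ACK, PSH, RST, SYN, FIN
    return {
        "src_port": src_port, "dst_port": dst_port,
        "seq": seq, "ack": ack, "header_len": header_len,
        "URG": bool(flags & 0x20), "ACK": bool(flags & 0x10),
        "PSH": bool(flags & 0x08), "RST": bool(flags & 0x04),
        "SYN": bool(flags & 0x02), "FIN": bool(flags & 0x01),
        "window": window, "checksum": cksum, "urgent_ptr": urgent_ptr,
    }

# Example: a SYN segment from port 51514 to port 80, seq=0, header length 20B, window 65535.
syn = struct.pack("!HHIIHHHH", 51514, 80, 0, 0, (5 << 12) | 0x02, 65535, 0, 0)
print(parse_tcp_header(syn))
```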

5.3.2 TCP connection management

A TCP transport connection has three phases: connection establishment, data transfer, and connection release. Managing a TCP transport connection means ensuring that connection establishment and release proceed correctly.

TCP treats the connection as its most basic abstraction. Each TCP connection has two endpoints. The endpoint of a TCP connection is not the host, not the host's IP address, not the application process, and not the transport-layer protocol port. The endpoint of a TCP connection is called a socket: the port number concatenated with the IP address forms the socket.

Each TCP connection is uniquely identified by the two endpoints (the two sockets) of the communicating parties. For example, TCP connection ::= {socket1, socket2} = {(IP1:port1), (IP2:port2)}.

TCP connections are established and used in the client/server (C/S) manner. The application process that actively initiates the connection is called the client, and the application process that passively waits for the connection is called the server.

The TCP transport connection is established using the "three-way handshake" method:


  • Client A's TCP sends a connection request segment to server B with SYN=1 in the header (TCP stipulates that a SYN segment cannot carry data, but it consumes one sequence number), and selects seq=x as its initial sequence number.
  • When server B receives the segment, it knows from the SYN bit that this is a connection request. If it agrees, it sends back an acknowledgment in which SYN=1, ACK=1, the acknowledgment number ack=x+1, and its own chosen sequence number seq=y. Note that this segment cannot carry data either (mnemonic: SYN=1, so no data).
  • After receiving this segment, A returns an acknowledgment to B with ACK=1 and acknowledgment number ack=y+1. A's TCP then notifies its upper application process that the connection has been established. When B's TCP receives A's acknowledgment, it also notifies its upper application process; the TCP connection is now established. This ACK segment (which no longer carries SYN) may carry data; if it carries no data, it does not consume a sequence number.

Using the "three-way handshake" method,The purpose is to prevent errors in message segments during the establishment of a transmission connection.. After three exchanges of message segments, a transmission connection is established between the processes of both communicating parties, and thenUse full-duplex modeThe data segment is transmitted normally on the transport connection.


The process of TCP releasing connection:

  • After data transmission is complete, either party may release the connection. Suppose A's application process first issues a connection-release request to its TCP, stops sending data, and actively closes the TCP connection. A sets FIN=1 in the header of the connection-release segment, with sequence number seq=u, and waits for B's acknowledgment. Note that because TCP is full duplex, a TCP connection can be pictured as two data paths, one in each direction. Once an end has sent a FIN it can no longer send data, that is, its path is closed, but the other end may still send data.


  • B sends an acknowledgment with acknowledgment number ack=u+1 and its own sequence number seq=v. B's TCP notifies its higher-level application process. The connection in the direction from A to B is now released, and the TCP connection is in a half-closed state: if B still has data to send, A must still receive it.


  • If B no longer has data to send to A, its application process notifies TCP to release the connection; B then sends a segment with FIN=1, ACK=1, sequence number seq=w, and acknowledgment number ack=u+1.


  • After A receives this connection-release segment, it must send an acknowledgment with ACK=1, acknowledgment number ack=w+1, and its own sequence number seq=u+1.


The TCP connection is actually released only after a further 2MSL (twice the maximum segment lifetime) has elapsed.


Summary (a small bookkeeping sketch follows the list):

  • Connection establishment
    • SYN=1, seq=x.
    • SYN=1, ACK=1, seq=y, ack=x+1.
    • ACK=1, seq=x+1, ack=y+1.
  • Connection release
    • FIN=1, seq=u.
    • ACK=1, seq=v, ack=u+1.
    • FIN=1, ACK=1, seq=w, ack=u+1.
    • ACK=1, seq=u+1, ack=w+1.
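A tiny bookkeeping sketch of the exchange summarized above (x, y, u, v, w are arbitrary example values; this is not a protocol implementation):

```python
x, y = 100, 300                 # example initial sequence numbers chosen by A and B
handshake = [
    dict(SYN=1, seq=x),                          # A -> B: connection request
    dict(SYN=1, ACK=1, seq=y, ack=x + 1),        # B -> A: request accepted
    dict(ACK=1, seq=x + 1, ack=y + 1),           # A -> B: connection established
]

u, v, w = 500, 700, 900         # example sequence numbers at release time
release = [
    dict(FIN=1, seq=u),                          # A -> B: A has no more data to send
    dict(ACK=1, seq=v, ack=u + 1),               # B -> A: half-closed (A -> B released)
    dict(FIN=1, ACK=1, seq=w, ack=u + 1),        # B -> A: B has finished as well
    dict(ACK=1, seq=u + 1, ack=w + 1),           # A -> B: final ACK, then wait 2MSL
]
print(*handshake, *release, sep="\n")
```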

It must be ensured that the sequence numbers of late-arriving TCP segments from an old connection do not fall within the sequence-number range used by a new connection. The initial sequence number TCP selects when establishing a new connection must therefore differ from the sequence numbers used in earlier connections; different TCP connections must not reuse the same initial sequence number.

5.3.3 TCP reliable transmission

TCP data numbering and acknowledgment

TCP is byte-oriented. TCP regards the message to be transmitted as a stream of bytes and assigns each byte a sequence number. When the connection is established, the two parties agree on an initial sequence number. The sequence-number field in the header of each segment sent by TCP carries the sequence number of the first byte of that segment's data part.

TCP's acknowledgment acknowledges the highest sequence number of the data received in order: the acknowledgment number returned by the receiver is that highest sequence number plus 1. In other words, the acknowledgment number is the sequence number of the first data byte the receiver expects to receive next.

TCP retransmission mechanism

Every time TCP sends a segment, it starts a timer for that segment. If the retransmission time set by the timer expires before an acknowledgment is received, the segment is retransmitted.

An adaptive algorithm for TCP

  • Record the time at which each segment is sent and the time at which its acknowledgment arrives. The difference between the two is the round-trip delay of that segment.

  • Weighting and averaging the round-trip delay samples of the segments gives the average round-trip time (RTT).

  • Each time a new round-trip delay sample is measured, the average round-trip time is recalculated according to the following formula:

    RTT = (1 - α) × (old RTT) + α × (new round-trip delay sample)

In the formula above, 0 ≤ α < 1. If α is close to 1, the new sample strongly affects the newly computed average round-trip time, that is, the RTT value is updated quickly. If α is close to 0, the weighted RTT is affected little by the new round-trip delay sample, that is, the RTT value is updated slowly. A value of α = 0.125 is generally recommended.

The retransmission timeout (RTO) set by the timer should be slightly larger than the RTT obtained above:

RTO = β × RTT (β > 1)

Karn's algorithm: round-trip samples from retransmitted segments are not used to update the RTT; instead, each time a segment is retransmitted, the RTO is increased.

New RTO = γ × (old RTO), where the coefficient γ typically equals 2. When segments stop being retransmitted, the weighted RTT and the RTO are once again updated from the measured round-trip delays.
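A minimal sketch of this adaptive timer calculation (the constants α = 0.125, β = 2 and γ = 2 are the illustrative values mentioned above; this is not the full algorithm used by modern TCP stacks):

```python
class RttEstimator:
    """Weighted RTT averaging with an RTO slightly larger than the RTT,
    plus Karn-style handling of retransmissions."""

    def __init__(self, alpha=0.125, beta=2.0, gamma=2.0, initial_rtt=1.0):
        self.alpha, self.beta, self.gamma = alpha, beta, gamma
        self.rtt = initial_rtt          # weighted average round-trip time, in seconds
        self.rto = beta * initial_rtt   # retransmission timeout

    def on_sample(self, sample: float, was_retransmitted: bool) -> None:
        # Karn's rule: a sample from a retransmitted segment is ambiguous, so skip it.
        if was_retransmitted:
            return
        self.rtt = (1 - self.alpha) * self.rtt + self.alpha * sample
        self.rto = self.beta * self.rtt            # RTO = beta * RTT, beta > 1

    def on_retransmit(self) -> None:
        self.rto = self.gamma * self.rto           # new RTO = gamma * (old RTO)

est = RttEstimator()
est.on_sample(0.8, was_retransmitted=False)
print(round(est.rtt, 3), round(est.rto, 3))        # 0.975 1.95
```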

What TCP's reliability mechanism must cope with

  • Each IP datagram is routed independently, so datagrams may arrive at the destination host out of order.
  • An IP datagram may circle around the network because of routing errors; eventually the time-to-live (TTL) in its header drops to zero and the datagram is discarded along the way.
  • A router may suddenly receive a large volume of traffic and be unable to process the arriving datagrams in time, so some datagrams are discarded.

5.3.4 TCP flow control

Generally speaking, people always want data to be transferred faster. But if the sender sends the data too fast, the receiver may not have time to receive it, which will cause data loss.

Flow control keeps the sender's rate from being too fast, so that the receiver has time to receive the data and the network does not become congested. The sliding window mechanism makes it easy to implement flow control on a TCP connection.


  • TCP keeps a persistence timer for each connection. As soon as one side of the TCP connection receives a zero-window notification from the other side, it starts the persistence timer. When the timer expires, it sends a zero-window probe segment (carrying only 1B of data), and the other side states its current window value when it acknowledges the probe. If the window is still zero, the side that receives this reply resets the persistence timer; if the window is not zero, the deadlock can be broken.
  • Different mechanisms can be used to control when TCP sends a segment (a small decision sketch follows this list):
    • TCP maintains a variable equal to the maximum segment size (MSS). Only when the data in the buffer reaches MSS bytes is it assembled into a TCP segment and sent.
    • The sender's application process indicates that the segment should be sent at once, i.e., the push operation supported by TCP (already contrasted with the urgent pointer earlier, so not repeated here).
    • When a timer on the sender expires, whatever data is currently buffered is packed into a segment (whose length must not exceed the MSS) and sent.
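A small decision sketch of the three send-timing rules above (illustrative only; MSS = 1460 bytes is an assumed value):

```python
MSS = 1460  # assumed maximum segment size, in bytes

def ready_to_send(buffered_bytes: int, push_requested: bool, timer_expired: bool) -> bool:
    """Send a segment when the buffer holds a full MSS, when the application
    pushes, or when the sender-side timer expires."""
    return buffered_bytes >= MSS or push_requested or timer_expired

print(ready_to_send(2000, False, False))   # True: a full MSS is buffered
print(ready_to_send(100, True, False))     # True: the application pushed the data
print(ready_to_send(100, False, False))    # False: keep accumulating
```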

5.3.5 Basic concepts of TCP congestion control

During a certain period, if the demand for a particular network resource exceeds what that resource can provide, the performance of the network deteriorates: congestion occurs.

The condition for congestion is: sum of resource demands > available resources.

If congestion occurs in the network, network performance will significantly deteriorate, and the throughput of the entire network will decrease as the input load increases.

Comparison of the properties of congestion control and flow control :

  • Congestion control has only one purpose: to enable the network to carry the existing load.
  • Congestion control is a global process involving all hosts, all routers, and every factor related to degrading the network's transmission performance.
  • Flow control often refers to the control of point-to-point traffic between a given sender and receiver.
  • What flow control has to do is to suppress the rate at which the sender sends data so that the receiver has time to receive it.
  • Congestion control is difficult to design because it is a dynamic (rather than static) problem.
  • Today's networks are moving toward higher speeds, which makes it easy for packets to be dropped because buffers are too small. But packet loss is a symptom of network congestion, not its cause.
  • In many cases it is precisely the congestion control mechanism itself that becomes the cause of degraded network performance or even deadlock; this deserves special attention.

Congestion control is divided into closed-loop control and open-loop control.

  • Open-loop control means taking the factors that can cause congestion into account in advance, when the network is designed, and striving to avoid congestion while the network is operating.
  • Closed-loop control is based on the concept of a feedback loop. The following measures belong to closed-loop control:
    • Monitor network systems to detect when and where congestion occurs.
    • Get information about congestion occurrences where action can be taken.
    • Adjust the operation of network systems to resolve problems that arise.

5.3.6 Four algorithms for congestion control

When the host at the sending end determines the rate at which message segments are sent, it must not only consider the receiving capability of the receiving end, but also consider the overall situation so as not to cause network congestion.

TCP requires the sender to maintain the following two windows:

  • Receiver window rwnd: the latest window value promised by the receiver based on its current receive buffer, reflecting the receiver's capacity. The receiver notifies the sender of it in the window field of the TCP header.
  • Congestion window cwnd: the window value set by the sender based on its own estimate of network congestion, reflecting the current capacity of the network.

The upper limit of the sender's send window is the smaller of the receiver window rwnd and the congestion window cwnd; whichever of the two is smaller controls the rate at which the sender sends data.

Upper limit of the send window = Min[rwnd, cwnd]


The size of the receiving window can be notified to the sending end according to the window field in the TCP message header. How does the sending end maintain the congestion window?

The principle of slow start algorithm

  • When a host first starts sending segments, it can set the congestion window cwnd=1, i.e., to the value of one maximum segment size MSS.
  • Each time an acknowledgment for a new segment is received, the congestion window is increased by 1, i.e., by one MSS.
  • Using this method to gradually increase the congestion window cwnd of the sender can make the rate of packet injection into the network more reasonable.

With the slow start algorithm, the congestion window cwnd doubles after each transmission round, i.e., cwnd grows exponentially. Slow start keeps increasing the congestion window cwnd until it reaches a preset slow start threshold ssthresh, after which the congestion avoidance algorithm takes over. (The time taken by one transmission round is in fact one round-trip time RTT; for example, if the congestion window cwnd=4, the round-trip time here is the total time from when the sender sends 4 consecutive segments until it has received the acknowledgments for those 4 segments.)

The principle of congestion avoidance algorithm

In order to prevent the growth of the congestion window cwnd from causing network congestion, a state variable is also needed, namely the slow start threshold ssthresh.

When cwnd < ssthresh, the slow start algorithm is used.
When cwnd > ssthresh, the slow start algorithm stops and the congestion avoidance algorithm is used instead.
When cwnd = ssthresh, either the slow start algorithm or the congestion avoidance algorithm may be used.

The congestion avoidance algorithm works as follows: the sender's congestion window cwnd is increased by one MSS for every round-trip time RTT that passes, so it usually grows linearly.

Whether in the slow start phase or in the congestion avoidance phase, as soon as the sender judges that the network is congested (the sign being that an acknowledgment is not received in time), it must set the slow start threshold ssthresh to half of the send window value at the moment congestion occurred (but not less than 2), reset the congestion window cwnd to 1, and run the slow start algorithm again. The purpose is to quickly reduce the number of packets the host sends into the network, giving the congested router enough time to clear the backlog of packets in its queue (a small simulation sketch follows).
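A small simulation sketch of slow start plus congestion avoidance under these rules, using the same illustrative numbers as the walkthrough below (ssthresh = 16 segments, timeout when cwnd reaches 24; the window unit is segments, not bytes):

```python
def simulate_cwnd(rounds: int, ssthresh: int = 16, timeout_at: int = 24) -> list[int]:
    """Congestion window per transmission round: exponential growth below ssthresh,
    linear growth above it, and multiplicative decrease plus restart on timeout."""
    cwnd, history = 1, []
    for _ in range(rounds):
        history.append(cwnd)
        if cwnd >= timeout_at:                    # timeout: the network is congested
            ssthresh = max(cwnd // 2, 2)          # multiplicative decrease, not below 2
            cwnd = 1                              # fall back to slow start
        elif cwnd < ssthresh:
            cwnd = min(cwnd * 2, ssthresh)        # slow start: doubles each round (capped)
        else:
            cwnd += 1                             # congestion avoidance: +1 MSS per RTT
    return history

print(simulate_cwnd(20))
# [1, 2, 4, 8, 16, 17, 18, 19, 20, 21, 22, 23, 24, 1, 2, 4, 8, 12, 13, 14]
```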

  • When a TCP connection is initialized, the congestion window is set to 1, as shown in Figure 5-16. The window unit in the figure is not bytes but segments. The initial value of the slow start threshold is set to 16 segments, i.e., ssthresh=16. The sender's send window cannot exceed the minimum of the congestion window cwnd and the receiver window rwnd; the receiver window is assumed here to be large enough, so the send window equals the congestion window.


  • When the slow start algorithm starts executing, the initial value of the congestion window cwnd is 1, and the first segment M0 is sent.


  • Each time the sender receives an acknowledgment, it increases cwnd by 1, so the sender can now send the two segments M1 and M2.


  • The receiver sends back two acknowledgments in total. Each acknowledgment of a new segment increases the sender's cwnd by 1, so cwnd grows from 2 to 4, and the next 4 segments can be sent.


  • Each time the sender receives an acknowledgment of a new segment, it increases the sender's congestion window by 1. Therefore, the congestion window cwnd increases exponentially with the transmission rounds.


  • When the congestion window cwnd has grown to the slow start threshold ssthresh (i.e., when cwnd=16), the congestion avoidance algorithm takes over and the congestion window grows linearly.


  • Assume that when the congestion window value increases to 24, the network times out, indicating that the network is congested.


  • The updated ssthresh value becomes 12 (half the send window value of 24), the congestion window is reset to 1, and the slow start algorithm is executed.


  • When cwnd grows back to 12, the congestion avoidance algorithm is used again: the congestion window grows linearly, increasing by one MSS for each round-trip time.


Summarize:

  • Multiplicative decrease: whether in the slow start phase or the congestion avoidance phase, as soon as a timeout occurs (i.e., network congestion occurs), the slow start threshold ssthresh is set to half of the current congestion window value. When the network is congested frequently, ssthresh decreases rapidly, reducing the number of packets injected into the network.
  • Additive increase: while the congestion avoidance algorithm is executing, after the acknowledgments for all segments have been received (i.e., after one round-trip time), the congestion window cwnd is increased by one MSS, so that the congestion window grows slowly and premature congestion is prevented.
  • "Congestion avoidance" does not mean that congestion can be avoided completely; the measures above still cannot eliminate network congestion entirely. It means that during the congestion avoidance phase the congestion window grows linearly, making the network less prone to congestion.


Fast retransmission algorithm

First, the receiver is required to send a duplicate acknowledgment immediately after receiving an out-of-order segment, so that the sender learns as early as possible that some segment has not reached the receiver. As soon as the sender receives three duplicate acknowledgments in a row, it immediately retransmits the segment that the other party has not yet received, without waiting for the retransmission timer to expire.

Fast recovery algorithm

  • When the sender receives three duplicate acknowledgments in a row, it performs the "multiplicative decrease" step and sets the slow start threshold ssthresh to half of the current congestion window, but it does not then run the slow start algorithm.

  • Because the sender now believes that the network is most likely not congested, it does not set the congestion window cwnd to 1. Instead, after setting the slow start threshold ssthresh to half of the current congestion window, it starts the congestion avoidance algorithm ("additive increase"), so that the congestion window grows slowly and linearly.


The fast recovery algorithm is as follows:

  • When the sender receives 3 consecutive duplicate ACKs, it resets the slow start threshold ssthresh to half of the congestion window.
  • The difference from a timeout is that the congestion window cwnd is not set to 1 but to the new ssthresh.
  • If the send window value still permits segments to be sent, segments continue to be sent according to the congestion avoidance algorithm (see the sketch below).
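A short sketch contrasting the two reactions described above (timeout versus three duplicate ACKs); the numbers in the usage lines are the example values from the earlier walkthrough:

```python
def on_congestion_event(cwnd: int, triple_dup_ack: bool) -> tuple[int, int]:
    """Return the new (cwnd, ssthresh) after a congestion signal."""
    ssthresh = max(cwnd // 2, 2)      # multiplicative decrease in both cases
    if triple_dup_ack:
        cwnd = ssthresh               # fast recovery: skip slow start,
                                      # continue with congestion avoidance
    else:
        cwnd = 1                      # timeout: restart with slow start
    return cwnd, ssthresh

print(on_congestion_event(24, triple_dup_ack=False))  # (1, 12)
print(on_congestion_event(24, triple_dup_ack=True))   # (12, 12)
```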


Origin blog.csdn.net/pipihan21/article/details/129572306