[HuKe University Teacher] Computer Network Lecture Notes Chapter 5 (Computer Network Transport Layer)

Table of contents

5.1. Overview of transport layer

Concept

Communication between processes

The process of communication between processes

Summary

5.2. Transport layer port numbers, concepts of multiplexing and demultiplexing

Why use port number?

Multiplexing on the sender and demultiplexing on the receiver


Transport layer transmission process 

5.3. Comparison between UDP and TCP

Concept

User Datagram Protocol UDP (User Datagram Protocol)

Transmission Control Protocol TCP (Transmission Control Protocol)

Summary

5.4. TCP flow control

Concept

Summary

5.5. TCP congestion control

Concept

Factors causing network congestion

General principles of congestion control

Open loop control and closed loop control

Monitor network congestion

Congestion control algorithm

Slow start and congestion avoidance

Slow start

Congestion avoidance

Complete schematic diagram of the two algorithms

Fast retransmission and fast recovery

Fast retransmission (fast retransmit)

Fast recovery

Schematic diagram of the improved overall algorithm

5.6. Selection of TCP timeout retransmission time

RFC6298 recommends using the following formula to calculate the timeout retransmission time RTO

The measurement of round trip time RTT is more complicated

Calculation of TCP timeout retransmission 

Summary

5.7. Implementation of TCP reliable transmission 

5.8. TCP transport connection management

Concept

TCP connection establishment

TCP connection establishment needs to solve the following three problems:

TCP uses the "three-way handshake" to establish a connection

Summary

TCP connection release

TCP releases the connection through the "four-way handshake"

The role of TCP keepalive timer

5.9. Header format of TCP segment 

The role of each field


5.1. Overview of transport layer

Concept

Communication between processes

  • From the perspective of communication and information processing, the transport layer provides communication services to the application layer above it. It is the highest layer of the communication-oriented part of the architecture and the lowest layer of the user-oriented functions.

  • When two hosts at the edge of the network use the functions of the network core for end-to-end communication, only the protocol stacks of the hosts at the network edge have a transport layer; the routers in the network core only forward packets and use no more than the bottom three layers (up to the network layer).

The process of communication between processes

"Logical communication" means that the two transport layers appear to exchange data directly with each other in the horizontal direction, but in fact there is no horizontal physical connection between them: the data to be transmitted actually travels up and down the protocol stacks several times, along the dashed lines shown in the figure.

Network-based communication takes place between processes Ap1 and Ap4, and between processes Ap2 and Ap3.

The transport layer uses different port numbers to correspond to different application processes.

Application layer messages are then transmitted through the network layer and its lower layers.

The receiver's transport layer delivers the received application-layer messages, via the corresponding ports, to the matching application processes in the application layer.

A port here is not a visible, tangible physical port; it is an identifier used to distinguish different application processes.

Summary


5.2. Transport layer port numbers, concepts of multiplexing and demultiplexing

Why use port number?

Multiplexing on the sender and demultiplexing on the receiver

When multiple processes (here each port represents a process) use the same transport-layer protocol (transport-layer interface) to send data, this is called multiplexing.

When multiple processes (here each port represents a process) use the same transport-layer protocol (transport-layer interface) to receive data, this is called demultiplexing.
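
To make the idea concrete, here is a minimal sketch (hypothetical handler functions, not a real protocol stack) of how a receiving transport layer demultiplexes incoming segments by destination port number:

```python
# Minimal sketch: demultiplexing incoming segments to processes by destination port.
def demultiplex(bindings, segment):
    """bindings maps a local port number to the process (here: a handler function) bound to it."""
    handler = bindings.get(segment["dst_port"])
    if handler is None:
        # No process is bound to this port; a real stack would report an error
        # (e.g. an ICMP "port unreachable" message in the case of UDP).
        print(f"no process bound to port {segment['dst_port']}, segment discarded")
    else:
        handler(segment["payload"])

bindings = {
    53: lambda payload: print("DNS server process got:", payload),
    80: lambda payload: print("Web server process got:", payload),
}
demultiplex(bindings, {"dst_port": 53, "payload": b"DNS query"})
demultiplex(bindings, {"dst_port": 25, "payload": b"mail data"})
```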

Transport layer transmission process 

The user enters the domain name in the browser and presses Enter to browse the page.

Then the DNS client process in the user's PC will send a DNS query request message.

DNS query request messages need to use the UDP protocol of the transport layer

The source port field in the header is set to an unoccupied port chosen from the ephemeral port numbers 49152~65535, to represent the DNS client process.

The destination port field in the header is set to 53, the well-known port number used by the DNS server process.

 

Afterwards, the UDP user datagram is encapsulated in an IP datagram and sent to the DNS server via Ethernet. 

After receiving the IP datagram, the DNS server decapsulates the UDP user datagram from it.

The destination port number in the UDP header is 53, which indicates that the data payload part of the UDP user datagram, that is, the DNS query request message, should be delivered to the DNS server process in this server.

The DNS server process parses the content of the DNS query request message and then finds the corresponding IP address according to its requirements.

After that, a DNS response message will be sent to the user PC. The DNS response message needs to be encapsulated into a UDP user datagram using the UDP protocol of the transport layer.

The source port field in the header is set to the well-known port number 53, indicating that this UDP user datagram was sent by the DNS server process; the destination port field is set to 49152, the ephemeral port number used by the DNS client process on the user PC when it sent the DNS query request earlier.

Encapsulate UDP user datagrams in IP datagrams and send them to the user PC through Ethernet

After the user PC receives the IP datagram, it decapsulates the UDP user datagram from it.

The destination port number in the UDP header is 49152, which indicates that the data payload part of the UDP user datagram, that is, the DNS response message, should be delivered to the DNS client process in the user's PC.

The DNS client process parses the content of the DNS response message to know the IP address corresponding to the domain name of the web server it requested previously.

Now the HTTP client process in the user's PC can send HTTP request messages to the Web server (similar to the DNS sending and receiving process)
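
The exchange described above can be sketched with Python's standard socket module. This is an illustrative example only: the resolver address 8.8.8.8 and the queried domain are assumptions, and real clients normally use a resolver library rather than building the DNS message by hand.

```python
# A minimal sketch of the DNS query exchange described above, carried over UDP.
import socket
import struct

def build_dns_query(domain: str) -> bytes:
    # Header: ID, flags (recursion desired), QDCOUNT=1, ANCOUNT=0, NSCOUNT=0, ARCOUNT=0
    header = struct.pack("!HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    # Question: each label is length-prefixed, terminated by a zero byte; QTYPE=A, QCLASS=IN
    question = b"".join(bytes([len(label)]) + label.encode() for label in domain.split("."))
    question += b"\x00" + struct.pack("!HH", 1, 1)
    return header + question

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)   # UDP socket
sock.settimeout(3.0)
# The OS picks an unused ephemeral source port (49152~65535) for us automatically.
sock.sendto(build_dns_query("example.com"), ("8.8.8.8", 53))   # destination port 53
reply, server = sock.recvfrom(512)                             # response arrives at our ephemeral port
print(f"got {len(reply)} bytes from {server}")
sock.close()
```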


5.3. Comparison between UDP and TCP

Concept

  • UDP and TCP are two important protocols in the transport layer of TCP/IP architecture

  • When the transport layer adopts the connection-oriented TCP protocol, although the underlying network is unreliable (only best-effort service is provided), this logical communication channel is equivalent to a full-duplex reliable channel.

  • When the transport layer uses the connectionless UDP protocol, this logical communication channel is an unreliable channel.

Reliable channel and unreliable channel

  • The data unit transmitted by two peer transport entities during communication is called Transport Protocol Data Unit (TPDU).

  • The protocol data unit transmitted by TCP is called a TCP segment.

  • The protocol data unit transmitted by UDP is called a UDP message or user datagram.

UDP communication is connectionless: no connection between two sockets needs to be established before data is sent.

TCP is connection-oriented: TCP communication must take place over a connection established between two sockets (Socket).

User Datagram Protocol UDP (User Datagram Protocol)

UDP can send broadcasts

It can send multicasts to a multicast group

It can also send unicasts

UDP supports unicast, multicast, and broadcast

In other words, UDP supports one-to-one, one-to-many, and one-to-all communication

Transmission process

UDP neither merges nor splits the packets handed over by the application process, but retains the boundaries of these packets.

In other words, UDP is oriented to application messages.

UDP provides connectionless and unreliable transmission services to the upper layer
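
A small sketch of this message-oriented behavior, using two local UDP sockets (the port number 50007 is an arbitrary choice for the example): each sendto() produces one UDP user datagram, and each recvfrom() returns exactly one whole application message.

```python
# Minimal sketch: UDP preserves the boundaries of application messages.
import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 50007))          # arbitrary local port for the example

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"first message", ("127.0.0.1", 50007))
sender.sendto(b"second message", ("127.0.0.1", 50007))

for _ in range(2):
    data, addr = receiver.recvfrom(1024)     # one call returns exactly one application message
    print(addr, data)

sender.close()
receiver.close()
```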

UDP structure

Transmission Control Protocol TCP (Transmission Control Protocol)

Both communicating parties using TCP must first establish a TCP connection with the "three-way handshake" before transmitting data.

After the TCP connection is successfully established, there appears to be a reliable communication channel between the communicating parties, and they communicate over this reliable channel based on the TCP connection.

Obviously, TCP only supports unicast, that is, one-to-one communication.

Transmission process

Sender

  • TCP treats the blocks of data handed down by the application process as an unstructured stream of bytes; TCP does not know the meaning of the byte stream it transmits.
  • TCP numbers the bytes and stores them in its own send buffer.
  • According to its sending strategy, TCP extracts a certain number of bytes, constructs a TCP segment, and sends it.

Receiver

  • On the one hand, TCP takes the data payload out of each received TCP segment and stores it in the receive buffer; on the other hand, it delivers bytes from the receive buffer to the application process.
  • TCP does not guarantee any correspondence in size between the data blocks received by the receiving application process and the data blocks sent by the sending application process (for example, the sending application may hand 10 data blocks to the sender's TCP, while the receiver's TCP may deliver the received byte stream to the upper-layer application in only 4 data blocks); however, the byte stream received by the receiver must be exactly the same as the byte stream sent by the sending application process.
  • The receiving application process must be able to interpret the received byte stream and restore it to meaningful application-layer data.

TCP is oriented to byte streams, which is the basis for TCP to achieve reliable transmission, flow control, and congestion control.

This figure only shows data flowing in one direction; in an actual network, TCP segments can be sent and received at the same time at both ends of the TCP connection.

TCP provides connection-oriented reliable transmission services to the upper layer
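
By contrast, the following sketch illustrates TCP's byte-stream behavior with two local sockets (port 50008 is an arbitrary choice): the receiver may read the bytes of two separate send calls in a single recv(), because TCP preserves no message boundaries.

```python
# Minimal sketch: TCP is byte-stream oriented and keeps no message boundaries.
import socket
import threading
import time

def server():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 50008))
    srv.listen(1)                       # passive open: wait for the client's connection request
    conn, _ = srv.accept()
    time.sleep(0.2)                     # let both client writes arrive before reading
    data = conn.recv(4096)              # may return the bytes of both writes in one call
    print("server received:", data)
    conn.close()
    srv.close()

t = threading.Thread(target=server)
t.start()
time.sleep(0.1)

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", 50008))       # active open: the three-way handshake happens here
cli.sendall(b"first block ")
cli.sendall(b"second block")            # the boundary between the two blocks is not preserved
cli.close()
t.join()
```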

 TCP structure

Summary


5.4. TCP flow control

Concept

In the figure above, host A can now delete all bytes with sequence numbers 1~200 from its send buffer, because it has received a cumulative acknowledgment for them from host B.

In the figure above, host A can now delete all bytes with sequence numbers 201~500 from its send buffer, because it has received a cumulative acknowledgment for them from host B.

In the figure above, host A can now delete all bytes with sequence numbers 501~600 from its send buffer, because it has received a cumulative acknowledgment for them from host B.

In the figure above, even if the zero-window probe segment is lost in transit, the deadlock can still be broken.

This is because the zero-window probe segment also has a retransmission timer; when that timer expires, the zero-window probe segment is retransmitted.
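
A highly simplified sketch (not real TCP) of the idea: the sender may never have more than rwnd unacknowledged bytes outstanding, and when the advertised window drops to zero it starts a timer and keeps sending zero-window probe segments so that a lost window-update segment cannot deadlock the connection.

```python
# Minimal sketch of receive-window flow control (illustrative only).
def allowed_to_send(rwnd, bytes_in_flight):
    """How many more bytes the sender may put into the network right now."""
    return max(0, rwnd - bytes_in_flight)

def on_window_advertisement(rwnd):
    if rwnd == 0:
        # Window closed: start a timer and keep sending zero-window probe segments,
        # so that a lost window-update segment cannot deadlock the connection.
        print("rwnd = 0 -> start probe timer, send zero-window probe segments on timeout")
    else:
        print(f"rwnd = {rwnd} -> may have up to {rwnd} unacknowledged bytes in flight")

print(allowed_to_send(rwnd=300, bytes_in_flight=200))   # 100 more bytes may be sent now
on_window_advertisement(300)   # e.g. host B advertises a 300-byte window
on_window_advertisement(0)     # host B closes the window; host A must probe
```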

Summary


5.5. TCP congestion control

Concept

Factors causing network congestion

  1. The buffer capacity of the nodes is too small;

  2. The capacity (bandwidth) of the links is insufficient;

  3. The processing speed of the processors is too slow;

  4. Congestion itself further aggravates congestion.

General principles of congestion control

  • Prerequisite for congestion control: the network can bear the existing network load.

  • Practice has proven that congestion control is difficult to design, because it is a dynamic problem.

  • Packet loss is a symptom of network congestion rather than its cause.

  • In many cases, congestion control itself becomes the cause of network performance deterioration or even deadlock.

Open loop control and closed loop control

Monitor network congestion

The main indicators are:

  1. The percentage of packets dropped due to lack of buffer space;

  2. The average queue length;

  3. The number of packets retransmitted after timeout;

  4. The average packet delay;

  5. The standard deviation of the packet delay, etc.

A rise in these indicators indicates growing congestion.

Congestion control algorithm

Actual send window value = min(receive window value, congestion window value)

In the example below, the horizontal and vertical axes have the following meanings:

Transmission rounds:

  • After the sender sends the data segment to the receiver, the receiver sends back the corresponding confirmation segment to the sender.

  • The time experienced by a transmission round is actually the round-trip time, and the round-trip time is not a constant value.

  • The purpose of using transmission rounds is to emphasize that all the segments allowed by the congestion window are sent back-to-back, and that the round ends when the acknowledgment for the last segment sent has been received.

Congestion window:

  • It changes dynamically with the degree of network congestion and the congestion control algorithm used.

Slow start and congestion avoidance

Slow start
  • Purpose: Used to determine the load capacity or congestion level of the network.

  • The idea of the algorithm: increase the congestion window value gradually from small to large.

  • Two variables:

    • Congestion window (cwnd): the initial congestion window value can be set in one of two ways (listed below); the window value then increases gradually.

      • 1 to 2 maximum segments (old standard)

      • 2 to 4 maximum segments (RFC 5681)

    • Slow start threshold (ssthresh) : Prevents the congestion window from growing too large and causing network congestion.

In the figure, swnd is the send window.

After each transmission round, the congestion window is doubled

The window size grows exponentially: in the n-th transmission round, the congestion window is 2 to the power n-1 (starting from cwnd = 1).

Congestion avoidance
  • Idea: Let the congestion window cwnd increase slowly to avoid congestion.

  • After each transmission round, the congestion window cwnd = cwnd + 1 .

  • Make the congestion window cwnd grow slowly linearly.

  • In the congestion avoidance phase, the window has the characteristic of "additive increase".

If some segments are lost during transmission, the sender will inevitably time out and retransmit these lost segments.

At this point, the sender judges that the network may be congested and returns to the slow start phase, as the sketch below illustrates.
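
The sketch below simulates this behavior in units of segments, assuming the usual rule that a timeout sets ssthresh to half the current window (but not less than 2) and resets cwnd to 1. The initial ssthresh of 16 and the timeout in round 10 are arbitrary values chosen to show both phases and the fallback to slow start.

```python
# Minimal sketch of how cwnd evolves under slow start, congestion avoidance, and a timeout.
def next_cwnd(cwnd, ssthresh, timeout):
    if timeout:
        return 1, max(cwnd // 2, 2)                 # fall back to slow start, halve ssthresh
    if cwnd < ssthresh:
        return min(cwnd * 2, ssthresh), ssthresh    # slow start: exponential growth up to ssthresh
    return cwnd + 1, ssthresh                       # congestion avoidance: additive increase

cwnd, ssthresh = 1, 16
for rnd in range(1, 16):
    timeout = (rnd == 10)                           # assume a retransmission timeout in round 10
    cwnd, ssthresh = next_cwnd(cwnd, ssthresh, timeout)
    print(f"round {rnd:2d}: cwnd = {cwnd:2d}, ssthresh = {ssthresh}")
```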

Complete schematic diagram of the two algorithms

Fast retransmission and fast recovery

Fast retransmission (fast retransmit)

Fast recovery

Schematic diagram of the improved overall algorithm


5.6. Selection of TCP timeout retransmission time

If the timeout retransmission time RTO is set much smaller than RTT0, unnecessary retransmissions of segments occur and the network load increases.

If the timeout retransmission time RTO is set much larger than RTT0, retransmission is delayed for too long, the network idles unnecessarily, and transmission efficiency drops.

RFC6298 recommends using the following formula to calculate the timeout retransmission time RTO
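
The formulas in question are the standard RFC 6298 ones as presented in the textbook: a weighted-average (smoothed) round-trip time RTTs, a smoothed RTT deviation RTTd, and RTO = RTTs + 4 × RTTd. The sketch below applies them to a few made-up RTT samples.

```python
# Minimal sketch of the RFC 6298 RTO calculation (sample values are made up).
ALPHA = 1 / 8   # weight of the newest RTT sample in the smoothed RTT (RTTs)
BETA = 1 / 4    # weight of the newest deviation sample in the smoothed deviation (RTTd)

def update_rto(rtts, rttd, rtt_sample):
    rtts = (1 - ALPHA) * rtts + ALPHA * rtt_sample          # RTTs = (1-a)*RTTs + a*RTT
    rttd = (1 - BETA) * rttd + BETA * abs(rtts - rtt_sample) # RTTd = (1-b)*RTTd + b*|RTTs - RTT|
    rto = rtts + 4 * rttd                                    # RTO  = RTTs + 4*RTTd
    return rtts, rttd, rto

rtts, rttd = 0.20, 0.10                  # assumed values after the first measurement
for sample in (0.26, 0.32, 0.24):        # subsequent RTT measurements, in seconds
    rtts, rttd, rto = update_rto(rtts, rttd, sample)
    print(f"RTT sample {sample:.2f}s -> RTTs {rtts:.3f}s, RTTd {rttd:.3f}s, RTO {rto:.3f}s")
```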

The measurement of round trip time RTT is more complicated

Calculation of TCP timeout retransmission 

Summary


5.7. Implementation of TCP reliable transmission 

 


5.8. TCP transport connection management

Concept

TCP connection establishment

  • The process of establishing a TCP connection is called a handshake.

  • The handshake requires the exchange of three TCP segments between the client and the server, so it is called a three-way handshake.

  • The three-way handshake is mainly used to prevent an expired (invalid) connection request segment that suddenly arrives at the server from causing errors.

TCP connection establishment needs to solve the following three problems:

TCP uses the "three-way handshake" to establish a connection

  • The TCP connection is established using the client-server method .

  • The application process that actively initiates connection establishment is called a TCP client.

  • The application process that passively waits for the connection to be established is called a TCP server .

The "handshake" requires the exchange of three TCP segments between the TCP client and server

Process

Initially, the TCP processes on both ends are in the CLOSED state.

At the beginning, the TCP server process first creates a transmission control block (TCB), which stores important information about the TCP connection, such as the TCP connection table, pointers to the send and receive buffers, a pointer to the retransmission queue, and the current send and receive sequence numbers.

After that, it prepares to accept a connection request from the TCP client process.

At this point, the TCP server process enters the LISTEN state, waiting for a connection request from the TCP client process.

Because the TCP server process passively waits for the connection request from the TCP client process, this is called a passive open.

The TCP client process also first creates a transmission control block.

Since the establishment of the TCP connection is initiated by the TCP client, this is called an active open.

 

Then, when it intends to establish a TCP connection, the client sends a TCP connection request segment to the TCP server process and enters the SYN-SENT state.

In the header of the TCP connection request segment:

  • The synchronization bit SYN is set to 1, indicating that this is a TCP connection request segment.
  • The sequence number field seq is set to an initial value x, which is the initial sequence number selected by the TCP client process.

Please note: TCP stipulates that a segment with SYN set to 1 cannot carry data, but it consumes one sequence number.

After the TCP server process receives the TCP connection request segment, if it agrees to establish the connection, it sends back a TCP connection request acknowledgment segment and enters the SYN-RCVD state.

In the header of the TCP connection request acknowledgment segment:

  • The synchronization bit SYN and the acknowledgment bit ACK are both set to 1, indicating that this is a TCP connection request acknowledgment segment.
  • The sequence number field seq is set to an initial value y, the initial sequence number chosen by the TCP server process.
  • The acknowledgment number field ack is set to x+1, acknowledging the initial sequence number chosen by the TCP client process.

Please note: this segment cannot carry data, because it is a segment with SYN set to 1, but it also consumes one sequence number.

After the TCP client process receives the TCP connection request acknowledgment segment, it sends an ordinary TCP acknowledgment segment to the TCP server process and enters the ESTABLISHED state.

In the header of the ordinary TCP acknowledgment message segment

  • The acknowledgment bit ACK is set to 1, indicating that this is an ordinary TCP acknowledgment segment.
  • The sequence number field seq is set to x+1, because the first TCP segment sent by the TCP client process used sequence number x, so this second segment uses x+1.
  • The acknowledgment number field ack is set to y+1, acknowledging the initial sequence number chosen by the TCP server process.

Please note: TCP stipulates that an ordinary TCP acknowledgment segment may carry data; if it carries no data, it does not consume a sequence number.

After receiving this acknowledgment segment, the TCP server process also enters the ESTABLISHED state.

Now both TCP parties are in the ESTABLISHED state and can perform reliable data transfer over the established TCP connection.

Why does the TCP client process have to send this final ordinary TCP acknowledgment segment? Could a "two-way handshake" be used to establish the connection instead?

The example in the figure below shows a "two-way handshake".

The purpose is to prevent an expired connection request segment from suddenly arriving at the server and causing an error. Consider this situation: the first connection request segment sent by client A is not lost, but is held up at some network node for an unexpectedly long time, so that it only reaches server B some time after the connection has already been released. This is an expired segment, but after B receives it, B mistakes it for a new connection request from A, so B sends a confirmation segment back to A, agreeing to establish a connection. If the "three-way handshake" were not used, the new connection would be considered established as soon as B sent its confirmation. However, A never issued a new connection request, so A ignores B's confirmation and never sends any data, while B keeps waiting for data that will never arrive. In this way, many of B's resources are wasted.
So the third segment is not redundant: it prevents an expired connection request segment that suddenly arrives at the TCP server from causing such errors.
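
The numbers exchanged in the three-way handshake can be summarized in a small conceptual sketch (not a real TCP implementation; x = 100 and y = 300 are arbitrary example values):

```python
# Conceptual sketch of the three segments exchanged during connection establishment.
x, y = 100, 300   # initial sequence numbers chosen by the client and the server

# 1st segment: client -> server, SYN=1, seq=x              (consumes one sequence number)
# 2nd segment: server -> client, SYN=1, ACK=1, seq=y, ack=x+1
# 3rd segment: client -> server, ACK=1, seq=x+1, ack=y+1   (no data: consumes no sequence number)
segments = [
    {"dir": "C->S", "SYN": 1, "ACK": 0, "seq": x,     "ack": None},
    {"dir": "S->C", "SYN": 1, "ACK": 1, "seq": y,     "ack": x + 1},
    {"dir": "C->S", "SYN": 0, "ACK": 1, "seq": x + 1, "ack": y + 1},
]
for s in segments:
    print(s)
```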

Summary

TCP connection release

  • The TCP connection release process is more complicated.

  • After the data transmission is completed, both parties to the communication can release the connection.

  • The TCP connection release process uses a "four-way handshake": four TCP segments are exchanged.

TCP releases the connection through the "four-way handshake"

  • The TCP connection is established using the client-server method .

  • The application process that actively initiates connection establishment is called a TCP client.

  • The application process that passively waits for the connection to be established is called a TCP server .

  • Either party can issue a connection release notification after the data transfer is completed.

Process

Now both the TCP client process and the TCP server process are in the ESTABLISHED state.

The application process on the client side notifies its TCP to actively close the TCP connection.

The TCP client process then sends a TCP connection release segment and enters the FIN-WAIT-1 state.

In the header of the TCP connection release segment:

  • The termination bit FIN and the acknowledgment bit ACK are both set to 1, indicating that this is a TCP connection release segment that also acknowledges the previously received segment.
  • The sequence number field seq is set to u, which equals the sequence number of the last byte of data previously sent by the TCP client process plus 1.
  • The acknowledgment number field ack is set to v, which equals the sequence number of the last byte of data previously received by the TCP client process plus 1.

Please note: TCP stipulates that a segment with the termination bit FIN set to 1 consumes one sequence number even if it carries no data.

After the TCP server process receives the TCP connection release segment, it sends an ordinary TCP acknowledgment segment and enters the CLOSE-WAIT state.

In the header of the ordinary TCP acknowledgment message segment

  • The acknowledgment bit ACK is set to 1, indicating that this is an ordinary TCP acknowledgment segment.
  • The sequence number field seq is set to v, which equals the sequence number of the last byte of data previously sent by the TCP server process plus 1; this also matches the acknowledgment number in the TCP connection release segment just received.
  • The acknowledgment number field ack is set to u+1, acknowledging the TCP connection release segment.

At this point, the TCP server process should notify its higher-level application process that the TCP client process wants to break off the TCP connection with it.

The connection in the direction from the TCP client process to the TCP server process is now released.

The TCP connection is now in a half-closed state: the TCP client process has no more data to send.

However, if the TCP server process still has data to send, the TCP client process must still receive it; in other words, the connection in the direction from the TCP server process to the TCP client process is not yet closed.

After receiving the TCP acknowledgment segment, the TCP client process enters the FIN-WAIT-2 state and waits for the TCP connection release segment from the TCP server process.

When the application process using the TCP server has no more data to send, it notifies its TCP to release the connection.

Since the release of the TCP connection was actively initiated by the TCP client process, the release on the TCP server side is called a passive close.

The TCP server process sends a TCP connection release segment and enters the LAST-ACK state.

In the header of this message segment

  • The termination bit FIN and the acknowledgment bit ACK are both set to 1, indicating that this is a TCP connection release segment that also acknowledges the previously received segment.
  • The sequence number field seq is set to w, because the TCP server process may have sent some more data while in the half-closed state, so the sequence number is no longer necessarily v.
  • The acknowledgment number field ack is set to u+1, which is a repeated acknowledgment of the previously received TCP connection release segment.

After receiving the TCP connection release segment, the TCP client process must send an ordinary TCP acknowledgment segment for it, and then enters the TIME-WAIT state.

In the header of this message segment

  • The acknowledgment bit ACK is set to 1, indicating that this is an ordinary TCP acknowledgment segment.
  • The sequence number field seq is set to u+1, because the TCP connection release segment previously sent by the TCP client process consumed one sequence number even though it carried no data.
  • The acknowledgment number field ack is set to w+1, acknowledging the received TCP connection release segment.

The TCP server process enters the CLOSED state after receiving this segment, whereas the TCP client process must wait 2MSL (twice the maximum segment lifetime) before it can enter the CLOSED state.

Why doesn't the TCP client process enter the CLOSED state directly after sending the last acknowledgment segment, instead of first entering the TIME-WAIT state?

Because the TIME-WAIT state, with its 2MSL duration, ensures that the TCP server process can receive the last TCP acknowledgment segment and enter the CLOSED state (if that acknowledgment is lost, the server retransmits its connection release segment, and the client, still in TIME-WAIT, can acknowledge it again).

In addition, after the TCP client process sends the last TCP acknowledgment segment and waits 2MSL, all segments produced during the lifetime of this connection will have disappeared from the network, which guarantees that no segments from the old connection will appear in the next new TCP connection.
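
For comparison with the handshake sketch above, here is the same kind of conceptual summary (not a real TCP implementation) of the four segments exchanged during connection release; u, v, and w are arbitrary example values standing in for the byte numbers described in the text.

```python
# Conceptual sketch of the four segments exchanged during connection release.
u, v = 1000, 2000     # example byte numbers on the client and server sides
w = 2300              # w may exceed v if the server sent more data while half-closed

segments = [
    {"dir": "C->S", "FIN": 1, "ACK": 1, "seq": u,     "ack": v},      # client enters FIN-WAIT-1
    {"dir": "S->C", "FIN": 0, "ACK": 1, "seq": v,     "ack": u + 1},  # server enters CLOSE-WAIT
    {"dir": "S->C", "FIN": 1, "ACK": 1, "seq": w,     "ack": u + 1},  # server enters LAST-ACK
    {"dir": "C->S", "FIN": 0, "ACK": 1, "seq": u + 1, "ack": w + 1},  # client enters TIME-WAIT (2MSL)
]
for s in segments:
    print(s)
```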

The role of TCP keepalive timer

Suppose the two TCP parties have established a connection, and later the host running the TCP client process suddenly fails.

The TCP server process will then never again receive data from the TCP client process.

Therefore, there must be a measure that prevents the TCP server process from waiting in vain; this is the role of the keepalive timer.
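
One common way to use this mechanism in practice is through the socket API. The sketch below shows how a server-side socket might enable TCP keepalive; the idle time, probe interval, and probe count are arbitrary values, and the TCP_KEEPIDLE/TCP_KEEPINTVL/TCP_KEEPCNT options are Linux-specific.

```python
# Minimal sketch: enable TCP keepalive so the server does not wait forever for a dead peer.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)          # turn the keepalive timer on
if hasattr(socket, "TCP_KEEPIDLE"):                                  # Linux-specific tuning knobs
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)     # first probe after 60 s idle
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)    # then probe every 10 s
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)       # give up after 5 unanswered probes
# The socket is then used with connect()/accept() as usual; if the peer's host has failed,
# the probes go unanswered and the connection is eventually reported as broken.
```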


5.9. Header format of TCP segment 

The role of each field

Source port and destination port

Sequence number, acknowledgment number, and acknowledgment flag ACK

Data offset, reserved field, window, and checksum

Synchronization flag SYN, termination flag FIN, reset flag RST, push flag PSH, urgent flag URG, and urgent pointer

Options and padding
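
As a way of tying the fields together, here is a sketch that unpacks the fixed 20-byte TCP header with Python's struct module; the example header bytes are made up.

```python
# Minimal sketch: parse the fixed 20-byte TCP header described above.
import struct

def parse_tcp_header(header: bytes):
    (src_port, dst_port, seq, ack,
     offset_reserved, flags, window, checksum, urgent_ptr) = struct.unpack("!HHIIBBHHH", header[:20])
    return {
        "src_port": src_port, "dst_port": dst_port,
        "seq": seq, "ack": ack,
        "data_offset": (offset_reserved >> 4) * 4,    # header length in bytes
        "URG": bool(flags & 0x20), "ACK": bool(flags & 0x10), "PSH": bool(flags & 0x08),
        "RST": bool(flags & 0x04), "SYN": bool(flags & 0x02), "FIN": bool(flags & 0x01),
        "window": window, "checksum": checksum, "urgent_pointer": urgent_ptr,
    }

# A made-up SYN segment header: source port 49152, destination port 80, seq 100, window 65535.
example = struct.pack("!HHIIBBHHH", 49152, 80, 100, 0, 5 << 4, 0x02, 65535, 0, 0)
print(parse_tcp_header(example))
```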

Origin blog.csdn.net/weixin_73077810/article/details/133270736