Transport Layer
- 1. Overview of the transport layer
- 2. Port number
- 3. Transport layer multiplexing and demultiplexing
- 4. Well-known port numbers in the transport layer used by common protocols in the application layer
- 5. TCP protocol vs. UDP protocol
- 6. TCP flow control
- 7. TCP congestion control
- 8. Selection of TCP timeout retransmission time
- 9. Implementation of TCP reliable transmission - sliding window
- 10. TCP connection establishment and release
- 11. TCP segment header format
1. Overview of the transport layer
- The network layer, data link layer, and physical layer solve host-to-host communication in the network;
- The transport layer solves communication between processes on different hosts in the network; this service is end-to-end;
- A process is identified by a unique port number;
- Depending on application requirements, the transport layer provides two different transport protocols to the application layer: 1) connection-oriented TCP; 2) connectionless UDP;
2. Port number
- The operating systems used by different hosts in the network may differ, so processes on these hosts are identified by PIDs in different formats;
- To enable communication between processes on different hosts, the transport layer uses port numbers to identify processes;
- A port number ranges over [0, 65535], occupies two bytes, and falls into one of three categories:
1) Well-known port numbers: [0, 1023]; IANA (Internet Assigned Numbers Authority) assigns port numbers in this range to important application protocols, e.g., HTTP uses 80, DNS uses 53, and FTP uses 21/20;
2) Registered port numbers: [1024, 49151], for less common applications; such port numbers must be registered with IANA according to its regulations, which prevents duplicate assignments;
3) Ephemeral port numbers: [49152, 65535], for short-term use by user processes; the port number is released after the communication ends;
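As a quick illustration of ephemeral ports (a sketch using Python's standard `socket` module, not part of the original notes): binding to port 0 asks the operating system to assign an ephemeral port. Note that many systems use their own ephemeral range (e.g., Linux defaults to 32768–60999), which lies inside [1024, 65535] but does not exactly match IANA's [49152, 65535].

```python
import socket

# Ask the OS for an ephemeral port by binding to port 0.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(("127.0.0.1", 0))
port = sock.getsockname()[1]
print(f"OS assigned ephemeral port: {port}")
assert 1024 <= port <= 65535  # above the well-known range
sock.close()
```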
3. Transport layer multiplexing and demultiplexing
Multiplexing (at the sender)
- Different processes on the sender pass application messages to the transport layer, and the transport layer selects a protocol according to each application's requirements;
- The network layer's IP protocol puts the data handed down by the transport layer (a segment or a user datagram) into the data payload and uses the protocol field to indicate which transport-layer protocol the data uses: TCP is marked 6, UDP is marked 17;
Demultiplexing (at the receiver)
- After the sender's IP datagram reaches the receiver, the receiver begins to parse the datagram;
- The network layer's IP protocol determines from the protocol field which transport-layer protocol the payload uses, and parses the payload into the corresponding type of data (segment or user datagram);
- The transport layer then delivers the data to the different application processes through their ports;
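The multiplexing/demultiplexing idea above can be demonstrated with two UDP sockets (a minimal sketch over the local loopback interface; the socket names are illustrative):

```python
import socket

# Two "application processes", each identified by its own UDP port.
recv_a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_a.bind(("127.0.0.1", 0))
recv_a.settimeout(2)
recv_b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_b.bind(("127.0.0.1", 0))
recv_b.settimeout(2)

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# The destination port in each datagram tells the receiving transport
# layer which socket (process) the payload belongs to.
sender.sendto(b"for A", recv_a.getsockname())
sender.sendto(b"for B", recv_b.getsockname())

print(recv_a.recvfrom(1024)[0])  # b'for A'
print(recv_b.recvfrom(1024)[0])  # b'for B'
for s in (recv_a, recv_b, sender):
    s.close()
```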
4. Well-known port numbers in the transport layer used by common protocols in the application layer

application-layer protocol | transport-layer protocol | well-known port number |
---|---|---|
FTP | TCP | 21 (control) / 20 (data) |
TELNET | TCP | 23 |
SMTP | TCP | 25 |
DNS | UDP | 53 |
TFTP | UDP | 69 |
HTTP | TCP | 80 |
SNMP | UDP | 161 |
HTTPS | TCP | 443 |
5. TCP protocol vs. UDP protocol
TCP (Transmission Control Protocol) | UDP (User Datagram Protocol) |
---|---|
Connection-oriented: 1) three-way handshake to establish a connection; 2) four-way wave to release the connection | Connectionless |
Supports only unicast, i.e., one-to-one communication | Supports unicast, multicast, and broadcast, i.e., one-to-one, one-to-many, and one-to-all communication |
Byte-stream-oriented | Application-message-oriented |
Provides connection-oriented reliable transmission to the upper layer, using flow control, congestion control, acknowledgment, timeout retransmission, cumulative acknowledgment, and piggybacked acknowledgment mechanisms | Provides connectionless, unreliable transport to the upper layer: best-effort delivery |
The segment header is at least 20 bytes and at most 60 bytes | The datagram header occupies only 8 bytes |
6. TCP flow control
- Purpose: to control the sender's sending rate so that the receiver can receive data in time;
- Solution: use the sliding window mechanism to achieve flow control (the receiver of the TCP connection limits the size of the sender's send window through its own receive window);
1) Both sides of a TCP connection maintain a send buffer and a receive buffer. When establishing the connection, the receiver tells the sender its receive window size, and the sender sets its send window accordingly;
2) The sender may only send data inside the sliding window; once data in the window has been sent and cumulatively acknowledged by the receiver, the sender slides the window forward, adjusts its size, and removes the acknowledged data from the buffer;
3) If the sender receives a zero-window notification from the receiver, it starts a persist timer; when the timer expires, the sender sends a zero-window probe segment (carrying only one byte of data), which the receiver acknowledges while reporting its current receive window size:
(1) If the receive window is still 0, the sender restarts the timer and repeats the operation;
(2) If the receive window is no longer 0, the deadlock between the two sides is broken, and the sender adjusts its send window and resumes sending;
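The window-limited sending and zero-window probing described above can be sketched as a toy simulation (the helper `flow_control` is hypothetical, not real TCP; each loop iteration models one ACK carrying an advertised window):

```python
# Toy flow-control simulation: the sender may send at most rwnd bytes per
# round; when rwnd == 0 the persist timer fires and a 1-byte probe is sent.
def flow_control(data_len, rwnd_updates):
    sent = 0
    probes = 0
    for rwnd in rwnd_updates:          # rwnd advertised in each ACK
        if rwnd == 0:
            probes += 1                # persist timer expires: send probe
            continue
        chunk = min(rwnd, data_len - sent)
        sent += chunk
        if sent == data_len:
            break
    return sent, probes

sent, probes = flow_control(1000, [400, 400, 0, 0, 300])
print(sent, probes)  # 1000 2
```

Two zero-window ACKs in the middle stall the transfer and trigger two probes; the final nonzero window breaks the deadlock and lets the last 200 bytes through.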
7. TCP congestion control
- Network congestion: when the demand for network resources exceeds the currently available amount and network performance degrades, the network is said to be congested;
- Network congestion causes throughput to decrease as the input load increases; ideally, before throughput saturates, the input load equals the throughput;
- Four congestion control algorithms of TCP : slow start, congestion avoidance, fast retransmission, fast recovery ;
7.1 Slow start algorithm, congestion avoidance algorithm
- Slow start algorithm: in the initial phase, only a small number of segments is injected into the network, and the number is gradually increased;
- Congestion avoidance: reduces the growth rate of the congestion window to linear growth (cwnd increases by 1 per transmission round), lowering the chance of network congestion;
- Criterion for the sender to judge network congestion: a timeout retransmission occurs (i.e., some segments it sent were not acknowledged in time);
- Maintenance principle of the congestion window: grow when there is no congestion, shrink when congestion occurs;
- The congestion window size is dynamic, depending on the degree of network congestion;
- Sender's send window swnd = min(cwnd, rwnd); assuming the receive window is large enough, swnd = congestion window cwnd;
- Congestion control process:
1) Execute the slow start algorithm at the beginning (maintaining the slow start threshold ssthresh): initialize cwnd=1 and grow cwnd exponentially;
2) While cwnd < ssthresh, continue slow start; once cwnd = ssthresh, switch to the congestion avoidance algorithm and grow cwnd linearly;
3) If network congestion occurs while either algorithm is running, do the following:
(1) Set the slow start threshold to half the current congestion window, i.e., ssthresh = cwnd/2;
(2) Set the congestion window to cwnd=1 and restart the slow start algorithm;
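The process above (slow start, congestion avoidance, and the reaction to a timeout) can be sketched as a toy simulation of cwnd in units of MSS (the helper `evolve` is hypothetical; one loop iteration models one transmission round):

```python
# Sketch of cwnd evolution (units: MSS). Assumptions: cwnd starts at 1,
# slow start doubles cwnd per round (capped at ssthresh), congestion
# avoidance adds 1 per round, and a timeout sets ssthresh = cwnd // 2
# and restarts slow start at cwnd = 1.
def evolve(rounds, ssthresh=16, timeout_at=None):
    cwnd, history = 1, []
    for rtt in range(rounds):
        history.append(cwnd)
        if rtt == timeout_at:                 # network congestion detected
            ssthresh = max(cwnd // 2, 1)
            cwnd = 1                          # restart slow start
        elif cwnd < ssthresh:
            cwnd = min(cwnd * 2, ssthresh)    # slow start: exponential
        else:
            cwnd += 1                         # congestion avoidance: linear
    return history

print(evolve(10, ssthresh=8, timeout_at=6))
# [1, 2, 4, 8, 9, 10, 11, 1, 2, 4]
```

The trace shows exponential growth up to ssthresh=8, linear growth afterwards, and the collapse to cwnd=1 (with a halved threshold) when the timeout occurs at round 6.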
7.2 Fast retransmission algorithm
- The sender judges network congestion by whether a timeout retransmission occurs. This criterion has a problem: sometimes a segment is lost and a timeout retransmission occurs even though the network is not congested; restarting slow start in that case needlessly reduces network performance;
- The fast retransmission algorithm retransmits lost segments early to avoid unnecessary timeout retransmissions, and thus avoids mistakenly restarting slow start;
- Fast retransmission algorithm flow (tell the sender about segment loss as early as possible, and retransmit as early as possible):
1) The receiver sends acknowledgments immediately instead of waiting to piggyback them on outgoing data;
2) On receiving an out-of-order segment, the receiver immediately sends a duplicate acknowledgment;
3) Once the sender receives 3 consecutive duplicate acknowledgments, it immediately retransmits the corresponding segment;
7.3 Fast Recovery Algorithm
- When the sender receives 3 duplicate acknowledgments, it knows that a segment has been lost, so it retransmits the corresponding segment immediately and at the same time executes the fast recovery algorithm;
- Execution flow of the fast recovery algorithm:
(1) Set the slow start threshold to half the current congestion window, i.e., ssthresh = cwnd/2;
(2) Set the congestion window to cwnd = ssthresh (or cwnd = ssthresh + 3) and start executing the congestion avoidance algorithm;
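Fast retransmission and fast recovery together can be sketched as a toy duplicate-ACK counter (the helper `on_acks` is hypothetical; this sketch uses the cwnd = ssthresh variant of fast recovery):

```python
# Toy duplicate-ACK handling: 3 duplicate ACKs trigger fast retransmit,
# then fast recovery halves ssthresh and sets cwnd = ssthresh, resuming
# congestion avoidance instead of falling back to slow start.
def on_acks(acks, cwnd, ssthresh):
    dup, retransmitted = 0, []
    last_ack = None
    for ack in acks:
        if ack == last_ack:
            dup += 1
            if dup == 3:                  # fast retransmit threshold
                retransmitted.append(ack)
                ssthresh = max(cwnd // 2, 1)
                cwnd = ssthresh           # fast recovery (some stacks: +3)
        else:
            dup, last_ack = 0, ack
    return retransmitted, cwnd, ssthresh

print(on_acks([100, 200, 200, 200, 200], cwnd=16, ssthresh=32))
# ([200], 8, 8)
```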
8. Selection of TCP timeout retransmission time
The selection of the TCP timeout retransmission time (RTO) is a relatively complicated problem. Given the measured round-trip time (RTT) of segments:
1) If RTO is chosen smaller than RTT, unnecessary retransmissions occur, increasing the network load;
2) If RTO is chosen much larger than RTT, the network idles longer and transmission efficiency drops;
From this analysis, RTO should usually be chosen slightly larger than RTT.
8.1 Calculation of overtime retransmission time
- RTT is not a fixed value, so a single measured RTT cannot be used directly as the timeout retransmission time;
- The standard (RFC 6298) recommends computing RTO from a smoothed round-trip time RTTs and its deviation RTTd: RTTs = (1 − α)·RTTs + α·RTT with α = 1/8, RTTd = (1 − β)·RTTd + β·|RTTs − RTT| with β = 1/4, and RTO = RTTs + 4·RTTd;
- During communication, if a timeout retransmission occurs, RTO is increased each time; the typical method is to set the new RTO to twice the old RTO;
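The RTO computation above can be sketched directly from the RFC 6298 formulas (the class name `RtoEstimator` is an illustrative choice; timeout doubling is noted but not modeled):

```python
# RFC 6298-style estimator: SRTT and RTTVAR are exponentially weighted
# averages of measured RTTs; RTO = SRTT + 4 * RTTVAR. (On a timeout the
# standard additionally doubles RTO, which is omitted here.)
class RtoEstimator:
    def __init__(self, alpha=0.125, beta=0.25):   # alpha=1/8, beta=1/4
        self.alpha, self.beta = alpha, beta
        self.srtt = self.rttvar = None

    def sample(self, rtt):
        if self.srtt is None:                      # first measurement
            self.srtt, self.rttvar = rtt, rtt / 2
        else:
            self.rttvar = (1 - self.beta) * self.rttvar \
                          + self.beta * abs(self.srtt - rtt)
            self.srtt = (1 - self.alpha) * self.srtt + self.alpha * rtt
        return self.rto()

    def rto(self):
        return self.srtt + 4 * self.rttvar         # slightly larger than SRTT

est = RtoEstimator()
print(est.sample(0.100))  # first RTO: 0.1 + 4 * 0.05 = 0.3 seconds
```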
9. Implementation of TCP reliable transmission - sliding window
- TCP uses a sliding window mechanism to achieve reliable transmission;
9.1 Precautions
- TCP stipulates that the receiver may only acknowledge the highest sequence number that has arrived in order;
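The cumulative-acknowledgment rule above can be sketched as follows (the helper `cumulative_ack` is hypothetical; it uses segment numbers instead of byte sequence numbers for brevity):

```python
# Cumulative acknowledgment: the receiver only ACKs the highest segment
# that has arrived *in order*; out-of-order segments are buffered but not
# individually acknowledged.
def cumulative_ack(arrivals):
    expected, buffered, acks = 1, set(), []
    for seg in arrivals:
        buffered.add(seg)
        while expected in buffered:
            expected += 1
        acks.append(expected - 1)   # highest in-order segment so far
    return acks

print(cumulative_ack([1, 2, 4, 5, 3]))  # [1, 2, 2, 2, 5]
```

Segments 4 and 5 arrive before 3, so the receiver keeps acknowledging 2 (producing duplicate ACKs); once 3 fills the gap, the ACK jumps to 5.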
10. TCP connection establishment and release
- Both sides of a TCP connection have distinct roles: 1) client: the party that actively requests to establish the connection; 2) server: the party that passively accepts it;
- The purpose of establishing a TCP connection is for the two communicating parties to mutually confirm that each other's sending and receiving capabilities are normal, so that reliable transmission can proceed;
10.1 Three-way handshake to establish a connection
- Connection establishment workflow (client and server both start in the CLOSED state); steps 2) to 4) below constitute the "three-way handshake":
- 1) The server process creates a transmission control block TCB (containing the TCP connection table, pointers to the send and receive buffers, the current send and receive sequence numbers, and other information), enters the LISTEN state, and waits for the client's TCP connection request;
- 2) The client process creates a TCB, sends a TCP connection request segment (header SYN=1, sequence number seq assigned a random value x), and enters the SYN-SENT state;
- 3) The server receives the segment sent in 2), agrees to establish the connection, and sends the corresponding acknowledgment segment (header SYN=1, ACK=1, sequence number seq assigned a random value y, acknowledgment number ack=x+1); the server enters the SYN-RCVD state;
- 4) The client receives the server's acknowledgment of its connection request and sends an acknowledgment of that segment (header ACK=1, sequence number seq=x+1, acknowledgment number ack=y+1); the client enters the ESTABLISHED state;
- 5) After the server receives the acknowledgment segment sent in 4), it also enters the ESTABLISHED state. At this point the two parties have successfully established a TCP connection and reliable data transmission can begin;
- Note: 1) TCP stipulates that a segment with SYN=1 cannot carry data but consumes one sequence number; 2) TCP stipulates that an ordinary acknowledgment segment carrying no data does not consume a sequence number;
- With only a two-way handshake, a stale TCP connection request segment from the client could reach the server and cause the server to mistakenly enter the connection-established state, wasting resources;
- A four-way handshake would split the second step of the three-way handshake into two: 1) the server sends an acknowledgment of the connection request segment (ACK=1, ack=x+1); 2) the server sends its own connection request segment (SYN=1, seq=y). These two steps can be combined and sent together, so the three-way handshake exactly meets the requirements for connection establishment.
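In practice the three-way handshake is performed by the kernel: `connect()` returns once the client reaches ESTABLISHED, and `accept()` returns once the server does. A minimal loopback sketch in Python (variable names are illustrative):

```python
import socket
import threading

# connect()/accept() drive the three-way handshake inside the kernel.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)                       # server enters LISTEN
addr = server.getsockname()

def serve():
    conn, _ = server.accept()          # SYN received -> SYN-RCVD -> ESTABLISHED
    conn.sendall(b"hello")
    conn.close()

t = threading.Thread(target=serve)
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(addr)                   # sends SYN, receives SYN+ACK, sends ACK
print(client.recv(1024))               # b'hello'
client.close()
t.join()
server.close()
```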
10.2 Wave four times to release the connection
- A TCP connection can be half-closed: after the A->B direction is released, B can still send data to A; fully releasing the TCP connection means both directions are released;
- Connection release workflow (client and server both start in the ESTABLISHED state); the four segments exchanged in the steps below are the "four waves":
1) The client actively sends a TCP connection release request segment (FIN=1, ACK=1, seq=u, ack=v) and enters the FIN-WAIT-1 state;
2) After receiving the release request, the server sends an ordinary acknowledgment segment (ACK=1, seq=v, ack=u+1), notifies its application process that the client wants to release the connection, and enters the CLOSE-WAIT state. On receiving this acknowledgment, the client enters the FIN-WAIT-2 state. At this point the client->server direction is closed and the TCP connection is half-closed; the server can still send data to the client. When the server is done sending, it sends its own connection release request segment (FIN=1, ACK=1, seq=w, ack=u+1) to the client and enters the LAST-ACK state.
3) After receiving the server's release request segment, the client sends an ordinary acknowledgment segment (ACK=1, seq=u+1, ack=w+1) and enters the TIME-WAIT state.
4) After receiving this acknowledgment, the server enters the CLOSED state. The client enters the CLOSED state after waiting 2MSL. At this point the TCP connection is fully released.
- The purpose of waiting 2MSL (MSL = maximum segment lifetime) in the TIME-WAIT state:
- 1) It ensures the server receives the acknowledgment of its connection release request; otherwise the server would keep retransmitting the release request, remain stuck in the LAST-ACK state, and be unable to close the connection, wasting resources;
- 2) Waiting 2MSL ensures that all segments belonging to this TCP connection have disappeared from the network, so they cannot affect a subsequent new TCP connection.
- The role of the keep-alive timer:
- Problem: if the client fails, the server is not notified in time, stays in a waiting state forever, and wastes computing resources.
- Solution: the keep-alive timer;
- Working principle: 1) each time the server receives data from the client, it resets the keep-alive timer (typically 2 hours); 2) if the timer expires, the client has sent no data during that period, so the server sends a probe segment to the client, and then another every 75 seconds; if 10 consecutive probe segments receive no acknowledgment from the client, the server considers the client to have failed and closes the TCP connection.
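Real TCP stacks expose this mechanism through the `SO_KEEPALIVE` socket option; a minimal sketch (the idle time, probe interval, and probe count described above are kernel defaults and are tunable per platform):

```python
import socket

# Enable TCP keep-alive on a socket: the kernel will probe an idle peer
# (after ~2 hours of inactivity by default on many systems) and drop the
# connection if enough probes go unanswered.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
print(s.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE))  # nonzero => enabled
s.close()
```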
- Three waves cannot release the TCP connection, because of the half-close property: after the first two waves release the client->server direction, the server may still send data to the client. If the second and third waves were merged, one-way server->client transmission in the half-closed state would be impossible. Two waves release only one direction, so four waves are needed to release both directions of the connection.
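The half-close property can be observed with `socket.shutdown(SHUT_WR)`: the client sends its FIN but keeps its receive direction open. A minimal loopback sketch (variable names are illustrative):

```python
import socket
import threading

# Half-close demo: the client shuts down its write side (sends FIN) but
# can still receive; the server keeps sending afterwards.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
addr = server.getsockname()

def serve():
    conn, _ = server.accept()
    assert conn.recv(1024) == b""      # client's FIN: read side sees EOF
    conn.sendall(b"still sending")     # server -> client still works
    conn.close()

t = threading.Thread(target=serve)
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(addr)
client.shutdown(socket.SHUT_WR)        # close client -> server direction only
print(client.recv(1024))               # b'still sending'
client.close()
t.join()
server.close()
```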
10.3 TCP connection state transition diagram
- State transition diagram of both parties during TCP connection establishment and release (in the diagram, the thick solid line marks the client's path and the thick dashed line marks the server's path);
11. TCP segment header format
- TCP segments carry a byte stream, over which reliable transmission is achieved;
- A TCP segment consists of two parts: header and data payload;
- Some or all of the bytes in the send buffer are taken as the data payload of a TCP segment; after the header information is added, it is transmitted as a segment;
- The header is at least 20 bytes (the fixed part) and at most 60 bytes;
11.1 Analysis of fields in the header
field name | effect |
---|---|
source port | Occupies 16 bits; identifies the sending TCP application process |
destination port | Occupies 16 bits; identifies the receiving TCP application process |
data offset | Occupies 4 bits, in units of 4 bytes; gives the size of the header; minimum 5, maximum 15 |
reserved | Reserved field for future extensions; occupies 6 bits, currently set to 0 |
window | Occupies 16 bits, in bytes; gives the size of this host's receive window; combined with the sender's congestion window, this value determines the size of the sender's send window |
checksum | Occupies 16 bits; filled in by the sender so the receiver can check whether the segment's header and data contain errors |
SYN flag | Occupies 1 bit; marks a TCP connection request segment |
FIN flag | Occupies 1 bit; marks a TCP connection release request segment |
ACK flag | Occupies 1 bit; marks the segment as an acknowledgment; TCP stipulates that all segments sent after the connection is established have ACK=1 |
sequence number seq | Occupies 32 bits; gives the sequence number of the first data byte in the segment |
acknowledgment number ack | Occupies 32 bits; valid when ACK=1; acknowledges all bytes received so far and gives the sequence number of the first byte expected next |
PSH flag | Occupies 1 bit; indicates a push operation: the segment should be delivered to the application as soon as possible |
RST flag | Occupies 1 bit; indicates a connection reset: an exception occurred on the TCP connection and it must be re-established |
URG flag | Occupies 1 bit; indicates urgent data |
urgent pointer | Occupies 16 bits; gives the length of the urgent data; valid only when URG=1 |
options | Used to extend TCP functionality; current options include maximum segment size (MSS), window scaling, timestamp, and selective acknowledgment (SACK) |
padding | Pads the header so that its size is an integer multiple of 4 bytes, because the data offset field is in units of 4 bytes |
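The fixed 20-byte header layout above can be sketched with Python's `struct` module (the helper `build_header` is illustrative; the checksum is left as 0 rather than computed):

```python
import struct

# Build and parse a minimal 20-byte TCP header (no options). Field layout:
# src port (16), dst port (16), seq (32), ack (32), data offset + flags (16),
# window (16), checksum (16), urgent pointer (16) -- all big-endian.
def build_header(src, dst, seq, ack, flags, window):
    offset_flags = (5 << 12) | flags           # data offset = 5 (20 bytes)
    return struct.pack("!HHIIHHHH", src, dst, seq, ack,
                       offset_flags, window, 0, 0)

hdr = build_header(src=12345, dst=80, seq=1000, ack=0,
                   flags=0x002, window=65535)   # 0x002 = SYN bit
src, dst, seq, ack, of, window, csum, urg = struct.unpack("!HHIIHHHH", hdr)
print(len(hdr), dst, of >> 12, bool(of & 0x002))  # 20 80 5 True
```

Since the data offset is 5 and there are no options, no padding is needed; a header with options would be padded up to the next multiple of 4 bytes.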
Reference: "Computer Network Micro Classroom"