Computer Networks (8th Edition) - Chapter 5 Transport Layer

5.1 Overview of transport layer protocols

5.1.1 Communication between processes

In Figure 5-1, a thick dark two-way arrow is drawn between the two transport layers, annotated "the transport layer provides logical communication between application processes".

Figure 5-1 The transport layer provides logical communication for application processes that communicate with each other.

5.1.2 Two main protocols at the transport layer

5.1.3 Transport layer ports

Please note that this abstract protocol port between protocol-stack layers is a software port, a completely different concept from a hardware port on a router or switch. A hardware port is the interface through which hardware devices connect to one another, whereas a software port is the place where application-layer protocol processes interact with the transport entity.

When a client initiates a communication request, it must first know the IP address (used to find the destination host) and port number (used to find the destination process) of the other party's server. Therefore, the port numbers of the transport layer are divided into the following two categories.

(1) Port numbers used by the server. These are divided into two categories. The most important category is the well-known port number, also called the system port number, with values from 0 to 1023.

Table 5-2 Commonly used well-known port numbers

Application         FTP   TELNET   SMTP   DNS   TFTP   HTTP   SNMP   SNMP(trap)   HTTPS
Well-known port      21       23     25    53     69     80    161          162     443

The other category is called the registered port number, with values ranging from 1024 to 49151. These port numbers are used by applications that do not have a well-known port number.

(2) Port numbers used by the client, with values from 49152 to 65535. Because a port number of this type is chosen dynamically only while the client process is running, it is also called an ephemeral port number.
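The distinction can be seen directly from the socket API: the server is reached through its well-known port, while the operating system assigns the client an ephemeral port only when the connection is made. Below is a minimal Python sketch (illustrative only; the host name example.com is just a placeholder) that prints both endpoints of a TCP connection to port 80.

```python
import socket

# Connect to a server on its well-known port (80 = HTTP); the client side
# is given an ephemeral port chosen by the operating system at run time.
# "example.com" is only a placeholder host used for illustration.
with socket.create_connection(("example.com", 80), timeout=5) as sock:
    client_ip, client_port = sock.getsockname()   # local (client) endpoint
    server_ip, server_port = sock.getpeername()   # remote (server) endpoint
    print(f"server endpoint: {server_ip}:{server_port}")   # well-known port 80
    print(f"client endpoint: {client_ip}:{client_port}")   # ephemeral port
```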

5.2 User Datagram Protocol UDP

5.2.1 UDP Overview

The main features of UDP: it is simple and has very low overhead, but it is unreliable (which nevertheless suits many kinds of applications).

(1) UDP is connectionless, that is, no connection needs to be established before sending data (and of course there is no connection to release when the transmission ends), which reduces overhead and the delay before data can be sent.

(2) UDP uses best-effort delivery , that is, reliable delivery is not guaranteed, so the host does not need to maintain a complex connection status table (there are many parameters in it).

(3) UDP is message-oriented. UDP adds a header to each message handed down by the sending application process and then passes it to the IP layer. UDP neither merges nor splits the messages handed down by the application layer, but preserves their boundaries.

(4) UDP has no congestion control , so network congestion will not reduce the sending rate of the source host.

(5) UDP supports one-to-one, one-to-many, many-to-one and many-to-many interactive communications .

(6) The UDP header has a small overhead , only 8 bytes , which is shorter than the 20-byte header of TCP.
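As a small illustration of features (1) and (3) above (a sketch, not code from the textbook; port 9999 and the message contents are arbitrary), the following Python fragment sends two UDP datagrams without any connection setup, and each recvfrom() call returns exactly one of them with its boundary preserved.

```python
import socket

# Receiver: bind to a local port and read whole datagrams (boundaries preserved).
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 9999))                     # 9999 is an arbitrary example port

# Sender: no connection is established before sending (connectionless).
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(b"first message", ("127.0.0.1", 9999))
send_sock.sendto(b"second message", ("127.0.0.1", 9999))

# Each recvfrom() returns one complete datagram, never a merged or split one.
for _ in range(2):
    data, addr = recv_sock.recvfrom(4096)
    print(addr, data)
```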

5.2.2 UDP header format

The method for calculating the UDP checksum is similar to that for the IP datagram header checksum. The difference is that the IP checksum covers only the header of the IP datagram, whereas the UDP checksum covers both the header and the data part.
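The underlying operation is the 16-bit one's-complement sum. The sketch below computes this checksum over an arbitrary byte string; note that a real UDP implementation also prepends a pseudo-header containing the IP addresses before summing, which is omitted here for brevity, and the header bytes in the example are made up.

```python
def internet_checksum(data: bytes) -> int:
    """16-bit one's-complement sum used by IP and UDP (pseudo-header omitted)."""
    if len(data) % 2:                             # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]     # combine two bytes into a 16-bit word
        total = (total & 0xFFFF) + (total >> 16)  # wrap the carry back around
    return ~total & 0xFFFF                        # one's complement of the final sum

# Example: checksum over an illustrative 8-byte UDP header plus 5 data bytes.
print(hex(internet_checksum(b"\x04\xd2\x00\x35\x00\x0d\x00\x00hello")))
```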

5.3 Transmission Control Protocol TCP Overview

5.3.1 The main features of TCP

(1) TCP is a connection-oriented transport layer protocol .

(2) Each TCP connection has exactly two endpoints; that is, every TCP connection is point-to-point (one-to-one).

(3) TCP provides a reliable delivery service: the data transmitted over a TCP connection arrives without errors, without loss, without duplication, and in order.

(4) TCP provides full-duplex communication . TCP allows application processes on both sides of the communication to send data at any time.

(5) TCP is byte-stream oriented. A "stream" in TCP refers to a sequence of bytes flowing into or out of a process. "Byte-stream oriented" means that although the application hands data to TCP one block at a time (with the blocks varying in size), TCP treats the data handed over by the application simply as a series of unstructured bytes.
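The following sketch (illustrative only; port 8888 and the sleep timings are arbitrary choices for the demo) shows this byte-stream behavior: the client application writes two separate blocks, but the server may read them back as a single chunk, because TCP keeps no record of the block boundaries.

```python
import socket, threading, time

def server():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 8888))              # 8888 is just an example port
    srv.listen(1)
    conn, _ = srv.accept()
    time.sleep(0.2)                            # let both client writes arrive first
    # TCP delivers an unstructured byte stream: one recv() may return both
    # application "blocks" merged together, or only part of them.
    print("server read:", conn.recv(4096))
    conn.close()
    srv.close()

threading.Thread(target=server).start()
time.sleep(0.2)                                # give the listener time to start (demo only)
cli = socket.create_connection(("127.0.0.1", 8888))
cli.sendall(b"first block ")                   # two separate writes by the application ...
cli.sendall(b"second block")                   # ... are appended to one continuous stream
cli.close()
```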

5.3.2 TCP connection

5.4 How reliable transmission works

5.4.1 Stop-and-wait protocol

Both parties in a full-duplex communication are simultaneously sender and receiver. For convenience, we consider only the case in which A sends data and B receives it and returns acknowledgments, so A is called the sender and B the receiver. Because we are discussing the principle of reliable transmission here, the transmitted data units are simply called packets, regardless of the layer at which they are sent. "Stop-and-wait" means that after sending each packet the sender stops and waits for the other party's acknowledgment; only after the acknowledgment is received is the next packet sent.
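A minimal sketch of the sender's side of this idea, built on top of UDP purely for illustration (the 1-byte sequence number and the function name are assumptions, not the textbook's algorithm), is shown below: the same packet is retransmitted until the matching acknowledgment arrives.

```python
import socket

def stop_and_wait_send(sock, packet: bytes, seq: int, dest, timeout=1.0):
    """Send one packet and block until the matching ACK arrives; resend on timeout."""
    sock.settimeout(timeout)
    while True:
        sock.sendto(bytes([seq]) + packet, dest)   # 1-byte sequence number + data
        try:
            ack, _ = sock.recvfrom(16)
            if ack and ack[0] == seq:              # correct acknowledgment received
                return                             # only now may the next packet be sent
        except socket.timeout:
            pass                                   # ACK lost or late: retransmit the packet
```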

1. No error conditions

2. When an error occurs

3. Lost acknowledgment and late acknowledgment

4. Channel utilization (advantage: simplicity; disadvantage: the channel utilization is too low)
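Quantitatively, if TD is the time needed to transmit a packet, RTT is the round-trip time, and TA is the time needed to transmit an acknowledgment, the channel utilization of stop-and-wait is U = TD / (TD + RTT + TA). For example, with TD = 1 ms, RTT = 20 ms and TA negligible, U ≈ 1/21 ≈ 4.8%, which is why the channel utilization is said to be too low.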

5.4.2 Continuous ARQ protocol

5.5 TCP segment header format

The first 20 bytes of the TCP segment header are fixed (as shown in Figure 5-13), followed by 4n bytes of options that are added as needed (n is an integer) . Therefore the minimum length of the TCP header is 20 bytes .
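As an illustration of this fixed 20-byte layout (a sketch, not textbook code; the field grouping simply follows the field descriptions given below), the following fragment unpacks the fixed part of a raw TCP header.

```python
import struct

def parse_tcp_fixed_header(segment: bytes) -> dict:
    """Unpack the 20-byte fixed part of a TCP header (options, if any, follow it)."""
    (src_port, dst_port, seq, ack,
     offset_flags, window, checksum, urgent_ptr) = struct.unpack("!HHIIHHHH", segment[:20])
    data_offset = (offset_flags >> 12) * 4        # header length in bytes (at least 20)
    flags = offset_flags & 0x3F                   # URG, ACK, PSH, RST, SYN, FIN bits
    return dict(src_port=src_port, dst_port=dst_port, seq=seq, ack=ack,
                header_len=data_offset, flags=flags,
                window=window, checksum=checksum, urgent_ptr=urgent_ptr)
```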

The meaning of each field in the fixed part of the header is as follows:

(1) Source port and destination port

(2) Sequence number

The sequence number field value in the header refers to the sequence number of the first byte of data sent in this segment .

(3) Acknowledgment number. This field occupies 4 bytes and holds the sequence number of the first data byte expected in the next segment from the other party.

If the acknowledgment number = N, then all data up to and including sequence number N - 1 has been received correctly.

(4) Data offset

(5) Reserved

(6) Urgent bit URG (URGent)

(7) Acknowledgment bit ACK (ACKnowledgment)

(8) Push bit PSH (PuSH)

(9) Reset bit RST (ReSeT)

(10) Synchronization bit SYN (SYNchronization)

(11) Finish bit FIN (FINish, meaning "finished" or "terminated")

(12) Window

(13) Checksum

(14) Urgent pointer

(15) Options

TCP originally specified only one option, the maximum segment size MSS (Maximum Segment Size) [RFC 879]. Please note the meaning of the term MSS: the MSS is the maximum length of the data field in each TCP segment; the data field plus the TCP header makes up the entire TCP segment.

5.6 Implementation of TCP reliable transmission

5.6.1 Sliding window in bytes

5.6.2 Selection of timeout retransmission time

5.6.3 Selective acknowledgment SACK

5.7 TCP flow control

5.7.1 Use sliding window to achieve flow control

Flow control means keeping the sender's sending rate from being so fast that the receiver cannot accept the data in time.
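As a toy simulation of this idea (all numbers are assumptions: a 400-byte receiver window, 100-byte segments, 1000 bytes of application data), the sender below never has more unacknowledged bytes in flight than the window rwnd advertised by the receiver.

```python
# Toy simulation of window-based flow control (illustrative values only).
rwnd = 400            # window advertised by the receiver, in bytes
send_base = 0         # first byte sent but not yet acknowledged
next_seq = 0          # next byte to be sent
data_len = 1000       # total bytes the application wants to send

while send_base < data_len:
    # Keep sending while the amount in flight stays within the receiver's window.
    while next_seq < data_len and next_seq - send_base < rwnd:
        seg_len = min(100, data_len - next_seq, rwnd - (next_seq - send_base))
        print(f"send bytes {next_seq}..{next_seq + seg_len - 1}")
        next_seq += seg_len
    # An acknowledgment slides the window forward (here everything sent is acknowledged).
    send_base = next_seq
    print(f"ACK received, window slides forward to byte {send_base}")
```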

5.7.2 TCP transmission efficiency

5.8 TCP congestion control

5.8.1 General principles of congestion control

The link capacity (i.e., bandwidth), the buffers and processors in the switching nodes, and so on, are all network resources. If, during some period, the demand for a certain resource exceeds the portion of that resource that is available, the performance of the network deteriorates. This situation is called congestion. The condition for network congestion can be written as the following relationship:

ΣDemand for resources > Available resources (5-7)

Congestion control means preventing too much data from being injected into the network, so that the routers and links in the network are not overloaded. Congestion control has a prerequisite: the network must be able to withstand the existing network load. Congestion control is a global process involving all hosts, all routers, and every factor that can degrade the network's transmission performance.

5.8.2 TCP congestion control method

There are four algorithms for TCP congestion control: slow-start, congestion avoidance , fast retransmit , and fast recovery (see draft standard RFC 5681). The principles of these algorithms are introduced below. To focus our discussion on congestion control, we assume:

(1) Data is transmitted in one direction only; the other party sends nothing but acknowledgments.

(2) The receiver always has enough buffer space, so the size of the sending window is determined solely by the degree of congestion in the network.

1. Slow start and congestion avoidance

The purpose of the congestion avoidance algorithm is to let the congestion window cwnd grow slowly (see [RFC 5681] for the exact algorithm). The effect of the algorithm is roughly the following: each time a round-trip time RTT passes, the sender's congestion window cwnd increases by 1 (one segment), instead of doubling as in the slow-start phase. The congestion avoidance phase is therefore called "additive increase" AI, indicating that during this phase the congestion window cwnd grows slowly and linearly.

Note, however, that "congestion avoidance" does not completely avoid congestion; it only makes the congestion window grow more slowly, so that the network is less likely to become congested.
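A rough sketch of the two growth patterns (the numbers are illustrative; real TCP measures cwnd in bytes and reacts to loss and timeouts as described in RFC 5681) is shown below: cwnd doubles per RTT during slow start and then grows by one segment per RTT once it reaches the slow-start threshold ssthresh.

```python
# Illustrative cwnd evolution in segments per RTT (no loss events shown).
cwnd = 1          # the congestion window starts at 1 segment (slow start)
ssthresh = 16     # example slow-start threshold

for rtt in range(1, 11):
    print(f"RTT {rtt}: cwnd = {cwnd}")
    if cwnd < ssthresh:
        cwnd *= 2         # slow start: roughly doubles every round-trip time
    else:
        cwnd += 1         # congestion avoidance: additive increase (AI)
```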

5.8.3 Active queue management AQM

5.9 TCP transport connection management

5.9.1 TCP connection establishment

The process of establishing a TCP connection is called a handshake. The handshake requires exchanging three TCP segments between the client and the server. Figure 5-28 illustrates this three-way handshake for establishing a TCP connection.

Figure 5-28 Establishing a TCP connection with a three-way handshake

The connection-establishment process described above is therefore called a three-way (three-segment) handshake. Note that in Figure 5-28 the segment that B sends to A could also be split into two segments: first an acknowledgment segment (ACK = 1, ack = x + 1), and then a synchronization segment (SYN = 1, seq = y). The process then becomes a four-segment handshake, but the effect is the same.

Why must A send a final acknowledgment? Mainly to prevent an old, already-invalid connection-request segment from suddenly arriving at B and causing an error.
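In socket terms (an illustration, not the textbook's example; port 7777 and the short sleep are arbitrary demo choices), the three segments are exchanged inside the client's connect() call, and the server's accept() returns once the handshake has completed.

```python
import socket, threading, time

def server():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 7777))            # 7777 is just an example port
    srv.listen(1)
    conn, addr = srv.accept()                # returns after the three-way handshake completes
    print("connection established with", addr)
    conn.close()
    srv.close()

threading.Thread(target=server).start()
time.sleep(0.2)                              # give the listener time to start (demo only)
cli = socket.create_connection(("127.0.0.1", 7777))   # SYN, SYN+ACK, ACK exchanged here
print("client connected, local endpoint:", cli.getsockname())
cli.close()
```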

5.9.2 TCP connection release

5.9.3 Finite state machine of TCP

Key concepts in this chapter

Exercises
