Computer Network (Data Link Layer)

Preface

There are a lot of things in this part, so let’s go through it step by step.

(1) Functions of the data link layer

1. Provide services to the network layer

1. Unacknowledged Connectionless Service:

In unacknowledged connectionless service, the data link layer neither establishes a connection before data transmission nor requires the receiver to send an acknowledgment. The sender transmits the data frame directly to the receiver and does not care whether it is successfully received. This service is typically used where occasional loss is acceptable, such as real-time audio and video transmission.

Features:

  • No need to establish a connection, send data frames directly.
  • Without confirmation, the sender has no way of knowing whether the data was successfully received.
2. Acknowledged Connectionless Service:

Acknowledged connectionless service adds a receiver acknowledgment mechanism on top of the connectionless service. After sending a data frame, the sender waits for an acknowledgment from the receiver. If no acknowledgment arrives within a certain period of time, the sender retransmits the data frame, ensuring reliable delivery.

Features:

  • No need to establish a connection, send data frames directly.
  • The receiver will send a confirmation message to inform the sender that the data frame has been successfully received.
  • The sender retransmits the data frame until an acknowledgment is received.
3. Acknowledged Connection-Oriented Service:

Acknowledged connection-oriented service is the most reliable type of service. After a connection is established, the data link layer guarantees reliable delivery of data frames, detecting and correcting errors if they occur. The receiver sends an acknowledgment so that the sender knows the data was successfully received, and the data link layer maintains the reliability of the connection throughout.

Features:

  • A connection must first be established between the communicating parties.
  • The transmission of data frames is reliable, with error detection and correction.
  • The receiver will send a confirmation message to inform the sender that the data frame has been successfully received.

2. Link management

Link management covers the establishment, maintenance, and release of data links. It is mainly needed for connection-oriented services.

3. Frame delimitation, frame synchronization and transparent transmission

Frame Delimitation, Frame Synchronization and Transparent Transmission are important concepts in the data link layer, and they play a key role in data communication. These three concepts will be introduced in detail below:

1. Frame Delimitation:

Frame delimitation refers to dividing a continuous bit stream into data frames with specific start and end marks in data communication. Data frame is the basic unit of data transmission at the data link layer. The purpose of frame delimitation is to ensure that the receiver can correctly identify the beginning and end of the data frame in order to accurately extract the data part of the frame.

In frame delimitation, special bit patterns are usually used as the start and end marks of a frame. For example, in Ethernet the preamble (Preamble) and the start frame delimiter (Start Frame Delimiter, SFD) identify the beginning of a frame, while the end of the frame is recognized when the channel goes idle; the last four bytes of the frame are the frame check sequence (Frame Check Sequence, FCS), used for error detection.

 2. Frame Synchronization:

Frame synchronization means that the receiver can accurately identify the delimitation information of the frame when receiving data to ensure that the content of the data frame is correctly extracted. Frame synchronization usually involves two aspects: clock synchronization and delimited synchronization.

- **Clock Synchronization:** The receiver needs to maintain clock synchronization with the sender in order to read the transmitted bits at the correct timing. Clock synchronization ensures that the receiver reads bits during the correct clock cycle, avoiding data sampling errors.

- **Delimited synchronization:** Delimited synchronization means that the receiver can accurately identify the start and end marks of the frame to ensure the correct extraction of the frame. Delimited synchronization usually uses specialized hardware circuits or software algorithms to identify frame delimitation information in order to accurately extract the data part of the frame.

 3. Transparent Transmission:

Transparent transmission means that the data link layer places no restrictions on the content it carries: any combination of bits in the data can be transmitted, and the data arrives at the receiver exactly as the sender handed it over, without being modified or misinterpreted by the link layer. The purpose of transparent transmission is to ensure that data is not tampered with, altered, or mistaken for control information during transmission, preserving its integrity and accuracy.

Transparent transmission is usually applied to data that needs to be transferred in a specific format, such as file transfer, image transfer, etc. These data usually need to be interpreted and processed as-is at the receiving end. The data link layer does not add any additional control information in transparent transmission, ensuring that data transmission is lossless.

Taken together, frame delimitation marks the start and end of each frame, frame synchronization ensures the frame is correctly extracted, and transparent transmission preserves the data exactly as sent. These three mechanisms cooperate in data communication to ensure reliable transmission and correct interpretation of the data.

4. Flow control

Flow Control is a mechanism for managing the transmission rate in data communications. Its purpose is to keep the data transmission between sender and receiver balanced: it prevents the sender from sending data faster than the receiver can process it, which would lead to data loss or buffer overflow. The main goal of flow control is to maintain network stability and ensure reliable transmission of data.

The implementation of flow control usually relies on some protocols, algorithms and policies. The following are some common flow control methods:

### 1. **Sliding Window Protocol:**

The sliding window protocol is a flow control mechanism that controls the sending and receiving rate of data by maintaining a window size on the sender and receiver. The sender can send multiple data frames within the window, and the receiver can receive multiple data frames within the window. By dynamically adjusting the window size, the sliding window protocol can adapt to changes in network conditions and ensure efficient data transmission.

### 2. **Adaptive flow control algorithm:**

The adaptive flow control algorithm automatically adjusts the transmission rate according to the actual network conditions. For example, the congestion control algorithm in TCP (Transmission Control Protocol) is an adaptive flow control algorithm that dynamically adjusts the data sending rate according to the degree of network congestion to avoid network congestion and data loss.

### 3. **Token Bucket Algorithm:**

The token bucket algorithm is a token-based flow control algorithm. The sender needs to obtain a token to send data, and tokens are generated at a fixed rate. If the sender does not have enough tokens, it needs to wait, thus limiting the sending rate and ensuring the smoothness of data transmission.
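As a sketch of the idea (the class name and explicit `now` time parameter are illustrative, not from any particular library), a token bucket can be modeled in a few lines of Python: tokens accumulate at a fixed rate up to the burst capacity, and each send consumes one.

```python
class TokenBucket:
    """Token bucket: tokens accumulate at `rate` per second up to
    `capacity`; each send consumes one token, permitting short bursts."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity      # start with a full bucket
        self.last = 0.0             # time of the last refill

    def allow(self, now, n=1):
        """Refill for the elapsed time, then try to consume n tokens."""
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= n:
            self.tokens -= n
            return True
        return False

bucket = TokenBucket(rate=5, capacity=10)
# 20 back-to-back sends at t=0: only the burst capacity of 10 gets through.
burst = sum(bucket.allow(now=0.0) for _ in range(20))
# One second later, 5 new tokens have accumulated.
later = sum(bucket.allow(now=1.0) for _ in range(20))
print(burst, later)   # 10 5
```

Passing the clock in explicitly keeps the sketch deterministic; a real limiter would read a monotonic clock instead.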

### 4. **Leaky Bucket algorithm:**

The Leaky Bucket algorithm is a simple flow control algorithm. It works like a funnel: data may arrive in bursts, but it drains out of the bucket at a constant rate. When the bucket is full, excess arriving data is discarded or buffered. This algorithm smooths burst traffic and avoids network overload.
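The leaky bucket can be sketched the same way (names and the per-tick drain model are illustrative): arrivals queue up to a fixed capacity, and a constant number of packets leave per tick.

```python
from collections import deque

class LeakyBucket:
    """Leaky bucket: arriving packets queue in a bucket of fixed capacity
    and drain at a constant rate; arrivals that find it full are dropped."""

    def __init__(self, capacity, leak_rate):
        self.capacity = capacity      # max queued packets
        self.leak_rate = leak_rate    # packets drained per tick
        self.queue = deque()

    def arrive(self, packet):
        if len(self.queue) < self.capacity:
            self.queue.append(packet)
            return True               # accepted
        return False                  # bucket full: packet dropped

    def tick(self):
        """Drain up to leak_rate packets this tick (the constant outflow)."""
        return [self.queue.popleft()
                for _ in range(min(self.leak_rate, len(self.queue)))]

bucket = LeakyBucket(capacity=4, leak_rate=1)
accepted = [bucket.arrive(i) for i in range(6)]   # burst of 6 packets
print(accepted)        # [True, True, True, True, False, False]
print(bucket.tick())   # [0] -- exactly one packet leaves per tick
```

The contrast with the token bucket is visible here: the leaky bucket forces a perfectly constant output rate, while the token bucket lets a saved-up burst go out at once.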

### 5. **Congestion Control:**

Congestion control is a network-level flow control used to avoid network congestion and packet loss. Congestion control usually includes a series of algorithms and strategies, such as congestion avoidance, congestion detection, congestion recovery, etc., which work together to ensure the stability and reliability of the network.

These flow control methods and algorithms can be used alone or in combination. Appropriate flow control strategies can be selected according to different network environments and needs to ensure the smoothness and reliability of data transmission.

5. Error control

Error Control refers to the detection and correction of errors that may be introduced during the transmission process through various technical means in data communication. Data may be subject to various interferences during transmission, such as noise, interference, attenuation, etc., resulting in data errors at the receiving end. The purpose of error control is to ensure the integrity and reliability of data to ensure that the data received by the receiver is correct.

Error control usually includes the following main techniques:

### 1. **Parity Check:**

Parity checking is a simple error control method. The sender adds one extra bit to each data block so that the number of 1s in the block (including the parity bit) is odd (odd parity) or even (even parity). The receiver counts the 1s in the received block; if the count does not match the agreed parity, the data is in error. Note that parity can only detect an odd number of flipped bits: if two bits flip, the parity still matches and the error goes unnoticed.

### 2. **Cyclic Redundancy Check (CRC):**

CRC is an error control technique based on polynomial arithmetic. At the sending end, a fixed-length redundant check code is computed from the data block by modulo-2 polynomial division, appended to the block, and sent with it. The receiver performs the same division on what it receives; if the result does not match, the data is in error.

### 3. **Forward Error Correction (FEC):**

Forward error correction is an error control technique that lets the receiving end correct a limited number of errors on its own. The sender adds redundant information to the data so that the receiver can use it to repair errors as they are detected, without retransmission. Common forward error correction codes include the Hamming code and the Reed-Solomon (RS) code.

### 4. **Automatic Repeat reQuest (ARQ):**

ARQ is a feedback-based error control technique. When the receiver detects an error in the received data, it signals the sender (or simply withholds the acknowledgment), and the sender retransmits the data. Common ARQ protocols include stop-and-wait ARQ and sliding-window ARQ (go-back-N and selective repeat).

### 5. **Redundant Data Elimination:**

In some cases, redundant data can be added during transmission and used by the receiver to detect and correct errors after the data arrives. If the data is error-free, the redundant data is simply removed; otherwise it is used for error correction.

These error control technologies can be used alone or in combination. The specific choice depends on factors such as the communication environment, reliability requirements of data transmission, and network performance. Through reasonable selection and configuration of error control technology, the reliability of data transmission can be improved and the integrity and correctness of data can be ensured.

(2) Framing

**Framing** is the process of dividing the transmitted bit stream into logical frames in data communication. Frame is the basic unit of data transmission at the data link layer. It contains data fields and control information and is used to identify the start and end of data. The purpose of framing is to ensure that during data transmission, both the sender and the receiver can correctly identify the boundaries of the frame, thereby accurately extracting the data part of the frame.

Here are several common framing methods:

### 1. **Character Counting:**

In the character counting method, the frame begins with a count field giving the frame length, telling the receiver how many characters (or bits) to read before the frame ends. For example, in a simplified scheme where the count covers only the data bits, if the data is 10101010 the count field might be 00001000 (binary for 8), and the entire frame would be 0000100010101010.
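A minimal sketch of character-counting framing in Python. One common convention has the count byte cover itself as well as the payload; that convention is assumed here, and the function names are illustrative.

```python
def frame_by_count(blocks):
    """Character counting: each frame begins with a 1-byte count that
    (by the convention assumed here) includes the count byte itself."""
    stream = bytearray()
    for payload in blocks:
        stream.append(len(payload) + 1)   # count field covers itself + payload
        stream.extend(payload)
    return bytes(stream)

def deframe_by_count(stream):
    """Walk the stream, using each count to find the next frame boundary."""
    frames, i = [], 0
    while i < len(stream):
        count = stream[i]                 # length of this whole frame
        frames.append(stream[i + 1:i + count])
        i += count
    return frames

stream = frame_by_count([b"hello", b"dl-layer"])
print(deframe_by_count(stream))   # [b'hello', b'dl-layer']
```

The sketch also makes the method's classic weakness visible: a single corrupted count field desynchronizes every frame that follows it, which is why character counting is rarely used on its own.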

### 2. **Character Padding with Delimiters at Start and End:**

In this method, specific characters or bit patterns at the beginning and end of the frame serve as delimiters marking its start and end. For example, among ASCII control characters, SOH (Start of Header) and ETX (End of Text) can be used as delimiters. If the delimiter characters themselves appear in the data, the sender inserts an escape character before them (character stuffing) and the receiver removes it, so data bytes are never mistaken for frame boundaries.

The format of the frame may be: `[SOH] Data [ETX]`, where `[SOH]` and `[ETX]` are delimiters, and `Data` is the data part.

### 3. **Zero-bit Padding with Start and End Flags:**

In this method, the beginning and end parts of the frame contain specific bit patterns as start and end flags, indicating the start and end of the frame. In order to ensure that the same bit pattern as the flag does not appear in the data, if the same pattern as the flag appears in the data, zero bits (Zero Bits) need to be inserted to avoid ambiguity. For example, the frame delimitation method used in the HDLC (High-Level Data Link Control) protocol is zero-bit padding.

The format of a frame may be: `01111110 Data 01111110`, where `01111110` is the start/end flag and `Data` is the data part. Whenever five consecutive 1s appear in the data, the sender inserts a 0 after them and the receiver removes it, so the flag pattern can never occur inside the data.
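The stuffing rule can be sketched in Python (bit strings are used here for readability; real implementations work on raw bits):

```python
FLAG = "01111110"

def stuff(bits):
    """Insert a 0 after every run of five consecutive 1s."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")     # stuffed bit
            run = 0
    return "".join(out)

def unstuff(bits):
    """Remove the 0 that follows every run of five consecutive 1s."""
    out, run, i = [], 0, 0
    while i < len(bits):
        out.append(bits[i])
        run = run + 1 if bits[i] == "1" else 0
        if run == 5:
            i += 1              # skip the stuffed 0
            run = 0
        i += 1
    return "".join(out)

data = "0111111001111101"        # contains runs that would mimic the flag
framed = FLAG + stuff(data) + FLAG
assert FLAG not in stuff(data)   # six 1s in a row can never survive stuffing
assert unstuff(stuff(data)) == data
print(framed)
```

Because stuffing caps every run of 1s in the body at five, the six-1s flag pattern is guaranteed to appear only at frame boundaries: this is exactly what makes the transmission transparent.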

### 4. **Violation-Based Encoding:**

In this method, frame boundaries are marked by signal patterns that are invalid under the line-encoding rules, so they can never appear in ordinary data. For example, in Manchester encoding every bit period contains a mid-bit transition (by one common convention, high-to-low for 0 and low-to-high for 1); a signal with no mid-bit transition violates the encoding rule and can therefore be used to mark frame boundaries.

The format of the frame may be: `10 Data 01`, where `10` and `01` are the starting and ending violation codes, and `Data` is the data part.

Please note that these framing methods are just examples and actual applications may use different methods depending on requirements and protocol requirements. Different communication standards and protocols may use different framing methods.

(3) Error control

1. Error detection coding

**Parity Check Code:**

Parity code is a simple error detection method commonly used to detect single-bit errors in data transmission. In parity checking, the lowest bit (or the highest bit, depending on the type of parity) of each data block (usually a byte) is used as the check bit. The value of the parity bit is set so that the number of 1's in the entire data block (including the parity bit) is an odd number (odd parity) or an even number (even parity). When data is transmitted, the sender calculates the check digit and appends it to the data block. The receiver recalculates the check digit when it receives the data block and then compares it with the received check digit. If the parity bit does not meet the parity requirements, it indicates an error in the data block.

For example, for odd parity, if the sender's data is 1011001, then the odd parity bit will be set to 1, making the number of 1's in the entire data block an odd number. During transmission, the data frame sent is 10110011. The receiver calculates the check bit when receiving the data frame. If the calculated odd check bit is not 1, it means there is an error in the data frame.

Example of parity code:

Suppose we have a data block 1101 and we want to use odd parity to detect errors. Odd parity requires that the total number of 1s, including the check bit, be odd. The original block 1101 already contains three 1s, which is odd, so the parity bit is set to 0.

Original data block: 1101
Data frame after adding the odd parity bit: 11010

In this example, the odd parity bit is 0 because the number of 1s in the original data block is already odd; adding a 1 would make the total even and violate odd parity.
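A minimal odd-parity sketch in Python (function names are illustrative):

```python
def add_odd_parity(bits):
    """Append a parity bit so the total number of 1s (data + parity) is odd."""
    return bits + ("0" if bits.count("1") % 2 == 1 else "1")

def check_odd_parity(frame):
    """A frame passes odd parity iff it contains an odd number of 1s."""
    return frame.count("1") % 2 == 1

frame = add_odd_parity("1101")     # three 1s, already odd -> parity bit 0
print(frame)                       # 11010
print(check_odd_parity(frame))     # True
print(check_odd_parity("10010"))   # False: a single flipped bit is detected
print(check_odd_parity("00010"))   # True: a double-bit error slips through
```

The last two lines show both the strength and the limit of parity: one flipped bit always changes the count's parity, but two flips restore it.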

**Cyclic Redundancy Check (CRC):**

CRC is a more robust error detection method commonly used in data communications. CRC processes the data with polynomial arithmetic to generate a fixed-length redundant check code, which is appended to the data and sent with it. The receiver performs the same polynomial calculation on what it receives and compares the result with the received check code; if they are inconsistent, the data frame is in error.

Here’s how to generate redundant codes:

The cyclic redundancy check (CRC) is computed by modulo-2 polynomial division. CRC is commonly used in data communications to detect errors introduced during transmission. The calculation proceeds through the following steps:

1. **Select a generator polynomial:** First, select a generator polynomial, which determines the performance of the CRC. A generator polynomial is a binary number, usually represented as a polynomial. For example, the generator polynomial of CRC-32 is a 32-bit binary number `1 0000 0100 1100 0001 0001 1101 1011 0111`, which can be expressed as the polynomial x³² + x²⁶ + x²³ + x²² + x¹⁶ + x¹² + x¹¹ + x¹⁰ + x⁸ + x⁷ + x⁵ + x⁴ + x² + x + 1.

2. **Shift the data block left by the degree of the generator polynomial:** Append as many zero bits to the data block as the degree of the generator polynomial. For example, if the data block is the 8-bit binary number `11011011` and the generator polynomial of CRC-32 has degree 32, the shifted data block is `11011011` followed by 32 zeros, i.e. the 40-bit value `1101101100000000000000000000000000000000`.

3. **Modulo-2 division:** Divide the shifted data block by the generator polynomial using modulo-2 (XOR) arithmetic. Specific steps are as follows:

   - Align the generator polynomial with the leftmost 1 of the current dividend.
   - XOR the generator polynomial into the dividend at that position (modulo-2 subtraction, with no borrows).
   - Repeat, re-aligning with the new leftmost 1 each time, until the value that remains is shorter than the generator polynomial.

   The value that remains is the remainder, and that remainder is the CRC.

4. **Append CRC to the end of the data block:** Append the obtained CRC to the end of the original data block to form a complete frame and then send it.

At the receiving end, the entire received frame (data plus CRC) is divided by the same generator polynomial. If the remainder is 0, no error was detected in the transmission; if it is nonzero, the data is in error.

Note that the performance of CRC is closely related to the choice of generator polynomial. Different applications may require different selections of generator polynomials to meet specific error detection and correction requirements. CRC calculation is an efficient method that is usually used in network communications and storage systems to ensure data integrity.

CRC is characterized by its ability to detect multi-bit and burst errors: a well-chosen generator polynomial detects all burst errors no longer than the check code, and the vast majority of longer ones. The performance of CRC depends on the choice of generator polynomial, and different CRC algorithms use different generator polynomials. A common algorithm is CRC-32, which is widely used in many applications, including Ethernet frames and ZIP file verification.

In general, parity check codes are suitable for simple error detection, while CRC codes suit data communication environments that require high reliability, providing much stronger error detection. The choice of checksum usually depends on the needs and performance requirements of the application.

Example of CRC check code:

Suppose we use CRC-4 (generator polynomial is 10101) to calculate the redundancy check code of a 4-bit data block.

Original data block: 1101
Generating polynomial: 10101

First, we shift the data block left by 4 bits to get 11010000, and then divide by the generator polynomial using modulo-2 (XOR) arithmetic, always aligning the generator with the leftmost 1 of the current dividend:

  11010000   (dividend)
  10101      (XOR the generator, aligned with the leftmost 1)
  --------
  01111000
   10101
  --------
  00101100
    10101
  --------
  00000110

The value that remains, 0110, is shorter than the generator, so the division stops. The final redundancy check code is 0110, and the transmitted frame is 1101 0110.

In actual communication, the sender sends the original data block together with the CRC check code, and the receiver divides the entire received frame by the same generator polynomial. If the remainder is not 0, the data is in error.
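The modulo-2 division can be sketched in Python (bit strings for readability; function names are illustrative). The same division routine serves both the sender, which appends zeros before dividing, and the receiver, which divides the whole frame and expects a zero remainder:

```python
def mod2_div(bits, generator):
    """Modulo-2 (XOR) long division; returns the remainder as a bit string."""
    bits = list(bits)
    for i in range(len(bits) - len(generator) + 1):
        if bits[i] == "1":                    # divide only where the leading bit is 1
            for j, g in enumerate(generator):
                bits[i + j] = str(int(bits[i + j]) ^ int(g))
    return "".join(bits[-(len(generator) - 1):])

def crc_remainder(data, generator):
    """Sender side: append degree-many zeros, then take the remainder."""
    return mod2_div(data + "0" * (len(generator) - 1), generator)

crc = crc_remainder("1101", "10101")
print(crc)                                # 0110
# Receiver side: dividing the whole frame (data + CRC) leaves remainder 0.
print(mod2_div("1101" + crc, "10101"))    # 0000 -> no error detected
print(mod2_div("1111" + crc, "10101"))    # nonzero -> error detected
```

Running the sketch reproduces the worked example: the CRC of 1101 under generator 10101 is 0110, and corrupting the data makes the receiver's remainder nonzero.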

2. Error correction coding

**Hamming Code** is an error detection and correction coding technique used in data transmission. Its main feature is that it can correct single-bit errors; with an extra overall parity bit it can also detect (though not correct) double-bit errors. The basic idea of the Hamming code is to insert redundant check bits into the data; these check bits are used to detect and correct errors in the data.

**Construction process of Hamming code:**

1. **Determine the location of data and parity bits:** First, determine which bits are data bits and which bits are parity bits in the data block. Typically, the parity bit positions are powers of 2 (1, 2, 4, 8, 16, etc.), while the data bits occupy the remaining positions.

2. **Calculate the value of each check bit:** The position of a check bit determines which bits it covers: the check bit at position 2^k covers every position whose binary representation has bit k set. For example, the check bit at position 1 is the XOR of all data bits at positions whose lowest binary bit is 1 (positions 3, 5, 7, ...); the check bit at position 2 is the XOR of all data bits at positions whose second binary bit is 1 (positions 3, 6, 7, ...), and so on.

3. **Insert check digit:** Insert the calculated check digit into the corresponding position of the data block.

4. **Transmitting data:** The sending end transmits the data block with check digits to the receiving end.

5. **Detect and correct errors:** The receiving end recomputes each check bit of the received block using the same rules. If every check passes, the block is accepted. If some checks fail, the positions of the failing check bits sum to the position of the erroneous bit, which can then be flipped back to correct the error.

**Example of Hamming code:**

Suppose we want to transmit the 4-bit data block `1011`. Three check bits are needed. The check bits occupy positions 1, 2, and 4; the data bits d1 d2 d3 d4 occupy positions 3, 5, 6, and 7.

1. **Calculate the check bits (even parity):**
   - Check bit 1 (position 1) covers positions 3, 5, 7: 1 xor 0 xor 1 = 0.
   - Check bit 2 (position 2) covers positions 3, 6, 7: 1 xor 1 xor 1 = 1.
   - Check bit 4 (position 4) covers positions 5, 6, 7: 0 xor 1 xor 1 = 0.

2. **Insert the check bits:**
   Arranging the bits in position order p1 p2 d1 p4 d2 d3 d4 gives the codeword `0110011`.

Now the Hamming-coded data `0110011` can be transmitted to the receiving end. The receiver recomputes the three checks on the received block. If any check fails, the positions of the failing checks sum to the position of the erroneous bit, which the receiver flips back to correct the error.
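An even-parity Hamming(7,4) encoder and single-error corrector can be sketched in Python. The convention assumed here places data bits d1..d4 at positions 3, 5, 6, 7 and parity bits at positions 1, 2, 4 (one common layout; others exist):

```python
def hamming74_encode(d):
    """Even-parity Hamming(7,4): data bits go to positions 3,5,6,7."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # covers positions 2, 3, 6, 7
    p4 = d2 ^ d3 ^ d4          # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p4, d2, d3, d4]

def hamming74_correct(c):
    """Recompute each check; the failing checks' positions sum to the
    error position (the syndrome). A syndrome of 0 means no error."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 * 1 + s2 * 2 + s4 * 4
    if syndrome:
        c[syndrome - 1] ^= 1   # flip the erroneous bit back
    return c, syndrome

code = hamming74_encode([1, 0, 1, 1])
print(code)                    # [0, 1, 1, 0, 0, 1, 1]
code[4] ^= 1                   # corrupt position 5 in transit
fixed, pos = hamming74_correct(code)
print(pos, fixed)              # 5 [0, 1, 1, 0, 0, 1, 1]
```

The syndrome arithmetic is the whole trick: each parity check covers exactly the positions whose binary index contains that check's bit, so the failed checks spell out the error position in binary.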

(4) Flow control and reliable transmission mechanism

1. Flow control, reliable transmission and sliding window mechanism:

Flow Control and Reliable Transmission are two key concepts in computer networks. They are usually used in conjunction with a sliding window mechanism (Sliding Window) to ensure reliable transmission of data and efficient utilization of the network. The relationship and functions of these three are introduced in detail below:

### 1. **Flow Control:**

Flow control is a mechanism used to control the rate at which the sender sends data to the receiver to prevent the receiver from being unable to process a large amount of data in a timely manner, resulting in data overflow or loss. The purpose of flow control is to ensure the stability and reliability of the network. Flow control is usually implemented through a sliding window mechanism. A window size is maintained between the sender and the receiver to control the amount of data sent by the sender to ensure that the receiver can process it in a timely manner.

### 2. **Sliding Window Mechanism:**

The sliding window mechanism is a technology for flow control and reliable transmission. The sender and receiver each maintain a window, and the sequence number in the window indicates the data frames that are allowed to be sent or received. The sender can send data frames within the window, and the receiver can only receive data frames within the window. As data is transmitted and confirmed, the window will slide, allowing new data frames to enter the window, thereby achieving data flow control and reliable transmission.

### 3. **Reliable Transmission:**

Reliable transmission refers to the mechanism that ensures that data can be transmitted to the recipient correctly, completely, and in order in the network. It usually includes data confirmation, timeout retransmission, flow control and other technologies. In the sliding window mechanism, reliable transmission uses the receiver to send an acknowledgment frame (ACK) to inform the sender which data has been successfully received. The sender can only slide the window and send new data after receiving the acknowledgment frame.

In the sliding window mechanism, the sender maintains a sending window and the receiver maintains a receiving window. The sending window holds the data frames that may be sent but have not yet been acknowledged, and the receiving window holds the sequence numbers of frames the receiver is currently prepared to accept. Sender and receiver control the sending and receiving of data according to the window sizes, achieving both reliable transmission and flow control.

2. Single-frame sliding window and stop-wait protocol:

**Single Frame Sliding Window** is the simplest sliding window protocol: both the send window and the receive window have size 1, so the sender sends only one data frame at a time, and the receiver receives one data frame at a time and acknowledges it. It is also known as the Stop-and-Wait Protocol. The basic idea is that the sender transmits one data frame and waits for the receiver's acknowledgment; only after the acknowledgment arrives may it send the next frame, which ensures reliable transmission.

**Stop-Wait Protocol Steps:**

1. **Send data frame:** The sender sends a data frame to the receiver.
2. **Waiting for confirmation:** The sender waits for the receiver to send a confirmation frame.
3. **Receive data frame:** The receiver receives the data frame.
4. **Send confirmation frame:** The receiver sends a confirmation frame to the sender.
5. **Waiting for the next data frame:** The receiver waits for the next data frame.

In the stop-and-wait protocol, the sender can send the next data frame only after receiving an acknowledgment of the previous data frame. If the sender times out while waiting for an acknowledgment, it resends the current data frame until an acknowledgment is received.

**example:**

Suppose the sender (A) wants to send a data frame `0101` to the receiver (B). Here are the steps for the stop-and-wait protocol:

1. **A sends data frame:** A sends data frame `0101` to B.

2. **B receives data frame:** B receives data frame `0101`.

3. **B sends a confirmation frame:** B sends a confirmation frame to A, indicating successful reception.

4. **A waits for the next data frame:** A waits for the opportunity to send the next data frame.

If B detects an error in the received data frame, it discards the frame and does not send an acknowledgment. A's timer then expires, and A resends the same data frame. This continues until B successfully receives the frame and sends an acknowledgment. The reliability of this protocol comes from the retransmission mechanism: even if a data frame is corrupted in transit, the error is recovered through retransmission.

It should be noted that the stop-and-wait protocol is relatively inefficient because the sender can only send one data frame at a time and needs to wait for confirmation, which will lead to inefficient use of network bandwidth. In practical applications, sliding window protocols with larger window sizes are usually adopted to improve transmission efficiency.
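A toy simulation of stop-and-wait makes the retransmission loop concrete (the random loss model, the fixed seed, and the 1-bit sequence numbering are illustrative assumptions; a lost frame and a lost ACK are treated identically as a timeout):

```python
import random

def stop_and_wait(frames, loss_rate=0.3, seed=7):
    """Toy stop-and-wait: send one frame, wait for its ACK, and on a
    simulated loss (frame or ACK) time out and resend the same frame."""
    rng = random.Random(seed)         # seeded so the run is repeatable
    delivered, attempts = [], 0
    for seq, frame in enumerate(frames):
        while True:
            attempts += 1
            if rng.random() < loss_rate:
                continue              # timeout: retransmit the same frame
            delivered.append((seq % 2, frame))   # 1-bit sequence number
            break                     # ACK received: move to the next frame
    return delivered, attempts

delivered, attempts = stop_and_wait(["F0", "F1", "F2"])
print(delivered)        # [(0, 'F0'), (1, 'F1'), (0, 'F2')]
print(attempts >= 3)    # True; losses only add retransmissions
```

Whatever the loss pattern, the frames always arrive complete and in order; losses show up only as extra transmission attempts, which is exactly the efficiency cost described above.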

3. Multi-frame sliding window and Go-Back-N protocol:

**Multiple Frame Sliding Window** is a sliding window protocol that allows the sender to continuously send multiple data frames without waiting for confirmation. Compared with stop-and-wait protocols, multi-frame sliding windows improve network utilization because the sender can continue to send new data frames while waiting for acknowledgment. In the multi-frame sliding window protocol, each data frame has a unique sequence number, which is used to identify the sequence and confirmation of the frame.

**Go-Back-N Protocol** is a specific implementation of the multi-frame sliding window. In Go-Back-N, the sender may transmit several data frames in succession, but the receiver's window is 1: it accepts only the next expected frame in order. If a frame is lost or damaged, the receiver discards all subsequent frames and keeps acknowledging the last correctly received in-order frame. When the sender's timer expires, it "goes back" and retransmits every frame starting from the first unacknowledged one.

Because of this, we need to pay attention to the window sizes here: the receive window of the Go-Back-N protocol is 1, which guarantees that frames are accepted in order. If n bits are used to number the frames, the size of the sending window Wt must satisfy 1 <= Wt <= 2^n - 1 (2 to the nth power, minus 1). If the sending window were any larger, the receiver would be unable to distinguish new frames from retransmitted old ones.

By the way, using n bits to number the frame means: in communication, the frame number is a unique identifier used to identify the data frame. In order to distinguish different frames, a frame sequence number (Frame Sequence Number) is usually added to the data frame. Using n bits to number the frame means that the frame sequence number is represented by n bits (binary digits).

For example, if 3 bits are used to number the frame, the frame number range that can be represented is from 000 (binary) to 111 (binary), that is, from 0 to 7. In this way, a 3-bit frame number can represent 8 different frames. Similarly, if 4 bits are used to number the frame, 16 different frames can be represented, ranging from 0000 (binary) to 1111 (binary), that is, from 0 to 15.

The advantage of using n-bit frame numbering is that more frames can be represented, allowing the communication system to handle larger-scale data transmission. However, as the number of bits increases, the range of frame numbers also increases, resulting in an increase in the length of the frame sequence number field and occupying more bandwidth. When designing a communication system, the range of frame numbers and communication efficiency need to be weighed to meet the needs of specific applications.
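The numbering ranges above can be checked directly with a minimal sketch (the function names here are made up for illustration):

```python
# n-bit frame numbering: sequence numbers wrap around modulo 2**n.
def seq_numbers(n_bits):
    """All frame numbers representable with n_bits (0 .. 2**n - 1)."""
    return list(range(2 ** n_bits))

def gbn_max_send_window(n_bits):
    """Go-Back-N: the send window may be at most 2**n - 1."""
    return 2 ** n_bits - 1

print(seq_numbers(3))            # 3 bits: 0..7, i.e. 8 different frames
print(gbn_max_send_window(3))    # send window of at most 7 frames
```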

**Steps of the multi-frame sliding window (Go-Back-N) protocol:**

1. **Send data frames:** The sender can send multiple data frames continuously (sequence numbers are 1 to N).

2. **Receive data frames:** The receiver accepts a frame only if it carries the expected (next in-order) sequence number; it then delivers the frame and sends an acknowledgment.

3. **Discard out-of-order frames:** If a frame's sequence number is not the expected one, the receiver discards it, along with every frame that follows, until the missing frame is retransmitted.

4. **Acknowledgement:** Acknowledgments are cumulative: acknowledging frame k confirms every frame up to and including k.

5. **Timeout retransmission:** If the sender does not receive an acknowledgment within the specified time, or receives an incorrect acknowledgment, it will time out and retransmit all frames within the window.
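The steps above can be sketched in a few lines of Python. This is a toy model with made-up names: there are no real timers or network I/O, and "loss" is simulated by dropping a frame's first transmission.

```python
def go_back_n(frames, lost_once=(3,)):
    """Toy Go-Back-N: frames listed in `lost_once` are dropped on their first send."""
    expected = 0        # next in-order sequence number the receiver wants
    delivered = []      # frames accepted by the receiver, in order
    attempts = {}       # how many times each frame was transmitted
    base = 0            # start of the send window
    while base < len(frames):
        for seq in range(base, len(frames)):   # send the whole window
            attempts[seq] = attempts.get(seq, 0) + 1
            if seq in lost_once and attempts[seq] == 1:
                continue                       # frame lost in transit
            if seq == expected:                # in-order: deliver and ack
                delivered.append(frames[seq])
                expected += 1
            # out-of-order frames are silently discarded by the receiver
        base = expected                        # timeout: go back to first unacked
    return delivered, attempts

delivered, attempts = go_back_n(["f0", "f1", "f2", "f3", "f4"])
print(delivered)   # all frames arrive, in order
print(attempts)    # frames 3 AND 4 were each sent twice
```

Note that frame 4 is transmitted twice even though its first copy arrived intact: the receiver had to discard it because frame 3 was missing. This is exactly the inefficiency that the Selective Repeat protocol below removes.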

**example:**

Suppose the sender (A) wants to send data frames with frame numbers 1 to 4 to the receiver (B), but B only receives the data frame with frame number 1, and errors occur in subsequent frames during transmission. In this case, the Go-Back-N protocol proceeds as follows:

1. **A sends data frames:** A sends data frames with frame numbers 1 to 4.

2. **B receives the data frame:** B receives the data frame with frame number 1, but an error occurs during the transmission of subsequent frames. B can only confirm the data frame with frame number 1.

3. **A timeout retransmission:** A resends data frames with frame numbers 1 to 4 after timeout.

4. **B receives the data frames:** B again receives frame 1 correctly and acknowledges it. If the later frames are corrupted again, B can still acknowledge only frame 1, and A repeats the retransmission until the whole window gets through.

In this example, the Go-Back-N protocol lets A send multiple data frames, but because of the errors only the frames received in order (here, just frame 1) can be acknowledged. A must retransmit from the first unacknowledged frame before the window can slide forward.

4. Multi-frame sliding window and Selective Repeat protocol:

**Multiple Frame Sliding Window** with the **Selective Repeat Protocol** is a more efficient sliding window protocol. Unlike Go-Back-N, Selective Repeat allows the receiver to accept and acknowledge frames in any order within the window, rather than only in-order frames. It achieves higher network utilization because only the frames that were actually lost or corrupted need to be retransmitted.

**Steps of the multi-frame sliding window (Selective Repeat) protocol:**

1. **Send data frames:** The sender can send multiple data frames continuously (sequence numbers are 1 to N).

2. **Receive data frames:** The receiver accepts any frame whose sequence number falls within its receive window, buffering frames that arrive out of order.

3. **Confirmation:** The receiver sends a confirmation frame to confirm the sequence number of the received data frame.

4. **Selective retransmission:** If the receiver finds a lost or erroneous data frame, it requests the sender to resend only that frame, instead of having the entire window of frames retransmitted.

Something to note here: in the Selective Repeat protocol, the receive window and the send window are the same size, and each is at most half of the sequence-number space. If n bits are used to number the frames, they must satisfy: Wtmax = Wrmax = 2^(n-1) (2 to the power n-1).

**example:**

Assume that the sender (A) wants to send data frames with frame numbers 1 to 5 to the receiver (B). The Selective Repeat protocol proceeds as follows:

1. **A sends data frames:** A sends data frames with frame numbers 1 to 5.

2. **B receives data frames:** B receives data frames with frame numbers 1, 2, 4, and 5, but an error occurs during the transmission of the data frame with frame number 3, causing B to be unable to receive it correctly.

3. **B sends a selective retransmission request:** B sends a selective retransmission request, asking A to resend the data frame with frame number 3.

4. **A resends the data frame:** After A receives the selective retransmission request, it only resends the data frame with frame number 3.

5. **B receives the data frame:** B receives frame 3 and acknowledges it; the window can now slide forward, so A may go on to send frame 6 and later frames.

In this example, the selective retransmission protocol allows the receiver (B) to selectively request retransmission of lost or erroneous data frames without waiting for the entire window of frames to be retransmitted. This improves network utilization because the sender (A) can continue sending unacknowledged frames without waiting for the entire window of frames to be acknowledged.
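The same scenario (frames 1 to 5 with frame 3 lost) can be sketched for Selective Repeat. As before, this is a toy model with invented names, not a real implementation:

```python
def selective_repeat(frames, lost_once=(3,)):
    """Toy Selective Repeat: frames listed in `lost_once` are dropped on the first send."""
    received = {}           # seq -> frame, buffered even if out of order
    retransmitted = []      # frames the sender had to resend
    for seq, frame in enumerate(frames, start=1):   # first transmission round
        if seq in lost_once:
            continue                                # lost in transit
        received[seq] = frame                       # buffered, possibly out of order
    for seq in range(1, len(frames) + 1):           # receiver reports the gaps;
        if seq not in received:                     # sender resends ONLY those
            retransmitted.append(seq)
            received[seq] = frames[seq - 1]
    delivered = [received[s] for s in sorted(received)]
    return delivered, retransmitted

delivered, retransmitted = selective_repeat(["f1", "f2", "f3", "f4", "f5"])
print(delivered)       # all frames, reassembled in order
print(retransmitted)   # only frame 3 was resent (unlike Go-Back-N)
```

In contrast to the Go-Back-N toy model, frames 4 and 5 are transmitted only once: the receiver buffers them instead of discarding them.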

(5) Media access control

1. Channel division medium access control:

Channel division medium access control is divided into the following four types:

Frequency Division Multiplexing (FDM):

Frequency division multiplexing is a multiplexing technique that allows multiple signals to be transmitted on different frequencies to coexist on the same communication channel. Each signal occupies a different frequency bandwidth and there is no overlap between them. The receiving end can restore the original signal by separating signals in different frequency ranges. Broadcast television, cable television, etc. are examples of the use of frequency division multiplexing.

Time Division Multiplexing (TDM):

Time division multiplexing is a multiplexing technique that allows multiple signals to be transmitted at different time intervals to coexist on the same communication channel. Each signal occupies the entire channel bandwidth for a different period of time. The receiving end separates the time-division multiplexed signal through time division, and then restores the original signal. Telephone networks, digital transmission, etc. are examples of the use of time division multiplexing.

Wavelength Division Multiplexing (WDM):

Wavelength division multiplexing is a multiplexing technology that allows multiple signals to be transmitted on different wavelengths (optical frequencies) to achieve high-capacity transmission in fiber optic communications. Each signal occupies a different wavelength and there is no overlap between them. In optical fiber communication networks, wavelength division multiplexing technology is often used to increase signal transmission capacity.

Code Division Multiplexing (CDM):

Code division multiplexing is a digital multiplexing technology that uses different coding schemes to allocate different code sequences to different users, so that their signals are transmitted on the same frequency but are distinguished by their code sequences. The receiving end uses a corresponding decoder to separate the specific code sequence and restore the original signal. CDMA technology is an example of code division multiplexing and is widely used in 3G and 4G mobile communications.
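The separation-by-code idea can be illustrated with two orthogonal chip sequences. This is a deliberately tiny 2-chip example, not a real CDMA codebook:

```python
# Two orthogonal spreading codes: their dot product is zero.
code_a = [+1, +1]
code_b = [+1, -1]

def spread(bit, code):
    """Spread one data bit (+1 or -1) over the chips of a code."""
    return [bit * chip for chip in code]

def channel(*signals):
    """Signals from different users simply add on the shared channel."""
    return [sum(chips) for chips in zip(*signals)]

def despread(signal, code):
    """Correlate with a user's code to recover that user's bit."""
    return sum(s * c for s, c in zip(signal, code)) // len(code)

combined = channel(spread(+1, code_a), spread(-1, code_b))
print(despread(combined, code_a))   # +1, user A's bit
print(despread(combined, code_b))   # -1, user B's bit
```

Because the codes are orthogonal, each user's contribution vanishes when the channel is correlated against the other user's code.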

2. Random access media access control

1. ALOHA Agreement:

ALOHA is one of the earliest multiple access protocols; it allows multiple users to transmit data over a shared communication channel. In the ALOHA protocol, a user that wants to send data transmits on the channel immediately. If two or more users transmit at the same time, their data frames collide during transmission and are lost. Later, the ALOHA protocol was improved into a time-slotted version, namely Slotted ALOHA.

2. Slotted ALOHA protocol:

Slotted ALOHA divides the channel in time and divides time into slots. Users can only send data at the beginning of the time slot. If a collision occurs, the sender waits for the next time slot to try sending again. Slotted ALOHA improves channel utilization compared to the original ALOHA because it reduces the probability of collisions.
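The collision behavior can be checked with a short Monte Carlo sketch (parameter names invented here). The classic result is that at an offered load of one frame per slot, slotted ALOHA's throughput peaks near 1/e ≈ 0.37:

```python
import random

def slotted_aloha_throughput(n_stations, p, slots, seed=0):
    """Fraction of slots with exactly one transmission (i.e. a success)."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(slots):
        senders = sum(rng.random() < p for _ in range(n_stations))
        if senders == 1:       # exactly one sender: the slot succeeds
            successes += 1
    return successes / slots

# 50 stations, each sending with probability 1/50 per slot (load G = 1)
print(slotted_aloha_throughput(n_stations=50, p=1/50, slots=20_000))
```

The printed value should come out close to 0.37; pure (unslotted) ALOHA peaks at only half of that, 1/(2e) ≈ 0.18, which is why slotting improves channel utilization.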

3. CSMA protocol (Carrier Sense Multiple Access):

The CSMA protocol is a multiple access protocol. Before sending, each device will "listen" to the channel to detect whether other devices are transmitting on the channel. If the channel is free, the device can send data. However, due to propagation delays in signal transmission, multiple devices may send data at the same time when they sense that the channel is idle, causing collisions.

The CSMA (Carrier Sense Multiple Access) protocol has three variants: 1-persistent CSMA, non-persistent CSMA, and p-persistent CSMA. They differ in how a station behaves when contending for the channel.

1. 1-Persistent CSMA:

  • Behavior: When a device wants to send data, it first senses the channel. If the channel is idle, the device transmits immediately. If the channel is busy, the device keeps sensing continuously and transmits the moment the channel becomes idle.
  • Advantages: Simple and direct, the device sends immediately when the channel is idle, no need to wait.
  • Disadvantages: When multiple devices want to send data at the same time, collisions may occur and reduce channel utilization.

2. Non-Persistent CSMA:

  • Behavior: When a device wants to send data, it first senses the channel. If the channel is idle, the device transmits immediately. If the channel is busy, it does not keep listening; it waits a random period of time and then senses the channel again.
  • Advantages: Reduces the probability of collision and improves channel utilization.
  • Disadvantages: Need to wait for a random period of time, which may cause some devices to wait too long, affecting the real-time nature of sending.

3. p-Persistent CSMA:

  • Behavior: p-persistent CSMA is used with slotted channels. When the channel is idle, the device transmits with probability p, and with probability 1−p defers to the next time slot, where it repeats the decision. If the channel is busy, the device waits until the next slot and senses again.
  • Advantages: By adjusting the probability p, a trade-off between delay and channel utilization can be made. When p is close to 1 the behavior approaches 1-persistent CSMA; when p is close to 0 it approaches non-persistent CSMA.
  • Disadvantages: The probability p has to be tuned appropriately for the expected load.

These three CSMA variants suit different network scenarios, chosen according to load, real-time requirements, and performance requirements. 1-persistent CSMA suits lightly loaded networks with low-latency requirements. Non-persistent CSMA suits networks with medium load and moderate real-time requirements. p-persistent CSMA allows a tunable balance between latency and throughput, suiting more flexible network environments. Choosing the appropriate CSMA variant can better meet the needs of a specific network.
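The three idle-channel behaviors can be contrasted in a small decision sketch (a hypothetical helper; a real MAC layer would also model time slots, backoff timers, and busy-waiting):

```python
import random

def csma_decision(channel_idle, variant, p=0.5, rng=random.random):
    """What a station does at the moment it wants to transmit."""
    if not channel_idle:
        # 1-persistent keeps sensing; non-persistent waits a random time;
        # p-persistent waits for the next slot -- all defer while busy.
        return "defer"
    if variant in ("1-persistent", "non-persistent"):
        return "send"                       # idle channel: transmit at once
    if variant == "p-persistent":
        # idle channel: send with probability p, else wait for the next slot
        return "send" if rng() < p else "wait for next slot"
    raise ValueError(f"unknown variant: {variant}")

print(csma_decision(True, "1-persistent"))           # send
print(csma_decision(True, "p-persistent", p=1.0))    # p=1 behaves like 1-persistent
print(csma_decision(True, "p-persistent", p=0.0))    # p=0 always defers to the next slot
```

The variants only diverge once the channel is busy (keep sensing vs. random wait vs. next slot), which this sketch flattens into a single "defer" for brevity.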

4. CSMA/CD Protocol (Carrier Sense Multiple Access with Collision Detection):

CSMA/CD is an improved version of CSMA, which introduces a collision detection mechanism. While the device is sending data, it continues to monitor the channel. If the signal the device observes on the channel differs from the signal it is sending, a collision has occurred. After detecting a collision, the device stops sending, waits for a random amount of time, and then tries to send again.

The main thing to pay attention to here is timing: detecting collisions reliably constrains how short a frame may be relative to the propagation delay.
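Concretely, a sender can only be sure no collision occurred after the contention period 2τ (twice the end-to-end propagation delay), so a frame must take at least 2τ to transmit. That gives a minimum frame length of data-rate × 2τ. A worked example with the classic 10 Mbps Ethernet figures (25.6 µs is the textbook one-way delay for a maximum-length segment):

```python
def min_frame_length_bits(data_rate_bps, one_way_delay_s):
    """A frame must last at least the contention period 2*tau on the wire."""
    return data_rate_bps * 2 * one_way_delay_s

bits = min_frame_length_bits(data_rate_bps=10e6, one_way_delay_s=25.6e-6)
print(bits, "bits =", bits / 8, "bytes")   # 512 bits = 64 bytes
```

This is where classic Ethernet's 64-byte minimum frame size comes from: anything shorter would finish transmitting before a far-end collision could be detected.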

5. CSMA/CA Protocol (Carrier Sense Multiple Access with Collision Avoidance):

CSMA/CA is designed to avoid collisions in wireless networks. Unlike CSMA/CD, collisions cannot be easily detected by wireless networks. CSMA/CA reduces the possibility of collisions by avoiding collisions before transmission begins. It introduces a mechanism called RTS/CTS (Request to Send/Clear to Send). The sender will send an RTS frame to request the right to use the channel before sending data. The receiver returns a CTS frame, indicating that the channel is idle. Data transfer can begin.

These multiple access protocols are widely used in different network environments and requirements to improve network efficiency and reliability. Choosing an appropriate multi-access protocol can be determined based on the characteristics and requirements of the network to maximize network performance.

3. Polling access: token passing technology

Polling and Token Passing are two channel access control methods, usually used in shared-medium network environments such as local area networks. Their goal is to efficiently coordinate data transfers among multiple devices to improve network utilization and performance.

Polling access (Polling):

How it works: In polled access, a central controller is responsible for coordinating access to all devices. The controller asks the devices one by one whether they have data to send, and then allocates time to the devices according to the device's response order, so that each device can send data within its allocated time.

step:

  1. The controller asks the first device if it has data to send.
  2. If the device has data, it sends the data; if not, the controller continues to ask the next device.
  3. The controller polls each device in sequence to determine whether they have data to send.

advantage:

  • Simple and easy to implement.
  • The control is in the hands of the central controller, which can flexibly manage the equipment.

shortcoming:

  • If the device responds slowly or there are a large number of devices, it may cause increased latency and reduce real-time performance.
  • Single point of failure problem: If the central controller fails, the entire network may not function properly.

Token Passing:

How it works: In token passing technology, a special packet (token) is circulated through the network. Only the device holding the token can send data. When a device finishes sending data, it passes the token to the next device and then waits for the token to return to itself again.

step:

  1. The token is passed from one device to the next.
  2. Only the device holding the token can send data, other devices must wait for the token to arrive.

advantage:

  • Fairness: Each device has a chance to send data, and there will be no starvation.
  • Good real-time performance: there is no polling process, which reduces transmission delay.

shortcoming:

  • Token passing requires additional transmission time and may introduce some latency.
  • If the token is lost or damaged, the entire network may not function properly.

In general, polling access is suitable for smaller-scale networks and scenarios where real-time requirements are not high. The token passing technology is suitable for large-scale networks, especially those environments with high real-time requirements, such as industrial automation systems. Which method to choose usually depends on the size of the network, real-time requirements, and performance needs.

(6) Local area network

1. Basic concepts and architecture of LAN:

Basic concepts of local area network (LAN): 

A local area network refers to a computer network interconnected through high-speed data communication lines within a limited geographical scope, such as office buildings, schools, companies, etc. LAN is usually used to connect computers and devices within the same organization or unit to realize resource sharing, file transfer, printing and other functions. It has high transmission speed and low transmission delay, and is suitable for small-scale, high-performance, high-reliability network requirements.

LAN architecture:

  1. Topology: The physical layout of the LAN can adopt different topologies, including star, bus, ring, tree, etc. Each topology has its advantages and limitations, and choosing the appropriate topology depends on network size and needs.

    • Star topology: All devices are connected to a central device (such as a switch or hub). The central device is responsible for forwarding data packets and has good management and maintainability.

    • Bus topology: All devices share the same transmission line (bus), and devices communicate by sending and receiving data packets. The bus topology is simple, but the failure of one device may cause the entire network to be disrupted.

    • Ring topology: Devices are connected into a ring, data packets are passed on the ring, and each device has a fixed address. Ring topology is relatively stable, but adding or removing devices is more complex.

    • Tree topology: Devices are organized into a tree structure, which has the characteristics of both star and bus types. Tree topology provides better scalability and fault tolerance.

  2. Transmission Media: The transmission media of LAN usually include twisted pairs, optical fibers, wireless channels, etc. Different transmission media have different transmission rates, transmission distances and anti-interference capabilities. Choosing the appropriate transmission medium depends on network requirements and cost considerations.

  3. Network Devices: Commonly used network devices in a LAN include switches, hubs, routers, bridges, and wireless access points (Wireless Access Point, AP). These devices are used to connect, forward and manage data flows to ensure smooth transmission of data.

  4. Network Protocols: LAN uses various network protocols to specify the format, rules and methods of data transmission. Common LAN protocols include Ethernet, Wi-Fi (wireless LAN), TCP/IP, etc. Network protocols ensure that data can be exchanged correctly between different devices and that network communications function smoothly.

  5. Network Services and Applications: The LAN provides various network services and applications, including file sharing, print sharing, email, Web browsing, etc. These services and applications enable users to easily share resources and conduct various network activities.

The architecture of the local area network is composed of the above factors. They cooperate with each other to ensure that the network can operate efficiently and reliably to meet the needs of users. Different LANs can choose different topologies, transmission media, network devices and protocols according to specific needs to build a network environment suitable for specific scenarios.

2. Ethernet and IEEE 802.3

Ethernet is a local area network (LAN) technology that defines the standards and rules by which computers communicate in a LAN. Ethernet originated at Xerox PARC in the 1970s; Xerox, Intel and Digital Equipment Corporation (DEC) jointly published the DIX Ethernet standard in 1980, and IEEE standardized it as 802.3 in 1983. It uses the **CSMA/CD (Carrier Sense Multiple Access with Collision Detection)** protocol to allow multiple devices to communicate over the same shared transmission medium (usually twisted pair or optical fiber).

IEEE 802.3 is an Ethernet standard that defines the specifications of the physical layer and data link layer of Ethernet. The IEEE 802.3 standard was formulated by IEEE (Institute of Electrical and Electronics Engineers) and stipulates various parameters of Ethernet, including transmission rate, data frame format, signal transmission method, etc. This standard ensures that Ethernet equipment produced by different manufacturers can interoperate.

Characteristics and working principles of Ethernet:
  1. CSMA/CD protocol: Ethernet uses CSMA/CD to handle collisions. A device listens to the channel before sending; if the channel is idle it transmits, and if it detects a collision while transmitting it aborts, waits a random backoff time (binary exponential backoff), and tries again.

  2. Data frame: Ethernet transmits data in frames. Each frame contains a source address, a destination address, the payload, and a frame check sequence (a CRC used for error detection). The payload of a frame is between 46 and 1500 bytes.

  3. MAC address: Each Ethernet device has a unique MAC address (Media Access Control Address), which is used to identify the device in the LAN. A MAC address is a 48-bit binary number, usually expressed in hexadecimal.

  4. Transmission media: Ethernet can run on different transmission media, including twisted pair (such as 10BASE-T), optical fiber (such as 100BASE-FX) and coaxial cable (such as 10BASE5 and 10BASE2).
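The 48-bit MAC address from point 3 is conventionally written as six hexadecimal octets separated by colons; a minimal formatting sketch:

```python
def format_mac(mac_48bit):
    """Render a 48-bit integer as the usual colon-separated hex notation."""
    octets = mac_48bit.to_bytes(6, "big")   # 48 bits = 6 octets
    return ":".join(f"{b:02x}" for b in octets)

print(format_mac(0x001A2B3C4D5E))   # 00:1a:2b:3c:4d:5e
```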

IEEE 802.3 standard:

The IEEE 802.3 standard specifies various parameters and specifications of Ethernet, including different rates, transmission media and data frame formats. Some common IEEE 802.3 standards include:

  1. 10BASE-T: uses twisted pair cable for transmission, with a transmission rate of 10 Mbps.
  2. 100BASE-TX: uses twisted pair cable for transmission, with a transmission rate of 100 Mbps.
  3. 1000BASE-T: uses twisted pair cable for transmission, with a transmission rate of 1 Gbps.
  4. 1000BASE-SX: uses multimode fiber for transmission and has a transmission rate of 1 Gbps.
  5. 1000BASE-LX: uses single-mode fiber for transmission, with a transmission rate of 1 Gbps.
  6. 10GBASE-T: uses twisted pair cable for transmission at 10 Gbps.

The continuous development of the IEEE 802.3 standard has given Ethernet technology more choices in terms of speed, distance and transmission media, which is suitable for different network requirements. Ethernet technology is widely used in various network environments, including home networks, enterprise networks, and data centers.

3.IEEE 802.11 Wireless LAN

IEEE 802.11 is a set of Wireless LAN standards that define LAN wireless communication protocols in the 2.4 GHz and 5 GHz frequency bands. This series of standards was developed by IEEE (Institute of Electrical and Electronics Engineers), which specifies communication standards between devices in wireless networks, including communication methods between wireless access points, wireless client devices and networks.

Characteristics and working principles of IEEE 802.11:
  1. Frequency bands and transmission rates: The IEEE 802.11 standard can communicate on two frequency bands: 2.4 GHz and 5 GHz. Different sub-standards (such as 802.11b, 802.11g, 802.11n, 802.11ac, etc.) define different transmission rates and frequency band selections, allowing the appropriate standard to be selected under different needs.

  2. Infrastructure and self-organizing networks: 802.11 networks can work in infrastructure mode or self-organizing (Ad-Hoc) mode. In infrastructure mode, wireless clients connect to the wired network through wireless access points (Access Points, APs). In an ad hoc network, wireless devices can communicate directly with each other without the need for a central access point.

  3. Channel management and collision avoidance: The IEEE 802.11 network uses the CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance) protocol to avoid collisions. The device will listen to the channel before sending data. If the channel is busy, the device will wait for a random period of time before trying to send again to reduce the probability of collision.

  4. Security: The IEEE 802.11 standard provides a variety of security mechanisms, including WEP (Wired Equivalent Privacy), WPA (Wi-Fi Protected Access) and WPA2, which encrypt data transmission to prevent unauthorized devices from accessing the network.

  5. Multiple Antenna Technology (MIMO): Some 802.11 sub-standards (such as 802.11n and 802.11ac) support Multiple Input Multiple Output (MIMO) technology, using multiple antennas to increase data transfer rates and signal coverage.

Substandards of IEEE 802.11:
  1. 802.11b: Provides a transmission rate of up to 11 Mbps on the 2.4 GHz band, using DSSS (Direct Sequence Spread Spectrum) modulation technology.

  2. 802.11g: Provides a transmission rate of up to 54 Mbps on the 2.4 GHz frequency band, using OFDM (Orthogonal Frequency Division Multiplexing) modulation technology.

  3. 802.11n: Provides transmission rates up to 600 Mbps or higher on the 2.4 GHz and 5 GHz frequency bands, uses MIMO technology, and supports multi-antenna transmission.

  4. 802.11ac: Provides transmission rates of up to 1 Gbps or higher on the 5 GHz band, using more advanced MIMO technology and wider channel widths.

  5. 802.11ax (Wi-Fi 6): Provides higher transmission rates and better network performance, supports more simultaneous device connections, and offers better channel management and collision avoidance.

The IEEE 802.11 family of standards continues to evolve, introducing new technologies and improvements to meet the growing needs of wireless networks. Wireless LAN technology is widely used in mobile devices, home networks, enterprise networks, and public places, and has become an important part of modern network communications.

4. Basic concepts and principles of VLAN

Virtual LAN (VLAN) is a network technology used to divide a physical LAN into multiple logical LANs to achieve better network management and resource isolation. The following are the basic concepts and basic principles of VLAN:

basic concept:

  1. Virtual LAN (VLAN): VLAN is a logical local area network that divides physical network devices into multiple logical groups and is not restricted by physical location. Devices within each VLAN can communicate with each other, but devices between different VLANs cannot communicate directly.

  2. Switch: VLANs are typically configured and managed on switches. A switch is a key network device used to forward data frames from one port to other ports to enable communication between devices.

  3. Port: The physical interface of the switch, used to connect computers, servers, and other network devices. Each port can be assigned to one or more VLANs.

  4. VLAN ID: Each VLAN has a unique identification number, called VLAN ID. This ID is used to associate data frames with a specific VLAN.

Fundamental:

  1. VLAN classification: VLAN classification is to divide network devices into different logical groups based on their functions, departments or other criteria. For example, you can create one VLAN for the management department and another for the sales department.

  2. VLAN ID assignment: Assign a unique VLAN ID to each VLAN. This ID is a 12-bit number; usable values range from 1 to 4094 (0 and 4095 are reserved). Different switch manufacturers may impose some restrictions, but there are usually enough VLAN IDs to choose from.

  3. Port Association: Assign each switch port to one or more VLANs. This means that traffic from a device connected to a specific port will be assigned to the corresponding VLAN. For example, connect the management department's equipment to VLAN 10, and the sales department's equipment to VLAN 20.

  4. VLAN isolation: Different VLANs are usually isolated from each other, which means that devices in the same VLAN can communicate with each other, but devices in different VLANs cannot communicate directly. This isolation helps improve network security and manageability.

  5. Switch management: VLAN configuration and management are usually done on the switch by the network administrator. Administrators can add, delete or modify VLAN configurations to meet the needs of different departments or projects.

  6. Tagging and data frame processing: The switch adds a VLAN tag to the header of the data frame to indicate the VLAN to which the data frame belongs. When the switch receives the data frame, it routes the data frame to the correct VLAN based on the tag. This kind of tag is usually implemented using the 802.1Q standard.
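The 802.1Q tag from point 6 is a fixed 4-byte field: a 16-bit TPID of 0x8100, then a 16-bit TCI holding 3 priority bits (PCP), 1 DEI bit, and the 12-bit VLAN ID. A minimal packing sketch:

```python
def dot1q_tag(vlan_id, pcp=0, dei=0):
    """Build the 4-byte 802.1Q tag inserted into an Ethernet frame header."""
    if not 1 <= vlan_id <= 4094:            # 0 and 4095 are reserved
        raise ValueError("VLAN ID must be in 1..4094")
    tci = (pcp << 13) | (dei << 12) | vlan_id
    return (0x8100).to_bytes(2, "big") + tci.to_bytes(2, "big")

print(dot1q_tag(10).hex())          # 8100000a  (VLAN 10)
print(dot1q_tag(20, pcp=5).hex())   # 8100a014  (VLAN 20, priority 5)
```

The 12-bit VID field is also why the usable VLAN ID range tops out at 4094.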

Advantages of VLAN:

  • Isolation and security: VLAN can isolate different network traffic and improve network security. Data flows can only communicate within the same VLAN, reducing the risk of network attacks.

  • Resource management: VLAN allows better management of network resources, dividing devices according to functions, departments or projects, helping to configure and maintain the network more effectively.

  • Flexibility: VLANs allow network administrators to reconfigure the network as needed without changing the physical topology.

  • Bandwidth Optimization: VLANs can separate network traffic and avoid bandwidth contention between different departments or projects.

In short, VLAN is a powerful network management tool that can configure the network according to different needs and situations and improve network performance and management. It is an essential part of modern business and organizational networks.

Conclusion

It seems that the WAN is still missing..., forget it.

Origin blog.csdn.net/m0_73872315/article/details/134230014