Computer Network Notes (from typing a URL to getting the response page --> common HTTP interview questions --> TCP three-way handshake and four-way wave --> TCP timeout retransmission and other features --> IP addresses)

1. Basics: From typing in the URL to the response page

1. Parse the URL

 Generate HTTP request information

After parsing the URL, the browser determines the web server and the file name to request, and then generates the HTTP request message from this information.

 2. Query DNS for the real address

DNS server:

After the URL is parsed in the first step and the HTTP request message is generated, that message must be sent to the corresponding server, and for that we first need the server's IP address. The IP address is obtained from the mapping between the web server's domain name and its IP address, and this mapping is stored on DNS servers.

Domain name structure of DNS server:

Domain names in DNS are separated by dots, like www.alibaba.com. The (usually implicit) trailing dot represents the root domain.

DNS domain name levels get higher toward the right: the root DNS servers serve ".", the top-level domain DNS servers serve .com, and the authoritative DNS server serves alibaba.com.

DNS server resolves domain name process:

1. First, the client sends a DNS request asking for the IP address of www.alibaba.com to the local DNS server.

2. After receiving the client's request, the local DNS server first looks up www.alibaba.com in its cache. If found, it returns the IP directly. If not, the local DNS server asks a root DNS server, which points it to the .com top-level domain DNS server. The local DNS server then queries the .com top-level domain DNS server, which points it to the authoritative DNS server for alibaba.com. Finally, the authoritative DNS server tells the local DNS server the IP address being queried.

 cache:

First check whether the browser itself has a cache entry for the domain name. If not, check the operating system's cache; if not, check the hosts file. Only if none of these has the answer does the query go to the local DNS server.
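As a quick illustration, here is a minimal Python sketch of the final resolution step, asking the system resolver (which itself walks the cache / hosts / local DNS server chain described above); the host name is only an example:

```python
# Minimal sketch: ask the system resolver for the IP address of a host name.
import socket

hostname = "www.alibaba.com"   # example host name
addr_info = socket.getaddrinfo(hostname, 80, proto=socket.IPPROTO_TCP)
for family, _, _, _, sockaddr in addr_info:
    print(family.name, sockaddr[0])   # e.g. AF_INET 203.0.113.10
```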

3. Protocol stack

After obtaining the corresponding IP address through DNS, the transmission service can be handed over to the protocol stack in the operating system.

The protocol stack is divided internally into several parts, each responsible for a different task, with a fixed upper/lower relationship: the upper part delegates work to the lower part, and the lower part carries out the work it is entrusted with.

First, the browser delegates work to the protocol stack by calling the Socket library.

There are two protocols in the upper part of the protocol stack, namely TCP and UDP protocols. They are used by the application layer to perform operations of sending and receiving data.

The IP protocol handles sending and receiving network packets. When transmitting data across the Internet, IP is responsible for splitting data that is too large into smaller packets and sending those network packets to the other party.

ICMP: used to return errors and control information during the transmission of network packets

ARP: Query the corresponding MAC address based on the IP address.

The network card driver below IP controls the network card hardware, and the network card itself, at the lowest level, performs the actual sending and receiving, that is, transmitting signals on the network cable.
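A minimal sketch of this delegation from the application's point of view: the code below calls into the Socket library the way a browser would, and TCP, IP and the driver below do the rest (the host name and path are examples only):

```python
# Minimal sketch: hand an HTTP request to the OS protocol stack via the Socket library.
import socket

host, path = "www.example.com", "/"   # example values
ip = socket.getaddrinfo(host, 80, proto=socket.IPPROTO_TCP)[0][4][0]

with socket.create_connection((ip, 80), timeout=5) as sock:   # TCP handshake happens here
    request = (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Connection: close\r\n\r\n"
    )
    sock.sendall(request.encode())        # TCP/IP and the network card driver take over
    response = b""
    while chunk := sock.recv(4096):       # read until the server closes the connection
        response += chunk

print(response.split(b"\r\n", 1)[0].decode())   # status line, e.g. HTTP/1.1 200 OK
```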

4. TCP

HTTP is transmitted based on TCP protocol

    1. Source port number and destination port number

        Without these two port numbers, the data would not know which application it should be delivered to (the port number identifies the receiving application)

    2. Sequence number

        Used to solve the problem of out-of-order packets

    3. Acknowledgment number

        Its purpose is to confirm whether the sent data has been received; used to solve the problem of packet loss

    4. Control bits

        SYN: request a connection

        ACK: acknowledge receipt

        RST: reset the connection

        FIN: end the connection

    5. Window size

       TCP needs flow control. Each party declares a window (buffer size) to indicate its current processing capability.

       Congestion control also uses the window to limit one's own sending rate.
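A small sketch of how these fields sit at the start of the TCP header, using Python's struct module (the field values are made-up examples):

```python
# Sketch: pack the fixed 20-byte TCP header fields discussed above.
import struct

src_port, dst_port = 50000, 80    # example source / destination ports
seq, ack_num       = 1000, 0      # sequence number / acknowledgment number
offset_byte        = 5 << 4       # header length = 5 * 4 bytes, no options
flags              = 0x02         # SYN bit set (ACK=0x10, FIN=0x01, RST=0x04)
window             = 65535        # receive window the sender advertises
checksum, urg_ptr  = 0, 0         # checksum is normally filled in later

header = struct.pack("!HHIIBBHHH",
                     src_port, dst_port, seq, ack_num,
                     offset_byte, flags, window, checksum, urg_ptr)
print(len(header), header.hex())  # 20 <hex bytes>
```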

Three-way handshake: A TCP connection is required before HTTP data transmission, which is a three-way handshake.

 At the beginning, the server is in the LISTEN state. The client actively initiates the connection by sending a SYN to the server and enters the SYN-SENT state. The server then replies with its own SYN plus an ACK acknowledging the client's SYN, and enters the SYN-RCVD state. When the client receives the server's SYN+ACK, it sends back an ACK to confirm and enters the ESTABLISHED state. Once the server receives that ACK, it also enters the ESTABLISHED state, because by then both sides have sent and received.

The TCP three-way handshake mainly confirms that both parties have the ability to send and receive.

Segmentation: when the HTTP request message is too long, TCP splits the data into smaller segments and sends them one by one.

 Message generation:

 5. IP

When TCP performs operations such as connecting, disconnecting, and sending or receiving data, it entrusts the IP module to encapsulate the data into network packets and send them to the communication peer.

 Source address: the IP address of the host that sends the data

Destination address: the IP address of the host that receives the data

Because HTTP is carried over TCP, the protocol number in the IP header is 06 (hexadecimal), which identifies TCP.

 

 6. MAC address

The MAC header needs to contain the sender's MAC address and the receiver's MAC address. Used for transmission between two points.

Sender's MAC address: The MAC address is written into the ROM when the network card is produced. You only need to read this value and write it into the MAC header.

Receiver's MAC address: Use the ARP protocol to broadcast to all Ethernet devices. If the other party is in the same subnet as yourself, you can obtain the MAC address.

ARP cache: first query the ARP cache. If the corresponding MAC address is in the cache, it is returned directly; if not, an ARP broadcast query is performed.

7. Network card

A network packet is just a string of binary information stored in memory and cannot be sent to the other party directly. The digital information must therefore be converted into electrical signals so that it can be transmitted over the network cable.

The network card is responsible for this. To control the network card, a network card control program is required.

After the network card driver obtains the network packet, it copies it into the buffer inside the network card, prepends the preamble and start frame delimiter to the front, and appends a frame check sequence (FCS) at the end to detect errors.

The start frame delimiter also indicates the beginning of the packet.

The FCS at the end is used to check whether the packet is damaged

Finally, the network card converts the network packets into electrical signals and sends them out.

8. Switch

The switch works at the MAC layer

It first converts the received electrical signal into a digital signal, then uses the packet's FCS to check for errors. If there is no problem, the packet is placed directly into the buffer.

(A computer's network card, by contrast, has its own MAC address, and if the destination MAC address of a received packet does not match its own MAC address, the packet is discarded directly.)

The switch's port does not check the receiver address, but directly puts all network packets into the buffer. Therefore, unlike network cards, switch ports do not have MAC addresses.

The switch's MAC address table mainly contains two pieces of information:

     1. One is the MAC address of the device

     2. The other is which port the device is connected to

Therefore, the switch looks up the MAC address according to the MAC address table, and then sends the signal to the corresponding port.

If the switch does not find the corresponding MAC address, the switch will send the signal to all ports. Only the corresponding recipient can receive the package.
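A toy sketch of this forwarding decision (the MAC addresses and port numbers are invented for illustration):

```python
# Toy sketch of a switch's forwarding decision based on its MAC address table.
mac_table = {
    "aa:bb:cc:dd:ee:01": 1,   # device MAC address -> switch port (example entries)
    "aa:bb:cc:dd:ee:02": 2,
}
all_ports = [1, 2, 3, 4]

def forward(dst_mac, in_port):
    port = mac_table.get(dst_mac)
    if port is not None:
        return [port]                               # known MAC: send to that port only
    return [p for p in all_ports if p != in_port]   # unknown MAC: flood all other ports

print(forward("aa:bb:cc:dd:ee:02", in_port=1))      # [2]
print(forward("aa:bb:cc:dd:ee:99", in_port=1))      # [2, 3, 4] (flooded)
```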

9. Router

The difference between a router and a switch

Network packets pass through the switch and reach the router

The difference between the two:

1. Routers are designed around IP and are commonly called layer-3 (network-layer) devices; each port has both an IP address and a MAC address

2. Switches are designed around Ethernet and are commonly called layer-2 (data-link-layer) devices; their ports do not have MAC addresses.

Router basics

1. A router's ports have MAC addresses, so they can act as Ethernet senders and receivers; they also have IP addresses. In this sense, a router port is similar to a network card.

2. When forwarding a packet, the router port first receives the incoming Ethernet packet, then looks up the forwarding target in the routing table, and then forwards the packet out of the corresponding port.

How a router receives a packet

First, the electrical signal reaches the port, and a module in the router converts it into a digital signal, which is then checked for errors using the FCS at the end.

1. If there is no problem, the router checks the MAC header of the network packet to see whether the packet is addressed to itself; if so, it is placed in the buffer.

2. The router's ports all have MAC addresses and only accept packets that match itself.

Query the routing table to determine the output port

 

10. How MAC/switches and IP/routers divide the work

The MAC address is obtained within the same subnet, and the IP address points to different hosts in different subnets.

Switches are mainly used within the same subnet

A router connects different subnets and acts as the exit gateway of a subnet.

_____________________________________________________________________________

2. HTTP common interview questions

1. Common concepts of HTTP

HTTP: Hypertext Transfer Protocol

In layman’s terms:

HTTP is a convention and specification for transmitting video, text, pictures and other information between two points in the computer world.

     2. Common HTTP status codes

 404: The resource requested by the client does not exist on the server or is not found.

500: An internal server error occurred

   3. Common fields of HTTP

     1) host: When the client sends a request, it is used to specify the server domain name (www.alibaba.com)

With the Host field, requests can be directed to different websites hosted on the same server.

     2) Content-Length: in the server's response, indicates the length of the returned data.

     3) Connection: often used by the client to ask the server to use the HTTP long-connection (persistent connection) mechanism, so the TCP connection can be reused for other requests.

                    Characteristic of HTTP long connections: as long as neither side explicitly asks to disconnect, the TCP connection is kept open.

     4) Content-Type: used when the server responds, telling the client what format the response data is.

       5) Content-Encoding: Used to describe the compression method of data and tell the client what compression format the server uses.
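A small sketch exercising these fields with Python's standard http.client module (the host name is an example):

```python
# Sketch: send a request and inspect the common header fields listed above.
import http.client

conn = http.client.HTTPConnection("www.example.com", 80)   # Host header is derived from this
conn.request("GET", "/", headers={"Connection": "keep-alive"})
resp = conn.getresponse()

print(resp.status, resp.reason)                                   # e.g. 200 OK
print("Content-Type:",     resp.getheader("Content-Type"))        # format of the body
print("Content-Length:",   resp.getheader("Content-Length"))      # length of the body
print("Content-Encoding:", resp.getheader("Content-Encoding"))    # compression used, if any
resp.read()
conn.close()
```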

 2. GET and POST

the difference:

GET: Get the specified resource from the server

POST: process the specified resource according to the request payload (the message body)

Safe and idempotent:

Safety: no matter how many times the client sends the request, it does not destroy resources on the server

Idempotent: Performing the same operation multiple times will result in the same result

So GET is safe and idempotent and can be cached

POST is not safe and not idempotent, and in most cases cannot be cached

From a developer's perspective:

  • You can use the GET method to implement requests to add or delete data. The GET method implemented in this way is naturally not safe and idempotent.
  • You can use the POST method to implement data query requests. The POST method implemented in this way is naturally safe and idempotent.

3. HTTP caching technology

Two implementation methods: forced caching and negotiated caching

  1. Forced caching: As long as the browser determines that the cache has not expired, it will directly use the browser's local cache.

          Forced caching is mainly implemented using two HTTP header fields, which are used to indicate the validity period of the cache in the browser:

              Cache-control: relative time 

              Expires: absolute time, Cache-control has higher priority than Expires

    2. Negotiate cache

          The cache negotiates with the server to determine whether to allow the browser to use the local cache.

Specific implementation of negotiated cache

 Note that negotiated caching only kicks in when the forced cache (Cache-Control) does not hit, that is, when it has expired.
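A sketch of the negotiation step using a conditional request with ETag / If-None-Match (the host, path and ETag value are examples; Python's standard http.client is used):

```python
# Sketch: negotiated caching via ETag / If-None-Match.
import http.client

conn = http.client.HTTPConnection("www.example.com", 80)

# First request: remember the validator the server returns along with the body.
conn.request("GET", "/logo.png")
resp = conn.getresponse()
etag = resp.getheader("ETag")          # e.g. "abc123"
cached_body = resp.read()              # stored in the local cache together with the ETag

# Later, after Cache-Control has expired: revalidate instead of re-downloading.
conn.request("GET", "/logo.png", headers={"If-None-Match": etag or ""})
resp2 = conn.getresponse()
if resp2.status == 304:                # Not Modified: the local copy is still valid
    data = cached_body
else:                                  # changed: use and re-cache the new body
    data = resp2.read()
conn.close()
```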

4. HTTP features

 Advantages:

Simple, flexible, easy to expand, widely used and cross-platform

Disadvantages: 

        1. Statelessness is a double-edged sword

            benefit:

The server does not need to remember HTTP state, so no additional resources are needed to record state information. This reduces the burden on the server and leaves more CPU and memory available to serve requests.

            harm:

                 Since the server has no memory of previous requests, it is troublesome to complete operations that span several related requests.

                      Cookie technology can be used to solve this problem

                     Cookie changes the HTTP status by writing Cookie information in the request and response messages.

        2. The double-edged sword of clear text transmission

           benefit:

The contents are easy to read and convenient for debugging.

          harm:

All HTTP information is exposed in plain text; it is as if the information were running naked, so it can easily be stolen.

        3. Unsafe

            1) Use clear text transmission: the content may be eavesdropped (eavesdropping risk)

            2) Failure to verify the identity of the communicating parties may lead to disguise (risk of impersonation)

            3) The integrity of the message cannot be proven and it may be tampered with. (Tampering risk)

HTTP performance:

      1. Long connection:

As long as neither side explicitly asks to disconnect, the connection stays open. This avoids repeated TCP connection setup and teardown and saves resources. It is enabled through the header field Connection: Keep-Alive.

      2. Pipeline network transmission: When HTTP long connections are implemented, pipeline transmission is possible.

                                  It allows the client to send multiple requests without waiting for each response. However, the server must send the responses back in the order the requests were received, so when the first request A takes a long time to process, the later responses are blocked behind it at the head of the server's queue.

Therefore, HTTP/1.1 pipelining solves head-of-line blocking on the request (browser) side, but not on the response (server) side.

5. HTTP and HTTPS

   1. The difference between the two:

          1) Security: HTTP is the Hypertext Transfer Protocol and transmits data in plain text, while HTTPS fixes this insecurity by inserting the SSL/TLS security protocol between TCP and HTTP, so that messages are transmitted encrypted.

          2) Connection establishment: HTTP is relatively simple; plain-text transmission can start right after the TCP three-way handshake, while HTTPS additionally requires an SSL/TLS handshake after the TCP three-way handshake before messages can be transmitted.

          3) Port: The default port for HTTP is 80 and the default port for HTTPS is 443

          4) Certificate: HTTPS needs to apply for a digital certificate from the CA

 2. HTTPS solves some problems of HTTP

       Disadvantages of HTTP:

                             Eavesdropping risk: HTTP plain-text transmission easily leads to information leakage

                              Tampering risk: for example, forced insertion of spam advertisements

                              Impersonation risk: for example, impersonating a website to defraud users of money.

     HTTPS adds the SSL/TLS protocol between TCP and HTTP to address the points above:

                   1. Information encryption: Encrypt messages so that the data cannot be obtained (solve the risk of eavesdropping)

                   2. Verification mechanism: the communication content cannot be tampered with, and the communication content cannot be displayed normally if it is changed (solve the risk of tampering)

                   3. Identity verification: the website's certificate proves it is the genuine site (for example, the real Taobao). (Addresses the risk of impersonation)

      How HTTPS solves the above three risks

               1. Hybrid encryption method achieves information confidentiality. Addresses eavesdropping risk

               2. The digest algorithm achieves integrity and generates unique fingerprints for data, solving the risk of tampering.

               3. Put the server public key into the digital certificate to solve the risk of impersonation.

      How to establish HTTPS connection           

               Basic process:

                  The client asks the server for and verifies the server's public key

                  The two parties negotiate to generate a session key

                  Both parties communicate encrypted using session keys
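A minimal client-side sketch of this with Python's ssl module: the default context verifies the server's certificate (and thus its public key), and the handshake negotiates the session keys used afterwards for the encrypted channel (the host name is an example):

```python
# Sketch: establish a TLS connection, verifying the server's certificate.
import socket, ssl

hostname = "www.example.com"                 # example host
ctx = ssl.create_default_context()           # verifies the certificate chain and host name

with socket.create_connection((hostname, 443)) as tcp_sock:
    with ctx.wrap_socket(tcp_sock, server_hostname=hostname) as tls_sock:
        # Handshake is complete here: identity verified, session keys agreed.
        print(tls_sock.version())            # e.g. TLSv1.3
        tls_sock.sendall(b"GET / HTTP/1.1\r\nHost: " + hostname.encode()
                         + b"\r\nConnection: close\r\n\r\n")
        print(tls_sock.recv(64))             # start of the response, sent encrypted on the wire
```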

        How HTTPS ensures application data integrity

             The implementation is divided into two layers: handshake protocol and recording protocol:

  • The TLS handshake protocol is the TLS four-way handshake process we mentioned earlier. It is responsible for negotiating encryption algorithms and generating symmetric keys. This key is subsequently used to protect application data (ie, HTTP data);
  • The TLS record protocol is responsible for protecting application data and verifying its integrity and origin, so HTTP data encryption uses the record protocol;


 6. HTTP/1.1, HTTP/2 and HTTP/3

      What performance improvements does HTTP/1.1 have compared to HTTP/1.0?

          1. Use long connections to reduce unnecessary repeated connections. Save resources

          2. Use pipelined transmission to solve head-of-line blocking on the request (browser) side.

 Optimizations in HTTP/2

 1. Header compression

      When the browser sends multiple requests whose headers are the same or similar, the protocol eliminates the duplicated parts

 2. Binary format

      HTTP/2 no longer uses the plain-text messages of HTTP/1.1; it fully adopts a binary format. Both the header information and the data body are binary, and together they are called frames: Headers frames and Data frames.

3. Concurrent transmission

      Multiple requests sent by the browser can share a TCP connection

One TCP connection carries multiple Streams; a Stream can contain one or more Messages, and a Message corresponds to a request or response in HTTP/1, consisting of HTTP headers and a body.

4. Server push

HTTP/2 also improves the traditional "request-response" working model to a certain extent. The server no longer responds passively and can actively send messages to the client.

Both the client and the server can create Streams, and their Stream IDs differ: Streams created by the client have odd IDs, while Streams created by the server have even IDs.

3. TCP three-way handshake and four-way wave 

 1. Basic understanding of TCP

   Source port number and destination port number:

Used to confirm which application the data packet is sent to

   Sequence number:

When a connection is established, a random number generated by the computer is used as the initial value and is transmitted to the receiving host in the SYN packet. Each time data is sent, the number of bytes sent is added to it. It is used to solve the problem of out-of-order packets.

   Acknowledgment number:

Indicates the sequence number of the next data expected to be received. When the sender receives this acknowledgment, it considers all data up to that sequence number to have been received normally. It is used to solve the problem of packet loss.

   Control bit:

                ACK When this control bit is 1, it indicates that the confirmation response field is valid.

                 RST When this control bit is 1, it means that an exception occurs in the TCP connection and must be forced to interrupt.

                 SYN When this control bit is 1, it indicates that you want to establish a connection and set the initial value in its sequence number field.

                 FIN When this control bit is 1, it indicates disconnection

 Why use TCP protocol

Because the TCP protocol is a reliable data transmission service that works at the transport layer, it can ensure that the received network packets are undamaged, gap-free, non-redundant and orderly.

What is TCP?

 TCP is a connection-oriented, reliable, byte stream-based transport layer communication protocol.

What is a TCP connection?

A TCP connection is the combination of the state information used to guarantee reliability and maintain flow control, including the Sockets (IP addresses and ports), the sequence numbers and the window sizes.

 How to uniquely determine a TCP connection?

Source address and destination address: This field is in the IP header and is used to send it to the other host through the IP protocol.

Source port and destination port: This field is in the TCP header and is used to send it to the application in the other host.

 Maximum number of TCP connections:

Number of client IPs × number of client ports = 2^32 × 2^16 = 2^48 (theoretical upper bound)

The difference between UDP and TCP

 the difference:

        1. Number of connections: TCP needs to establish a connection before performing communication services, but UDP does not need to establish a connection.

        2. Service objects: TCP can only provide one-to-one services, while UDP can provide one-to-many and many-to-one services.

        3. Reliability: TCP is for reliable transportation. Data can be error-free, not lost, not repeated, and arrives in order.

                          UDP is a best effort delivery and does not guarantee reliable shipping.

        4. Features: TCP has features such as congestion control and flow control to ensure data security. UDP does not

        5. Transmission method: TCP is a byte-stream transmission without message boundaries, but it guarantees order and reliability.

                             UDP is sent packet by packet and has boundaries, but problems such as packet loss may occur.

        6. Fragmentation: when TCP data is larger than the MSS, it is split into segments at the transport layer; if a segment is lost in transit, only that lost segment needs to be retransmitted.

       Application scenarios of TCP and UDP

TCP: FTP file transfer, HTTP and HTTPS, etc.

UDP: Communication with a small total package size, video, audio and other communications.

       Can TCP and UDP share the same port?

Yes, they can. The purpose of the transport-layer "port number" is to distinguish the data packets of different applications on the same host.

2. TCP connection 

Three-way handshake

   The client sends a packet with the SYN bit set to the server to request a connection. This is the first handshake.

   The server replies with a packet carrying SYN and ACK, acknowledging the client and requesting a connection in the other direction. This is the second handshake.

   The client sends a packet carrying ACK to acknowledge the response, and the connection is established. This is the third handshake.

Only a three-way handshake can confirm that both the client and the server are able to send and to receive.

It is mainly through the three-way handshake that the Sockets, sequence numbers and window sizes can be initialized and the connection established (which is what a TCP connection means)

     Mainly reflected in the following three aspects:

               The three-way handshake prevents historical (stale) connections from being initialized.

               A three-way handshake is required to synchronize the serial numbers of both parties.

              A three-way handshake can avoid wasting resources

Reasons for not using "two-way handshake" and "four-way handshake":

  • "Two handshakes": It cannot prevent the establishment of historical connections, which will cause a waste of resources on both sides, and cannot reliably synchronize the serial numbers of both parties;
  • "Four-way handshake": The three-way handshake is the theoretical minimum to establish a reliable connection, so there is no need to use more communication times.

Why is the initialization sequence number different every time a TCP connection is established?

  1. In order to prevent historical messages from being accepted by the next connection with the same four-tuple (main reason)

   2. To prevent packets with sequence numbers forged by attackers from being accepted by the other party

Since the IP layer can fragment packets, why does the TCP layer still segment data by MSS? Because IP has no retransmission mechanism: if a single IP fragment is lost, the entire TCP segment (all of its fragments) must be retransmitted, whereas with MSS-sized TCP segments only the lost segment needs to be resent.

 What happens if the first handshake is lost?

  • When the client has timed out and retransmitted the SYN message 3 times (tcp_syn_retries is 3 here, so the maximum number of retransmissions has been reached), it waits for a while (twice the previous timeout). If it still does not receive the server's second handshake (the SYN-ACK message), the client breaks off the connection.

What happens if the second handshake is lost?

  • The client retransmits the SYN message, i.e. the first handshake. The maximum number of retransmissions is determined by the kernel parameter tcp_syn_retries;
  • The server retransmits the SYN-ACK message, i.e. the second handshake. The maximum number of retransmissions is determined by the kernel parameter tcp_synack_retries.

What happens if the third handshake is lost?

  • When the server has timed out and retransmitted the SYN-ACK message twice (tcp_synack_retries is 2 here, so the maximum number of retransmissions has been reached), it waits for a while (twice the previous timeout). If it still does not receive the client's third handshake (the ACK message), the server breaks off the connection.
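The retry counts mentioned above are Linux kernel parameters; a quick sketch (Linux only) for checking their current values:

```python
# Sketch (Linux only): read the kernel's handshake retry parameters.
def read_sysctl(name: str) -> int:
    with open(f"/proc/sys/net/ipv4/{name}") as f:
        return int(f.read().strip())

print("tcp_syn_retries    =", read_sysctl("tcp_syn_retries"))     # client SYN retries
print("tcp_synack_retries =", read_sysctl("tcp_synack_retries"))  # server SYN-ACK retries
```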

Four-way wave

    The client sends a data packet carrying FIN to the server to indicate disconnection

    The server sends a data packet carrying ACK to the client to indicate receipt of the request. The confirmation sequence number is the received sequence number plus 1.

    The server then sends a FIN packet to the client to indicate it is disconnecting

    The client sends a data packet carrying ACK to the server to indicate receipt of the reply, formally disconnects, and sets the confirmation sequence number to the received sequence number plus 1

Why it takes four waves to disconnect

  When the client sends a FIN packet to request disconnection, the server first replies with an ACK to confirm it received the request. However, the server may still have unprocessed data at this point; only after that data is handled can it send its own FIN to the client. This is why the four waves cannot be reduced to three.

What happens if the first wave is lost?

Trigger timeout retransmission mechanism

  • When the client has timed out and retransmitted the FIN message 3 times (tcp_orphan_retries is 3 here, so the maximum number of retransmissions has been reached), it waits for a while (twice the previous timeout). If it still does not receive the server's second wave (the ACK message), the client breaks off the connection.

What happens if the wave is lost the second time?  

  • When the client has timed out and retransmitted the FIN message 2 times (tcp_orphan_retries is 2 here, so the maximum number of retransmissions has been reached), it waits for a while (twice the previous timeout). If it still does not receive the server's second wave (the ACK message), the client breaks off the connection.

What happens when the third wave is lost?

  • When the server has retransmitted the third-wave (FIN) message 3 times (tcp_orphan_retries is 3 here, so the maximum number of retransmissions has been reached), it waits for a while (twice the previous timeout). If it still does not receive the client's fourth wave (the ACK message), the server breaks off the connection.
  • Because the client closed the connection with the close function, its FIN_WAIT_2 state has a time limit. If the client still has not received the server's third wave (the FIN message) within tcp_fin_timeout, the client breaks off the connection.

What happens when the fourth wave is lost?

  • When the server has retransmitted the third-wave (FIN) message 2 times (tcp_orphan_retries is 2 here, so the maximum number of retransmissions has been reached), it waits for a while (twice the previous timeout). If it still does not receive the client's fourth wave (the ACK message), the server breaks off the connection.
  • After receiving the third wave, the client enters the TIME_WAIT state and starts a timer of 2MSL. If it receives the third wave (FIN message) again during this period, it resets the timer; once 2MSL has passed, the client closes the connection.

———————————————————————————————————————————

4. TCP timeout retransmission, sliding window, flow control and congestion control 

1. TCP retransmission

One of the ways TCP achieves reliable transmission is through sequence numbers and acknowledgment numbers.

    1. Timeout retransmission

          When the client sends a data packet to the server and does not receive the other side's ACK acknowledgment within a specified time, it resends the data packet. This is the retransmission mechanism.

       Generally, packet loss occurs in two situations:

            1. Packet loss

            2. Acknowledgment response lost

RTT is the difference between the time the data is sent and the time the confirmation is received.

RTO is the timeout retransmission time 

There are two ways the RTO can be badly chosen:

   One is that the RTO is too large, which is inefficient: a lost packet waits a long time before it is retransmitted.

   The other is that the RTO is too small: packets may be retransmitted even though they were not lost, which adds to network congestion.

Therefore, in summary, the RTO should be slightly larger than the packet round-trip time RTT.
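As a concrete illustration, here is a sketch of the standard RTT smoothing used to derive an RTO slightly larger than the RTT (the formulas follow RFC 6298; the sample RTT values are made up):

```python
# Sketch: estimate RTO from RTT samples using RFC 6298 smoothing (example values).
ALPHA, BETA, K = 1 / 8, 1 / 4, 4

srtt = rttvar = None
for rtt in [0.100, 0.120, 0.090, 0.300]:        # measured RTTs in seconds (made up)
    if srtt is None:                            # first measurement
        srtt, rttvar = rtt, rtt / 2
    else:
        rttvar = (1 - BETA) * rttvar + BETA * abs(srtt - rtt)
        srtt = (1 - ALPHA) * srtt + ALPHA * rtt
    rto = max(1.0, srtt + K * rttvar)           # RFC 6298 also applies a 1-second floor
    print(f"rtt={rtt:.3f}s  srtt={srtt:.3f}s  rto={rto:.3f}s")
```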

   2. Fast retransmission

       It is not driven by time, but by data.

 Therefore, the fast retransmission mechanism retransmits the lost segment before the timer expires, once three duplicate ACKs are received in a row.

There is a problem with fast retransmission: the sender does not know whether it should retransmit only one segment or all of the segments sent after it.

     3. Sack method

SACK: selective acknowledgment

It adds a SACK field to the TCP header options, which tells the sender which ranges of data have already been received, so the sender knows which data has arrived and which has not. With this information, only the missing data needs to be retransmitted.

    4. D-SACK method

It can be seen that D-SACK brings several benefits:

  1. It lets the sender know whether the packet it sent was lost or whether the receiver's ACK was lost;
  2. It lets the sender know whether its packet was delayed by the network;
  3. It lets the sender know whether its packet was duplicated in the network;

 2. Sliding window

If every packet sent must be acknowledged before the next one can be sent, then the longer the round-trip time, the lower the communication efficiency.

Therefore, the window concept is introduced. The window size refers to the maximum value that can continue to send data without waiting for a confirmation response.

The window is buffer space allocated by the operating system. The sender must keep sent data in the buffer until the acknowledgment arrives; once the data is acknowledged, it can be removed from the buffer.

There is a field window in the TCP header, which means the window size. The window size is determined by the receiver.

This field is where the receiver tells the sender how much space there is left in the buffer to receive data. So the sending end can send data based on the processing capabilities of the receiving end. It will not cause the receiving end to be unable to process it.

The size of the receive window is approximately equal to the size of the send window.

3. Flow control

TCP provides a mechanism that allows the sender to control the amount of data sent based on the actual receiving capabilities of the receiver. This is flow control.

The window field in the confirmation message sent by the receiver can be used to control the sender window size, thereby affecting the sender's sending rate. If the window field is set to 0, the sender cannot send data.

  1) The relationship between the operating system buffer and the sending window and receiving window

The number of bytes stored in the send window and receive window are stored in the buffer in the operating system, and the size of the buffer will be adjusted by the operating system.

4. Congestion control

Congestion occurs when, over a period of time, the demand for some resource in the network exceeds the amount of that resource available, and network performance deteriorates. Congestion control aims to keep more requests from pouring into an already congested network; its purpose is to prevent the sender's data from filling the entire network.

congestion window

The congestion window is a state variable maintained by the sender. It changes depending on the congestion level of the network

Send window swnd = min(cwnd, rwnd). Generally, the send window is approximately equal to the receive window rwnd.

How does cwnd change?

As long as there is no network congestion in the network, cwnd will grow larger

If there is network congestion, it will become smaller.

How to determine network congestion?

If data packets are lost and the retransmission mechanism is initiated, it will indicate network congestion.

Four algorithms for congestion control:

        1. Slow start: cwnd grows exponentially, 1, 2, 4, 8, 16, ..., as long as it does not exceed the slow start threshold ssthresh.

        2. Congestion avoidance: once cwnd reaches ssthresh, it grows by 1 at a time instead: 11, 12, 13, 14, ...

        3. Congestion occurrence: when a retransmission is triggered, there are two cases

                        1. Timeout retransmission                      

  • ssthresh is set to cwnd/2;
  • cwnd is reset to 1 (restored to the cwnd initial value; the initial value is assumed to be 1 here)

                        2. Fast retransmission

  • cwnd = cwnd/2, that is, set to half of its previous value;
  • ssthresh = cwnd;
  • enter the fast recovery algorithm

        4. Quick recovery

Fast recovery is generally used together with fast retransmission.

  • Congestion window  cwnd = ssthresh + 3 (3 means confirmation that 3 packets were received);
  • Retransmit lost packets;
  • If a duplicate ACK is received again, cwnd is increased by 1;
  • When an ACK for new data is received, cwnd is set to the ssthresh value from the first step. The reason is that an ACK for new data means all the data covered by the duplicate ACKs has been received, the recovery process is over, and the connection returns to its pre-recovery state, i.e., it enters congestion avoidance again;
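A toy simulation tying these rules together (the loss events, initial values, and thresholds are made up for illustration):

```python
# Toy sketch: cwnd evolution under slow start / congestion avoidance,
# with the timeout and fast-retransmit reactions described above.
cwnd, ssthresh = 1, 8              # made-up initial values

def on_ack_round():
    """One RTT's worth of ACKs with no loss."""
    global cwnd
    if cwnd < ssthresh:
        cwnd *= 2                  # slow start: exponential growth
    else:
        cwnd += 1                  # congestion avoidance: linear growth

def on_timeout():
    """Loss detected by a retransmission timeout."""
    global cwnd, ssthresh
    ssthresh = max(cwnd // 2, 1)
    cwnd = 1                       # back to slow start

def on_fast_retransmit():
    """Loss detected by three duplicate ACKs."""
    global cwnd, ssthresh
    cwnd = max(cwnd // 2, 1)
    ssthresh = cwnd
    cwnd = ssthresh + 3            # enter fast recovery

for rtt in range(6):
    on_ack_round()
    print("rtt", rtt, "cwnd =", cwnd)
on_timeout()
print("after timeout: cwnd =", cwnd, "ssthresh =", ssthresh)
```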

———————————————————————————————————————————

5. IP basic knowledge 

 1. Basic understanding of IP

    The network layer implements communication between hosts. Also called point-to-point communication.

    What is the difference between the network layer (IP) and the data link layer (network interface layer) (MAC)?

IP is responsible for communication between two networks that are not directly connected, and MAC is responsible for communication between two directly connected devices.

Only when the two work together can host-to-host communication be achieved. Along the path, the source and destination IP addresses remain unchanged,

while the source and destination MAC addresses change at every hop.

2. Basic knowledge of IP

       1) Classful addresses

                   

A host number of all 1s designates all hosts on a given network and is used for broadcasting.

A host number of all 0s designates the network itself.

Local broadcast: a broadcast within the host's own network; all hosts on that network receive the packet

Directed broadcast: a broadcast sent to a different network is called a directed broadcast

 Class D addresses are often used for multicasting: sending packets to all hosts in a specific group.

Advantages of IP classification:

    The network number and host number can be identified quickly, which makes address handling simple and direct.

Disadvantages: there is no further address hierarchy within the same network,

           and the fixed class sizes do not match real networks well

   2) Classless addressing (CIDR)

Two forms to determine the host number and network number

        1. Use /x to distinguish the host number and network number

         2. Use the provided subnet mask to determine

 Subnetting:

       Take part of the original host number as the subnet number, and use the remaining bits as the host number.
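A quick sketch of both notations with Python's ipaddress module (the address block is an example):

```python
# Sketch: CIDR /x notation, the equivalent subnet mask, and a simple subnetting step.
import ipaddress

net = ipaddress.ip_network("192.168.10.0/24")   # example block: 24 network bits
print(net.netmask)                              # 255.255.255.0 (equivalent subnet mask)
print(net.num_addresses)                        # 256 addresses in this network

# Borrow 2 bits from the host part to create 4 subnets of /26 each.
for subnet in net.subnets(prefixlen_diff=2):
    print(subnet)                               # 192.168.10.0/26, .64/26, .128/26, .192/26
```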

 Public IP address and private IP address

       Private IP addresses: used inside homes, schools, Internet cafés, companies, and so on.

       Public IP addresses: used for hosts that must be reachable from the whole Internet (for example, websites everyone needs to visit); each public IP address is globally unique.

IP address and routing control

Packets are routed to specific IP addresses.

 ARP protocol

         

 

ICMP: Internet Control Message Protocol

    Query messages: used for diagnosis (the query message types)

    Error messages: used to notify the source of the cause of an error (the error message types)


Origin blog.csdn.net/weixin_55347789/article/details/131664919