[Computer Network] Computer Network (Eighth Edition) written by Xie Xiren - all the answers you want are here

Computer Network: summary of knowledge points (items with a question mark are exam questions; those without are plain knowledge points)

Article Directory


Questions: 1-14: Chapter 1: Overview
   15-21: Chapter 2: Physical Layer
   22-32: Chapter 3: Data Link Layer
   33-52: Chapter 4: Network Layer
   53-65: Chapter 5: Transport Layer
   66-70: Chapter 6: Application Layer


1. The emergence of the Internet?


•1969 – 1990

From a single network ARPANET to the Internet.

•1985 – 1993

A three-level structure of the Internet has been built.

• 1993 – present

Internet with multi-level ISP structure worldwide.


2. Triple play?


Telecommunications network: provides services such as telephone, telegraph and fax.

Cable television network: Delivers various television programs to subscribers.

Computer networking: Enables users to transfer data files between computers.

The fastest-growing of the three, and the one that plays the central role, is the computer network.

"Triple play" refers to the integration of the telecommunications network, the cable TV network, and the computer network.


3. The Internet and an internet? (Internet vs. internet)


"Internet" (with a capital I): the world's largest and most important computer network. It uses the TCP/IP protocol family as its communication rules, and its predecessor is ARPANET in the United States. ("互联网" is its now-standard Chinese translation.)

"internet" (lowercase i): a general term for any computer network formed by interconnecting many computer networks. It does not necessarily use the TCP/IP protocol family as its communication rules.

internet ≠ Internet.


4. Two basic characteristics of the Internet


Connectivity:

It makes it possible for Internet users to exchange all kinds of information very conveniently and economically, as if their terminals were directly connected to each other.

Resource sharing:

Realizes information sharing, software sharing and hardware sharing. Thanks to the network, these resources are as convenient to use as if they were at the user's side.


5.WWW?


Please search for "World Wide Web" (e.g. on Baidu) for details.


6. ISP, IXP? RFC?


Internet Service Provider ISP (Internet Service Provider):

In many cases an ISP is a company engaged in commercial activity, for example China Telecom, China Unicom, China Mobile, Tencent, Netease, 3guu.

Provides access to the Internet.
Charges a certain fee.

Internet exchange point IXP (Internet eXchange Point):

Allows two networks to connect directly and exchange packets quickly, without going through a third network.
A network switch working at the data link layer is often used.
The largest IXPs in the world have very high peak throughput.

Standard publication: in the form of RFC (Request For Comments):

All RFC documents are freely downloadable from the Internet.
Anyone can comment or make suggestions on a document at any time by e-mail.
But not all RFC documents are Internet standards. Only a small number of RFC documents end up becoming Internet Standards.
RFC documents are numbered according to the time of publication (that is, RFCxxxx, where xxxx is an Arabic numeral).



7. What is the composition of the Internet?


From the perspective of how the Internet works, it can be divided into two parts:
the edge part :

​ Consists of all hosts connected to the Internet, used directly by users for communication (transmitting data, audio or video) and resource sharing.

Core part :

Consists of a large number of networks and routers connecting these networks, serving the edge (providing connectivity and switching).



8. Two communication methods of the end system


Client/server mode:

Client/Server, abbreviated C/S.

Peer-to-peer mode:

Peer-to-Peer, abbreviated P2P.


9. What is the difference between a hub, a router, and a switch?


Understand the working methods of the three with a metaphor (for the official explanation, please look it up):

1. How a hub works: one day you go to the school of your girlfriend Xiaofang (let's call her that) to find her. You stand downstairs and shout her name, and everyone in every building hears you (broadcast). If someone else happens to shout at the same time as you, neither of you can be heard (collision). While you are shouting you cannot hear what others are saying; only after you finish do you start to listen (half-duplex working mode, listening). Sure enough, your girlfriend's voice comes from the building opposite: "Go to hell!" (response)

2. How a switch works: your girlfriend told you her mobile phone number (MAC address) in advance. You dial her number (establish a connection) and say to her, "I came because I miss you so much, my sweetheart, my darling..." (dedicated channel). Before you even finish, she replies impatiently, "I'm so numb!" (full duplex: both sides can talk at once)

3. How a router works: you go to the school concierge and ask which building Department XX is in, then go to Department XX and ask which classroom Class XX is in, then go to classroom XX and ask where seat No. XX is... After N inquiries (N hops), you finally reach Xiaofang.














Specific differences:

|                   | Router                      | Switch                      | Hub                         |
| ----------------- | --------------------------- | --------------------------- | --------------------------- |
| Working layer     | Network layer               | Data link layer             | Physical layer              |
| Forwarding basis  | IP address                  | MAC address                 | MAC address                 |
| Function          | Connects different networks | Connects hosts within a LAN | Connects hosts within a LAN |
| Bandwidth usage   | Shared bandwidth            | Dedicated bandwidth         | Shared bandwidth            |
| Transfer mode     | -                           | Full duplex or half duplex  | Half duplex                 |



10. Computer network performance indicators


Rate: the rate at which a host transmits data onto a channel, in bit/s (also kbit/s, Mbit/s, Gbit/s).

Bandwidth: the highest data rate that can pass through a channel per unit of time, also in bit/s.

Throughput: the amount of data actually passing through a network (or channel, or interface) per unit of time.

Latency: total delay = sending delay + propagation delay + processing delay + queuing delay, where
  sending delay = frame length (bit) / sending rate (bit/s), and
  propagation delay = channel length (m) / propagation speed on the channel (m/s).

Delay-bandwidth product: propagation delay × bandwidth, i.e. the number of bits "in flight" on the link.

Round-trip time RTT: the total time from when the sender starts sending data until it receives the receiver's acknowledgment.

Utilization: channel utilization / network utilization. Delay grows rapidly as utilization increases, so utilization should not be too high.
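The delay formulas above can be checked with a short sketch (the link parameters below are made-up example values, not from the book):

```python
# Sketch of the delay model from this section (the frame size, link rate,
# and distance are assumed example values).

def transmission_delay(frame_bits: float, rate_bps: float) -> float:
    """Time to push all bits of a frame onto the link: frame length / sending rate."""
    return frame_bits / rate_bps

def propagation_delay(distance_m: float, speed_mps: float = 2e8) -> float:
    """Time for one bit to travel the link: distance / propagation speed
    (roughly 2/3 of the speed of light in copper or fiber)."""
    return distance_m / speed_mps

# Example: 1000-byte frame, 100 Mbit/s link, 100 km of fiber
t_tx = transmission_delay(1000 * 8, 100e6)   # 8e-5 s = 80 microseconds
t_prop = propagation_delay(100e3)            # 5e-4 s = 500 microseconds

# Delay-bandwidth product: how many bits "fill the pipe" at once
bdp_bits = t_prop * 100e6                    # 50000 bits

print(t_tx, t_prop, bdp_bits)
```

Note that on this link the propagation delay dominates the sending delay, which is why the delay-bandwidth product matters for sliding-window sizing later on.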


11. Layered network architecture and the role of each layer


Application layer: directly provides services to user processes and defines the rules of interaction between application processes (e.g. HTTP, SMTP, DNS).

Transport layer: provides communication services between processes running on two hosts (TCP is reliable and connection-oriented; UDP is best-effort and connectionless).

Network layer: chooses routes and forwards packets (IP datagrams) so that they reach the destination host.

Data link layer: assembles IP datagrams into frames and transmits them over the link between adjacent nodes (framing, error detection).

Physical layer: transparently transmits the bit stream over the transmission medium.


12. SDU and PDU?


SDU:
  OSI refers to the unit of data exchanged between layers as Service Data Unit (SDU).

PDU:
  The OSI reference model refers to the data unit transmitted between peer layers as the protocol data unit PDU (Protocol Data Unit) of the layer.
Peer layers exchange PDUs (the data unit plus control information) with each other, conceptually along horizontal dotted lines; this is the so-called communication between "peer layers".
SDUs can be different from PDUs.
  For example: multiple SDUs can be synthesized into one PDU, or one SDU can be divided into several PDUs.


13. The service user above can only see the service and cannot see the protocol below; the protocol is transparent to the service user above. How do you understand this transparency?


  It can be understood that the lower layer provides services to the upper layer, and the upper layer does not need to understand the principles of the lower layer, realizing the "separation" of the upper and lower layers. The lower layer is transparent to the upper layer.


14. Why does the data link layer add headers and trailers?


  • Use the header and tail for frame delimitation (determine the boundary of the frame)
  • Clock Synchronization, Error Checking

The header is a preamble that synchronizes the clocks of the receiver and sender. The tail information contains a CRC check code, which can be used to check whether there is an error in the transmission of the information.


15. What is the physical layer?


Basic concept:
  The physical layer considers how to transmit bit streams over the transmission media that connect the various computers, shielding the differences between transmission media and communication methods as much as possible; it does not specify concrete transmission media.


16. Channel three communication methods


Channel:
  It is generally used to represent the medium that transmits information in a certain direction. A channel often includes a sending channel and a receiving channel

From the way of information exchange between the two sides of the communication, there can be the following three ways:
One-way communication (simplex communication): There can only be communication in one direction and no interaction in the opposite direction. Radio or cable TV broadcasts are of this type


Two-way alternate communication (half-duplex communication): Both parties in the communication can send information, but they cannot send information at the same time (of course they cannot receive at the same time). This communication method is that one party sends and the other receives, and it can be reversed after a period of time.


Two-way simultaneous communication (full-duplex communication): Both communicating parties can send and receive information at the same time



17. Baseband modulation (also called coding), bandpass modulation


Baseband signal (i.e. fundamental frequency band signal)

  • signal from the source.
  • Contains more low-frequency components, and even DC components.

modulation

  • Baseband modulation:

  Only the waveform of the baseband signal is transformed, and the digital signal is converted into another form of digital signal. This process is called coding.

  • Bandpass modulation:

  The carrier is used for modulation to move the frequency range of the baseband signal to a higher frequency band and convert it into an analog signal. The signal after carrier modulation is called a band-pass signal (that is, it can only pass through the channel within a certain frequency range).


18. Self-synchronization capability


Prerequisite: common line-coding methods (NRZ, Manchester, differential Manchester).
Self-synchronization ability:
  Non-return-to-zero system cannot extract the signal clock frequency from the signal waveform itself (this is called no self-synchronization ability).
  Manchester encoding and differential Manchester encoding are self-synchronizing.


19. What factors influence the channel transmission rate? Nyquist's criterion and Shannon's formula


Two factors limit the rate at which symbols can be transmitted on a channel:

  • The range of frequencies that a channel can pass through.
  • SNR

Nyquist's criterion:

  Maximum symbol transmission rate = 2W (symbols/second)

 For an ideal low-pass channel of bandwidth W (Hz), ignoring the influence of noise, the highest symbol transmission rate is 2W symbols per second. If the transmission rate exceeds this upper limit, severe intersymbol interference occurs and the receiving end cannot reliably judge (i.e. identify) the symbols.

SNR

  Signal-to-noise ratio (dB) = 10 log10(S/N ) (dB)

Shannon's formula:

  C = W log2(1+S/N) (bit/s)

 The larger the bandwidth of the channel or the signal-to-noise ratio in the channel, the higher the limit transmission rate of information.
 As long as the information transfer rate is lower than the limiting information transfer rate of the channel, there must be some way to achieve error-free transmission.

Nyquist's criterion:
  Encourages engineers to explore more advanced coding techniques so that each symbol can carry more bits of information.

Shannon's formula:
  It warns engineers that no matter how complicated the encoding technology is, it is impossible to break through the absolute limit of information transmission rate on an actual noisy channel.
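Both formulas are easy to plug numbers into. A small sketch (the 3000 Hz / 30 dB telephone-channel numbers are a classic example, not tied to any particular exercise):

```python
import math

def nyquist_max_symbol_rate(bandwidth_hz: float) -> float:
    """Nyquist: an ideal low-pass channel of bandwidth W carries at most 2W symbols/s."""
    return 2 * bandwidth_hz

def db_to_linear(snr_db: float) -> float:
    """Convert a signal-to-noise ratio in dB back to a linear ratio: S/N = 10^(dB/10)."""
    return 10 ** (snr_db / 10)

def shannon_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon: C = W * log2(1 + S/N) bit/s, with S/N as a linear ratio, not dB."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Classic example: 3000 Hz telephone channel with SNR = 30 dB (ratio 1000)
snr = db_to_linear(30)                  # 1000.0
c = shannon_capacity(3000, snr)         # roughly 29900 bit/s
print(nyquist_max_symbol_rate(3000), c)
```

This makes the two roles concrete: Nyquist bounds symbols per second, Shannon bounds bits per second; coding decides how many bits ride on each symbol in between.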


20. What is the classification of transmission media below the physical layer?


  • Guided transmission media
    Twisted pair, coaxial cable, fiber optic cable
  • Unguided transmission media
    Radio waves, microwaves, communication satellites, infrared, lasers

21. Channel multiplexing technology


Multiplexing :

  Allows users to communicate using a shared channel.

Multiplexer (multiplexer) and demultiplexer (demultiplexer):

Frequency Division Multiplexing FDM (Frequency Division Multiplexing):

  • The entire bandwidth is divided into multiple parts, and after users are allocated a certain frequency band, they occupy this frequency band from beginning to end during the communication process.
  • All users occupy different bandwidth (ie, frequency band) resources at the same time.
    Frequency Division Multiple Access FDMA:
             - Let each of n users use one frequency band, or let more users take turns using these n frequency bands

Time division multiplexing TDM (Time Division Multiplexing):

  • Divide time into time-division multiplexing frames (TDM frames) of equal length
  • Each TDM user occupies a time slot with a fixed sequence number in each TDM frame.
  • The time slot occupied by each user occurs periodically (the period is the length of the TDM frame).
  • TDM signals are also known as isochronous signals.
  • All users occupy the same frequency bandwidth at different times.
    Time Division Multiple Access TDMA:
             - Allow N users to use one time slot each, or allow more users to use the N time slots in turn

WDM (Wavelength Division Multiplexing):

   Essentially frequency division multiplexing carried out with light of different wavelengths; the difference lies in the transmission medium, and it requires optical multiplexers and optical demultiplexers.

Code division multiplexing CDMA:

  • Every user can use the same frequency band to communicate at the same time.
  • Each user uses a different pattern that has been specially selected so that there is no interference.
  • When the CDM (Code Division Multiplexing) channel is shared by multiple users with different addresses, it is called CDMA (Code Division Multiple Access).

22. Which layers are included in the intermediate device (switch) in the LAN


Switches (s1, s2): only data link layer and physical layer


23. Ethernet


The most important features of LAN:

  • The network is owned by a unit;
  • Both geographical scope and number of sites are limited.

LAN has the following main advantages:

  • With broadcasting function, the whole network can be easily accessed from one site.
  • It is convenient for the expansion and gradual evolution of the system, and the location of each device can be flexibly adjusted and changed.
  • Improved system reliability, availability and survivability.

DIX Ethernet V2: The world's first protocol for LAN products (Ethernet).
IEEE 802.3: The first IEEE standard for Ethernet.

  Hardware implementations of the two standards can interoperate on the same LAN.
  There is only a small difference between these two standards, so many people often refer to 802.3 LAN as "Ethernet" for short.

The IEEE committee split the data link layer of the local area network into two sublayers, the logical link control LLC sublayer and the media access control MAC sublayer


24. What is the channel type of the data link layer?


  The channels used by the data link layer are generally shared channels . One of the key considerations for shared channels is how to make many users share communication media resources reasonably and conveniently . There are two technical methods for this:

Statically divide the channel:

  • frequency division multiplexing
  • time division multiplexing
  • WDM
  • code division multiplexing

The advantage of this division method: once a user is assigned a channel, there is no conflict with other users.
The disadvantage: dividing the channel this way is costly and not suitable for LANs.

Dynamic media access control (multipoint access):

  • Random access: All users can send information randomly.
  • Controlled access: users must obey certain control, for example polling.

Local area networks generally use random access channels


25. What is a link? What is a data link?


link (link):

  • A passive point-to-point physical line segment without any other switching nodes in between.
  • A link is just one component of a path.
  • or physical link.

Data link (data link):

  • The data link is formed by adding the hardware and software of the protocol that controls the data transmission to the link.
  • or logical link.
  • Typical implementation: adapter (i.e. network card)

26. Byte stuffing and zero bit stuffing


Problem:
  If a byte in the data happens to have the same binary code as SOH or EOT, the data link layer will incorrectly "find the boundary of the frame", causing an error. ( during asynchronous transmission)
solution:
  byte stuffing

Problem:
  In bit-oriented (synchronous) transmission, frames are delimited by the flag 01111110; if the data happen to contain this bit pattern, the data link layer will incorrectly "find the boundary of the frame", causing an error.
Solution:
  zero-bit stuffing (after five consecutive 1s in the data, the sender inserts a 0; the receiver deletes it)
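As a sketch of the zero-bit-stuffing rule (sender inserts a 0 after five consecutive 1s; receiver deletes it), using bit strings for clarity:

```python
def bit_stuff(bits: str) -> str:
    """Zero-bit stuffing: after every run of five consecutive '1's in the payload,
    insert a '0' so the payload can never contain the flag pattern 01111110."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        if b == "1":
            run += 1
            if run == 5:
                out.append("0")   # the stuffed bit
                run = 0
        else:
            run = 0
    return "".join(out)

def bit_unstuff(bits: str) -> str:
    """Receiver side: delete the '0' that follows every run of five consecutive '1's."""
    out, run, skip = [], 0, False
    for b in bits:
        if skip:                  # drop the stuffed 0 and reset the run counter
            skip = False
            run = 0
            continue
        out.append(b)
        if b == "1":
            run += 1
            if run == 5:
                skip = True
        else:
            run = 0
    return "".join(out)

data = "0111111011111100"
stuffed = bit_stuff(data)
print(stuffed)                       # "011111010111110100"
assert bit_unstuff(stuffed) == data  # round trip restores the payload
```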


27. CRC calculation redundancy code?


Append n zero bits to the data (n = length of the generator P minus 1), divide the result by P using mod-2 (XOR) division, and the n-bit remainder is the redundancy code FCS appended to the frame. The receiver divides the whole received frame by P; a zero remainder means no bit error was detected.
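The mod-2 division can be sketched in a few lines of Python; the data/generator pair 101001 / 1101 is the classic textbook example:

```python
def crc_remainder(data_bits: str, generator: str) -> str:
    """Mod-2 (XOR) division: append len(generator)-1 zeros to the data,
    divide by the generator, and return the remainder as the redundancy code."""
    n = len(generator) - 1
    dividend = [int(b) for b in data_bits + "0" * n]
    gen = [int(b) for b in generator]
    for i in range(len(data_bits)):
        if dividend[i] == 1:                  # only divide when the leading bit is 1
            for j in range(len(gen)):
                dividend[i + j] ^= gen[j]     # mod-2 subtraction is XOR
    return "".join(str(b) for b in dividend[-n:])

# Textbook example: data M = 101001, generator P = 1101 -> remainder 001
fcs = crc_remainder("101001", "1101")
print(fcs)                                    # "001"

# Receiver check: dividing the transmitted frame (data + FCS) leaves remainder 0
assert crc_remainder("101001" + fcs, "1101") == "000"
```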


28. What is the difference between CRC and FCS?


CRC (cyclic redundancy check) is an error-detection method; FCS (frame check sequence) is the redundancy code appended to the data. The FCS is usually computed with CRC, but the two terms are not the same thing.


29. No bit error and no transmission error


Reliable transmission: whatever the sender at the data link layer sends, the receiver receives.
Transmission errors can be divided into two categories:

  • bit error;
  • Transmission errors: frame loss, frame duplication, or frame out-of-sequence, etc.

Using CRC checks at the data link layer enables bit-error-free transmission, but this is not yet reliable transmission.
To achieve reliable transmission, mechanisms such as frame numbering, acknowledgment and retransmission must be added.


30. PPP protocol? Composition?


For point-to-point links, the most widely used data link layer protocol is the Point-to-Point Protocol (PPP).

The PPP protocol has become an official standard of the Internet since 1994 [RFC 1661, STD51].


Three components:

  • A method of encapsulating IP datagrams onto a serial link.
  • A link control protocol LCP (Link Control Protocol).
  • A set of network control protocol NCP (Network Control Protocol).

31. Parallel transmission and serial transmission, synchronous communication and asynchronous communication (reproduced)


1. Parallel transmission:

All bits of a character are transmitted at the same time, each on a separate wire.

2. Serial transmission (the usual case):

The bits that make up a character are sent one after another on a single wire.

There are two transfer methods:

1) Synchronous transmission;

2) Asynchronous transmission;

There are three directional structures for serial data communication:

1) Simplex;

2) Half-duplex; (I2C)

3) Full duplex; (UART)

1). The principle of synchronous communication

Synchronous communication is a communication method of continuously serially transmitting data, and only one frame of information is transmitted in one communication. The information frame here is different from the character frame in asynchronous communication, and usually contains several data characters.

When synchronous communication is used, many characters are formed into a message group, so that characters can be transmitted one by one, but a synchronization character is added at the beginning of each group of information (usually called a frame), and when there is no information to be transmitted, empty characters are filled, because synchronous transmission does not allow gaps. During synchronous transmission, a character can correspond to 5-8 bits. Of course, for the same transmission process, all characters correspond to the same number, for example, n bits. In this way, during transmission, every n bits are divided into a time slice, the sending end sends a character in a time slice, and the receiving end receives a character in a time slice.

During synchronous transmission, an information frame contains many characters, and each information frame starts with a synchronization character; generally the synchronization character and the null character use the same code. A unified clock controls the sending at the transmitter and the receiving at the receiver. The receiving end must be able to recognize the synchronization character: when it detects a string of bits matching the synchronization character, it considers an information frame to have started and treats the subsequent bits as actual transmitted information.

2). Asynchronous communication principle

Asynchronous communication is a very common way of communication. When sending characters in asynchronous communication, the time interval between the sent characters can be arbitrary. Of course, the receiving end must always be ready to receive. The sending end can start sending characters at any time, so it is necessary to add a mark at the beginning and end of each character, that is, add a start bit and a stop bit, so that the receiving end can receive each character correctly. The advantage of asynchronous communication is that the communication equipment is simple and cheap, but the transmission efficiency is low (because the overhead of the start bit and stop bit accounts for a large proportion).

3). The difference between synchronous communication and asynchronous communication:

(1) Synchronous communication requires that the clock frequency of the receiving end is consistent with the clock frequency of the sending end, and the sending end sends a continuous bit stream; in asynchronous communication, the clock of the receiving end and the clock of the sending end are not required to be synchronized. After sending a byte, the sending end can send the next byte after an arbitrary long time interval.
  (2) The efficiency of synchronous communication is high; the efficiency of asynchronous communication is low.
  (3) Synchronous communication is more complicated, and the allowable error of both clocks is small; asynchronous communication is simple, and a certain error is allowed for both clocks.
  (4) Synchronous communication can be used for point-to-multipoint; asynchronous communication is only suitable for point-to-point.

(End)

Reprinted in: https://www.cnblogs.com/yirenyang/p/4281101.html

A GPT-generated answer for reference:
Synchronous communication and asynchronous communication are two common ways in computer communication. Their principles and differences are as follows:

  • Synchronous communication refers to the communication method in which the sender waits for the receiver to respond after sending data before proceeding to the next step. In synchronous communication, the sender and receiver need to communicate within the same time period, and the sender needs to wait for the receiver to respond before proceeding to the next step. Synchronous communication has the advantages of high communication efficiency and high reliability of data transmission, but when the receiver cannot respond in time, it will be blocked, which will affect the communication efficiency.
  • Asynchronous communication refers to the communication method in which the sender can continue to the next step without waiting for the receiver to respond after sending data. In asynchronous communication, the sender and receiver can communicate in different time periods, and the sender can proceed to the next step without waiting for the receiver to respond. Asynchronous communication has the advantages of high communication efficiency and good communication flexibility, but because the sender cannot determine when the receiver will respond, some data may be lost.
  • In general, the difference between synchronous communication and asynchronous communication is whether it is necessary to wait for the response of the other party. Synchronous communication needs to wait for the response of the other party, and the communication efficiency is high but blocking may occur. Asynchronous communication does not need to wait for the response of the other party, and the communication efficiency is also high, but data loss may occur. Which communication method to choose needs to be decided according to the specific application scenario

32. PPP protocol format


PPP frame format: flag (0x7E) | address (0xFF) | control (0x03) | protocol (2 bytes) | information (0-1500 bytes) | FCS (2 bytes) | flag (0x7E). On asynchronous links byte stuffing is used (escape character 0x7D); on synchronous links zero-bit stuffing is used.

33. Overview of the network layer?

The network layer can be divided into two levels:

  • control plane
  • Data plane (forwarding plane)

The function of the network layer is mainly reflected in providing the transmission function between endpoints for the data link layer, further managing the data communication in the network, and trying to transmit the data from the source end to the destination end through several intermediate nodes, thus providing the most basic end-to-end data transmission service to the transport layer.

34. What is the comparison between virtual circuit and datagram service?

Virtual-circuit service: connection-oriented; a connection must be set up first, each packet carries a short virtual-circuit number, all packets follow the same route and arrive in order, and reliability is guaranteed by the network. Datagram service: connectionless; each packet carries the full destination address, packets are routed independently and may arrive out of order, and reliability is left to the end hosts.

tips: The Internet's network layer uses datagram service and does not provide quality-of-service guarantees (connectionless)

35. Direct delivery and indirect delivery?

Direct delivery does not need to go through any routers, otherwise, indirect delivery

36. Representation method of IP address

An IP address is 32 bits. For readability it is written in dotted decimal notation: each of the 4 bytes is converted to decimal and the numbers are separated by dots, e.g. 10000000 00001011 00000011 00011111 → 128.11.3.31.
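A sketch of the dotted-decimal conversion (the example address is the one above):

```python
def to_dotted_decimal(addr32: int) -> str:
    """Split a 32-bit IP address into 4 bytes and write each byte in decimal."""
    return ".".join(str((addr32 >> shift) & 0xFF) for shift in (24, 16, 8, 0))

def from_dotted_decimal(s: str) -> int:
    """Inverse: parse 'a.b.c.d' back into one 32-bit integer."""
    a, b, c, d = (int(x) for x in s.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

addr = 0b10000000_00001011_00000011_00011111
print(to_dotted_decimal(addr))        # "128.11.3.31"
```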

37. Classification of IP addresses

  • IP addresses are divided into five categories: A, B, C, D, and E. Types A, B, and C are unicast addresses.
  • The IP address is written in dotted decimal notation, where the value range of each segment is 0 to 255
    Class A: first octet 1-126 (leading bit 0); Class B: 128-191 (leading bits 10); Class C: 192-223 (110); Class D: 224-239 (1110, multicast); Class E: 240-255 (1111, reserved).
    Special IP addresses include: the network address (host bits all 0), the directed broadcast address (host bits all 1), the loopback range 127.x.x.x, and 0.0.0.0 (this host).
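The class boundaries above can be captured in a tiny helper (a sketch using only the first-octet ranges):

```python
def ip_class(first_octet: int) -> str:
    """Classful addressing: the leading bits of the first byte determine the class."""
    if first_octet < 128:
        return "A"   # 0xxxxxxx : 0-127
    if first_octet < 192:
        return "B"   # 10xxxxxx : 128-191
    if first_octet < 224:
        return "C"   # 110xxxxx : 192-223
    if first_octet < 240:
        return "D"   # 1110xxxx : 224-239, multicast
    return "E"       # 1111xxxx : 240-255, reserved

print(ip_class(10), ip_class(172), ip_class(203))   # A B C
```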

38.CIDR

The full name is Classless Inter-Domain Routing
CIDR abolishes the classful A/B/C division: an address consists of a network prefix plus a host part, written in slash notation, e.g. 128.14.35.7/20 means the first 20 bits are the network prefix.

39. How to calculate the network address of the subnet mask?

Network address = IP address AND subnet mask (bitwise). In the network address, the host bits are all 0.
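A sketch of the AND operation using Python's standard ipaddress module (the address and /26 mask below are made-up examples):

```python
import ipaddress

# Network address = IP AND mask, done with explicit bit operations
ip = int(ipaddress.IPv4Address("192.168.10.130"))
mask = int(ipaddress.IPv4Address("255.255.255.192"))   # a /26 mask
net = ipaddress.IPv4Address(ip & mask)
print(net)                                             # 192.168.10.128
```

The last octet shows the mechanics: 130 is 10000010, the mask's 192 is 11000000, and ANDing them zeroes the 6 host bits, leaving 10000000 = 128.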

40. Bitwise AND, OR, XOR operation?

AND (&): the result bit is 1 only if both bits are 1.
OR (|): the result bit is 1 if either bit is 1.
XOR (^): the result bit is 1 if the two bits differ.
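A quick demonstration (note the Python operators: & for AND, | for OR, ^ for XOR):

```python
a, b = 0b1100, 0b1010

print(bin(a & b))   # 0b1000  AND: 1 only where both bits are 1
print(bin(a | b))   # 0b1110  OR : 1 where either bit is 1
print(bin(a ^ b))   # 0b110   XOR: 1 where the two bits differ
```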

41. What is the difference between an IP address and a MAC address?

The MAC address is the address used by the data link layer, while the IP address is the address used by the network layer and the above layers, which is a logical address

The IP address is called a logical address because it is implemented in software; the MAC address is called a physical address because it is burned into the ROM on the network adapter.


42. Supernet?

Supernetting is a concept similar to subnetting – an IP address is divided into separate network addresses and host addresses based on a subnet mask. However, instead of subnetting, which divides a large network into several smaller networks, it combines several small networks into one large network—a supernet.
Supernetting was created to solve the problem of routing tables outgrowing existing software and administrative manpower, and to help with the exhaustion of the class B address space. It allows one routing table entry to represent a collection of networks, just as an area code represents the telephone numbers of an area.
Current routing protocols such as the Border Gateway Protocol (BGP, an exterior gateway protocol) and Open Shortest Path First (OSPF, an interior gateway protocol) support supernetting.

43. Route aggregation and division?

A large CIDR address block often contains many small address blocks, so a larger CIDR address block is used in the forwarding table of the router to replace many small address blocks.

tips: The address block with the shorter network prefix contains more addresses
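Python's standard ipaddress module can demonstrate aggregation directly (the two /24 blocks below are made-up example prefixes):

```python
import ipaddress

# Route aggregation: two adjacent /24 blocks collapse into one /23 entry
blocks = [
    ipaddress.ip_network("206.0.68.0/24"),
    ipaddress.ip_network("206.0.69.0/24"),
]
summary = list(ipaddress.collapse_addresses(blocks))
print(summary)   # [IPv4Network('206.0.68.0/23')]

# The shorter the prefix, the more addresses the block contains
assert ipaddress.ip_network("206.0.68.0/23").num_addresses == 512
assert ipaddress.ip_network("206.0.68.0/24").subnet_of(
    ipaddress.ip_network("206.0.68.0/23"))
```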

44. Why do I need an ip address when I have a mac address?

Because if we used only MAC addresses, the router would need to remember which subnet every MAC address is on (otherwise, every time it received a packet it would have to search the whole world for that MAC address's location). And there are 2^48 possible MAC addresses, which means that even if we kept only 1 byte of storage per MAC address, each router would need 256 TB of memory! This is obviously infeasible. That is why we need IP addresses. Unlike a MAC address, an IP address is related to location. Devices on the same subnet are assigned the same IP address prefix, which works like a postal code: the router can tell which subnet a device is on from the prefix of its IP address. Now the router only needs to remember the location of each subnet, which greatly reduces the memory it requires.
Reprinted ( with MAC address, why use IP address? )

45. How does ARP obtain the mapping between IP address and MAC address?

Prerequisite knowledge:

  • ARP cache
    - stores the mapping table from IP address to MAC address
    - dynamic update of the mapping table (add or delete after timeout)

ARP process

  • A knows B's IP address and needs to obtain B's MAC address (physical address). If A's ARP table caches the mapping relationship between B's IP address and MAC address, it can be obtained directly from the ARP table.
  • If the mapping relationship between B's IP address and MAC address is not cached in A's ARP table, A broadcasts an ARP query packet containing B's IP address
  • All nodes on the LAN receive the ARP query. After B receives the ARP query packet, it sends its own MAC address to A.
  • A caches the mapping between B's IP address and MAC address in its ARP table, and deletes it after a timeout.

46. Calculation of the header checksum? (distinct from CRC)

The IPv4 header checksum covers only the header, not the data. Sender: set the checksum field to 0, divide the header into 16-bit words, add them with one's-complement (end-around carry) arithmetic, and write the one's complement of the sum into the checksum field. Receiver: sum the whole header the same way; a final result of 0 means the header passed the check.
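A sketch of the one's-complement checksum described above (the 20-byte header below is a made-up example with its checksum field zeroed):

```python
def internet_checksum(header: bytes) -> int:
    """One's-complement sum of 16-bit words, then take the one's complement.
    The checksum field itself must be zeroed before computing."""
    if len(header) % 2:
        header += b"\x00"                          # pad to whole 16-bit words
    total = 0
    for i in range(0, len(header), 2):
        total += (header[i] << 8) | header[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return ~total & 0xFFFF

# Made-up 20-byte IPv4 header, checksum field (bytes 10-11) set to 0000
hdr = bytes.fromhex("4500003044224000800600008c7c19acae241e2b")
csum = internet_checksum(hdr)

# Receiver check: with the checksum filled in, the computation yields 0
hdr_with_csum = hdr[:10] + csum.to_bytes(2, "big") + hdr[12:]
assert internet_checksum(hdr_with_csum) == 0
```

This differs from CRC: the checksum is simple addition (cheap to update in software when TTL changes), while CRC is polynomial division with much stronger error detection.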

47. The variable part of the IP datagram header?

The variable part of the IP datagram header is the options field. Options are used to support functions such as troubleshooting, measurement, and security. The field is variable in length, from 1 to 40 bytes, depending on the options selected.

tips: The new version of IPv6 makes the length of the IP datagram header fixed

48. How does the forwarding table of the router perform forwarding? And the specific process of forwarding table matching?

Approximate process:

  • A creates an IP packet (source is A, destination is E)
  • Find the IP address 223.1.1.4 of router R in the routing table of source host A
  • A uses the ARP protocol to obtain R's MAC address based on R's IP address 223.1.1.4
  • A creates a data frame (the destination address is the MAC address of R)
  • Encapsulate IP packets from A to E in the data frame
  • A sends the data frame; R receives the data frame, decapsulates the IP packet, looks up its forwarding table, and forwards the packet toward E.

The specific process of forwarding-table matching: each row of the forwarding table is (network prefix, next hop). The destination address is ANDed with each entry's mask and compared with the entry's prefix; among all matching entries, the one with the longest prefix wins (longest-prefix match).


49. Binary trie lookup of the forwarding table?

In fact, it is an algorithmic measure taken to save time (quickly find a matching network address).
The forwarding table's prefixes are stored in a binary trie; looking up an address walks the trie bit by bit, and the deepest node marked with a route gives the longest matching prefix.
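A minimal sketch of such a binary trie with longest-prefix matching (the prefixes and next-hop names R1/R2 are made up for illustration):

```python
class TrieNode:
    def __init__(self):
        self.children = {}    # '0' or '1' -> TrieNode
        self.next_hop = None  # set on nodes where a stored prefix ends

def insert(root, prefix_bits, next_hop):
    """Store a route: walk/create one node per prefix bit, mark the last node."""
    node = root
    for b in prefix_bits:
        node = node.children.setdefault(b, TrieNode())
    node.next_hop = next_hop

def longest_prefix_match(root, addr_bits):
    """Walk bit by bit, remembering the last node that carried a route."""
    node, best = root, None
    for b in addr_bits:
        if b not in node.children:
            break
        node = node.children[b]
        if node.next_hop is not None:
            best = node.next_hop
    return best

root = TrieNode()
insert(root, "10", "R1")       # short prefix (hypothetical entry)
insert(root, "1010", "R2")     # longer, more specific prefix

print(longest_prefix_match(root, "101011"))   # "R2" (longest match wins)
print(longest_prefix_match(root, "100100"))   # "R1"
```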

50. ICMP message

ICMP: Internet Control Message Protocol

  • ICMP allows hosts or routers to report error conditions and provide reports on abnormal conditions
  • Used by hosts and routers to communicate network layer information
  • The ICMP message is carried in the IP datagram: the IP upper layer protocol number is 1

ICMP packet type

  • ICMP error report message

    : Destination unreachable: host unreachable, network unreachable, port unreachable, protocol unreachable

  • ICMP query message

    : echo request/reply (used by ping)

ICMP message format

The first 4 bytes of every ICMP message have a uniform format: an 8-bit type field, an 8-bit code field, and a 16-bit checksum. The meaning of the next four bytes depends on the ICMP message type.

ICMP packet types and functions
(figure: table of ICMP message types and their functions)
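To make the "type, code, checksum" layout concrete, here is a sketch that builds a syntactically valid ICMP echo request (the kind ping sends); the identifier, sequence number, and payload are arbitrary example values:

```python
import struct

def checksum(data: bytes) -> int:
    """Standard internet checksum (one's-complement sum of 16-bit words)."""
    if len(data) % 2:
        data += b"\x00"
    s = sum(int.from_bytes(data[i:i + 2], "big") for i in range(0, len(data), 2))
    while s >> 16:
        s = (s & 0xFFFF) + (s >> 16)
    return ~s & 0xFFFF

def echo_request(ident: int, seq: int, payload: bytes = b"ping") -> bytes:
    # type 8 (echo request), code 0, checksum 0 for now, then identifier and sequence
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)
    c = checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, c, ident, seq) + payload

pkt = echo_request(0x1234, 1)
print(pkt[0], checksum(pkt))   # type 8; a well-formed ICMP packet sums to 0
```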

51. IPv6?

Compared with IPv4, IPv6 has the following characteristics:

  • Larger address space (the original 32 bits expanded to 128 bits)
  • Extended address hierarchy
  • Flexible header format
  • Improved options (the IPv6 basic header has a fixed length of 40 bytes)
  • Allows the protocol to be extended
  • Support for pre-allocation of resources
  • The IPv6 header is aligned on 8-byte boundaries (the IPv4 header aligns on 4 bytes)


52. Static routing and dynamic routing

  • Static routing
    - non-adaptive routing
    - cannot adapt to changes in network status in a timely manner
    - simple, low overhead
  • Dynamic routing
    - adaptive routing selection
    - adapts better to changes in network status
    - complex implementation, high overhead


52. RIP example

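The heart of a RIP update can be sketched as a distance-vector rule: add one hop to every distance the neighbor advertises, then adopt the route if it comes through that same neighbor or is shorter than the current one. The table contents below are hypothetical:

```python
INFINITY = 16   # in RIP, a distance of 16 means "unreachable"

def rip_update(table, neighbor, advert):
    """table maps dest -> (distance, next_hop); advert maps dest -> distance."""
    for dest, d in advert.items():
        new_dist = min(d + 1, INFINITY)        # one extra hop through the neighbor
        if dest not in table:
            table[dest] = (new_dist, neighbor)             # brand-new destination
        else:
            dist, hop = table[dest]
            # always take the neighbor's word if we already route through it,
            # otherwise switch only if the new route is strictly shorter
            if hop == neighbor or new_dist < dist:
                table[dest] = (new_dist, neighbor)
    return table

table = {"net1": (3, "R2"), "net2": (5, "R7")}
advert = {"net1": 1, "net2": 9, "net3": 2}     # advertisement from neighbor R4
print(rip_update(table, "R4", advert))
# net1 becomes (2, R4): shorter; net2 stays (5, R7); net3 is new: (3, R4)
```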

53. Where is the transport layer located?

  1. The transport layer provides communication services to the application layer above it; it is the highest layer of the communication-oriented part of the stack and the lowest layer of the user-oriented part.
  2. When two hosts at the edge of the network use the functions of the core of the network for end-to-end communication, only the protocol stack of the host at the edge of the network has the transport layer, while the routers at the core of the network only use the functions of the lower three layers when forwarding packets.


54. What is the role of the transport layer?

  • Multiplexing and Demultiplexing
    Often, multiple application processes in one host communicate with multiple application processes in another host at the same time.
    This shows that the transport layer has a very important function - multiplexing and demultiplexing

Since the transport layer provides end-to-end logical communication between application processes, the concept of ports deserves a closer look. At the network layer the objects of communication are hosts, but from the transport layer's perspective the objects of communication are processes, and a port identifies a process. In other words, the host is located through the IP address, and the corresponding process is located through the port.
A port number has only local meaning: it merely labels a process in the application layer of that particular computer.
It follows that if processes on two computers want to communicate, each must know not only the other's IP address (to find the other computer) but also the other's port number (to find the application process on that computer).

  • Shielding function
    The transport layer shields the details of the underlying network core from high-level users, and it makes the application process see that there is an end-to-end logical communication channel between two transport layer entities.insert image description here

55. What is the difference between UDP and TCP?

UDP and TCP are two types of network transmission protocols, and they have the following differences:

  • Reliability: TCP is a reliable transmission protocol, it will ensure the reliable transmission of data, because it uses mechanisms such as confirmation, retransmission and flow control to ensure the reliability of data. While UDP is an unreliable transport protocol, it does not guarantee reliable data transmission because it does not use confirmation and retransmission mechanisms.
  • Connectivity: TCP is a connection-oriented protocol, which must first establish a connection before transmitting data. UDP is a connectionless protocol, and data can be sent directly without establishing a connection first.
  • Speed: UDP is faster than TCP because it does not need to establish a connection and confirm the receipt of data, but UDP cannot guarantee the reliable transmission of data, nor can it perform flow control.
  • Applicable scenarios: TCP is suitable for scenarios that require reliable transmission, such as file transfer, email, etc. UDP is suitable for scenarios that require fast transmission, such as real-time transmission scenarios such as video and audio, because UDP is faster and can better support real-time transmission.

Let’s talk about the difference in reliability in detail:
When we are transmitting data, reliability means that the data sent by the sender can be correctly received by the receiver without being lost, damaged or duplicated. The difference between TCP and UDP in terms of reliability is as follows:
TCP reliability: TCP is a reliable transmission protocol, which uses a variety of mechanisms to ensure reliable data transmission. For example, TCP uses an acknowledgment mechanism, that is, every time the sender sends data, the receiver will send an acknowledgment message to the sender, telling it that it has received the data. If the sender does not receive an acknowledgment message within a certain period of time, it will resend the data until the receiver receives the data correctly. In addition, TCP also uses flow control and congestion control mechanisms to ensure that the sender does not send more data than the receiver can process and avoid network congestion.
Unreliability of UDP: UDP is an unreliable transmission protocol, and it does not provide mechanisms such as confirmation, retransmission, and flow control. In UDP, after the sender sends data, it will not receive a confirmation message from the receiver, nor will it retransmit unconfirmed data. If data is lost, damaged or duplicated, UDP will not perform any processing, so reliable data transmission cannot be guaranteed.
In general, TCP provides a more reliable transmission service than UDP, but this also makes the performance of TCP lower than UDP, because it needs to spend more time and resources to maintain the connection and ensure the reliable transmission of data. UDP is more lightweight and fast, and is suitable for scenarios that do not require high reliability but require fast transmission.
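The "no connection needed" point is easy to see with UDP sockets: a datagram can be sent with nothing more than a destination address, no handshake first. A loopback sketch (the OS picks a free port for the receiver):

```python
import socket

# Connectionless UDP: just send a datagram to an address, no connect() needed.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))              # port 0: let the OS pick a free port
recv.settimeout(2)
port = recv.getsockname()[1]

send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"hello", ("127.0.0.1", port))   # fire and forget

data, addr = recv.recvfrom(1024)
print(data)                              # → b'hello'
send.close(); recv.close()
```

With TCP the same exchange would require `listen()`/`accept()` on one side and `connect()` (the three-way handshake) on the other before any data could flow.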

56. How to identify different processes?

In a computer, the identification of different processes is realized by the process identifier (Process ID, PID). Each process has a unique PID, which is used to identify the process. PID is an integer value, usually automatically generated by the operating system when the process is created.

In the Unix/Linux system, you can use the command ps to view the processes running in the current system. The ps command will display the PID and other information of each process. For example, you can use the following command to view the PIDs of all processes in the system:

ps -ef

In the Windows system, you can use the task manager to view the processes running in the current system, and the task manager will display the PID and other information of each process. In addition, you can also use the command-line tool tasklist to view the PIDs of all processes in the system, for example:

tasklist

The PID of a process plays a very important role in the operating system: it is used to manage processes, perform inter-process communication, and so on. When one process needs to communicate with another, the PID is commonly used to identify the peer. A PID is therefore unique among the processes running at any given moment (the operating system may reuse the value after a process exits), which guarantees that live processes can be distinguished from one another.

57. Port?

In a computer network, a port (Port) is a number used to identify an application program or a network service, and it is used together with an IP address to uniquely identify a network communication process on a computer. Specifically, the port number is a 16-bit unsigned integer (value range is 0~65535), some of which have been standardized for use by specific applications or services, and these standardized port numbers are called "Well-known Ports".

Here are some common port numbers and their purpose:

20, 21: FTP data transfer and control ports
22: SSH remote login
23: Telnet remote login
25: SMTP mail transfer
53: DNS domain name resolution
80: HTTP, used by web browsers to access web pages
110: POP3 mail retrieval
143: IMAP mail retrieval
443: HTTPS, used by web browsers to access web pages securely
3306: MySQL database
3389: Windows Remote Desktop Protocol
It should be noted that the port number only identifies the communicating application or service; it is not tied to a physical device. On one computer, multiple processes can listen on the same port number at the same time as long as they bind to different local IP addresses. In addition, a port number can be mapped to different physical devices or different processes by configuring port forwarding on network devices.

58. Stop-and-wait protocol?

The Stop-and-Wait Protocol is a simple data transmission protocol used to achieve reliable delivery over a communication channel. It is based on an acknowledgment-and-retransmission mechanism and is often used on simple links such as serial or infrared communication.

The basic idea of ​​the stop-waiting protocol is: when the sender sends data, it waits for the receiver to send a confirmation message, and then sends the next piece of data after receiving the confirmation message; when the receiver receives data, it sends a confirmation message, and waits for the sender to retransmit the lost data. Specifically, the data transmission process of the stop-and-wait protocol is as follows:

  1. The sender divides the data into several parts, each with a serial number, and sends the first part of data;
  2. After receiving the first piece of data, the receiver sends a confirmation message to the sender and waits for the next piece of data;
  3. After receiving the confirmation message, the sender sends the next piece of data; steps 2-3 repeat until all the data has been sent;
  4. When receiving data, if the receiver finds that there is a data packet loss, it sends a retransmission request to the sender, asking the sender to retransmit the lost data packet;
  5. After receiving the retransmission request, the sender resends the lost data packet and waits for the receiver's confirmation message;
  6. After receiving the retransmitted data packet, the receiver sends an acknowledgment message to the sender and waits for the next piece of data.

The stop-wait protocol is simple and easy to understand, but its main problem is inefficiency. In the stop-and-wait protocol, the sender has to wait for the receiver's acknowledgment message, which causes the sending speed of the sender to be greatly reduced. In addition, when problems such as packet loss or delay occur in the network, stopping to wait for the retransmission mechanism of the protocol will cause the sender to continuously retransmit data packets, wasting network bandwidth and computing resources. Therefore, some protocols with higher efficiency and reliability are more commonly used in practice, such as GBN and SR protocols.
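The retransmit-until-acknowledged loop can be simulated in a few lines. Here the channel drops a frame (or its ACK) with some probability, and the sender simply resends after a "timeout"; the loss rate and seed are arbitrary:

```python
import random

def stop_and_wait(frames, loss_rate=0.3, seed=42):
    """Simulate stop-and-wait over a channel that drops frames at random."""
    random.seed(seed)
    delivered, transmissions = [], 0
    for seq, frame in enumerate(frames):
        while True:
            transmissions += 1
            if random.random() >= loss_rate:        # frame and its ACK got through
                delivered.append((seq % 2, frame))  # 1-bit alternating sequence number
                break
            # otherwise: the timer expires and the same frame is retransmitted

    return delivered, transmissions

delivered, tx = stop_and_wait(["A", "B", "C"])
print([f for _, f in delivered], tx)   # all frames arrive in order; tx >= 3
```

The inefficiency the text mentions shows up directly: every loss costs a full timeout plus a retransmission, and only one frame is ever in flight.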

59. TCP packet header format?


The meaning of each field is as follows:

  • Source Port and Destination Port: Port numbers used to identify the source host and destination host respectively, occupying 2 bytes.

  • Sequence Number: It is used to identify the sequence number of the first data byte in the TCP segment, occupying 4 bytes.

  • Acknowledgment Number (Acknowledgment Number): used to identify the sequence number of the next data byte expected to be received, occupying 4 bytes.
    - The acknowledgment number is only valid if ACK = 1. The acknowledgment number is invalid when ACK = 0.

  • Data Offset: indicates where the data portion of the segment begins, i.e. the length of the TCP header, measured in units of 4 bytes; the field occupies 4 bits.

  • Reserved bit (Reserved): not used yet, occupying 6 bits.

  • Flags (Flags): The control bits of the TCP segment, including six flag bits of URG, ACK, PSH, RST, SYN, and FIN, each occupying 1 bit.
    - URG (urgent): When URG = 1, it indicates that the urgent pointer field is valid.
    - ACK (acknowledgement): The acknowledgment number is only valid when ACK = 1. The acknowledgment number is invalid when ACK = 0.
    - PSH (push): The sender TCP sets PSH = 1, which is used to inform the receiver that the data should be delivered to the upper application immediately without waiting for the buffer to fill up.
    - RST (reset): When RST = 1, it indicates that a serious error has occurred in the TCP connection and the connection must be released.
    - SYN (synchronize): SYN = 1 and ACK = 0 indicates a connection request; SYN = 1 and ACK = 1 indicates that the connection request is accepted.
    - FIN (finish): when FIN = 1, the sender of this segment has finished sending its data and requests that the connection be released.

  • Window size (Window): tells the other side how much data the sender of this segment is currently prepared to receive (its free receive-buffer space), occupying 2 bytes.

  • Checksum (Checksum): used to check whether the TCP segment is damaged, occupying 2 bytes.

  • Urgent Pointer: Indicates the offset of the end position of the urgent data relative to the serial number, occupying 2 bytes.

  • Options (Options): TCP options, optional fields, the length is not fixed.

  • Padding: pads the options so that the total header length is a multiple of 4 bytes; the length is not fixed.

  • Data (Data): the payload of the TCP segment; its length is not fixed and is limited by the MSS (maximum segment size) negotiated for the connection.

The fixed part of the header described above (everything before the options) occupies 20 bytes.
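The 20-byte fixed header maps directly onto a struct layout; this sketch decodes the fields listed above from raw bytes (the sample segment is a made-up SYN from port 1234 to port 80):

```python
import struct

def parse_tcp_header(segment: bytes) -> dict:
    """Decode the 20-byte fixed part of a TCP header."""
    (src, dst, seq, ack, off_flags, window,
     cksum, urg) = struct.unpack("!HHLLHHHH", segment[:20])
    return {
        "src_port": src, "dst_port": dst,
        "seq": seq, "ack": ack,
        "data_offset": (off_flags >> 12) * 4,       # header length in bytes
        "flags": {name: bool(off_flags & bit) for name, bit in
                  [("URG", 32), ("ACK", 16), ("PSH", 8),
                   ("RST", 4), ("SYN", 2), ("FIN", 1)]},
        "window": window, "checksum": cksum, "urgent_ptr": urg,
    }

# Hypothetical SYN segment: ports 1234 → 80, seq 1000, data offset 5 (20 B), SYN set
raw = struct.pack("!HHLLHHHH", 1234, 80, 1000, 0, (5 << 12) | 2, 65535, 0, 0)
h = parse_tcp_header(raw)
print(h["flags"]["SYN"], h["data_offset"])   # → True 20
```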

60. Sliding windows?

Sliding Window is a flow-control technique used in many data transmission protocols. It controls the sending and receiving of the data stream between the communicating parties so that data is transmitted through the network in order and can be processed correctly.

In the TCP protocol, the sliding window underpins the "reliable transmission" mechanism. The sender splits a continuous stream of data into segments; the receiver accepts the segments in order and acknowledges them, and acknowledged segments are removed from the window. The sender may slide its window forward and transmit new segments only after receiving the receiver's acknowledgments. Through those acknowledgments the receiver also paces the sender, controlling the rate at which data is sent and ensuring the reliability and smoothness of the transmission.

Sliding windows can also be used to control network latency and bandwidth utilization. By changing the window size, the number of packets and transmission speed can be controlled. If the window size is set too large, the network bandwidth may be overused, resulting in increased network congestion and delay; if it is set too small, the network bandwidth cannot be fully utilized, resulting in reduced transmission efficiency.

Sliding-window technology is a core feature of the TCP protocol and makes data transmission more efficient and reliable. It is also used in other transport protocols, such as SCTP and QUIC.
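The window-sliding behaviour can be traced with a toy sender: with a window of 3, three segments are in flight at first, and each cumulative ACK admits one new segment (a simplification that ignores loss and assumes one ACK per segment):

```python
def sliding_window_send(data, window_size):
    """Trace which segments are in flight as cumulative ACKs slide the window."""
    base, trace = 0, []
    while base < len(data):
        # the window permits segments [base, base + window_size)
        next_seq = min(base + window_size, len(data))
        trace.append(list(range(base, next_seq)))   # segments currently in flight
        base += 1        # the ACK for segment `base` arrives; the window slides by one
    return trace

trace = sliding_window_send(list("ABCDE"), window_size=3)
for in_flight in trace:
    print(in_flight)
# → [0, 1, 2], [1, 2, 3], [2, 3, 4], [3, 4], [4]
```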

61. RTO calculation? (clear explanations are hard to find online)

RTO (Retransmission Timeout) is an important parameter in the TCP protocol. It is used to calculate the retransmission time after the data packet is lost to ensure that the data packet can be received correctly. The calculation process of RTO is as follows:

  • Initialize RTO: when a TCP connection is established, RTO is initialized to a conservative default. Historically this was 3 seconds; RFC 6298 recommends an initial value of 1 second, and the exact value varies between operating systems.

  • Calculate RTT: When TCP sends a data packet, it waits for the receiver to return an acknowledgment message. The sender will record the timestamp when the data packet was sent. When the confirmation message is received, the sender will record the timestamp of the confirmation message and calculate the RTT (Round Trip Time), which is the time difference from sending to receiving the data packet.

  • Calculate SRTT: In order to reduce the jitter of RTT, TCP uses a weighted average to calculate SRTT (Smoothed Round Trip Time). SRTT is the average value of RTT, which is calculated as:
    SRTT = (1 - α) * SRTT + α * RTT
    where α is a value between 0 and 1, usually 0.125, meaning the new RTT sample carries 12.5% of the weight and the old SRTT carries 87.5%.

  • Calculate RTTVAR: In order to further reduce the jitter of RTT, TCP uses a weighted average to calculate RTTVAR (Round Trip Time VARiance), which is the variance of RTT. It is calculated as:
    RTTVAR = (1 - β) * RTTVAR + β * |SRTT - RTT|
    where β is a value between 0 and 1, usually 0.25, indicating that the difference between the new RTT and SRTT accounts for 25% of the weight, and the old RTTVAR accounts for 75% of the weight. This difference uses an absolute value, since the jitter of the RTT can be positive or negative.

  • Calculate RTO: Finally, calculate RTO based on SRTT and RTTVAR. TCP uses the following formula:
    RTO = SRTT + 4 * RTTVAR

This formula calculates a safe RTO value, which can ensure that the packet can be successfully transmitted when the transmission time in the network changes. If a packet is not acknowledged within the RTO time, TCP will consider it lost and retransmit it.

It should be noted that the RTO calculation is dynamic: each update builds on the previous SRTT and RTTVAR. Also, different TCP implementations may use different values of α and β, resulting in slightly different RTO values.
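The SRTT/RTTVAR/RTO formulas above translate directly into code; this estimator uses the α and β values from the text and initializes RTTVAR to RTT/2 on the first sample, as RFC 6298 specifies:

```python
class RtoEstimator:
    """RTO estimation using the formulas above (constants from RFC 6298)."""
    def __init__(self, alpha=0.125, beta=0.25):
        self.alpha, self.beta = alpha, beta
        self.srtt = self.rttvar = None

    def sample(self, rtt):
        if self.srtt is None:                     # first RTT measurement
            self.srtt, self.rttvar = rtt, rtt / 2
        else:
            # RTTVAR must be updated first, using the *old* SRTT
            self.rttvar = (1 - self.beta) * self.rttvar \
                          + self.beta * abs(self.srtt - rtt)
            self.srtt = (1 - self.alpha) * self.srtt + self.alpha * rtt
        return self.srtt + 4 * self.rttvar        # the RTO for the next segment

est = RtoEstimator()
r1 = est.sample(0.100)    # 0.1 + 4 * 0.05 = 0.3 s
r2 = est.sample(0.120)    # the 20 ms jitter nudges RTTVAR; RTO ≈ 0.2725 s
print(round(r1, 4), round(r2, 4))
```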

62. TCP flow control?

Sliding window is a mechanism to realize flow control in TCP protocol. It controls the sending and receiving of data by dynamically adjusting the window size of the sender and receiver. Specifically, the sender and the receiver maintain two windows: the sending window and the receiving window, which are used to control the data flow sent and received.

The size of the send window depends on the receiver's receive window size and network conditions. When the receiving window size of the receiver becomes smaller or the network condition becomes worse, the sending window of the sender will also be reduced correspondingly to avoid packet loss and congestion. The size of the sending window is determined by the following three factors:

  • The sender's own limit: from its sending capacity and an estimate of network conditions, the sender bounds its window so that packets are not lost to congestion.
  • The receiver's advertised window: in the window field of its segments, the receiver tells the sender how much data it can currently accept. The sender must keep the amount of in-flight data within this window to avoid overflowing the receiver's buffer.
  • The amount of data sent but not yet acknowledged: the sender tracks its unacknowledged data so that transmission stays in order and retransmissions remain bounded.

The size of the receive window depends on the receiver's buffer size and the number of received but unprocessed packets. The size of the receive window is determined by a combination of the following two factors:

  • Receiver buffer size: The receiver needs to allocate a certain buffer to store the received data packets, and the size of the receiving window cannot exceed the size of the buffer.
  • The number of data packets that have been received but not yet processed: the receiver needs to keep a certain number of unprocessed data packets to ensure that the data packets can be processed in order and avoid loss.

Through the sliding window mechanism, the TCP protocol can achieve reliable data transmission and flow control to ensure that data packets can be transmitted correctly and avoid network congestion.
Problem: deadlock. If the receiver advertises a zero window and the segment that later reopens the window is lost, the sender keeps waiting for a non-zero window notification while the receiver keeps waiting for data, and both sides wait forever.
Solution: persistence timer. Whenever the sender receives a zero-window notification it starts a persistence timer; when the timer expires, the sender transmits a small zero-window probe segment, which forces the receiver to re-announce its current window size.


63. Silly window syndrome?

Silly Window Syndrome refers to the situation in TCP where the receiver's window is very small and the sender transmits very small segments: the sender ends up sending a large number of tiny packets, causing network congestion and performance degradation. Specifically, the conditions under which silly window syndrome occurs are as follows:

  • The receiver window is very small and can only accommodate a few bytes of data;
  • The sender only sends a few bytes of data at a time, because the sender's application only transmits a small amount of data;
  • The sender cannot send more data before waiting for the receiver to send an acknowledgment message, because the sender's send window has been reduced to a very small value.

In this case, the sender sends very small packets and there are many of them, which leads to network congestion and performance degradation. This is because routers and switches in the network need to process more data packets, and each data packet needs to consume certain bandwidth and network resources.

To avoid the confused window syndrome, the following solutions can be adopted:

  • Flow control is adopted between the sender and receiver so that the amount of data sent by the sender will not exceed the window size of the receiver;
  • Increase the amount of data sent by the sender, such as combining multiple small data packets into one large data packet through the Nagle algorithm, or using delayed acknowledgment technology, so that the sender can send more data;
  • Adjust the size of the TCP window so that more data can be transmitted between the sender and receiver while avoiding too many small packets causing network congestion.
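The Nagle-style remedy mentioned above, coalescing small writes while earlier data is still unacknowledged, can be sketched like this (the 10-byte MSS and the buffer class are purely illustrative):

```python
class NagleBuffer:
    """Sketch of Nagle-style coalescing: hold small writes while data is unacked."""
    def __init__(self, mss=536):
        self.mss = mss
        self.pending = b""
        self.unacked = False
        self.sent = []                 # segments actually put on the wire

    def write(self, data: bytes):
        self.pending += data
        # full segments always go out immediately
        while len(self.pending) >= self.mss:
            self._send(self.pending[:self.mss])
            self.pending = self.pending[self.mss:]
        # a small segment goes out only if nothing is in flight
        if self.pending and not self.unacked:
            self._send(self.pending)
            self.pending = b""

    def _send(self, segment):
        self.sent.append(segment)
        self.unacked = True

    def ack(self):                     # an ACK releases any buffered bytes
        self.unacked = False
        if self.pending:
            self._send(self.pending)
            self.pending = b""

buf = NagleBuffer(mss=10)
buf.write(b"a")                    # sent at once: nothing was in flight
buf.write(b"b"); buf.write(b"c")   # coalesced while waiting for the ACK
buf.ack()                          # the ACK flushes b"bc" as one segment
print(buf.sent)                    # → [b'a', b'bc']
```

Three application writes become two segments instead of three tiny ones, which is exactly the small-packet reduction the text describes.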

64. TCP congestion control?

The congestion control of the TCP protocol is a technology used to adjust the sending rate of the sender, with the purpose of preventing network congestion and avoiding network performance degradation. TCP congestion control mainly has the following aspects:

  • Slow start: when TCP establishes a connection or recovers after a timeout, the sender starts from a small initial congestion window and increases it by one MSS for every acknowledgment received, which doubles the window every round-trip time, so the sending rate grows exponentially. If a timeout occurs, the slow-start threshold is set to half the current window, the congestion window is reset to its initial value, and slow start begins again.
  • Congestion avoidance: once the congestion window reaches the slow-start threshold, the congestion-avoidance algorithm takes over: the window grows by only one MSS (maximum segment size) per round-trip time, increasing the sending rate linearly while the sender watches for signs of congestion; if congestion is detected, the window is reduced.
  • Fast retransmit: if the sender receives three duplicate acknowledgments for the same sequence number, it concludes that the following segment was lost and retransmits it immediately, without waiting for the retransmission timer to expire.
  • Fast recovery: used together with fast retransmit. When loss is detected by duplicate ACKs rather than a timeout, the sender halves the slow-start threshold, sets the congestion window to the new threshold, and continues in congestion avoidance instead of falling back to slow start.
  • Timeout retransmission: if the sender waits longer than the RTO for an acknowledgment, the segment is considered lost and is retransmitted immediately.

Through these congestion control technologies, the TCP protocol can adaptively adjust the sending rate to avoid network congestion and performance degradation, thereby ensuring reliable data transmission.
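Putting slow start, congestion avoidance, and Reno-style fast recovery together, a per-RTT trace of the congestion window looks like this (a simplified model: the window is counted in MSS units and one loss event is scripted):

```python
def cwnd_trace(rounds, ssthresh=16, loss_at=frozenset({8})):
    """Per-RTT congestion window: slow start, congestion avoidance, one loss."""
    cwnd, trace = 1, []
    for rtt in range(rounds):
        trace.append(cwnd)
        if rtt in loss_at:                     # loss detected via 3 duplicate ACKs:
            ssthresh = max(cwnd // 2, 2)       # halve the threshold,
            cwnd = ssthresh                    # fast recovery (TCP Reno style)
        elif cwnd < ssthresh:
            cwnd *= 2                          # slow start: double every RTT
        else:
            cwnd += 1                          # congestion avoidance: +1 MSS per RTT
    return trace

t = cwnd_trace(12)
print(t)
# → [1, 2, 4, 8, 16, 17, 18, 19, 20, 10, 11, 12]
# exponential growth, then linear growth, then the window halves at the loss
```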

What is the difference between congestion control and flow control?

  • Congestion control:
    1. Prevent too much data from being injected into the network, and avoid overloading of routers or links in the network
    2. It is a global process involving all hosts, routers, and all factors related to reducing network transmission performance.
  • Flow control
    1. Suppress the rate at which the sender sends data so that the receiver can receive it in time
    2. The control of point-to-point traffic is an end-to-end problem

65. TCP's three-way handshake and four-way wave?

It’s best to look at: Explain the TCP three-way handshake in simple terms (multiple pictures and detailed explanations)
Simply put:

  • The three-way handshake of TCP is the process of establishing a TCP connection. The client sends a SYN request, the server sends a SYN+ACK response, and the client sends an ACK confirmation.
  • The four waves of TCP are the process of terminating the TCP connection. The client sends a FIN request, the server sends an ACK confirmation, the server sends a FIN request, and the client sends an ACK confirmation.

In detail:

The three-way handshake is the process of TCP establishing a connection. During this process, the following steps are carried out between the client and the server:

  1. The client sends a SYN segment to the server, requesting to establish a connection. SYN stands for synchronize; the segment carries a randomly chosen Initial Sequence Number (ISN).
  2. After the server receives the SYN message segment, it sends a SYN+ACK message segment to the client to confirm receipt of the request and request to establish a connection. In the SYN+ACK message segment, ACK means to confirm the receipt of the SYN message segment from the client, and SYN means that the server also sent a random ISN.
  3. After receiving the SYN+ACK message segment from the server, the client sends an ACK message segment to the server to confirm the connection establishment. In the ACK message segment, ACK means to confirm receipt of the SYN+ACK message segment of the server, and at the same time, the confirmation number is set to the ISN+1 of the server, and the sequence number is set to the ISN+1 of the client.

The four-way wave is the process of closing a TCP connection. During this process, the following steps are carried out between the client and the server:

  1. The client sends a FIN segment to the server, requesting to close the connection. FIN indicates that the sender has sent the data and requests to close the connection.
  2. After receiving the FIN message segment, the server sends an ACK message segment to the client to confirm receipt of the request. At this time, the server will not close the connection immediately, but continues to wait for the data transmission.
  3. The server sends a FIN segment, requesting to close the connection. At this point, the server has already sent the data and requests to close the connection.
  4. After receiving the FIN segment from the server, the client sends an ACK segment to the server, confirming the close. The client does not close immediately but waits for a period of time (the TIME-WAIT state) in case its final ACK is lost or late data is still in transit. Once the server receives the ACK segment, it closes the connection.

During the four-way wave, both the client and the server need to send a FIN segment and an ACK segment to confirm each other's request and response. Eventually, both ends will close the connection.
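The sequence/acknowledgment bookkeeping of the three-way handshake can be written down as the three segments themselves (the ISNs here are arbitrary example values):

```python
def three_way_handshake(client_isn=1000, server_isn=5000):
    """The segment exchange that establishes a TCP connection (a sketch)."""
    msgs = []
    # 1. client → server: SYN, carrying the client's ISN
    msgs.append(("C→S", {"SYN": 1, "seq": client_isn}))
    # 2. server → client: SYN+ACK, carrying the server's ISN and ack = client ISN + 1
    msgs.append(("S→C", {"SYN": 1, "ACK": 1,
                         "seq": server_isn, "ack": client_isn + 1}))
    # 3. client → server: ACK with seq = client ISN + 1 and ack = server ISN + 1
    msgs.append(("C→S", {"ACK": 1,
                         "seq": client_isn + 1, "ack": server_isn + 1}))
    return msgs

for direction, seg in three_way_handshake():
    print(direction, seg)
# the final ACK carries seq = client ISN + 1 and ack = server ISN + 1
```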

66. Application layer role?

The application layer is the highest layer in the OSI model, responsible for communication and data exchange between applications. It provides some protocols and services for applications, so that applications can conveniently carry out data transmission and communication.

There are many application layer protocols, such as HTTP, FTP, SMTP, POP3, and SSH. These protocols not only stipulate the format and content of data transmission, but also stipulate the method and communication process of data exchange.

Application layer services include identity authentication, authorization, session management, data encryption, etc. These services provide security and reliability guarantees for applications, enabling applications to perform secure data transmission and communication in the network.

The application layer also provides application programming interfaces (APIs), which can be used by applications to access the underlying network protocol stack to implement data transmission and communication functions. Common application programming interfaces include Socket, Winsock, POSIX, etc.

Generally speaking, the application layer is responsible for data exchange and communication between applications, and provides some protocols and services, so that applications can conveniently carry out data transmission and communication.

67. DNS? Domain name structure? What is the difference between an area and a domain?

DNS (Domain Name System) is a distributed naming system, which is used to convert domain names into IP addresses. The main role of DNS is to convert human-readable domain names into computer-readable IP addresses for network communication.

A domain name structure usually consists of multiple parts, each separated by a dot (.). Take www.example.com as an example, where ".com" is the top-level domain name (TLD, Top-Level Domain), ".example" is the second-level domain name (SLD, Second-Level Domain), and "www" is the hostname (Hostname).

DNS divides the domain name into multiple zones (Zone), and each zone has an authoritative DNS server (Authoritative DNS Server) to manage domain name resolution in the zone. Each domain name can only be managed by one authoritative DNS server.

A domain usually contains multiple subdomains, and each subdomain can have its own authoritative DNS server. The authoritative DNS server is responsible for managing domain name resolution in the subdomain, and can delegate the management authority of the subdomain to other DNS servers through DNS delegation.

The difference between a zone and a domain is that a zone is a part of the domain name space managed by a DNS server, while a domain is a complete domain name space composed of multiple zones. A domain usually consists of multiple zones, and each zone has an authoritative DNS server to manage domain name resolution in the zone.

68. Domain name server type? Recursive query vs iterative query?

Domain name servers are mainly divided into the following three types:

  • Root DNS Server: there are 13 root server addresses in the world. Their main function is to answer queries from local DNS servers by returning the addresses of the appropriate top-level domain name servers.
  • Top-Level DNS Server: responsible for managing the top-level domains, such as ".com", ".org", ".cn", etc. Their main function is to answer queries by returning the address of the next-level (authoritative) domain name server.
  • Authoritative DNS Server: responsible for managing all hostname resolutions under a specific domain name. Their main function is to answer queries by returning the IP address of the queried hostname.

In DNS query, recursive query and iterative query are two different query methods.

  • Recursive query means that the queried DNS server takes full responsibility for producing the answer. When the DNS client sends a query to the local DNS server and the local server cannot answer from its cache, the local server (now acting as a client itself) queries other DNS servers on the client's behalf until the final answer is obtained, and then returns the queried domain name and its corresponding IP address to the DNS client.

Suppose a user wants to visit the website www.example.com,

  1. First, the user's computer sends a query to the local DNS server, which checks whether the answer is already in its own cache. On a miss, the local DNS server forwards the query to a root domain name server.
  2. In a purely recursive resolution, the root server would itself query the com top-level domain server on the local server's behalf, and the com server would in turn query the authoritative DNS server of the example.com domain.
  3. The answer, the IP address of www.example.com, then propagates back along the same chain: authoritative server → com server → root server → local DNS server, and finally to the user's computer.

During a recursive query, each server that cannot answer forwards the query to the next server itself, and is responsible for obtaining the required information and returning it to whoever asked it, until the IP address of the target host is found or the query fails.

Iterative query means that the queried server does not chase the answer itself; it simply replies with the best referral it has. When the local DNS server queries a root domain name server iteratively, the root server returns the address of the appropriate top-level domain name server; the local DNS server then queries that server itself, and so on down the hierarchy, until it obtains the IP address and returns the result to the DNS client.

Suppose a user wants to visit the website www.example.com,

  1. First, the user's computer sends a query to the local DNS server, which checks whether the answer is already in its own cache. On a miss, the local DNS server sends a query to a root domain name server.
  2. The root server does not answer directly; it returns the address of the com top-level domain name server. The local DNS server then queries the com server, which returns the address of the authoritative DNS server of the example.com domain.
  3. The local DNS server queries the authoritative DNS server of the example.com domain, which returns the IP address of www.example.com. The local DNS server caches the result and returns this IP address to the user's computer.

In an iterative query, a DNS server does not forward the query onward on the querier's behalf; it directly returns a referral, that is, the address of the next server to ask, and the querying party must itself send the next query to that server. The querier, usually the local DNS server, follows the chain of referrals step by step.

In short, in a recursive query the queried server obtains the final answer on the querier's behalf, while in an iterative query the querier follows referrals and sends each successive query itself. In practice, the query from a host to its local DNS server is usually recursive, while the local DNS server queries the root, top-level, and authoritative servers iteratively.
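The referral-following behavior of iterative resolution can be sketched with a toy in-memory model (an illustration only: the server names, the referral table, and the answer address are invented for this example, not real DNS data):

```python
# Toy model of iterative DNS resolution. Each "server" either knows
# the final answer or returns a referral to the next server to ask.
SERVERS = {
    "root":         {"referral": {"com": "tld-com"}},
    "tld-com":      {"referral": {"example.com": "auth-example"}},
    "auth-example": {"answer": {"www.example.com": "93.184.216.34"}},
}

def iterative_resolve(name, server="root"):
    """Follow referrals ourselves, as a local DNS server would."""
    while True:
        record = SERVERS[server]
        if name in record.get("answer", {}):
            return record["answer"][name]        # authoritative answer
        # Otherwise find a matching suffix and follow the referral.
        for suffix, next_server in record.get("referral", {}).items():
            if name.endswith(suffix):
                server = next_server             # we send the next query ourselves
                break
        else:
            raise LookupError(f"cannot resolve {name}")

print(iterative_resolve("www.example.com"))  # 93.184.216.34
```

In a recursive version, each server would call the next server itself and pass the answer back up the chain; here the single loop in `iterative_resolve` plays the role of the local DNS server doing all the asking.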

69. FTP, TFTP, NFS and TELNET?

FTP, TFTP, NFS, and TELNET are commonly used network protocols. The details are as follows:

  1. FTP (File Transfer Protocol): a protocol for transferring files over the network, with two components: an FTP client, the application used to upload or download files, and an FTP server, the computer that stores files and provides file access services. FTP runs over TCP, using port 21 for the control connection (and, in active mode, port 20 for the data connection).
  2. TFTP (Trivial File Transfer Protocol): a simple file transfer protocol that transmits data over UDP port 69. TFTP is simple and lightweight but far less capable than FTP; it is typically used for upgrading and backing up network devices.
  3. NFS (Network File System): a distributed file system that allows computers on a network to share files and directories. The NFS server exports files to clients, which can then access the shared files as if they were local. NFS commonly uses port 2049, over TCP or UDP depending on the version.
  4. TELNET: a remote login protocol that allows a user to connect from one computer to another and operate on the remote machine. TELNET transmits data over TCP port 23. It has largely been replaced by SSH (Secure Shell), because TELNET sends everything, including passwords, in clear text and is therefore insecure.
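The transport protocols and well-known ports above can be summarized in a small lookup table (a memorization aid restating the list, nothing more):

```python
# Well-known transport protocol and port for each application protocol above.
WELL_KNOWN = {
    "FTP":    ("TCP", 21),       # control connection; active-mode data uses port 20
    "TFTP":   ("UDP", 69),
    "NFS":    ("TCP/UDP", 2049),
    "TELNET": ("TCP", 23),
}

for proto, (transport, port) in WELL_KNOWN.items():
    print(f"{proto:6} -> {transport} port {port}")
```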

70. HTTP?

HTTP (Hypertext Transfer Protocol) is a protocol for transferring hypermedia documents such as HTML. It works based on the request-response pattern, where the client sends a request and the server returns a response.

An HTTP message is data transmitted in the HTTP protocol, which consists of two parts: a request message and a response message. A request message is sent from a client to a server, requesting the server to perform an operation, such as fetching a resource or submitting a form. The response message is sent by the server back to the client, containing the result of the request or any errors that occurred.

An HTTP message consists of a start line, message headers, and an optional message body. In a request, the start line (the request line) carries the method, resource path, and protocol version; in a response, the start line (the status line) carries the status code. The headers carry metadata such as date and time, content type, and message body length, and the message body is the actual data content of the request or response.

There are multiple versions of the HTTP protocol, the common ones are HTTP/1.0, HTTP/1.1 and HTTP/2. HTTP/1.1, currently the most widely used version, supports persistent connections and pipelining, which enables clients to send multiple requests and receive multiple responses on a single connection.

The HTTP protocol uses the TCP protocol as its transport layer protocol, so it is a reliable protocol that can ensure reliable transmission of data. The HTTP protocol also supports encrypted transmission, and the HTTPS protocol can be used to protect the privacy and integrity of data.

Overall, the HTTP protocol is one of the most important protocols on the Internet, supporting a variety of different applications, including web browsers, web services, APIs, and more. Understanding the HTTP protocol is fundamental to web development and network communication.

The following is an example of an HTTP request message:

GET /index.html HTTP/1.1
Host: www.example.com
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:58.0) Gecko/20100101 Firefox/58.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Connection: keep-alive
Upgrade-Insecure-Requests: 1

The above message is a GET request, the requested resource path is /index.html, and the protocol version used is HTTP/1.1. In addition, it also contains some request header information, such as Host indicates the target host of the request, User-Agent indicates the browser type of the client, and Accept indicates the type of response that the client can accept.
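The structure described above, a start line followed by headers, can be verified with a short parser (a sketch that handles only the request line and headers of a body-less message like the GET example; the trimmed-down message below reuses lines from that example):

```python
# Parse the start line and headers of an HTTP request message.
raw = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: www.example.com\r\n"
    "Connection: keep-alive\r\n"
    "\r\n"
)

# The blank line (CRLF CRLF) separates the head from any message body.
head, _, body = raw.partition("\r\n\r\n")
request_line, *header_lines = head.split("\r\n")
method, path, version = request_line.split(" ")
headers = dict(line.split(": ", 1) for line in header_lines)

print(method, path, version)   # GET /index.html HTTP/1.1
print(headers["Host"])         # www.example.com
```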

The following is an example of an HTTP response message:

HTTP/1.1 200 OK
Content-Type: text/html; charset=UTF-8
Content-Length: 1234
Date: Mon, 01 Jan 2000 00:00:00 GMT
Server: Apache/2.4.29 (Unix)
Last-Modified: Sun, 31 Dec 1999 23:59:59 GMT
ETag: "1234567890abcdef"
Accept-Ranges: bytes
Connection: close

<!DOCTYPE html>
<html>
<head>
	<title>Example</title>
</head>
<body>
	<h1>Welcome to Example</h1>
	<p>This is an example HTML page.</p>
</body>
</html>

The above message is a 200 OK response, indicating that the request was successful. The response header contains some metadata, such as Content-Type indicates the type of the response content, Content-Length indicates the length of the response content, Date indicates the response time, etc. The response header and the response body are separated by a blank line. After the blank line is the specific content of the response. Here is a simple HTML page.

71. Briefly describe the main flow of the user entering http://acm.sdtbu.edu.cn in the browser and pressing Enter to the browser to display the page?

When the user enters the URL "http://acm.sdtbu.edu.cn" in the browser and presses Enter, the main process is as follows:

  1. Domain name resolution: The browser first resolves the domain name "acm.sdtbu.edu.cn" in the URL into the corresponding IP address. If there is no record of the domain name in the local DNS cache, a domain name resolution request is sent to the local DNS server.
  2. Establish a TCP connection: The browser sends a TCP connection request to the Web server, performs a three-way handshake, and establishes a reliable data transmission channel.
  3. Send HTTP request: The browser sends an HTTP request to the web server, requesting to obtain relevant resources of the website, such as HTML files, CSS style sheets, JavaScript scripts, etc.
  4. Server response: After receiving the request, the web server returns the corresponding HTTP response, which contains the information and data of the requested resource.
  5. Parsing and rendering the page: After the browser receives the HTTP response, it parses the HTML document, CSS style sheet and JavaScript script, and presents the webpage content to the user. The browser renders the webpage content into a visual page according to the structure of the HTML document, and loads and parses other resources in the page.
  6. Releasing the TCP connection: after the browser finishes loading the page (and the connection is no longer being kept alive), the browser and the web server release the TCP connection through the four-way handshake.

The whole process involves multiple steps such as DNS resolution, TCP connection, HTTP request and response, page parsing and rendering, and each step requires a certain amount of time and resource overhead, which affects page loading speed and user experience.
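Before step 1 can even begin, the browser must split the URL into its components to know what to resolve and what to request. A minimal sketch using Python's standard `urllib.parse` (the URL is the one from the question; the default-port and default-path fallbacks mirror normal browser behavior):

```python
from urllib.parse import urlsplit

# Split the URL into scheme, host, port, and path.
url = "http://acm.sdtbu.edu.cn"
parts = urlsplit(url)

host = parts.hostname    # acm.sdtbu.edu.cn -> the name DNS must resolve
port = parts.port or 80  # HTTP defaults to port 80 when none is given
path = parts.path or "/" # the resource named in the HTTP request line

print(parts.scheme, host, port, path)  # http acm.sdtbu.edu.cn 80 /
```

The `host` value feeds the DNS resolution of step 1, `port` is used for the TCP connection of step 2, and `path` appears in the request line of step 3.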

Origin blog.csdn.net/qq_25218219/article/details/127184759