Computer Networks, Eighth Edition - Answers to the Chapter 3 Exercises (Super Detailed)

Chapter 3

These answers were compiled by the blogger from material available online, and the typesetting took real effort. I hope everyone will like and support this post. It will continue to be updated (feel free to follow the blogger~)

Chapter 1 Answers

Chapter 2 Answers

【3-01】What is the difference between a data link (that is, a logical link) and a link (that is, a physical link)? What is the difference between "the link is connected" and "the data link is connected"?

Answer: A link is a physical line from one node to an adjacent node, with no other switching node in between. During data communication, the communication path between two computers often passes through many such links, so a link is only one component of a path. Saying that "the link is connected" means that the physical link is connected.
A data link is a different concept. When data are to be transmitted over a line, a physical line alone is not enough; some necessary communication protocols are also needed to control the transmission of the data. If the hardware and software that implement these protocols are added to the link, a data link is formed. The most common approach today is to use a network adapter (such as a dial-up adapter for dial-up Internet access, or a LAN adapter for Internet access via Ethernet) to implement these protocols in hardware and software. An adapter generally implements the functions of both the data link layer and the physical layer.
Other terminology is also used: links are divided into physical links and logical links. The physical link is the link described above, and the logical link is the data link described above, that is, a physical link plus the necessary communication protocols.

【3-02】What functions does the link control in the data link layer include? Try to discuss the advantages and disadvantages of making the data link layer a reliable link layer.

Answer: There are three main functions of link control: (1) encapsulation into frames; (2) transparent transmission; (3) error detection.
Making the data link layer a reliable link layer means that the communication over every link on the entire path from the source host to the destination host is reliable. The advantage is that a node in the network can discover a transmission error early, so the error can be corrected by retransmission at the data link layer. If the data link layer is not made reliable, then when a node in the network finds that a received frame is in error (this error-checking step is always needed, regardless of whether the data link layer is made reliable), it simply discards the erroneous frame and does not notify the sending node to retransmit it. Only when a higher-layer protocol of the destination host (for example, the transport-layer protocol TCP) discovers the error does it notify the source host to retransmit the erroneous data. By then it is already quite late, and more data may have to be retransmitted (including data that contained no errors), which wastes network resources.
However, sometimes the higher layer protocol uses the unreliable transport protocol UDP. UDP does not require retransmission of erroneous data. In this case, if the data link layer is made a reliable link layer, it will not bring more benefits in some cases (for example, when the upper layer transmits real-time audio or video signals). In other words, increasing reliability and sacrificing real-time performance is sometimes inappropriate.

【3-03】What is the function of network adapter? At which layer do network adapters work?

Answer: An adapter is also known as a network interface card, or "network card" for short. A processor and memory (including RAM and ROM) are installed on the adapter. Communication between the adapter and the local area network is serial, over a cable or twisted pair, while communication between the adapter and the computer is parallel, over the I/O bus on the computer motherboard. Therefore, an important function of the adapter is to convert between serial and parallel transmission. Since the data rate on the network is not the same as the data rate on the computer bus, the adapter must contain memory for buffering the data. When an adapter is installed on the motherboard, a device driver that manages the adapter must also be installed in the computer's operating system. This driver tells the adapter from which location in memory to take a data block of what length and send it to the local area network, or where in memory to store a data block received from the local area network. The adapter must also implement the Ethernet protocol.
The adapter receives and transmits frames without using the computer's CPU, which is then free to handle other tasks. When the adapter receives a frame with an error, it discards the frame without notifying the computer. When the adapter receives a correct frame, it uses an interrupt to notify the computer and delivers the frame to the network layer in the protocol stack. When the computer wants to send an IP datagram, the protocol stack hands the IP datagram down to the adapter, which assembles it into a frame and sends it onto the LAN.

【3-04】Why must the three basic problems of the data link layer (encapsulation into frames, transparent transmission and error detection) be solved?

Answer: Encapsulation into frames means adding a header and a trailer before and after a block of data (the header and trailer contain much necessary control information), thus forming a frame. After receiving the bit stream delivered by the physical layer, the receiver can identify the start and the end of each frame from the header and trailer markers.
"Transparent transmission" means that whatever bit combination the data handed down by the upper layer happens to be, it must be transmitted correctly. Since the frame start and end markers use specially designated control characters, no bit combination in the transmitted data may be allowed to look the same as the bit encoding of the control characters used for frame delimitation; otherwise frame delimitation errors will occur. The data link layer should impose no restrictions on the data to be transmitted, that is, it must not stipulate that certain bit combinations cannot be sent.
If the data link layer performed no error detection, then after the destination host receives data sent by other hosts and hands them to the upper layer, the upper-layer software of the destination host could check the received data itself (if the application requires that the received data be correct). If errors were found, the source host could be asked to retransmit the data, and correct reception would still be achieved. But this way of working has a serious drawback: data that were corrupted during transmission (note that these are useless data) continue to be forwarded through the network, which wastes network resources. For example, suppose there are 20 nodes on the path from the source host to the destination host, and the first node detects an error while forwarding the data. If the data link layer has an error-detection function, the erroneous frame can be discarded there and is not forwarded any further. Otherwise, this useless frame would continue to travel through the network, passing through the remaining 19 nodes one after another, wasting network resources.

【3-05】What will happen if encapsulation and framing are not performed at the data link layer?

Answer: If the data link layer did not encapsulate data into frames, then when it received some bits it would not be able to tell which part of what the other side transmitted is data and which is control information, nor could it even tell whether the data contain errors (because error detection could not be performed). The data link layer also could not know whether the data transmission has ended, so it would not know when the received data should be handed up to the upper layer.

【3-06】What are the main features of the PPP protocol? Why does PPP not use frame numbering? Under what conditions is PPP applicable? Why can the PPP protocol not make the data link layer provide reliable transmission?

Answer: The PPP protocol has the following characteristics.
(1) Simple: The PPP protocol is very simple. Every time the receiver receives a frame, it performs a CRC check. If the CRC check is correct, the frame will be accepted; otherwise, the frame will be discarded, and nothing else will be done. (2) Encapsulation into frames: The PPP protocol specifies special characters as frame delimiters, so that the receiver can accurately find out the start and end positions of the frame from the received bit stream.
(3) Transparency: The PPP protocol guarantees transparent data transmission. If the data happen to contain the same bit combination as the frame delimiter, PPP provides measures to solve this problem.
(4) Support for multiple network layer protocols: The PPP protocol allows multiple network layer protocols (such as IP and IPX) to run over the same physical link. When the point-to-point link connects to a LAN or a router, PPP must simultaneously support the various network layer protocols running on that LAN or router.
(5) Support for multiple types of links: PPP can run over many types of links, for example serial (one bit sent at a time) or parallel (several bits sent at a time), synchronous or asynchronous, low speed or high speed, electrical or optical, switched (dynamic) or non-switched (static) point-to-point links.
PPP does not use frame numbering, because frame numbering is for efficient retransmission in case of errors, and PPP does not require reliable transmission.
PPP is suitable for situations where the line quality is not too poor. If the quality of the communication line is too poor, transmission errors occur frequently. PPP has no numbering or acknowledgement mechanism, so it must rely on an upper-layer protocol (which has numbering and retransmission mechanisms) to ensure the correctness of the data; this reduces the efficiency of data transmission.

[3-07] The data to be sent is 1101011011. The CRC generator polynomial is P(X) = X⁴ + X + 1. Find the remainder that should be appended to the data. If the last 1 of the data becomes 0 during transmission, i.e. the data become 1101011010, can the receiver detect it? If the last two 1s of the data become 0 during transmission, i.e. the data become 1101011000, can the receiver detect it? After CRC checking is adopted, does transmission at the data link layer become reliable transmission?

Answer: The CRC generator polynomial is P(X) = X⁴ + X + 1, which in binary is P = 10011.
The divisor is 5 bits, so 4 zeros are appended to the data to obtain the dividend (as shown in Figure T-3-07(a)).
The remainder R from the division operation is the check sequence that should be appended to the data: 1110.
Now suppose the last 1 of the data becomes 0 during transmission, so the data become 1101011010. The receiver gets the data 1101011010 followed by the check sequence 1110 and performs the CRC check on the whole string (as shown in Figure T-3-07(b)).
It can be seen from Figure T-3-07(b) that the remainder R is not zero, so it is judged that the received data has an error. It can be seen that the CRC check here can find this error.
The last two 1s of the data to be sent become 0 during transmission, that is, 1101011000. Connect the check sequence 1110 after the data 1101011000, and the next step is to perform CRC check (as shown in Figure T-3-07(c)).
Now the remainder R is not zero, so it is judged that the received data has an error. It can be seen that the CRC check here can find this error.
After adopting CRC checking, transmission at the data link layer does not become reliable transmission. When the receiver performs the CRC check and finds an error, it simply discards the frame, so the data link layer cannot guarantee that what the receiver receives is exactly what the sender sent.
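The division above can also be reproduced with a short program. The following is a minimal Python sketch (not part of the textbook answer; the function names are arbitrary) of modulo-2 division. It recomputes the remainder for this problem and for problem 3-08 below, and checks the two corrupted bit strings:

```python
def mod2_div(bits_str: str, divisor_bits: str) -> str:
    """Modulo-2 (XOR) long division; returns the remainder (len(divisor) - 1 bits)."""
    n = len(divisor_bits) - 1
    bits = [int(b) for b in bits_str]
    div = [int(b) for b in divisor_bits]
    for i in range(len(bits) - n):           # slide the divisor across the dividend
        if bits[i] == 1:                     # only "subtract" where the leading bit is 1
            for j, d in enumerate(div):
                bits[i + j] ^= d             # modulo-2 subtraction is XOR
    return ''.join(str(b) for b in bits[-n:])

def crc_remainder(data_bits: str, divisor_bits: str) -> str:
    """Sender side: append len(divisor) - 1 zeros, then divide."""
    return mod2_div(data_bits + '0' * (len(divisor_bits) - 1), divisor_bits)

def crc_ok(received_bits: str, divisor_bits: str) -> bool:
    """Receiver side: the whole string (data + FCS) must leave a zero remainder."""
    return set(mod2_div(received_bits, divisor_bits)) == {'0'}

# Problem 3-07: P = 10011 (X^4 + X + 1)
print(crc_remainder('1101011011', '10011'))          # -> 1110
print(crc_ok('1101011010' + '1110', '10011'))        # -> False: the single-bit error is detected
print(crc_ok('1101011000' + '1110', '10011'))        # -> False: the double-bit error is detected
# Problem 3-08: P = 1001 (X^3 + 1)
print(crc_remainder('101110', '1001'))               # -> 011
```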

【3-08】The data to be sent is 101110. The CRC generator polynomial is P(X) = X³ + 1. Find the remainder that should be appended to the data.

Answer: The CRC generator polynomial is P(X) = X³ + 1, so the divisor in binary is P = 1001. The divisor is 4 bits, so 3 zeros are appended to the data.
After the CRC operation, the remainder R = 011 (as shown in Figure T-3-08).

Figure T-3-08 The division used to compute the CRC check sequence

[3-09] The data portion of a PPP frame (written in hexadecimal) is 7D5EFE277D5D7D5D657D5E. What is the real data (written in hexadecimal)?

Answer: Underline the 2-byte sequence starting with the escape character 7D:
7D 5E FE 27 7D 5D 7D 5D 65 7D 5E
7D 5E should be restored to 7E.
7D 5D should revert to 7D.
So the real data part is: 7E FE 27 7D 7D 65 7E
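For illustration only (not part of the textbook answer), the following Python sketch undoes PPP byte stuffing; in general the escape byte 7D is followed by the original byte XORed with 0x20, which gives exactly the two substitutions listed above:

```python
ESC = 0x7D   # PPP control-escape character

def ppp_unstuff(stuffed: bytes) -> bytes:
    """Undo PPP byte stuffing: 7D 5E -> 7E, 7D 5D -> 7D (generally 7D X -> X ^ 0x20)."""
    out = bytearray()
    i = 0
    while i < len(stuffed):
        if stuffed[i] == ESC:
            out.append(stuffed[i + 1] ^ 0x20)   # restore the escaped byte
            i += 2
        else:
            out.append(stuffed[i])
            i += 1
    return bytes(out)

received = bytes.fromhex('7D5EFE277D5D7D5D657D5E')
print(ppp_unstuff(received).hex(' ').upper())   # -> 7E FE 27 7D 7D 65 7E
```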

[3-10] The PPP protocol uses synchronous transmission to send the bit string 0110111111111100. What does the bit string become after zero-bit stuffing? If the data part of a PPP frame received by the receiver is 0001110111110111110110, what does the bit string become after the zero bits inserted by the sender are removed?

Answer: For the first bit string, 0110111111111100:
Zero-bit stuffing means that a 0 must be inserted after every run of five consecutive 1s.
After zero-bit stuffing it therefore becomes 011011111011111000 (the two inserted 0s follow the two runs of five 1s).
For the other bit string, 0001110111110111110110: deleting the zero bits added by the sender means deleting the 0 that follows each run of five consecutive 1s. After these stuffed zeros are removed, the result is 00011101111111111110.
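The stuffing and de-stuffing rule can be checked with a small script. This is just an illustrative sketch under the rule stated above (insert a 0 after every five consecutive 1s on sending; on receiving, drop the bit that follows five consecutive 1s in the data part):

```python
def bit_stuff(bits: str) -> str:
    """Sender: insert a 0 after every run of five consecutive 1s."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == '1' else 0
        if run == 5:
            out.append('0')   # the stuffed zero
            run = 0
    return ''.join(out)

def bit_unstuff(bits: str) -> str:
    """Receiver: delete the 0 that follows every run of five consecutive 1s."""
    out, run, drop_next = [], 0, False
    for b in bits:
        if drop_next:         # this bit is the zero stuffed by the sender
            drop_next, run = False, 0
            continue
        out.append(b)
        run = run + 1 if b == '1' else 0
        if run == 5:
            drop_next, run = True, 0
    return ''.join(out)

print(bit_stuff('0110111111111100'))          # -> 011011111011111000
print(bit_unstuff('0001110111110111110110'))  # -> 00011101111111111110
```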

【3-11】Try to discuss under what conditions the following situations are transparent transmission and under what conditions it is not transparent transmission. (Hint: Find out what "transparency" is, and consider whether you can meet its conditions.)

(1) Ordinary telephone communication.
(2) Email services provided by the Internet.
Answer: The two cases are analyzed as follows.
(1) Because of the limited bandwidth and the distortion of the telephone system, the sound waves fed into one end of the telephone and those coming out of the other end are different. In the sense of "transmitting sound waves", ordinary telephone communication is therefore not transparent transmission. From the perspective of "understanding the meaning of the speech", however, it is basically transparent. Even so, individual words can be misheard; for example, over the phone the single digit 1 may be mistaken for 7. If one party says "1" and the other party hears "7", then the transmission is not transparent.
(2) Generally speaking, e-mail is transmitted transparently, but not always. To prevent spam, some mail servers abroad block all mail from certain domain names (such as .cn); that is not transparent transmission. Some e-mail attachments cannot be opened on the recipient's computer; that is not transparent transmission either.

【3-12】What are the working states of the PPP protocol? When the user wants to use the PPP protocol to establish a connection with the ISP for communication, what types of connections need to be established? What problem does each connection solve?

Answer: The PPP protocol has six working states, and the relationships among these states are shown in Figure T-3-12.
When the user wants to use the PPP protocol to establish a connection with the ISP for communication, two connections need to be established.
The first type of connection is a physical layer connection, see the process from "link static" to "link establishment" in Figure T-3-12. We know that the data link layer connection above can be established only when the physical layer connection (that is, the physical layer link) is established.
The second type of connection is a data link layer connection, that is, an LCP link is established. The user PC sends a series of LCP packets (encapsulated in several PPP frames) to the ISP in order to establish the LCP connection. LCP then negotiates some configuration options, which include the maximum frame length on the link, the authentication protocol to be used (if any), and whether to omit the address and control fields of the PPP frame (the values of these two fields are fixed and carry no information, so they can be left out of the PPP header). After the negotiation, the two sides have established the LCP link and enter the "authentication" state, in which the party initiating the communication sends its identity and password (the system may allow the user several retries). If authentication succeeds, the link enters the "network layer protocol" state.
In the "network layer protocol" state, the network control protocol NCP at each end of the PPP link exchanges network-layer-specific control packets according to the network layer protocol being used. The two ends of the PPP link may run different network layer protocols, yet they still communicate using the same PPP protocol. If IP is run over the PPP link, then when the IP module at each end of the link is configured (for example, assigned an IP address), the member of the NCP family that supports IP, namely the IP Control Protocol IPCP, is used.
IPCP packets are also encapsulated in PPP frames and transmitted over the PPP link. On a low-speed link, the two sides can also negotiate the use of compressed TCP and IP headers to reduce the number of bits sent on the link.
When the network layer is configured, the link enters the "link open" state where data communication can be performed. The two PPP endpoints of the link can send packets to each other.
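As an informal illustration (not from the textbook), the six states and the transitions described above can be written down as a small table; the state names follow the answer, while the event names are just paraphrases of the conditions in Figure T-3-12:

```python
# State names follow the answer above; event names are informal paraphrases.
TRANSITIONS = {
    ("Link Static",            "carrier detected"):          "Link Establishment",
    ("Link Establishment",     "LCP negotiation succeeds"):  "Authentication",
    ("Link Establishment",     "LCP negotiation fails"):     "Link Static",
    ("Authentication",         "authentication succeeds"):   "Network-Layer Protocol",
    ("Authentication",         "authentication fails"):      "Link Termination",
    ("Network-Layer Protocol", "NCP configuration done"):    "Link Open",
    ("Link Open",              "communication finished"):    "Link Termination",
    ("Link Termination",       "carrier dropped"):           "Link Static",
}

# A normal dial-up session walks through all six states:
state = "Link Static"
for event in ("carrier detected", "LCP negotiation succeeds", "authentication succeeds",
              "NCP configuration done", "communication finished", "carrier dropped"):
    state = TRANSITIONS[(state, event)]
    print(f"{event:26s} -> {state}")
```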

【3-13】What are the main characteristics of the LAN? Why does the LAN use the broadcast communication method but the WAN does not?

Answer: The most important features of a LAN are that the network is owned by a single organization and that its geographical scope and the number of stations are limited.
When the local area network first appeared, it had a higher data rate, lower delay and smaller bit error rate than the wide area network. But with the widespread use of optical fiber technology in the wide area network, the wide area network now also has a high data rate and a very low bit error rate.
The geographical scope of the local area network is small, and it is owned by a single unit. It is very simple and convenient to use the broadcast communication method. However, the geographical range of the WAN is very large. If the broadcast communication method is used, it will inevitably cause a great waste of communication resources. Therefore, the WAN does not use the broadcast communication method.

【3-14】What network topologies are commonly used in LANs? Which topology is the most popular now? Why did early Ethernet choose the bus topology instead of the star topology, whereas today the star topology is used instead?

Answer: The network topology of the initial LAN includes a star network, a ring network (the most typical is a token ring network) and a bus network.
But now the most popular is the star network, and the other two are rare.
In the early days of LAN development, people believed that active devices were more likely to fail, so a passive bus structure would be more reliable. The center of a star topology uses an active device; to make this active device unlikely to fail, very expensive equipment would have to be used. Practice proved, however, that a bus Ethernet connecting a large number of stations is prone to failure because of its many connectors and taps. Nowadays, dedicated ASIC chips make the hub or switch at the center of the star structure highly reliable, so current Ethernet generally uses the star topology.

【3-15】What is traditional Ethernet? What are the two main standards of Ethernet?

Answer: Traditional Ethernet is the earliest widely deployed Ethernet, running at 10 Mbit/s.
Ethernet has two standards, namely the DIX Ethernet V2 standard and the IEEE 802.3 standard.
In September 1980, DEC, Intel and Xerox jointly published the first version of the 10 Mbit/s Ethernet specification, DIX V1 (DIX is an abbreviation of the three companies' names). In 1982 it was revised into the second version (in fact the final version), DIX Ethernet V2, which became the world's first de facto standard for LAN products. A LAN conforming to this standard is called an Ethernet.
On this basis, the 802.3 working group of the IEEE 802 committee produced the first IEEE LAN standard, IEEE 802.3, in 1983 (its more precise name is IEEE 802.3 CSMA/CD), with a data rate of 10 Mbit/s. The 802.3 standard makes minor changes to the frame format of the Ethernet standard, but allows hardware based on the two standards to interoperate on the same LAN. A LAN conforming to this standard is called an 802.3 LAN.
The DIX Ethernet V2 standard differs only slightly from the IEEE 802.3 standard, so many people simply call 802.3 LANs "Ethernet", or speak of "Ethernet-like systems based on DIX Ethernet technology".

【3-16】What is the symbol transmission rate (that is, symbol/second) of Ethernet with a data rate of 10 Mbit/s on physical media?

Answer: All data sent over this Ethernet use Manchester-encoded signals (as shown in Figure T-3-16).
As Figure T-3-16 shows, for Ethernet with a data rate of 10 Mbit/s, the baseband signal inside the Ethernet adapter before Manchester encoding is generated at 10×10⁶ symbols per second. After Manchester encoding, each symbol of the original signal becomes two symbols. Therefore, the symbol rate finally sent by the adapter onto the line is 20×10⁶ symbols per second, i.e. 20 megasymbols per second.
Note that some Manchester codes use level transitions that are exactly the opposite of those shown in Figure T-3-16, i.e. a 1 corresponds to a negative transition of the Manchester code and a 0 corresponds to a positive transition.
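A tiny sketch (illustrative only; which of the two transition directions represents 1 is a convention, as noted above) shows why the symbol rate doubles:

```python
def manchester_encode(bits: str) -> str:
    """Encode each data bit as two line symbols (here: 1 -> '10', 0 -> '01';
    the opposite convention is equally valid)."""
    return ''.join('10' if b == '1' else '01' for b in bits)

data = '1011'
print(manchester_encode(data))   # -> 10011010: twice as many symbols as data bits
# Hence 10 Mbit/s of data becomes 2 x 10^7 = 20 megasymbols per second on the line.
```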

【3-17】Why is the standard for the LLC sublayer established but rarely used now?

Answer: When the IEEE developed the 802.3 standard in 1983, several different LANs were in vogue. Therefore, the 802 committee decided to divide the data link layer protocol of the LAN into two sublayers, one is the media access control MAC sublayer, and the other is the logical link control LLC sublayer that has nothing to do with the specific media. However, until now, token ring network, token bus LAN and optical fiber distributed data interface FDDI LAN, which were popular in the past, have all disappeared in the market. Therefore, in the case where there is only one local area network (Ethernet) left, the LLC sublayer obviously has no value in existence. Now IP datagrams are put directly into Ethernet as the data part of Ethernet.

【3-18】Try to explain the meanings of "10", "BASE" and "T" in 10BASE-T.

Answer: "10" means that this Ethernet has a data rate of 10 Mbit / s, BASE means that the signal on the connection line is a baseband signal, and T means a twisted pair (Twisted - pair).

【3-19】The CSMA/CD protocol used by Ethernet accesses the shared channel in a contention mode. What are the advantages and disadvantages of this compared with the traditional time division multiplexing TDM?

Answer: The CSMA/CD protocol and traditional time division multiplexing (TDM) each have their own advantages and disadvantages.
When the load on the network is light, CSMA/CD is very flexible: whichever station wants to send can send, and the probability of collision is very small. With TDM the efficiency is comparatively low, because when many stations have nothing to send, their allocated time slots are simply wasted. When the network load is heavy, however, CSMA/CD causes many collisions and frequent retransmissions, so its efficiency drops sharply, whereas TDM then has high efficiency.
This is like the traffic lights at a busy intersection in a city. When traffic is light, the lights may cause some unnecessary waiting at red. But when the traffic flow is very heavy, the traffic light system is essential and keeps the vehicles moving in an orderly way.

[3-20] Assume a 1 km long CSMA/CD network with a data rate of 1 Gbit/s. Suppose the propagation speed of signals on the network is 200 000 km/s. Find the shortest frame length that this protocol can use.

Answer: The end-to-end propagation delay of the 1 km CSMA/CD network is τ = (1 km)/(200 000 km/s) = 5 μs.
2τ = 10 μs, and during this time (1 Gbit/s) × (10 μs) = 10 000 bits can be sent.
Only after this amount of time can the sender be sure to receive the collision signal (if a collision occurs) and detect the collision. Therefore, the shortest frame length is 10 000 bits, i.e. 1250 bytes.
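The arithmetic can be written out as a short check (a sketch of the calculation above, nothing more):

```python
rate_bps   = 1e9        # 1 Gbit/s
length_km  = 1.0        # cable length in km
speed_kmps = 200_000    # signal propagation speed in km/s

tau = length_km / speed_kmps                 # one-way propagation delay, in seconds
min_frame_bits = round(rate_bps * 2 * tau)   # the frame must last at least 2 * tau
print(round(tau * 1e6, 3), "us one-way delay")                 # 5.0 us
print(min_frame_bits, "bits =", min_frame_bits // 8, "bytes")  # 10000 bits = 1250 bytes
```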

【3-21】What is bit time? What are the benefits of using this time unit? How many microseconds is 100 bit times?

Solution: A bit time is the time required to send one bit. Note that the actual duration of one bit time obviously depends on the data rate.
The advantage of using bit time is convenience. Without it, when we discuss a station sending data, if the data amount to 6400 bits, the time required to send them is 6400 divided by the sending rate. For example, at a sending rate of 10 Mbit/s, the time required to send 6400 bits is
6400 / 10 000 000 s = 640 × 10⁻⁶ s = 640 μs
But if "bit time" is used as the unit, then no matter what the sending rate is, the time required to send 6400 bits is always 6400 bit times. This is obviously much more convenient.
To convert "bit times" into seconds or microseconds, one must first know the data rate. Therefore, the question "How many microseconds is 100 bit times?" cannot be answered unless the data rate is given.

[3-22] Assume that in a 10 Mbit/s Ethernet using the CSMA/CD protocol, a station detects a collision while sending data and chooses the random number r = 100 when executing the backoff algorithm. How long must this station wait before it can send data again? What if it is a 100 Mbit/s Ethernet?

Solution: For 10 Mbit/s Ethernet, the contention period is 512 bit times. With r = 100, the backoff time is 100 × 512 = 51 200 bit times.
The time this station must wait is 51 200 / 10 = 5120 μs = 5.12 ms.
For 100 Mbit/s Ethernet, the contention period is still 512 bit times, and the backoff time is still 51 200 bit times.
Therefore, the waiting time for this station is 51 200 / 100 = 512 μs.
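The same conversion applied to the backoff delay (again just a sketch of the calculation above):

```python
def backoff_wait_seconds(r: int, rate_bps: float, contention_bit_times: int = 512) -> float:
    """Backoff delay = r * contention period, converted from bit times to seconds."""
    return r * contention_bit_times / rate_bps

print(round(backoff_wait_seconds(100, 10e6) * 1e3, 2), "ms")   # 10 Mbit/s  -> 5.12 ms
print(round(backoff_wait_seconds(100, 100e6) * 1e6, 2), "us")  # 100 Mbit/s -> 512.0 us
```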

【3-23】The formula (3-3) in the teaching material shows that the limit channel utilization of Ethernet has nothing to do with the number of stations connected to Ethernet. Can it be deduced from this that the utilization rate of Ethernet has nothing to do with the number of stations connected to Ethernet? Please explain your reason.

Answer: The utilization of the Ethernet should be related to the number of stations connected to the Ethernet. We know that the moment when each Ethernet station sends data should be random. However, the limit channel utilization of Ethernet expressed by formula (3-3) is based on the assumption that this Ethernet uses a special scheduling method, and after one station sends data, another station continues to send. The result is that the transmissions from each station will not collide. This maximizes the utilization of the Ethernet. But we noticed that this is no longer Ethernet with CSMA/CD protocol.

[3-24] Assume that sites A and B are on the same 10 Mbit/s Ethernet network segment. The propagation delay between these two sites is 225 bit times. Now assume that A starts sending a frame, and before A finishes sending, B also sends a frame. If A sends the shortest frame allowed by Ethernet, can A send its own data before detecting a collision with B? In other words, if A doesn't detect a collision until it's done sending, is it certain that a frame sent by A won't collide with a frame sent by B? (Hint: When calculating, it should be considered that when each Ethernet frame is sent to the channel, several bytes of preamble and frame delimiter will be added in front of the MAC frame.)

Answer: Let A start sending at t = 0. The shortest frame A can send is 64 bytes = 512 bits. In fact, 8 additional bytes (= 64 bits) of preamble and start-of-frame delimiter are also transmitted on the channel, so if no collision occurred, A would finish sending at t = 512 + 64 = 576 bit times.
The later B starts sending, the more easily A's frame collides with B's. After t = 225 bit times, B would already have received the first bit sent by A and would defer. So assume B starts sending at t = 224 bit times, and check whether a collision occurs.
At t = 225 bit times, B detects the collision (as shown in Figure T-3-24).
Therefore B stops sending data just after t = 225 bit times and then sends a 48-bit jamming signal. The first bit B sent at t = 224 bit times reaches A at t = 224 + 225 = 449 bit times, so at t = 449 bit times A detects the collision, stops sending data, and sends a 48-bit jamming signal.
Clearly A has not finished sending before it detects the collision with B's data (since 449 is less than the 576 computed above). Therefore A cannot finish sending its own data before detecting a collision with B.
But if A detects no collision before it finishes sending (i.e. before t = 512 + 64 = 576 bit times), this shows that no other station on this Ethernet is sending, and the frame sent by A will certainly not collide with data sent later by other stations.

[3-25] Stations A and B in the previous question both start sending data frames at t = 0. At t = 225 bit times, A and B simultaneously detect that a collision has occurred, and they finish sending the jamming signal at t = 225 + 48 = 273 bit times. In the CSMA/CD algorithm, A and B choose different backoff values r; assume A and B choose the random numbers rA = 0 and rB = 1 respectively. At what times do A and B start to retransmit their data frames? When does the frame retransmitted by A reach B? Will the data retransmitted by A collide again with the data retransmitted by B? Can B send its data at the scheduled retransmission time?

Answer: Figure T-3-25 shows the events that occurred at several major times. All time units are "bit times". When t = 0, A and B start to send data.
At t = 225 bit times, both A and B detect a collision.
At t = 273 bit times, A and B finish sending the jamming signal, and both immediately execute the backoff algorithm.
Because rA = 0 and rB = 1, A may send immediately. But according to the protocol, the channel must be sensed before sending, and if it is busy, the station must wait until it becomes idle. B, on the other hand, will sense the channel only after a delay of 512 bit times.
That is, A starts sensing the channel at t = 273 bit times, while B does not sense the channel until t = 785 bit times.
At t = 273 + 225 = 498 bit times, the last bit of B's jamming signal reaches A, and A senses that the channel is idle. But A cannot send immediately; it must wait 96 bit times before sending (recall that the minimum inter-frame gap of 10 Mbit/s Ethernet is 9.6 μs, which is 96 bit times).
Thus A starts sending data at t = 498 + 96 = 594 bit times.
Now consider when B can send. At t = 273 + 512 = 785 bit times (B counts one contention period of 512 bit times starting from 273), B senses the channel again. If it is idle, B will send data 96 bit times later, i.e. at t = 785 + 96 = 881 bit times. Note that B may send at 881 bit times only if it finds the channel idle during the whole interval from 785 to 881 bit times. But the data A sent at 594 bit times reach B at t = 594 + 225 = 819 bit times.
So, counting from 785 bit times, after only 34 bit times B senses that the channel is busy, and therefore B cannot send data at the scheduled time of 881 bit times.
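The key instants from problems 3-24 and 3-25 can be recomputed with a few lines of arithmetic (a sketch only; all quantities are in bit times):

```python
PROP = 225    # one-way propagation delay between A and B
JAM  = 48     # length of the jamming signal
GAP  = 96     # minimum inter-frame gap
SLOT = 512    # contention period (backoff unit)

# Problem 3-24: A's shortest frame, including preamble + SFD, lasts 512 + 64 bit times.
a_finishes = 512 + 64            # 576
a_detects  = 224 + PROP          # 449: the first bit B sent at t = 224 reaches A
print(a_detects < a_finishes)    # True -> A detects the collision before it finishes sending

# Problem 3-25: both start at t = 0, detect the collision at 225, finish jamming at 273.
jam_done      = PROP + JAM               # 273
a_idle_again  = jam_done + PROP          # 498: last bit of B's jam reaches A
a_retransmits = a_idle_again + GAP       # 594
b_would_send  = jam_done + SLOT + GAP    # 881 (B senses the channel from 785 on)
a_reaches_b   = a_retransmits + PROP     # 819
print(a_reaches_b < b_would_send)        # True -> B finds the channel busy and cannot send at 881
```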

【3-26】There are only two stations on an Ethernet, and they send data at the same time, causing a collision. They then retransmit according to the truncated binary exponential backoff algorithm. Number the retransmissions i = 1, 2, 3, …. Compute the probability that the first retransmission fails, the probability that the second retransmission fails, the probability that the third retransmission fails, and the average number of retransmissions I needed before a station sends its data successfully.

Answer: Denote the probability that the i-th retransmission fails by Pi. With only two stations, a retransmission fails when both stations happen to choose the same random number, so obviously
Pi = (0.5)^k, where k = min(i, 10).
Therefore the probability that the first retransmission fails is P1 = 0.5,
the probability that the second retransmission fails is P2 = 0.5² = 0.25,
and the probability that the third retransmission fails is P3 = 0.5³ = 0.125.
The probability that a station succeeds exactly on the i-th retransmission is
P[succeeds on the i-th] = P[1st fails] · P[2nd fails] · … · P[(i−1)-th fails] · P[i-th succeeds]
P[succeeds on the 1st] = 1 − 0.5 = 0.5
P[succeeds on the 2nd] = P[1st fails] · (1 − P[2nd fails]) = 0.5 × 0.75 = 0.375
P[succeeds on the 3rd] = P[1st fails] · P[2nd fails] · (1 − P[3rd fails]) = 0.5 × 0.25 × 0.875 = 0.1094
P[succeeds on the 4th] = 0.5 × 0.25 × 0.125 × (1 − 0.0625) = 0.5 × 0.25 × 0.125 × 0.9375 = 0.0146
Taking the statistical average of i weighted by P[succeeds on the i-th] gives the average number of retransmissions
I = 1 × 0.5 + 2 × 0.375 + 3 × 0.1094 + 4 × 0.0146 + …
  = 0.5 + 0.75 + 0.328 + 0.059 + … ≈ 1.64
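The average can be checked numerically with a short script (a sketch assuming, as above, that with two stations the i-th retransmission fails with probability 0.5^min(i, 10)):

```python
def p_fail(i: int) -> float:
    """Probability that the i-th retransmission collides again (two stations)."""
    return 0.5 ** min(i, 10)

expected = 0.0
p_all_previous_failed = 1.0
for i in range(1, 60):                       # terms beyond i ~ 10 are negligible
    expected += i * p_all_previous_failed * (1 - p_fail(i))
    p_all_previous_failed *= p_fail(i)

print(round(expected, 2))   # -> 1.64
```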

【3-27】There are 10 stations connected to the Ethernet. Try to calculate the bandwidth that each station can get under the following three situations.

(1) The 10 stations are all connected to a 10 Mbit/s Ethernet hub.
(2) The 10 stations are all connected to a 100 Mbit/s Ethernet hub.
(3) The 10 stations are all connected to a 10 Mbit/s Ethernet switch.
Answer: The bandwidth each station can obtain is as follows:
(1) Assuming the utilization of the Ethernet essentially reaches 100%, the 10 stations share 10 Mbit/s, i.e. each station obtains 1 Mbit/s of bandwidth on average.
(2) Assuming the utilization of the Ethernet essentially reaches 100%, the 10 stations share 100 Mbit/s, i.e. each station obtains 10 Mbit/s of bandwidth on average.
(3) Each station exclusively uses the 10 Mbit/s bandwidth of its own switch interface. Here we assume that the total switching capacity of the switch is not less than 100 Mbit/s.

【3-28】What technical problems need to be solved when 10 Mbit/s Ethernet is upgraded to 100 Mbit/s, 1 Gbit/s and 10 Gbit/s? Why can Ethernet eliminate its competitors in the process of development and expand its application range from local area network to metropolitan area network and wide area network?

Answer: The 100 Mbit/s Fast Ethernet standard IEEE 802.3u does not include support for coaxial cable. This means that users who want to upgrade from 10 Mbit/s thin-cable (coaxial) Ethernet to 100 Mbit/s Fast Ethernet must rewire. Today both 10 Mbit/s and 100 Mbit/s Ethernet mostly use unshielded twisted pair cabling.
100 Mbit/s Ethernet keeps the shortest frame length unchanged but reduces the maximum cable length of a network segment to 100 m. The shortest frame is still 64 bytes, i.e. 512 bits. Therefore the contention period of 100 Mbit/s Ethernet is 5.12 μs and the minimum inter-frame gap is 0.96 μs, both one tenth of the corresponding values for 10 Mbit/s Ethernet.
The new standard for 100 Mbit/s Ethernet also specifies the following three different physical layer standards.
(1) 100BASE-TX: Use two pairs of UTP category 5 wires or shielded twisted pair STP, one pair is used for sending and the other pair is used for receiving. (2) 100BASE-FX: Use two optical fibers, one for sending and the other for receiving. In the standard, the above-mentioned 100BASE-TX and 100BASE-FX are collectively referred to as 100BASE-X.
(3) 100BASE-T4: Use 4 pairs of UTP Category 3 or Category 5 cables, which are designed for a large number of users who have already used UTP Category 3 cables. It uses 3 pairs of wires to transmit data at the same time (each pair of wires transmits data at a rate of 33Mbit/s), and uses 1 pair of wires as a receiving channel for collision detection.
The standard of Gigabit Ethernet (1 Gbit/s rate) is IEEE 802.3z, which has the following characteristics:
(1) It allows full-duplex and half-duplex operation at 1 Gbit/s. (2) Use the frame format specified in the IEEE 802.3 protocol.
(3) Use CSMA/CD protocol in half-duplex mode (full-duplex mode does not need to use CSMA/CD protocol).
(4) Backward compatible with 10BASE-T and 100BASE-T technologies.
Gigabit Ethernet can be used as the backbone network of existing networks, and can also be used to connect workstations and servers in high-bandwidth (high-speed) applications (such as medical images or CAD graphics, etc.).
The physical layer of Gigabit Ethernet uses two mature technologies: one is from the existing Ethernet, and the other is Fiber Channel FC (Fibre Channel) formulated by ANSI. Using mature technology can greatly shorten the development time of the Gigabit Ethernet standard.
The physical layer of Gigabit Ethernet has the following two standards:
(1) 1000BASE-X (IEEE 802.3z standard).
(2) 1000BASE-T-(802.3ab standard).
When Gigabit Ethernet works in half-duplex mode, collision detection must be performed. Gigabit Ethernet still keeps the maximum length of a network segment at 100 m, but adopts "carrier extension" so that the shortest frame length remains 64 bytes (to maintain compatibility) while the contention period is increased to 512 bytes. Whenever the MAC frame to be sent is shorter than 512 bytes, special symbols are appended after the frame to extend its transmitted length to 512 bytes; this has no effect on the payload. After receiving such a frame, the receiver removes the appended special symbols before handing the frame to the upper layer. When an originally 64-byte frame is extended to 512 bytes, the 448 bytes of padding represent a large overhead.
Gigabit Ethernet also adds a packet bursting function. When many short frames are to be sent, only the first short frame is padded using the carrier extension method described above; the subsequent short frames can then be sent one after another, separated only by the necessary minimum inter-frame gap, forming a burst of frames until the burst reaches 1500 bytes or a little more. When Gigabit Ethernet works in full-duplex mode, carrier extension and packet bursting are not used. 10 Gigabit Ethernet, 10GbE for short, is specified by IEEE 802.3ae; its frame format remains unchanged, and 10GbE retains the minimum and maximum Ethernet frame lengths specified in the 802.3 standard. This allows users to communicate conveniently with lower-speed Ethernets when upgrading their existing networks.
Because of its high data rate, 10GbE no longer uses copper wire but only optical fiber as the transmission medium. With long-haul optical transceivers (reaching more than 40 km) over single-mode fiber, it can operate across MANs and WANs. 10GbE can also use cheaper multimode fiber, but then the transmission distance is only 65 m to 300 m.
10GbE works only in full-duplex mode, so there is no contention and the CSMA/CD protocol is not used. As a result, the transmission distance of 10GbE is no longer limited by collision detection and can be greatly increased.
The physical layer of 10GbE is newly developed. 10GbE has the following two different physical layers:
(1) The LAN physical layer, LAN PHY. The data rate of the LAN physical layer is 10.000 Gbit/s (meaning exactly 10 Gbit/s), so a 10GbE switch can support exactly ten Gigabit Ethernet interfaces.
(2) Optional wide area network physical layer WAN PHY. In order to enable 10GbE frames to be inserted into the payload of OC-192/STM-64 frames, the data rate of this WAN physical layer is 9.95328 Gbit/s.
Ethernet can evolve from 10 Mbit/s to 10 Gbit/s because Ethernet has the following advantages:
(1) Scalable (from 10 Mbit/s to 10 Gbit/s). (2) Flexible (multiple media, full/half duplex, shared/switched).
(3) Easy to install.
(4) Good robustness.

【3-29】What are the characteristics of Ethernet switches? How to use it to form a virtual local area network?

Answer: An Ethernet switch is essentially a multi-interface bridge, which is very different from a repeater or a hub, both of which work at the physical layer. In addition, each interface of an Ethernet switch connects directly to a host or to a hub and generally works in full-duplex mode. When hosts need to communicate, the switch can connect many pairs of interfaces at the same time, so that each pair of communicating hosts can transmit data without collisions, as if they had exclusive use of the transmission medium. Like a transparent bridge, an Ethernet switch is a plug-and-play device whose internal frame forwarding table is built up automatically through a self-learning algorithm. When the communication between two stations is finished, the connection between the interfaces is released. Thanks to dedicated switching-fabric chips, Ethernet switches achieve a high switching rate.
For ordinary 10 Mbit/s shared Ethernet, if there are N users in total, the average bandwidth each user gets is only 1/N of the total bandwidth (10 Mbit/s). With an Ethernet switch, although the bandwidth from each interface to its host is still 10 Mbit/s, a user has exclusive use of that bandwidth during communication rather than sharing the medium with other users, so the total capacity of a switch with N pairs of interfaces is N × 10 Mbit/s. This is the biggest advantage of the switch.
Ethernet switches generally have interfaces with multiple rates, such as various combinations of interfaces with 10 Mbit/s, 100 Mbit/s and 1 Gbit/s, which greatly facilitates users in various situations.
Some switches adopt the straight-through switching method, which can immediately determine the forwarding interface of the frame according to the destination MAC address of the data frame when receiving the data frame, thus improving the frame forwarding speed.
Virtual local area network (VLAN) can be easily realized by using an Ethernet switch. In fact, a virtual local area network is just a service provided by a local area network to users, not a new type of local area network.
A virtual local area network (VLAN) is a logical group of LAN segments that is independent of physical location and whose members share certain common requirements. Every frame belonging to a VLAN carries an explicit identifier indicating which VLAN the workstation that sent the frame belongs to. In 1998 the IEEE approved the 802.3ac standard, which defines an extension of the Ethernet frame format to support VLANs. The VLAN protocol allows a 4-byte identifier, called a VLAN tag, to be inserted into the Ethernet frame to indicate which VLAN the sending workstation belongs to. With the original, untagged Ethernet frame format it would clearly be impossible to partition the LAN into VLANs.
In a larger LAN built from several switches, VLANs can be defined flexibly, without being restricted by physical location, and a VLAN can span different switches (provided the switches can recognize and handle VLANs). In Figure T-3-29, another five computers on a different floor are connected to switch #2, which in turn is connected to switch #1. Two of the computers on switch #2 belong to VLAN-10, while the other three belong to VLAN-20. Although both VLANs span the two switches, each VLAN is a separate broadcast domain.
A link connecting ports on two switches is called a trunk link (also translated as an aggregation link).
Now suppose A sends a frame to B. Since switch #1 can recognize that B belongs to the VLAN-10 managed by the switch according to the destination MAC address of the frame header, it forwards frames directly as in ordinary Ethernet without using VLAN tags. This is the simplest case.
Now suppose A sends a frame to E. Switch #1 finds that E is not connected to this switch, so it must forward the frame from the aggregation link to Switch #2, but before forwarding, it needs to insert the VLAN tag. Without inserting the VLAN tag, switch #2 would not know which VLAN to forward the frame to. Therefore, the frames transmitted on the aggregation link are 802.1Q frames. Switch #2 removes the inserted VLAN tag before forwarding the frame to E, so the frame received by E is the standard Ethernet frame sent by A, not the 802.1Q frame.
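As an illustration of what the switch inserts on the trunk link (a sketch only; the frame check sequence, which would have to be recomputed, is ignored here), the 4-byte 802.1Q tag, i.e. the TPID 0x8100 followed by 3 bits of priority, 1 CFI bit and a 12-bit VLAN ID, is placed right after the source MAC address:

```python
import struct

def add_vlan_tag(frame: bytes, vid: int, priority: int = 0) -> bytes:
    """Insert a 4-byte 802.1Q tag after the destination and source MAC addresses."""
    tci = (priority << 13) | (vid & 0x0FFF)      # CFI/DEI bit left as 0
    tag = struct.pack('!HH', 0x8100, tci)        # TPID 0x8100 + tag control information
    return frame[:12] + tag + frame[12:]         # bytes 0-11 are the two MAC addresses

def strip_vlan_tag(frame: bytes) -> tuple:
    """What switch #2 does before handing the frame to E: remove the tag."""
    _, tci = struct.unpack('!HH', frame[12:16])
    return tci & 0x0FFF, frame[:12] + frame[16:]

# A hypothetical untagged frame: dst MAC, src MAC, EtherType 0x0800, short payload.
untagged = bytes.fromhex('ffffffffffff' '020000000001' '0800') + b'hello'
tagged = add_vlan_tag(untagged, vid=10)          # what travels on the trunk link
vid, restored = strip_vlan_tag(tagged)
print(vid, restored == untagged)                 # -> 10 True
```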

[3-30] In Figure T-3-30, the Ethernet switch of a college has three interfaces connected to the Ethernets of the college's three departments, and its other three interfaces connected respectively to the e-mail server, the World Wide Web server, and a router leading to the Internet. A, B and C in the figure are all 100 Mbit/s Ethernet switches. Assume that the speed of all links is 100 Mbit/s and that any of the 9 hosts in the figure can communicate with any server or any other host. Compute the maximum total throughput that these 9 hosts and 2 servers can generate.

Answer: The maximum total throughput of the 9 hosts and the 2 servers here is 900 + 200 = 1100 Mbit/s. For example, in each of the three departments one host accesses the servers or the Internet through the router, while the other hosts communicate with each other within their own department; the switches allow all of these transfers to proceed at 100 Mbit/s simultaneously.

[3-31] Assume that the rate of all links in Figure T-3-30 is still 100 Mbit/s, but the Ethernet switches of the three series are replaced by 100 Mbit/s hubs. Try to calculate the maximum total throughput generated by these 9 hosts and 2 servers.

Answer: Each department is now a single collision domain with a maximum throughput of 100 Mbit/s, giving 300 Mbit/s for the three departments. Adding 100 Mbit/s for each of the two servers gives a maximum total throughput of 500 Mbit/s.

[3-32] Assume that all links in Figure T-3-30 are still at 100 Mbit/s, but all Ethernet switches are replaced with 100 Mbit/s hubs. Try to calculate the maximum total throughput generated by these 9 hosts and 2 servers.

Answer: The whole system is now a collision domain, so the maximum throughput is 100 Mbit/s.

【3-33】In Figure T-3-33, the Ethernet switch has six interfaces, connected respectively to five hosts and a router.

Origin blog.csdn.net/weixin_62440328/article/details/129372075