Principles of Computer Networks (1): Computer Networks and the Internet

  • What is the Internet?
  • Network edge, network core, access network
  • Delay, packet loss, and throughput in packet-switched networks
  • Protocol layers and service models
  • Exercises

1. What is the Internet?

What is the Internet?

There are two ways to answer this question: one is to describe the Internet's specific structure, that is, the basic hardware and software components that constitute it; the other is to describe the Internet as a networking infrastructure that provides services to distributed applications.

Describing the Internet in these two ways raises several basic questions: what basic hardware makes up the Internet? What software components make it up? And what kind of networking infrastructure can provide services to distributed applications?

1.1 The specific components of the Internet

Hosts and end systems: The Internet is a worldwide computer network. The devices connected to this network include desktop PCs, Linux workstations (so-called servers), laptops, smartphones, tablets, TVs, game consoles, thermostats, home security systems, watches, glasses, automobiles, traffic control systems, and so on. These devices are collectively referred to as hosts or end systems. Besides these end devices, the network also contains the basic equipment that builds it, such as routers, link-layer switches, modems, base stations, and cell towers; these are not end systems themselves, but together with the communication links they connect the end systems to one another.

Communication links: composed of different types of physical media, including coaxial cable, copper wire, optical fiber, and radio spectrum. Different links transmit data at different rates; a link's transmission rate is measured in bits per second (bit/s or bps).

Packet: When one end system wants to send data to another end system, the sender divides the data into segments and adds header bytes to each segment. The resulting units of information are known in computer networking as packets. The packets are sent over the network to the destination end system, where they are reassembled into the original data.

Packet switch: A packet switch receives an arriving packet through one of its incoming communication links and forwards the packet through one of its outgoing communication links. The two best-known types of packet switches are routers and link-layer switches, which forward packets of data toward the final destination end system. Link layer switches are typically used in the access network, while routers are typically used in the network core.

Path: The series of communication links and packet switches that a packet traverses from the sending end system to the receiving end system is called a path (or route) through the network.

Internet Service Provider (ISP): End systems access the Internet through ISPs, including residential ISPs such as cable or telephone companies, corporate ISPs, university ISPs, ISPs that provide WiFi access in public places such as airports, hotels, and coffee shops, and cellular data ISPs that provide mobile access for smartphones and other devices. Each ISP is itself a network of packet switches and communication links. ISPs provide end systems with various types of network access, including broadband residential access such as cable modem or DSL, high-speed LAN access, and mobile wireless access. ISPs also provide Internet access to content providers, connecting Web sites and video servers directly to the Internet. Since the purpose of the Internet is to interconnect end systems, the ISPs that provide access to end systems must themselves be interconnected: lower-tier ISPs are interconnected through national and international higher-tier ISPs such as Level 3 Communications, AT&T, Sprint and NTT, whose networks consist of high-speed routers interconnected by high-speed fiber-optic links. Every ISP network, whether higher-tier or lower-tier, is managed independently, runs the IP protocol, and conforms to certain naming and address conventions.

End systems, packet switches, and other Internet components run protocols that control the sending and receiving of information on the Internet. TCP (Transmission Control Protocol) and IP (Internet Protocol) are the two most important protocols on the Internet. The IP protocol defines the format of the packets that are sent and received between routers and end systems, and the Internet's principal protocols are collectively known as TCP/IP. Internet standards are developed by the Internet Engineering Task Force (IETF).

The IETF's standards documents are called Requests For Comments (RFCs). RFCs started out as ordinary requests for comments, intended to resolve the network and protocol problems faced by the Internet's pioneers. They define protocols such as TCP, IP, HTTP, and SMTP, and there are now nearly 7,000 of them. Other bodies also specify standards for network components; for example, the IEEE 802 LAN/MAN Standards Committee specifies the standards for Ethernet and wireless WiFi.

1.2 Description of Internet Services

Distributed applications: Described from the perspective of an infrastructure that provides services to applications, the Internet supports a rich set of applications: traditional ones such as e-mail and Web browsing, as well as instant messaging, music, movies and TV, online social networking, video conferencing, multi-player games, and location-based recommendation systems. These are called distributed applications because they involve multiple end systems exchanging data with one another.

Socket interface: An end system connected to the Internet provides a socket interface, which specifies how a program running on one end system asks the Internet infrastructure to deliver data to a specific destination program running on another end system. The socket interface is a set of rules that the sending program must follow so that the Internet can deliver the data to the destination program.

Simply put, the service the Internet provides to distributed applications is data delivery, and applications request this delivery through the socket interface on the end system, which defines standard ways of handing data to the Internet infrastructure. Like a physical logistics service, the Internet offers several kinds of service, and an application must choose one of them to transfer its data. Later blogs will analyze Internet services in more detail.
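To make the socket interface concrete, here is a minimal sketch in Python of a program handing data to the Internet infrastructure for delivery to a program on another end system. The host name and port are hypothetical placeholders, not values from this article.

```python
import socket

# Minimal sketch of the socket interface (TCP here), assuming a peer program
# is listening at the hypothetical address ("example.org", 5000).
def send_message(host: str, port: int, payload: bytes) -> bytes:
    with socket.create_connection((host, port)) as s:  # ask the infrastructure to deliver
        s.sendall(payload)                              # hand the data to the socket
        return s.recv(4096)                             # whatever the destination program replies

if __name__ == "__main__":
    print(send_message("example.org", 5000, b"hello from a distributed application"))
```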

1.3 What is a network protocol

In everyday life, much communication follows conventions. In a question-and-answer exchange, for example, the reply may be a refusal or silence, and the questioner continues the conversation according to the response; there are no strict constraints on this process, only social etiquette and the loose conventions of the occasion. In a network, by contrast, all activity between communicating entities is governed by protocols: protocols implemented in hardware control the bit flow "on the wire" between two network interface cards; congestion-control protocols between end systems control the rate at which packets are sent between sender and receiver; and protocols in routers determine a packet's path from source to destination.

Protocol: defines the format and order of the messages exchanged between two or more communicating entities, as well as the actions taken on the transmission and/or receipt of a message or other event.

2. Network edge, network core, access network

The first section analyzed the Internet through its specific components and the services it provides. This section analyzes the Internet by the functional roles of those components: the hosts responsible for sending and receiving data, which sit at the edge of the network and are collectively called the network edge; the links that carry data and the packet switches that forward it, which, in contrast to the edge, are usually called the network core; and the access network, which connects hosts to the network core.

Viewed by functional role, then, the Internet is composed of three parts: edge systems, access networks, and the network core.

2.1 Network edge

End systems are also commonly called hosts because they host (that is, run) application programs. In computer-networking topics host and end system are therefore used interchangeably, as they will be throughout these blogs. Hosts are sometimes further divided into two categories: clients and servers. Today, most of the servers that deliver search results, e-mail, Web pages, and video reside in large data centers, while the desktop PCs, mobile PCs, and smartphones that people use every day are typical client end systems. This is only a matter of terminology, so a brief explanation suffices.

Simply put, the network edge consists of the end systems (hosts), or more precisely the host systems and the distributed applications that run on them. The working modes at the network edge fall mainly into two categories: peer-to-peer mode and client-server mode.

Peer-to-peer mode: abbreviated P2P. All parties have the same capabilities: any party can initiate a communication session, and P2P applications on any host can communicate with one another directly.

Client-server mode: Client-Server, abbreviated C/S. The C/S structure is usually two-tier: the server is responsible for data management, and the client is responsible for interaction with the user.

Two further patterns have grown out of the C/S mode in modern distributed applications: the dedicated-server mode and the B/S architecture.

Dedicated-server mode: A large network application may offer its users several different services, and a single server may not be able to withstand the load, so multiple servers are deployed and each one provides a dedicated Internet service to its users.

B/S architecture mode: The WWW browser (Browser) acts as the client. Some business logic may run in the browser, but most of it runs on the server side (Server), forming the so-called three-tier structure: the user interface layer (UI), the business logic layer, and the data access layer.

The network edge thus contains the host systems and the distributed applications running on them, together with the basic network components inside each host: physical components such as the network interface card and baseband hardware, and software components such as the socket interface and the protocol implementations that provide services, for example TCP and UDP services. Distributed applications exchange data between end systems through these network services.

TCP service: the Transmission Control Protocol, defined in IETF RFC 793. TCP achieves reliable, in-order data transfer through acknowledgment and retransmission; it provides flow control, and when the network is congested its congestion control causes the sender to reduce its sending rate.

UDP service: the User Datagram Protocol, defined in IETF RFC 768. UDP provides connectionless, unreliable data transfer, with no flow control and no congestion control.
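To make the contrast concrete, here is a small sketch (under the assumption of a hypothetical receiver at 127.0.0.1 port 9999): UDP simply hands each datagram to the network, with no connection setup, no acknowledgments, and no rate adaptation.

```python
import socket

# Connectionless, unreliable delivery with UDP: no handshake, no acknowledgment,
# no retransmission -- the datagram may simply be lost in the network.
RECEIVER = ("127.0.0.1", 9999)  # hypothetical destination, not from the article

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(b"one self-contained datagram", RECEIVER)
sock.close()
```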

This is only a brief introduction to the network edge. In the layered network architecture described later, the network edge covers the application layer and the transport layer; later blogs will introduce these parts of the network edge in detail.

2.2 Network Core

Abstractly, the network core is a mesh of packet switches and links. This part mainly describes how data is transmitted through that mesh.

Network of networks: The Internet core is often described as a network of networks. End systems connect to the Internet through an access ISP, which may provide wired or wireless connectivity using any of several technologies, including DSL, cable, FTTH, WiFi, and cellular access. An access ISP is not necessarily a telecom or cable company; a university that provides Internet access to its students and teachers is also an access ISP. Whatever its form, an access ISP connects end users and content providers to itself, but to connect all Internet users to one another, the ISPs themselves must be interconnected. The "network of networks" refers to this network that connects the various ISPs.

Internet exchange point: Imagine a design in which every access ISP connects directly to every other access ISP. The cost of such a model would be astronomical, and it is unnecessary: users do not all use the network at the same time, and a pair of users does not need an exclusive link between them. In reality, an ISP connects the users in a certain area into a branch network, and these branch networks are then interconnected to achieve wider connectivity, forming regional ISPs. These networks converge at Internet Exchange Points (IXPs) to interconnect with one another, and above the IXPs, backbone networks provide regional or global links. An Internet exchange point is essentially a set of switches to which many networks connect in order to exchange data with one another.

Point of presence, multi-homing, peering: An area is usually served not by a single regional ISP but by several, and ISPs of different sizes cover different, overlapping areas. To interconnect, an ISP establishes a Point of Presence (PoP): one or more routers in its network at which other ISPs can exchange data with it. Multi-homing means that, through its points of presence, an ISP connects to ISPs of different tiers and in different regions. Peering means that two ISPs exchange traffic directly without paying each other; such settlement-free connections are generally made between ISPs whose network coverage or customer volume is roughly comparable. An ISP whose traffic must be carried by a higher-tier ISP pays that provider for the traffic.

The ISP interconnection model described above is, in essence, still a network of packet switches and link media. Different scenarios place devices with different performance characteristics in different locations, and to solve specific business problems, different applications are configured on these systems to handle specific requirements. These specific network components are described in detail in later blogs.

Message: In network applications, end systems exchange messages with one another. A message contains whatever the protocol designer needs; it may perform a control function or carry data.

Packet: Before a message is sent from the source system, the source divides the long message into smaller blocks of data called packets.

Switches: Between source and destination, each packet travels through a series of communication links and packet switches. There are two main types of packet switches: routers and link-layer switches.

Packet switching: Because data transmission is constrained by the physical medium of each communication link, each transmission carries a limited amount of data. A packet is transmitted over a communication link at the full transmission rate of the link: if a source end system or packet switch sends a packet of L bits over a link whose transmission rate is R bits/second, the time to transmit the packet is L/R seconds. Besides dividing long messages into smaller blocks, packet switching may sometimes combine small packets into one, and the packet size is chosen with the link's transmission rate in mind.

Store-and-forward transmission: Most packet switches use store-and-forward transmission at the inputs to their links. Store-and-forward transmission means that the switch must receive the entire packet before it can begin transmitting the first bit of the packet onto the outbound link. In other words, the router must receive, store, and process the entire packet before forwarding it.

Store-and-forward delay: Suppose a packet of L bits is sent from source to destination over a path consisting of N links, each of rate R (so there are N-1 routers between source and destination). The end-to-end delay is then d_end-to-end = N * (L/R).
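A small worked sketch of these two formulas, the single-link transmission time L/R and the N-link store-and-forward delay N*(L/R); the packet size and link rate below are illustrative numbers, not figures from the text.

```python
# Store-and-forward end-to-end delay over N links of equal rate R,
# ignoring propagation, processing, and queuing delays.
def store_and_forward_delay(L_bits: float, R_bps: float, N_links: int) -> float:
    return N_links * (L_bits / R_bps)

L = 8_000_000      # an 8-megabit (1 MB) packet, illustrative
R = 100_000_000    # a 100 Mbps link, illustrative
print(store_and_forward_delay(L, R, 1))  # 0.08 s over a single link
print(store_and_forward_delay(L, R, 3))  # 0.24 s over 3 links (2 routers)
```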

Queuing delay: Each switch has multiple links attached to it, and for each attached link the switch has an output buffer (also called an output queue), which stores packets that the router is about to send onto that link. If an arriving packet needs to be transmitted onto a link but finds that link busy transmitting another packet, it must wait in the output buffer. So, in addition to store-and-forward delay, a packet also suffers the queuing delay of the output buffer. Queuing delays are variable and depend on the level of congestion in the network.

Packet loss: Because the buffer space in a router is finite, an arriving packet may find the buffer completely full with other packets waiting for transmission. In that case packet loss occurs: either the arriving packet or one of the already-queued packets is discarded.

Forwarding table: Every end system has an address called an IP address. When a source host wants to send a packet to a destination end system, the source includes the destination's IP address in the packet's header. When the packet arrives at a router in the network, the router examines a portion of the packet's destination address and forwards the packet toward a neighboring router. More specifically, each router has a forwarding table that maps destination addresses (or portions of destination addresses) to outbound links; when a packet arrives, the router examines the address, searches its forwarding table using the destination address, and finds the appropriate outbound link. How forwarding tables are set up in routers, whether each router is configured manually or the Internet uses an automatic process, will be covered in detail in a later blog on the control plane.
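The sketch below illustrates the idea of a forwarding table with longest-prefix matching; the prefixes and link numbers are invented for illustration and do not come from any real router.

```python
import ipaddress

# Toy forwarding table: destination prefix -> outbound link number.
# Real routers use much faster structures (tries, TCAM); this only shows
# the lookup idea: the longest matching prefix decides the outbound link.
FORWARDING_TABLE = {
    ipaddress.ip_network("11.0.0.0/8"): 1,
    ipaddress.ip_network("11.22.0.0/16"): 2,
    ipaddress.ip_network("0.0.0.0/0"): 3,   # default route
}

def outgoing_link(dest: str) -> int:
    addr = ipaddress.ip_address(dest)
    matches = [(net.prefixlen, link)
               for net, link in FORWARDING_TABLE.items() if addr in net]
    return max(matches)[1]                  # longest matching prefix wins

print(outgoing_link("11.22.33.44"))  # -> 2 (matches 11.22.0.0/16)
print(outgoing_link("99.1.2.3"))     # -> 3 (only the default route matches)
```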

Circuit switching: In a circuit-switched network, the resources needed for communication between end systems along a path (buffers, link transmission rate) are reserved for the duration of the communication session. Before the sender can send information, the network must establish a connection between sender and receiver; the switches along the path between them maintain state for this connection, which is called a circuit. When the network creates the circuit, it also reserves a constant transmission rate on each link of the path for the duration of the connection (a fraction of each link's transmission capacity, that is, bandwidth), so the sender can send data to the receiver at a guaranteed constant rate.

Multiplexing in a circuit-switched network: circuits in a link are implemented by Frequency-Division Multiplexing (FDM) or Time-Division Multiplexing (TDM).

Frequency-division multiplexing: The link's frequency spectrum is shared among all the connections established across the link, and the link dedicates a frequency band to each connection for the duration of the connection. In telephone networks this band typically has a width of 4 kHz (that is, 4,000 cycles per second), and the width of the band is called the bandwidth. FM radio stations also use FDM to share the 88 MHz to 108 MHz spectrum, with each station assigned a specific frequency band. Simply put, frequency-division multiplexing divides the total bandwidth of the transmission channel into several sub-bands (sub-channels), each carrying one signal; the signals of the different channels overlap in time but not in frequency.

Time-division multiplexing: In a TDM link, time is divided into frames of fixed duration, and each frame is divided into a fixed number of time slots. When the network establishes a connection across the link, it dedicates one time slot in every frame to that connection; that slot (one per frame) is used exclusively to transmit that connection's data.

The multiplexing in the circuit switching network also includes: code division multiplexing and wavelength division multiplexing. The relevant details will be introduced in the following specific blogs.
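As a small worked sketch of TDM circuit switching, assume (purely as an example) a 1.536 Mbps link divided into 24 slots per frame and a 640,000-bit file; each circuit then gets 64 kbps, and the transfer takes 10 seconds plus the circuit-setup time.

```python
# Time to send a file over one TDM circuit, using assumed example numbers.
link_rate_bps = 1_536_000   # total link rate (assumption)
slots_per_frame = 24        # TDM time slots per frame (assumption)
file_bits = 640_000         # file size (assumption)
setup_s = 0.5               # circuit-establishment time (assumption)

circuit_rate = link_rate_bps / slots_per_frame   # 64,000 bps per circuit
print(setup_s + file_bits / circuit_rate)        # 10.5 seconds in total
```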

Comparison of packet switching and circuit switching:

Circuit switching includes three phases: connection establishment, communication, and connection release. Its advantages: the line is dedicated to the users at the two ends of the communication, so data transmission delay is small; data arrives in the order in which it was sent, with no reordering problems; it can carry analog signals as well as digital signals; and because it occupies fixed resources along the path, no switching between different link resources is needed. Its disadvantages: the two communicating parties monopolize the line's link resources, which hurts link utilization; because data is sent directly, terminals of different types and rates are hard to interconnect and error control is difficult; and because the circuit must be established and held for the whole session, setup takes time and the equipment along the path is tied up for long periods.

Packet switching: There is no need to establish a connection along the entire path; a packet is simply forwarded along an appropriate route when it arrives. Its advantages: there is no connection-establishment delay; because store-and-forward is used and the switching nodes perform route selection, traffic can be switched to another path when a link fails; and because the communicating parties do not monopolize link resources, those resources are used more efficiently. Its disadvantages: because data must be stored and forwarded at each switching node, there is forwarding delay (receiving the packet, checking its correctness, queuing, and transmitting it); if many users share the link or the volume of data is large, delays grow and packets may even be lost; because different packets may travel over different paths, they may arrive out of order; and packet switching carries only digital signals. For bursty data transmission, packet switching is the more suitable choice.

Quantitative analysis of packet switching and circuit switching can refer to: University of Science and Technology of China Zheng Xuan and Yang Jian's complete set of "Computer Network" P5 1.3 Network Core .

2.3 Access Network

The network edge and the network core were introduced above. This part introduces how the end systems at the network edge access the network core, and the physical media used by the access network. Of course, physical media are not used only in the access network; the links connecting the various systems in the network core also carry data over physical media; they are simply introduced here.

By access network we mean the network that physically connects an end system to its edge router, the first router on the path from that end system to any other distant end system. Based on the kinds of users being connected, access networks can be divided into home access and enterprise access.

Home access network types include DSL, cable, FTTH, dial-up, and satellite; enterprise access network types include Ethernet and WiFi. Of course, these technologies are not dedicated exclusively to one class of user; these are simply the common pairings.

DSL: Digital Subscriber Line. DSL Internet access is usually obtained from the telephone company that provides local telephone service. Each subscriber's DSL modem uses the existing telephone line (twisted-pair copper wire) to exchange data with a Digital Subscriber Line Access Multiplexer (DSLAM) located in the telephone company's local central office (CO). The home DSL modem takes digital data and converts it into high-frequency tones for transmission over the telephone line to the CO, where the DSLAM converts the signals back to digital form. The home telephone line carries data and traditional telephone signals at the same time by encoding them in different frequency bands: a high-speed downstream channel in the 50 kHz to 1 MHz band, a medium-speed upstream channel in the 4 kHz to 50 kHz band, and an ordinary two-way telephone channel in the 0 to 4 kHz band. On the customer side, a splitter separates the data signal arriving at the home from the telephone signal and forwards the data signal to the DSL modem. On the telephone company side, the DSLAM separates the data from the telephone signals and sends the data into the Internet core.

Cable: Cable Internet access uses the cable television company's existing infrastructure, in which the cable is typically coaxial. Cable Internet access requires a special modem called a cable modem. The Cable Modem Termination System (CMTS) serves a function similar to the DSLAM in a DSL network, converting the analog signals sent by the cable modems in many downstream homes back into digital form. The cable modem divides the HFC network into two channels, a downstream channel and an upstream channel; access is typically asymmetric, with the downstream channel usually allocated a higher transmission rate than the upstream channel.

FTTH: Fiber To The Home; simply put, an optical fiber path is provided directly from the local central office to the home. Each fiber leaving the central office is actually shared by many homes; only relatively close to the homes is it split into individual customer fibers. Two competing fiber-distribution architectures perform this splitting: Active Optical Networks (AON) and Passive Optical Networks (PON). AON is essentially switched Ethernet, which will be described in detail in later blogs on the link layer and LANs. In the PON architecture, each home has an Optical Network Terminator (ONT) connected by dedicated fiber to a neighborhood splitter; the splitter combines a number of homes onto a single shared fiber that connects back to the telephone company's central office.

Ethernet: Many companies and institutions, and some homes as well, use this LAN access technology, in which a local area network (LAN) connects end systems to the edge router. Ethernet users use twisted-pair copper wire to connect to an Ethernet switch, and that switch, or a network of such interconnected switches, then connects to the larger Internet.

WiFi: With the proliferation of wireless devices (such as mobile phones), wireless LAN access has become commonly known as WiFi. Roughly speaking, it replaces the electrical signaling of twisted-pair Ethernet with radio: a wireless router or base station converts the wireless signals to and from digital form, and such a router or base station is usually in turn attached to a modem that connects to the wired network beneath it. This content is described in detail in a later blog on wireless networks.

Wide-area wireless access: 3G, 4G/LTE, 5G, and satellite. This content is also covered in detail in the later blogs on wireless networks.

In the access network, the different technologies are applied mainly according to the physical transmission medium. Data transmission in the core network is also based on physical media, but today most of the core uses optical fiber.

3. Delay, packet loss, and throughput in packet-switched networks

Packets start at a source host, pass through a series of routers, and end their journey at a destination host. As a packet travels from one node to the next along this path, it suffers several different types of delay at each node: nodal processing delay, queuing delay, transmission delay, and propagation delay. Together these accumulate to give the total nodal delay.

3.1 The four types of nodal delay

Processing delay: the time required to examine the packet's header, check for bit-level errors, and decide where to direct the packet;

Queuing delay: the time the packet waits in the queue for transmission onto the link; it depends on the level of congestion at the router;

Transmission delay: the time required to push all of the packet's bits onto the link. Note that transmission at the sending end and reception at the receiving end are two sides of the same event: while the sender is pushing bits onto the line, the receiver is taking them off, so the transmission delay is counted once, not as the sum of the sending and receiving times. Transmission delay is a function of the packet's length and the link's transmission rate, and is independent of the distance between the routers.

Propagation delay: once a bit is on the link, the time it takes to propagate from that node to the next node; it is a function of the distance between the two routers.

Transmission delay and propagation delay are easily confused. Transmission delay is the time required to push the entire packet onto the link (equivalently, the time the receiving node spends taking the entire packet off the link), whereas propagation delay is the time a bit takes to travel across the transmission medium from one end to the other.

3.2 Queuing delay and packet loss

Queuing delay differs from the other three delays: it depends on the instantaneous volume of traffic arriving at the node. If a packet arrives and the previous packet has already been completely pushed out, the arriving packet experiences no queuing delay; otherwise it must wait, and the more packets queued ahead of an arriving packet, the larger its queuing delay.

Traffic intensity: Queuing delay is therefore usually characterized statistically. Suppose packets arrive at the queue at an average rate of a packets per second (pkt/s), bits are pushed out of the queue at a rate of R bps, and each packet consists of L bits. Then bits arrive at the queue at an average rate of La bps, and the traffic intensity is the ratio of that arrival rate to the rate at which bits are pushed out: La/R.

If La/R > 1, the average rate at which bits arrive at the queue exceeds the rate at which they can be transmitted. In this case the queue tends to grow without bound and the queuing delay approaches infinity, so a golden rule of traffic engineering is: design the system so that the traffic intensity is no greater than 1.

Now consider La/R <= 1. If packets arrive periodically, every packet finds an empty queue and there is no queuing delay. If, on the other hand, packets arrive in bursts rather than periodically, there can be a significant average queuing delay.

For example, suppose N packets arrive simultaneously. The first packet has no queuing delay, the second has a queuing delay of L/R seconds, and in general the nth packet transmitted has a queuing delay of (n-1)L/R seconds.
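A short sketch of this burst example: if N packets of L bits arrive together at a link of rate R, the nth packet waits (n-1)*L/R seconds, so the average queuing delay over the burst is (N-1)*L/(2R). The numbers below are illustrative.

```python
# Queuing delays when a burst of N equal-size packets arrives at an empty queue.
def queuing_delays(N: int, L_bits: float, R_bps: float) -> list[float]:
    return [(n - 1) * L_bits / R_bps for n in range(1, N + 1)]

delays = queuing_delays(N=5, L_bits=12_000, R_bps=1_000_000)  # illustrative values
print(delays)                     # [0.0, 0.012, 0.024, 0.036, 0.048]
print(sum(delays) / len(delays))  # 0.024 s on average, i.e. (N-1)*L/(2R)
```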

Of course, both examples above assume periodic arrivals, which is somewhat academic. In practice the arrival process is random and follows no particular pattern, but traffic intensity is still useful for an intuitive sense of the extent of queuing delay. If the traffic intensity La/R is close to 0, packets arrive rarely and far apart, an arriving packet is unlikely to find any other packet in the queue, and the queuing delay is close to 0. Conversely, as the traffic intensity La/R approaches 1, the average queue length grows longer and the queuing delay increases.

Packet loss: The discussion above assumed the queue could hold an unlimited number of packets. In reality the queue in front of a link has only finite capacity, which depends heavily on router design and cost. Because the queue is finite, the queuing delay does not actually approach infinity as the traffic intensity approaches 1. Instead, an arriving packet may find the queue full; with no place to store it, the router drops the packet, that is, the packet is lost. This is what is commonly called packet loss.

From the analysis above, a node's performance is therefore measured not only by delay but also by the probability of packet loss.

3.3 End-to-end delay

The end-to-end delay is the sum of the nodal delays a packet experiences from sender to destination. Suppose there are N-1 routers between the source host and the destination host and the network is uncongested (so queuing delay is negligible); the end-to-end delay is then N * (processing delay + transmission delay + propagation delay).
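A compact sketch of this formula; the per-node processing delay, packet size, link rate, and link length below are purely illustrative assumptions.

```python
# End-to-end delay over N links with negligible queuing, per the formula above.
def end_to_end_delay(N, d_proc_s, L_bits, R_bps, dist_m, speed_mps=2e8):
    per_node = d_proc_s + L_bits / R_bps + dist_m / speed_mps
    return N * per_node

# Assumed values: 3 links, 1 ms processing per node, 12,000-bit packets,
# 10 Mbps links, 1,000 km per link, propagation speed 2e8 m/s.
print(end_to_end_delay(3, 0.001, 12_000, 10_000_000, 1_000_000))  # about 0.0216 s
```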

Other delays in end systems: first, an end system that shares a transmission medium may intentionally delay its transmissions as part of its protocol for sharing the medium with other end systems; second, media packetization delay, the time spent filling a packet with encoded media before it can be sent.

3.4 Throughput in Computer Networks

Besides delay and packet loss, end-to-end throughput is another important measure of network performance. To define throughput, consider transferring a large file from host A to host B across a computer network. The instantaneous throughput at any instant is the rate (in bps) at which host B is receiving the file. If the file consists of F bits and host B takes T seconds to receive all F bits, the average throughput of the transfer is F/T bps.

Because of the physical characteristics of transmission media, different links run at different rates, and the throughput is ultimately determined by the link with the lower rate. Suppose a server sends a file to a client; let Rs denote the rate of the link between the server and the router, and Rc the rate of the link between the router and the client. The server clearly cannot push bits into its link faster than Rs bps, and the router cannot deliver bits to the client faster than Rc bps.

If Rs < Rc, the bits injected by the server at rate Rs flow straight through the router and arrive at the client at Rs bps, giving a throughput of Rs bps.

If Rs > Rc, the router cannot forward bits as fast as it receives them; bits leave the router only at rate Rc, and the end-to-end throughput is Rc.

The throughput of this simple two-link network is therefore min{Rs, Rc}, that is, the transmission rate of the bottleneck link. More generally, if there are N links between the server and the client with transmission rates R1, R2, R3, ..., RN, the throughput from server to client is determined by the slowest link on the path and can be expressed as min{R1, R2, R3, ..., RN}.

On a shared link, all end-to-end transfers in progress share the link's resources at the same time. Suppose 5 file transfers are simultaneously sending bits over one physical transmission medium; the medium then divides its transmission rate equally, so each transfer gets 1/5 of the medium's rate R, that is, R/5.
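A brief sketch of bottleneck throughput and equal sharing; the link rates are invented for illustration.

```python
# End-to-end throughput is limited by the slowest link on the path;
# when K transfers share one link equally, each gets 1/K of that link's rate.
def path_throughput(link_rates_bps):
    return min(link_rates_bps)

rates = [100e6, 40e6, 1e9]       # illustrative per-link rates: 100 Mbps, 40 Mbps, 1 Gbps
print(path_throughput(rates))     # 40,000,000.0 bps: the bottleneck link decides

shared_link_bps = 100e6
print(shared_link_bps / 5)        # 5 simultaneous transfers -> 20 Mbps each
```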

4. Protocol layers and service models

In the Internet, all activity involving two or more communicating remote entities is governed by protocols. To give structure to the design of network protocols, network designers organize protocols, and the hardware and software that implement them, into layers. The set of protocols of the various layers, taken together, is called the protocol stack. The Internet protocol stack consists of five layers, from top to bottom: the application layer, transport layer, network layer, link layer, and physical layer.

4.1 Five-layer network structure

Application layer (software): An application-layer packet is called a message. Applications distributed across multiple end systems use application-layer protocols to exchange messages. Application-layer protocols include HTTP, SMTP, FTP, and so on;

Transport layer (software): A transport-layer packet is called a segment. The transport layer transports application-layer messages between application endpoints. It has two protocols, TCP and UDP. TCP provides a connection-oriented service that includes guaranteed delivery of application-layer messages to the destination and flow control (matching sender and receiver speeds); it also breaks long messages into shorter segments and provides a congestion-control mechanism, so that when the network is congested the source throttles its transmission rate. UDP provides its applications with a connectionless service that offers no reliability, no flow control, and no congestion control.

Network layer (a mixture of hardware and software): A network-layer packet is called a datagram. The network layer moves datagrams from one host to another and includes the Internet Protocol and routing protocols. The famous Internet Protocol (IP) defines the fields in the datagram and how end systems and routers act on those fields. There is only one IP protocol, and every Internet component that has a network layer must run it, which is why the network layer is often simply called the IP layer even though it contains routing protocols as well as IP.

Link layer: A link-layer packet is called a frame. To move a packet from one node (host or router) to the next node on the path, the network layer relies on the services of the link layer. Link-layer protocols include Ethernet, WiFi, and the cable access network's DOCSIS protocol. The services provided depend on the specific link-layer protocol deployed on the link; some protocols provide reliable delivery over the link. A datagram may be handled by different link-layer protocols on the different links along its path and so receives different services from them.

Physical layer: The task of the physical layer is to move the individual bits within a frame from one node to the next. Physical-layer protocols are again link-dependent and depend on the actual transmission medium of the link.

4.2 OSI seven-layer network model

The OSI (Open System Interconnection) reference model is a standard system developed by the International Organization for Standardization (ISO) for interconnection between computers or communication systems, and is generally called the OSI reference model or the seven-layer model.

From top to bottom, the OSI model is divided into: the application layer, presentation layer, session layer, transport layer, network layer, link layer, and physical layer. Compared with the five-layer structure, it adds the presentation layer and the session layer.

Presentation layer: enables communicating applications to interpret the meaning of the data they exchange; its services include data compression, data encryption, and data description;

Session layer: Provides demarcation and synchronization functions for data exchange, including methods for establishing checkpoints and recovery schemes.

4.3 Encapsulation

On the sending host, an application-layer message is passed to the transport layer. The transport layer attaches additional information to the message in a header, which will be used by the transport layer at the receiving end. The application-layer message plus the transport-layer header together form a transport-layer segment; in other words, the transport layer encapsulates the application-layer message.

Similarly, the transport layer passes the segment to the network layer, which adds its own network-layer header information, such as the source and destination end-system addresses, creating a network-layer datagram.

Continuing downward, the datagram is passed to the link layer, which adds its own link-layer header information, creating a link-layer frame.

At each layer, a packet therefore has two types of fields: header fields and a payload field; the payload is typically the packet from the layer above.

At the receiving end the process is reversed: the headers are removed layer by layer until the original message is recovered.
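A toy sketch of encapsulation and de-encapsulation: each layer prepends its own header to the payload handed down from the layer above, and the receiver strips the headers in reverse order. The one-byte header "tags" here are invented for illustration and are not real protocol headers.

```python
# Toy encapsulation: each layer prepends an invented 1-byte header tag.
HEADERS = [b"T", b"N", b"L"]   # transport, network, link (hypothetical tags)

def encapsulate(message: bytes) -> bytes:
    packet = message
    for tag in HEADERS:              # message -> segment -> datagram -> frame
        packet = tag + packet
    return packet

def decapsulate(frame: bytes) -> bytes:
    packet = frame
    for tag in reversed(HEADERS):    # strip link, then network, then transport headers
        assert packet.startswith(tag)
        packet = packet[len(tag):]
    return packet

frame = encapsulate(b"GET /index.html")  # an application-layer message
print(frame)                             # b'LNTGET /index.html'
print(decapsulate(frame))                # the original message is recovered
```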

The last part, the exercises, was originally intended to contain my answers to the end-of-chapter exercises of Computer Networking: A Top-Down Approach. For reasons of space, I will add a link to that file once I finish organizing it.
