The core concepts of computer networks

This is the second article in the "Computer Networks" series.

Our first article covered the basics of computer networks: fundamental Internet terminology, what a protocol is, the common physical media, and the access and transmission networks. In this article we explore the network core: packet switching, circuit switching, delay, packet loss, throughput, the layered protocol model, and network attacks.

Network core

The network core is the mesh of packet switches and links that interconnects the Internet's end systems. The following figure depicts it.

So how do different ISPs, local networks, and home networks exchange information? There are two fundamental approaches: packet switching and circuit switching. Let's get to know each in turn.

Packet switching

In Internet applications, end systems exchange information with one another in units called messages. A message can contain anything you like: text, data, email, audio, video. To move a message from a source end system to a destination, the sender cuts the long message into smaller blocks of data called packets; in other words, a message is composed of packets. Between source and destination, each packet travels through communication links and packet switches. Communication links include twisted-pair copper wire, coaxial cable, and optical fiber; packet switches come in two main flavors, link-layer switches and routers. (If this is unfamiliar, go back and read the previous article first.) Moving a packet between end systems takes time: if an end system sends a packet of L bits over a link with transmission rate R bits per second, the transmission time is L / R seconds.
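The L / R formula above can be sketched in a few lines of Python (the message size and link rate below are illustrative numbers, not from the article):

```python
def transmission_time(length_bits: int, rate_bps: float) -> float:
    """Time to push length_bits onto a link of rate rate_bps (t = L / R)."""
    return length_bits / rate_bps

# A 7.5 Mb message over a 1.5 Mbps link:
t = transmission_time(7_500_000, 1_500_000)
print(t)  # 5.0 seconds
```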

Now let's walk through the packet-switching process. To reach the other end system, a packet must pass through one or more switches. When a packet arrives at a switch, can the switch forward it right away? No. The switch is not that selfless: "You want me to forward your packet? Fine, but first hand me the entire packet, and then I'll consider sending it on." This is store-and-forward transmission.

Store-and-forward transmission

Store-and-forward transmission means the switch must receive the entire packet, down to the last bit, before it can begin transmitting any of it onto the outbound link. The following figure illustrates the idea.

As the figure shows, packets 1, 2, and 3 are transmitted toward the switch at R bps. Once the switch has received some bits of a packet, will it forward them immediately? The answer is no: the switch first buffers the entire packet locally. It's like cheating on an exam: the class ace passes the answers to struggling student A, who must relay them to struggling student B. When A receives the answers, can he copy them straight onto B's paper as they arrive? No: A first writes the answers down for himself (the "store" step) and only then passes the complete copy on to B (the "forward" step).
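Because each switch waits for the whole packet, a single packet crossing N store-and-forward links of equal rate pays the full L / R transmission delay on every link. A minimal sketch (the numbers are illustrative):

```python
def store_and_forward_delay(length_bits: int, rate_bps: float, num_links: int) -> float:
    # Each switch must receive the ENTIRE packet before forwarding it,
    # so the packet pays a full L/R transmission delay on every link.
    return num_links * (length_bits / rate_bps)

# One packet over two links (source -> switch -> destination):
print(store_and_forward_delay(7_500_000, 1_500_000, 2))  # 10.0 seconds
```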

Queuing delay and packet loss

What? Did you think a switch connects to only one communication link? Of course not; it's a switch, after all. How could it have just one link?

So you can probably see the problem: when multiple end systems send packets to the same switch, the packets arrive in some order and must queue. For each attached link, the packet switch has an output buffer (also called an output queue) that stores packets waiting to be sent onto that link. If an arriving packet finds the router busy transmitting another packet, it must wait in the output queue. The time a packet spends waiting to be forwarded is called the queuing delay; the waiting involved in buffering a whole packet at a switch before sending it on is the store-and-forward delay. So far we have met two delays, but in fact there are four in total. These delays are not fixed: they vary with the level of congestion in the network.

Because the output buffer has finite capacity, when packets arriving over multiple links overflow it, there is nowhere to put a newly arriving packet. The result is packet loss: either the arriving packet or one of the already-queued packets is dropped.

The following figure illustrates a simple packet-switched network.

In the figure, packets are drawn as three-dimensional slabs, and the width of a slab indicates the packet's size; here all slabs have the same width, so all packets are the same size. Now consider this scenario: hosts A and B send packets to host E. Each host first sends its packets over a 100 Mbps Ethernet link to the first router, which then directs the packets onto a 15 Mbps link. If, over a short interval, the packet arrival rate at the router (converted to bits per second) exceeds 15 Mbps, the packets queue in the link's output buffer before being transmitted onto the link, and congestion builds. For example, if hosts A and B each send a burst of five back-to-back packets, most of those packets will spend time waiting in the queue. The situation is entirely analogous to everyday life, like waiting in line at a toll booth or for a bank teller.
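A back-of-the-envelope calculation shows how fast the backlog builds in this scenario (the 10 ms burst length is an assumption added for illustration):

```python
# Two hosts bursting at 100 Mbps into a router whose outbound link is 15 Mbps.
inbound_bps = 2 * 100_000_000   # worst-case aggregate arrival rate
outbound_bps = 15_000_000       # bottleneck (outbound) link rate
burst_ms = 10                   # hosts send back-to-back for 10 ms (assumed)

# Bits that pile up in the output buffer during the burst:
backlog_bits = (inbound_bps - outbound_bps) * burst_ms // 1000
print(backlog_bits)  # 1850000 bits queued after just 10 ms
```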

Forwarding tables and routing protocols

As we just saw, a router connects to multiple communication links, and simultaneous arrivals can cause queuing and packet loss while packets wait in a queue to be sent. But here is a question: which queue does a given packet go into, i.e. onto which link is it forwarded? What mechanism decides?

Put another way, what is a router's job? To store and forward packets between different end systems. On the Internet, every end system has an IP address, and when a source host sends a packet, it places the destination's IP address in the packet header. Every router has a forwarding table; when a packet arrives, the router examines a portion of the packet's destination address, searches its forwarding table using that destination address to find the appropriate outbound link, and then directs the packet onto that link.
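A toy model of such a lookup is easy to sketch. Real routers do longest-prefix matching on binary IP addresses; the string prefixes, table entries, and link names below are purely hypothetical:

```python
# Hypothetical forwarding table: destination prefix -> output link.
forwarding_table = {
    "203.0.113": "link-1",
    "198.51.100": "link-2",
}

def forward(dest_ip: str) -> str:
    # Pick the longest table prefix that matches the destination address.
    matches = [p for p in forwarding_table if dest_ip.startswith(p)]
    if not matches:
        return "default-link"
    return forwarding_table[max(matches, key=len)]

print(forward("203.0.113.7"))  # link-1
print(forward("192.0.2.1"))    # default-link
```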

The next question is: how does a router's forwarding table get populated? We will come back to this in detail later; for now it is enough to know that routers run routing protocols that configure the forwarding tables automatically.

Circuit switching

The other way a computer network can move data through its links and switches is circuit switching. Circuit switching differs from packet switching in its resource reservation. What does that mean? Packet switching does not reserve link transmission rate or buffer space for the traffic between end systems, so packets may have to queue at each hop; circuit switching reserves these resources for the duration of the session. A simple analogy: imagine two restaurants, one that takes reservations and one that does not. For the one that takes reservations, we have to call ahead, but when we arrive we are seated and can order immediately. For the one that does not, we need not call ahead, but we run the risk of waiting in line when we arrive.

The figure below shows a circuit-switched network.

In this network, four switches are interconnected by four links. Each link carries four circuits, so each link can support four simultaneous connections. Each host is connected directly to one of the switches; when two hosts want to communicate, the network establishes a dedicated end-to-end connection between them.

Packet switching versus circuit switching

Opponents of packet switching often argue that it is unsuitable for real-time services because its end-to-end delays are unpredictable. Proponents counter that packet switching offers better bandwidth sharing than circuit switching, and that it is simpler, more efficient, and cheaper to implement. The trend today clearly favors packet switching.

Delay, packet loss, and throughput in packet-switched networks

The Internet can be viewed as an infrastructure that provides services to distributed applications running on end systems. Ideally, data transferred between any two end systems would never be lost, but that is a lofty goal that is hard to achieve in practice. Instead, networks must constrain the throughput between end systems to limit data loss; even so, delays arise between end systems, and there is no guarantee that packets will not be lost. So let us examine computer networks from three angles: delay, packet loss, and throughput.

Delay in packet-switched networks

A packet starts at a host (the source), passes through a series of routers, and ends its journey at another end system. At each node along the way, the packet suffers four principal delays: nodal processing delay, queuing delay, transmission delay, and propagation delay. Together these add up to the total nodal delay.

If dproc, dqueue, dtrans, and dprop denote the processing, queuing, transmission, and propagation delays, the total nodal delay is given by: dnodal = dproc + dqueue + dtrans + dprop.
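The formula is just a sum of the four components, which a quick sketch makes concrete (the delay values below are made up for illustration):

```python
def total_nodal_delay(d_proc: float, d_queue: float, d_trans: float, d_prop: float) -> float:
    # d_nodal = d_proc + d_queue + d_trans + d_prop
    return d_proc + d_queue + d_trans + d_prop

# All values in milliseconds (illustrative):
d = total_nodal_delay(d_proc=0.01, d_queue=1.2, d_trans=5.0, d_prop=2.5)
print(round(d, 2))  # 8.71
```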

Types of delay

Here is a typical delay diagram; let us use it to analyze the different types of delay.

An end system transmits a packet over a communication link to router A. Router A examines the packet's header to determine the appropriate outbound link and directs the packet onto it. The packet can be transmitted on that link only if no other packet is currently being transmitted on it and no other packets are queued ahead of it. If the link is busy, or other packets are ahead of it, the newly arriving packet joins the queue. Let us now discuss the four delays one by one.

Nodal processing delay

The nodal processing delay has two parts: the time the router takes to examine the packet's header, and the time it takes to decide which communication link the packet should be directed to. On high-speed routers the processing delay is typically on the order of microseconds or less. After this processing, the router places the packet into the forwarding queue for the chosen link.

Queuing delay

In that forwarding queue, the packet waits its turn to be transmitted; the time consumed waiting to send the packet is called the queuing delay. The queuing delay of a given packet depends on the number of earlier-arriving packets already queued ahead of it. If the queue is empty and no packet is currently being transmitted, the queuing delay is zero. In busy periods, when many packets are waiting to use the link, queuing delays grow. In practice, queuing delays range from microseconds to milliseconds.

Transmission delay

A queue is the main data structure the router uses, and it is first-in, first-out: like a cafeteria line, whoever arrives first is served first. The transmission delay is the time needed to push all of the packet's bits onto the link. If the packet is L bits long and the transmission rate of the link from router A to router B is R bps, the transmission delay is L / R; this is the time required to put the entire packet onto the link. Transmission delays are typically on the order of microseconds to milliseconds in practice.

Propagation delay

The propagation delay is the time a bit needs to travel from the beginning of the link to router B. Bits propagate at the propagation speed of the link, which depends on the link's physical medium (twisted pair, coaxial cable, optical fiber). As a formula, the propagation delay between two routers equals distance divided by propagation speed, i.e. d / s, where d is the distance between router A and router B and s is the propagation speed of the link.
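The d / s formula is easy to sketch. The 2e8 m/s figure below is a common rule-of-thumb propagation speed for copper and fiber (roughly two-thirds the speed of light), used here as an assumption:

```python
def propagation_delay(distance_m: float, speed_mps: float = 2e8) -> float:
    # d / s: link length divided by the signal propagation speed.
    # 2e8 m/s is an assumed, typical figure for guided media.
    return distance_m / speed_mps

# A 1000 km link:
print(propagation_delay(1_000_000))  # 0.005 s, i.e. 5 ms
```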

Transmission delay versus propagation delay

Newcomers to computer networking sometimes find transmission delay and propagation delay hard to tell apart, so let's spell out the difference. The transmission delay is the time the router needs to push the packet out; it is a function of the packet's length and the link's transmission rate, and has nothing to do with the distance between the two routers. The propagation delay is the time a bit needs to travel from one router to the next; it is a function of the distance between the two routers, and has nothing to do with the packet's length or the link's transmission rate. The formulas make this clear: transmission delay is L / R, packet length divided by transmission rate; propagation delay is d / s, router-to-router distance divided by propagation speed.

Queuing delay

Of these four delays, the most interesting is perhaps the queuing delay, dqueue. Unlike the other three (dproc, dtrans, dprop), the queuing delay can differ from packet to packet. For example, if 10 packets arrive at an empty queue at the same time, the first packet transmitted suffers no queuing delay, while the last one suffers the largest queuing delay (it must wait for the other nine to be transmitted).

How, then, do we characterize queuing delay? Three factors matter: the rate at which traffic arrives at the queue, the transmission rate of the link, and the nature of the arriving traffic, i.e. whether it arrives periodically or in bursts. Let a denote the average rate at which packets arrive at the queue (in packets per second, pkt/s), and let R be the transmission rate, the rate (in bits per second, bps) at which bits are pushed out of the queue. Assume for simplicity that every packet is L bits long; then the average rate at which bits arrive at the queue is La bps. The ratio La / R is called the traffic intensity. If La / R > 1, bits arrive at the queue faster on average than they can be transmitted out of it, and the queue tends to grow without bound. Hence a golden rule of system design: keep the traffic intensity no greater than 1.
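The traffic-intensity calculation can be sketched directly (the arrival rate, packet size, and link rate below are illustrative assumptions):

```python
def traffic_intensity(arrival_pkts_per_s: float, packet_bits: int, rate_bps: float) -> float:
    # La / R: average bit arrival rate (L * a) divided by the link rate R.
    return (packet_bits * arrival_pkts_per_s) / rate_bps

# 500 pkt/s of 12,000-bit packets into a 10 Mbps link:
rho = traffic_intensity(arrival_pkts_per_s=500, packet_bits=12_000, rate_bps=10e6)
print(rho)  # 0.6 -> stable; above 1.0 the queue would grow without bound
```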

Now consider the case La / R <= 1. Here the nature of the arriving traffic affects the queuing delay. If traffic arrives periodically, i.e. one packet every L / R seconds, every packet finds an empty queue and there is no queuing delay. If traffic arrives in bursts, the average queuing delay can be significant. The general relationship between average queuing delay and traffic intensity is shown in the figure below.

The horizontal axis is the traffic intensity La / R; the vertical axis is the average queuing delay.

Packet loss

In the discussion above we required that La / R not exceed 1, because if La / R > 1 the queue grows without bound. But a router's queue is finite: once it fills, a newly arriving packet finds no room, and the router drops it; that is, the packet is lost.

Computer network throughput

Besides packet loss and delay, another critical performance measure of a computer network is end-to-end throughput. Suppose Host A transfers a large file to Host B. The rate at which Host B receives the file at any given instant is the instantaneous throughput. If the file consists of F bits and Host B takes T seconds to receive all F bits, the average throughput of the transfer is F / T bps.

Protocol layers and service models

The Internet is a complex system, comprising not only a vast number of applications, end systems, communication links, and packet switches, but also a wide variety of protocols. Let us now look at how the Internet's protocols are layered.

Protocol layering

To give structure to the design of network protocols, network designers organize protocols in layers, with each protocol belonging to one of the layers. Each layer provides services to the layer above it: the so-called service model. Taken together, the protocols of the various layers are called the protocol stack. The Internet's protocol stack consists of five layers: physical, link, network, transport, and application. We will study them top-down, working from the application layer toward the physical layer.
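As a toy illustration of layering, each layer can be modeled as wrapping the data handed down from the layer above in its own header. This is only a sketch: the header strings are hypothetical stand-ins for what are, in reality, binary structures defined by each protocol.

```python
def encapsulate(message: str) -> str:
    # Wrap an application-layer message in one (hypothetical) header per layer,
    # working down the stack: transport, then network, then link.
    pdu = message
    for layer in ["transport", "network", "link"]:
        pdu = f"[{layer}-header]{pdu}"
    return pdu

print(encapsulate("GET /index.html"))
# [link-header][network-header][transport-header]GET /index.html
```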

Application layer

The application layer is where network applications and their protocols reside. The Internet's application layer includes many protocols: HTTP, which the Web depends on; SMTP, for email delivery; FTP, for transferring files between end systems; and DNS, which resolves domain names for us. An application-layer protocol is distributed over multiple end systems, with an application in one end system exchanging packets of information with an application in another. We call these application-layer packets messages.

Transport layer

The Internet's transport layer carries application-layer messages between application endpoints. This layer has two main transport protocols, TCP and UDP; either can carry messages, but the two differ enormously.

TCP provides a connection-oriented service to its applications: it can control and confirm whether messages arrive, and it provides a congestion-control mechanism, throttling its transmission rate when the network is congested.

UDP provides a connectionless service to its applications. It offers no reliability, no flow control, and no congestion control. We call transport-layer packets segments.

Network layer

The Internet's network layer is responsible for moving network-layer packets, called datagrams, from one host to another. A crucially important network-layer protocol is IP: every Internet component that has a network layer must run it. Besides IP, the network layer also contains other internet protocols and routing protocols. The network layer is often simply called the IP layer, which tells you just how important the IP protocol is.

Link layer

We now have protocols for applications to talk to each other, protocols to transport application data, and the IP protocol to specify where data should be sent. But how does the data actually get transmitted? To move a packet from one node (host or router) to the next, the network layer relies on the services of the link layer. Examples of link-layer protocols include Ethernet, WiFi, and the DOCSIS protocol for cable access. Because data typically crosses several links on its way from source to destination, a packet may be handled by different link-layer protocols along the way. We call link-layer packets frames.

Physical layer

While the link layer's job is to move entire frames from one node to the next, the physical layer's job is to move the individual bits within a frame from one node to the next. The protocols of the physical layer are again link-dependent, tied to the actual physical transmission medium. Ethernet, for example, has many physical-layer protocols: one for twisted-pair copper, one for coaxial cable, one for optical fiber, and so on.

A diagram of the five-layer protocol stack is shown below.

The OSI model

The protocol stack we discussed above is not the only one. ISO (the International Organization for Standardization) proposed that computer networks be organized into seven layers. So how does the seven-layer stack differ from the five-layer one?

As the figure makes clear at a glance, the OSI model adds a presentation layer and a session layer; the other layers are essentially the same. The presentation layer covers data compression, data encryption, and data description (which frees applications from worrying about the internal formats in which computers store data), while the session layer provides delimiting and synchronization of data exchange, including the means to build checkpointing and recovery schemes.

Network attacks

In the 21st century, with computing advancing at high speed, we depend on computer networks more than ever. But along with their many conveniences, networks also expose us to attacks. Let us survey the main kinds of attack found on networks.

Planting malware

Because we need to send and receive data over the Internet, we attach our devices to it and use all kinds of Internet applications: WeChat, Weibo, web browsing, streaming music, multimedia conferencing, and so on. Network attacks can happen at just these moments, without our noticing: attackers plant malware through such software to break into our computers, deleting our files, monitoring our activity, invading our privacy, and more. A compromised host may also become one member of a network of many similarly compromised devices, collectively known as a botnet, which attackers control and use to mount spam distribution or distributed denial-of-service attacks against target hosts.

Most malware is self-replicating and spreads aggressively: once it infects one host, it uses that host to seek out other hosts on the Internet and infect them in turn. Malware falls into two main categories: viruses and worms. A virus requires some form of user interaction to infect a device, for example an email attachment containing the virus; if the user receives and opens the infected attachment, the virus damages the computer in some way. A worm, by contrast, is malware that can enter a device without any user interaction: for example, the user may be running a vulnerable application that an attacker targets, and in some cases, with no user intervention at all, the application receives the malware over the Internet and runs it, creating the worm, which then spreads further.

Attacking servers and network infrastructure

Another broad class of attack is the denial-of-service (DoS) attack, which renders a network, host, server, or piece of network infrastructure unusable. Web servers, email servers, and DNS servers can all become DoS targets. Most Internet DoS attacks fall into three categories:

  • Vulnerability attack. This involves sending carefully crafted messages to a vulnerable application or operating system running on a target host. If the right sequence of packets is sent to a vulnerable application or operating system, the service can stop running.
  • Bandwidth flooding. The attacker sends a deluge of packets to the target host or server; so many packets arrive that the target's access link becomes clogged and legitimate packets cannot reach the server.
  • Connection flooding. Similar in nature to bandwidth flooding, except the attack instead works by opening a huge number of TCP connections. With so many TCP connections open, legitimate TCP connections can no longer reach the server.

The most common attack on the Internet is bandwidth flooding. Recall our earlier discussion of delay and packet loss: if a server's access rate is R bps, an attacker must send traffic at a rate greater than R bps to do damage. If R is very large, a single attack source may be unable to generate enough traffic to harm the server, so the attacker recruits multiple sources. This is the all-too-familiar distributed DoS (DDoS) attack: the attacker controls multiple sources and has each of them blast packets at the server until it is overwhelmed, as shown in the figure below.

Packet sniffing

Many users today access the Internet over wireless devices, such as WiFi-connected laptops or handhelds on cellular connections. Convenient as this is, it also creates an easy target: by placing a passive receiver near a wireless transmitter, an attacker obtains a copy of every packet transmitted! Those packets can contain all kinds of sensitive information, such as passwords and other secrets. A receiver that records a copy of every passing packet is called a packet sniffer. Packet sniffing works in wired environments too, and you can experiment with it using Wireshark.

IP spoofing

An attacker can craft a packet with an arbitrary source address, payload, and destination address, and then inject it into the Internet. This ability to inject packets with a forged source address is called IP spoofing. To counter it, we need endpoint authentication, a mechanism that lets us verify with certainty where a message truly originated. We will return to such mechanisms later.

References:

Computer Networking: A Top-Down Approach

http://zahid-stanikzai.com/types-of-delay/

If you found this useful, please give it a like. Thanks, everyone, and see you in the next article.



Origin blog.csdn.net/qq_36894974/article/details/103778956