[Internet History, Technology, and Security Lecture Reflections]

  In the fifth week of the course, continuing up the TCP/IP network stack from the bottom, the focus was mainly on the transport layer.

Transport Layer

  If the previous content covered everything up to the IP layer, then starting from the transport layer we are dealing with TCP. The purpose of the TCP layer is to compensate for the errors that can occur at the IP layer and to make the best use of the available resources. To share the TCP/IP network effectively we need to know whether the network is fast or slow, how reliable the underlying links are, and what to do when something goes wrong; these are the problems the TCP layer solves. The core idea of TCP/IP is that when transmitting data, the protocol divides the data into many packets and sends each packet out separately. The sender keeps a copy of each packet until the other side acknowledges it, and only then throws the copy away. If a packet is lost, it is sent again, repeatedly if necessary, until it finally reaches the destination. This is essentially what TCP does: it keeps track of which packets have made it across the Internet and which have not yet arrived.

  A simple example: a message is divided into five packets, numbered 100, 200, 300, 400, and 500. The sender first sends three packets: 100, 200, and 300. Packets 100 and 300 reach the destination, but 200 is lost along the way. After some time, the receiver sends a message telling the sender that it is missing packet 200, from which the sender can infer that 100 was delivered successfully but 200 needs to be resent. Once the retransmitted 200 is acknowledged, the sender knows that 300 has also been delivered, so it can continue from 400, and so on, until the whole message has been sent.
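  A minimal Python sketch of this keep-a-copy-until-acknowledged idea (purely an illustrative simulation, not real TCP; the lossy_channel function and the loss pattern are invented to mirror the numbers in the example above):

```python
# Toy retransmission: the sender keeps copies of unacknowledged packets
# and resends any that are lost; packet 200 is dropped on its first try.

def lossy_channel(packet, attempt):
    """Pretend network: drops packet 200 on its first attempt only."""
    return not (packet == 200 and attempt == 1)

def send_reliably(packets):
    attempts = {p: 0 for p in packets}
    unacked = set(packets)          # copies kept until acknowledged
    while unacked:
        for p in sorted(unacked):
            attempts[p] += 1
            if lossy_channel(p, attempts[p]):
                print(f"packet {p} delivered (attempt {attempts[p]})")
                unacked.discard(p)  # ACK received, copy can be discarded
            else:
                print(f"packet {p} lost, will retransmit")
    print("all packets acknowledged")

send_reliably([100, 200, 300, 400, 500])
```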

  The operation above is easy to implement for a single connection, but the network contains a great many routers, and the computers attached to them may number in the billions. To get reliable data transmission, every sending computer has to keep a local copy of everything it has sent until it is acknowledged.

  In aggregate this is an enormous undertaking, so it is not surprising that in the late 1980s some people predicted the Internet would die out. At the time, computer vendors claimed that the academics were not smart enough to build a real network. Meanwhile, as the NSF network grew and more and more computers were connected, the backbone became too slow and began to fail, and it looked as if the vendors' prophecy that the scholars could not build a robust, scalable network would come true.

Van Jacobson - Slow Start Algorithm

  In the end the prediction did not come true, because after the serious network crash of 1987, computers on the market were given Van Jacobson's patch, which made the network work much better; that was also the last time the entire Internet crashed. People named the fix after Van Jacobson, although interestingly he does not like that name, because he is a shy and modest man. In any case it does not change the fact that, at that moment in the late 1980s, he invented the slow start algorithm and saved the network. In fact, the technology was still in use even as this lecture was being recorded.

  To explain why Van came to study the slow start algorithm, we have to start from the earliest campus Ethernets. At the time many campuses had Ethernet networks, which could be joined into larger networks over cables, and the 56 kb lines announced by NSF then connected those campus networks together into one huge network. Suddenly people who could not talk to each other before could communicate over the network, sending e-mail and moving large files, and everyone was excited about the technology. However, any single campus could overload the wide-area links a thousandfold, causing much of the network to discard packets.

  At the time, in the mid-1980s, Van was a researcher at Lawrence Berkeley Laboratory (LBL). Every course had a newsgroup, like a small bulletin board, and all the assignments were posted online. Van tried to download course material from his office at LBL to a machine in Evans Hall at Berkeley, and was surprised to find that the network throughput was effectively zero, roughly one packet every ten minutes. He went to investigate the problem with Mike Karels; by coincidence, Mike Karels, who led the team developing Berkeley Unix (BSD), had also been receiving reports of the same problem from all over the country. The two hit it off and decided to work together to solve the trouble.

  At the time the easiest way to run TCP/IP was to use Berkeley Unix, because a very good ARPA-funded implementation was built into it; but the implementation performed very poorly and collapsed even in small-scale tests. Van spent months looking for the problem without a breakthrough. Eventually he decided that the real issue was their ignorance of how the line actually worked: if they could understand how it worked, the right treatment would follow naturally. So they shifted their focus to how the protocol should handle changes in bandwidth.

  When a message is transmitted, it can be thought of as a train of packets. If you push these packets into the available bandwidth, they spread out along the path and eventually reach the receiver, which turns each data packet into an ACK; the ACKs then flow back to the sender. The ACKs act like a kind of clock, telling the sender when it is safe to send each new packet, because they arrive spaced apart by the slowest point in the network. The key question is how to send at the fastest possible rate without waste. After observing the behaviour, Van found that TCP was actually nearly perfect once it was running steadily; but if a connection starts at full speed all at once, it easily saturates a gateway, and from then on packets keep being lost. If instead you start gradually, you never overload the path, because the returning ACKs give you a clock with which to pace TCP's output queue.
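  A minimal sketch of the slow-start idea in Python (illustrative only; the segment counts, ssthresh value, path capacity, and loss condition are invented for the example and are not taken from any real implementation):

```python
# Toy slow start: the congestion window (cwnd) starts at one segment,
# roughly doubles each round trip while below the slow-start threshold
# (ssthresh), then grows linearly, and backs off when loss is detected.

def slow_start(ssthresh=16, capacity=20, rounds=12):
    cwnd = 1
    for rtt in range(1, rounds + 1):
        print(f"RTT {rtt}: sending {cwnd} segment(s)")
        if cwnd > capacity:
            # The path is overloaded: remember half the window and restart
            # from one segment (a very simplified back-off).
            print("  loss detected, backing off")
            ssthresh = max(cwnd // 2, 1)
            cwnd = 1
            continue
        if cwnd < ssthresh:
            cwnd *= 2    # exponential growth during slow start
        else:
            cwnd += 1    # linear growth (congestion avoidance)

slow_start()
```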

  How do you get this mechanism into TCP/IP implementations around the world? Van took a rather indirect approach: he had the senior kernel hackers in his group write programs that exercised user datagrams and triggered kernel errors, packaged all of these programs, and posted them to the TCP/IP mailing list. Many people downloaded and tried them; when the programs crashed, Van collected their feedback and fixed the bugs, then they crashed again and he fixed them again, and so on, until eventually a version was produced in which the kernel errors no longer occurred.

The Domain Name System

  The Domain Name System (DNS) sits somewhere between the Internet layer and the transport layer, or perhaps between the Internet layer and the link layer. It certainly does not belong to the application layer, nor to the link layer; although DNS makes use of the link layer, it is only an auxiliary protocol added on top of that layer.

  Computers use IP addresses, so routers do not actually know about domain names; they simply move data according to IP addresses. Numeric addresses, however, are hard for people to remember, and as the network grew and ever more computers and servers were connected, the flood of IP addresses became overwhelming. So the concept of the Domain Name System was invented: by using these visible names, the IP addresses behind them can be switched without anyone noticing. The Domain Name System is therefore like the Internet's address book, a large and fast distributed database. Because DNS uses caching, local name resolution stays fast even when part of the network fails.
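  A minimal sketch of name resolution with a local cache in Python (illustrative only; it uses the standard socket.gethostbyname call, and the cache here is just an in-memory dictionary rather than a real resolver cache with per-record TTLs):

```python
import socket

# Toy resolver cache: look a name up once, then answer from the cache.
_cache = {}

def resolve(name):
    if name in _cache:
        return _cache[name]               # answered locally, no lookup needed
    address = socket.gethostbyname(name)  # ask the system resolver / DNS
    _cache[name] = address
    return address

print(resolve("www.example.com"))  # first call goes out to DNS
print(resolve("www.example.com"))  # second call is served from the cache
```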


  As the course progresses, the lecture content is gradually coming closer to everyday life, and compared with the previous few lectures it is somewhat easier to understand.
