It rained over the weekend, so I stayed home and dug up a few papers from about ten years ago.
I mainly had one question: is BBR really Van Jacobson's idea? After all, his name is on the paper. I wanted to know what role Van Jacobson actually played in BBR.
I didn't find the answer, but I did dig up some interesting things.
A 2011 article, "Bufferbloat: What's Wrong with the Internet?":
https://queue.acm.org/detail.cfm?id=2076798
This article is a roundtable discussion about bufferbloat. I have excerpted some interesting passages.
Configuring large buffers is conducive to equipment sales:

The following are some of Van Jacobson's complaints:

The next excerpt is interesting: it proposes evaluating a buffer in terms of time rather than size:
You have to say "I have a 10 ms buffer", but that statement only makes sense once you specify the rate at which packets enter and leave the buffer, and that rate is not fixed. Even so, the observation that "doubling the speed halves the delay" is quite subtle. And it really is the case, isn't it?
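The point can be made concrete with a few lines of arithmetic. This is a sketch with made-up numbers (the 128 KiB buffer and the link rates are my assumptions, not figures from the article): the same byte-sized buffer contributes half the delay when drained twice as fast, which is why sizing buffers in bytes says nothing about latency.

```python
# Sketch: a buffer's delay contribution depends on its drain rate, not
# just its size. All numbers here are illustrative assumptions.

def buffer_delay_ms(buffer_bytes: int, drain_rate_bps: int) -> float:
    """Worst-case queuing delay of a full FIFO buffer, in milliseconds."""
    return buffer_bytes * 8 / drain_rate_bps * 1000

BUF = 128 * 1024  # a hypothetical 128 KiB device buffer

d1 = buffer_delay_ms(BUF, 10_000_000)  # drained at 10 Mbit/s
d2 = buffer_delay_ms(BUF, 20_000_000)  # same buffer drained at 20 Mbit/s

# Doubling the drain rate halves the delay of the same buffer.
print(f"{d1:.1f} ms vs {d2:.1f} ms")
```

So "I have a 10 ms buffer" is really a statement about bytes *and* rate together, and the rate varies.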
The following paragraph gives a theoretical argument for why bufferbloat always occurs at the edge. In reality, though, economics is the dominant factor: that the core backbone needs little buffering while edge networks tend toward larger buffers are both, in the end, economic outcomes:
One important point is rarely discussed in papers like these: the original function of a buffer is only to smooth the bursts inherent in statistical multiplexing. That was settled at the very beginning, in the 1960s design of statistically multiplexed store-and-forward packet-switched networks. Around 1986, however, the buffer was recruited as one of the core components of congestion control. It took on a part-time job: providing a signal to the end-to-end TCP protocol when it overflows. That's all. And that is where bufferbloat began.
Nowadays, BBR takes a different approach to congestion control: it no longer aims to fill the buffer in order to obtain a packet-loss signal for convergence, which is clearly an innovation. In fact, delay-based congestion control algorithms had long pursued the same goal, but because they cannot coexist with AIMD flows they were never widely deployed. BBR ultimately faces the same problem, which is why BBRv2 could not stay pure.
Besides congestion control algorithms, AQM is the other battleground. The following paper is even more interesting: "Controlling Queue Delay" (subtitled "A modern AQM is just one piece of the solution to bufferbloat."):
https://queue.acm.org/detail.cfm?id=2209336
The Jacobson pipe described below is elementary, but it is very good, and you can already smell BltBw and RTprop in it:
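The pipe picture reduces to one product: the bandwidth-delay product (BDP), the amount of data in flight that exactly fills the pipe. BltBw and RTprop are the two quantities BBR later estimates; the numbers below are my own illustrative assumptions, not figures from either paper.

```python
# Sketch of the "pipe" picture: bandwidth-delay product (BDP).
# The link rate and RTT below are made-up illustrative values.

def bdp_bytes(bltbw_bps: float, rtprop_s: float) -> float:
    """Bytes in flight needed to fill the pipe without building a queue."""
    return bltbw_bps * rtprop_s / 8

inflight = bdp_bytes(bltbw_bps=100e6, rtprop_s=0.040)  # 100 Mbit/s, 40 ms
print(f"BDP = {inflight / 1024:.0f} KiB")

# Below the BDP, the link is underutilized; above it, extra inflight data
# only builds a standing queue and adds delay -- it cannot add throughput.
```

That standing queue above the BDP is precisely the bloat both papers are attacking.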
On this basis, the paper proposes the logic of CoDel, an exquisitely simple and sophisticated queue management algorithm.
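To show why CoDel counts as simple, here is a minimal sketch of its control law as I understand it from the paper: watch each packet's sojourn time in the queue, and only start dropping once delay has stayed above a small target for a whole interval; then space successive drops as interval/sqrt(count). The target and interval are CoDel's published defaults, but the class itself is a toy paraphrase, not the reference implementation.

```python
# Minimal sketch of CoDel's control law (a paraphrase, not the reference
# code). TARGET and INTERVAL are CoDel's default parameters.
import math

TARGET = 0.005    # 5 ms: acceptable standing-queue delay
INTERVAL = 0.100  # 100 ms: a worst-case RTT the delay must persist over

class CoDel:
    def __init__(self) -> None:
        self.first_above_time = 0.0  # deadline set when delay first exceeds TARGET
        self.dropping = False        # are we in the dropping state?
        self.count = 0               # drops in the current dropping state
        self.drop_next = 0.0         # time of the next scheduled drop

    def should_drop(self, sojourn: float, now: float) -> bool:
        """Decide the fate of one dequeued packet from its sojourn time."""
        if sojourn < TARGET:
            # Delay is acceptable again: leave the dropping state.
            self.first_above_time = 0.0
            self.dropping = False
            return False
        if self.first_above_time == 0.0:
            # Delay just crossed TARGET: arm a deadline one INTERVAL away.
            self.first_above_time = now + INTERVAL
            return False
        if not self.dropping and now >= self.first_above_time:
            # Delay stayed above TARGET for a whole INTERVAL: start dropping.
            self.dropping = True
            self.count = 1
            self.drop_next = now
        if self.dropping and now >= self.drop_next:
            # Control law: drop spacing shrinks as INTERVAL / sqrt(count),
            # gently increasing pressure until delay falls back under TARGET.
            self.count += 1
            self.drop_next = now + INTERVAL / math.sqrt(self.count)
            return True
        return False
```

Note the elegance: there is no queue-length threshold to tune at all. The only state is a timestamp, a flag, and a counter, and the algorithm adapts to any link rate because it measures delay, in time, exactly as the earlier article demanded.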
AQM must be simple and must be adaptive. These are core requirements laid down at the very beginning of the Internet's design, and they are also the fundamental reason the transmission control protocol is end-to-end. We must never forget the Internet's core principle: Intelligent Edge && Dumb Core.
The leather shoes in Wenzhou, Zhejiang are wet, so they won’t get fat in the rain.