[e2e] Do we have buffer bloat on edge routers or on core routers?
Fred Baker (fred)
fred at cisco.com
Sat Mar 30 15:11:34 PDT 2013
On Mar 28, 2013, at 9:48 AM, Detlef Bosau <detlef.bosau at web.de> wrote:
> Perhaps my question is a stupid one, however, can someone help me here?
Typically, buffer bloat is reported in broadband networks, in the CPE router or the CMTS/BRAS it connects to. That is the reason for RFC 6057: certain applications have the behavior of overloading broadband networks, preventing the service provider from consistently delivering service as advertised to all of his customers, and making him do something to schedule traffic. It is also observed in wifi networks, typically when heavily used, such as in metropolitan or conference wifi.

A place folks might not think too hard about, but where switch manufacturers do, is input-queued switches. Imagine I have many ingresses receiving traffic for the same egress, and all are at the same speed. I now have the classic situation of many inputs feeding the same queue and as a result overloading it; in this case, the ingresses are not only different, but on different cards and therefore under different queue controllers. This is where we find ourselves interested not only in queue depth but in actual or expected time in queue: if the oldest packet in a queue has been there 10 or 15 ms, even though the total amount of traffic in the queue is only 2-3 ms worth, I might want to start signaling to data sources.
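To make the time-in-queue point concrete, here is a minimal sketch (in Python; the link rate and the 10 ms threshold are illustrative assumptions, not anything from a particular switch) of a FIFO that records when each packet was enqueued and signals on the head packet's waiting time rather than on byte depth alone:

    import time
    from collections import deque

    TARGET_SOJOURN = 0.010   # illustrative 10 ms bound on time-in-queue
    LINK_RATE_BPS  = 1e9     # assumed egress rate, to convert bytes to time

    class TimedQueue:
        """FIFO that tracks how long its oldest packet has waited."""

        def __init__(self):
            self.q = deque()        # entries are (enqueue_time, packet_bytes)
            self.bytes_queued = 0

        def enqueue(self, packet_bytes):
            self.q.append((time.monotonic(), packet_bytes))
            self.bytes_queued += packet_bytes

        def dequeue(self):
            enq_time, packet_bytes = self.q.popleft()
            self.bytes_queued -= packet_bytes
            sojourn = time.monotonic() - enq_time
            # Depth can look harmless (a few ms worth of bytes) while the head
            # packet has waited 10-15 ms because many ingresses on different
            # cards feed this egress; decide on time, not depth.
            depth_seconds = self.bytes_queued * 8 / LINK_RATE_BPS
            signal_sources = sojourn > TARGET_SOJOURN
            return packet_bytes, sojourn, depth_seconds, signal_sources

The only point of the sketch is that a queue holding 2-3 ms worth of bytes can still contain a packet that has waited 10-15 ms when many ingresses converge on it, which is exactly the case where depth-based accounting misleads.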
Mark Allman recently published a paper in which he says that his massively over-provisioned FiOS network doesn't seem to have this problem, so he doesn't think it's a real problem. I see a lot of papers that commit the six philosophers fallacy (six blind philosophers approach an elephant, feel the part closest to them, and describe the elephant: "it's like a tree", "no, it's like a snake", "no, it's like a wall"...). Another is the paper that looked at one of the best-engineered backbones in the world, measured traffic on high-capacity fiber that was all rate-shaped by coming through relatively low-speed broadband links, and concluded that traffic in the Internet showed little if any variation or queuing, and hence that we could almost live without queues at all. Leland's paper on self-similar Ethernet traffic (which is actually incorrect; it's heavy-tailed, not self-similar) was a surprise to the academic community because of the same basic fallacy.
Yes, buffer bloat happens. If it doesn't happen in a specific network, that's a good thing and I'm happy for the network. The primary way that network designers can reduce buffer bloat is over-provisioning. But there are applications that will overuse any bandwidth they are given. At the point where over-provisioning makes a financial impact on the operator, expect him to do something to manage his costs.