[Tsvwg] Re: [e2e] Are you interested in TOEs and related issues
Marko Zec
zec at tel.fer.hr
Sat Mar 6 02:21:37 PST 2004
On Saturday 06 March 2004 17:48, H.K. Jerry Chu wrote:
> ...
> <snip>
>
> >> It's mainly the TCP receive window, not the PMTU that dictates how
> >> much buffering will be needed. Routers that do cut-through need
> >> not buffer the whole pkt. More significant is the drop of the DRAM
> >> price and increase of the DRAM size making memory a non-issue in
> >> many cases today.
> >
> >If DRAM price were the only factor preventing vendors from
> >implementing unlimited packet queues, we would already have seen
> > tons of such devices on the market over the past 10 years or so.
> > Excessive queuing delays on routers are bad, especially for
> > high-speed traffic managed by TCP-like congestion control schemes.
> > Queue lengths on routers have always presented, and will always
> > present, a tradeoff between the smallest window usable for
> > handling traffic bursts on one hand, and the desire to keep
> > queuing delays as low as possible on the other.
>
> Assuming pkt switching speed has scaled up too, I don't see why
> today's much faster routers with a larger max queue length will cause
> the average queuing delay to be any longer than routers 10 years ago.
Congestion and the associated queuing delays typically arise from the
finite line speeds of outgoing network interfaces, which are sometimes
offered more traffic than they can transmit, so the excess packets have
to be either queued or discarded. This phenomenon is well known and is
largely independent of switching-fabric performance.
So, when such congestion does occur, the maximum queuing delay will be
approximately equal to the maximum buffer length (in bits) divided by
the line card speed. Note that with today's faster line speeds we
cannot afford the same level of queuing delay as on routers 10 years
ago - those delays now have to be reduced, preferably in proportion to
the increase in line rates. And that cannot be enforced with unlimited
buffer lengths.
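To make the relation above concrete, here is a minimal numerical sketch
(not from the original post; the buffer size and line rates are made-up
illustrative values):

```python
# Worst-case queuing delay under full congestion is roughly the time it
# takes the line card to drain a completely full output buffer:
#   delay = buffer size (bits) / line rate (bits per second)

def max_queuing_delay(buffer_bytes: int, line_rate_bps: float) -> float:
    """Worst-case drain time of a full output buffer, in seconds."""
    return buffer_bytes * 8 / line_rate_bps

# The same 1 MB buffer behind a 10 Mb/s vs. a 1 Gb/s line card
# (illustrative numbers only):
slow = max_queuing_delay(1_000_000, 10e6)  # 0.8 s of added delay
fast = max_queuing_delay(1_000_000, 1e9)   # 0.008 s of added delay
```

The point of the sketch: to keep the worst-case delay constant as line
rates grow, the buffer would have to grow only in proportion to the
rate - an unlimited buffer gives an unbounded worst-case delay.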
>
> A perhaps oversimplified comparison can be made between a conforming
> traffic using 9K jumbo frames vs. a non-conforming traffic burst of
> 13 back-to-back regular Ethernet frames. They both require the same
> amount of memory bandwidth in the forwarding path (2*9K ~= 13*1500).
> If all the forwarding optimization in today's routers has made the
> switching speed of 13 consecutive Ethernet "cells" from the same flow
> similar to 2 back-to-back jumbo frames, why would the former be any
> more evil than the latter?
The problem is that in today's routers the buffer sizes are typically
accounted for in packets, not in bytes. So, looking at your example and
supposing that the line card buffer is limited to 100 packets, a burst
of small frames will instantly consume 13% of the available buffer
slots, while the jumbos will only use 2%. This is of course no problem
if the line card can transmit all those frames instantly, but what if
it cannot?
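A tiny sketch of the accounting difference (the 100-slot limit and the
frame counts are just the numbers from the example above):

```python
# Packet-accounted buffers charge per packet, regardless of its size,
# so 13 small frames and 2 jumbo frames - carrying a comparable number
# of bytes (13*1500 ~= 2*9000) - consume very different shares of the
# buffer. Illustrative values only.

BUFFER_SLOTS = 100  # line card buffer limit, in packets

def slots_used_pct(n_packets: int) -> float:
    """Percentage of buffer slots a packet train occupies."""
    return 100.0 * n_packets / BUFFER_SLOTS

burst_small = slots_used_pct(13)  # 13 regular 1500-byte frames -> 13%
burst_jumbo = slots_used_pct(2)   # 2 jumbo 9K frames -> 2%
```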
>
> >> BTW, I asked a few transport folks at the Minneapolis IETF about
> >> how "evil" traffic bursts are in today's environment, but did not
> >> get any concrete answer. Perhaps this topic should be discussed
> >> in tsvwg or tcpm.
> >
> >Because queues in today's routers have finite maximum lengths, and
> > this model is unlikely to change in the foreseeable future,
> > excessive traffic bursts will be more likely subject to drop-tail
> > policing than other kinds of more smoothly shaped traffic. More
> > than that, the bursty traffic will not only have less chance of
> > reaching its target with all fragments in place, but it will also
> > most probably do much harm to
>
> Note that these are not IP fragments. Dropping one of them from a
> burst is no different from dropping one from any other random flow.
It _is_ different, because with spiky bursts you won't be occasionally
losing only a packet or two - you can expect to lose the entire (or
most of the) burst at once.
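This failure mode can be sketched with a toy drop-tail queue (all
parameters here - the 10-packet capacity, the 8 packets of background
backlog, the 13-packet burst - are made-up illustrative values):

```python
# A drop-tail queue near its limit: a burst arriving all at once loses
# most of its packets together, whereas smoothed traffic would have
# given the queue time to drain between arrivals.
from collections import deque

CAPACITY = 10  # queue length limit, in packets

def offer(queue: deque, packets) -> int:
    """Enqueue packets under drop-tail; return how many were dropped."""
    dropped = 0
    for p in packets:
        if len(queue) < CAPACITY:
            queue.append(p)
        else:
            dropped += 1
    return dropped

# Queue already holds 8 packets of background traffic; only 2 slots
# remain, so 11 of the 13 burst packets are dropped in one stroke:
q = deque(range(8))
lost = offer(q, [f"burst-{i}" for i in range(13)])  # lost == 11
```

For a sender, losing 11 of 13 packets of one window at once is far more
damaging than losing one packet here and there across many windows.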
Marko