[Tsvwg] Re: [e2e] Are you interested in TOEs and related issues
H.K. Jerry Chu
Jerry.Chu at eng.sun.com
Thu Mar 11 11:19:43 PST 2004
[Sorry for the late response due to traveling]
>> Assuming pkt switching speed has scaled up too, I don't see why
>> today's much faster routers with a larger max queue length will cause
>> the average queuing delay to be any longer than routers 10 years ago.
>
>
>Congestion and the associated queuing delays typically arise from finite
>line speeds at outgoing network interfaces, which are sometimes offered
>more traffic than they can transmit, so the excess packets have to be
>either queued or discarded. This phenomenon is well known and is largely
>independent of the switching fabric's performance.
>
>So, when such congestion does occur, the max. queuing delay will be
>approximately equal to the max. buffer length (in bits) divided by the
>line card speed. Note that with today's faster line speeds we cannot
>afford the same level of queuing delay as on routers 10 years ago -
>those delays now have to be reduced (preferably)
This is not obvious to me. Yes, the bandwidth-delay product will be
larger. But TCP algorithms have also advanced in the past few years to
perform well under a larger congestion window and many different drop
patterns.
>proportionally to the increase in line rates. And that can't be
>enforced with unlimited buffer lengths.
>
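The delay arithmetic quoted above (max queuing delay ≈ buffer length in bits divided by line rate) can be sketched with a small calculation. The buffer size and line rates below are illustrative assumptions, not figures from any actual router:

```python
# Hedged sketch: maximum queuing delay as a function of buffer size and
# line rate. All numbers are illustrative assumptions.

def max_queuing_delay_ms(buffer_bytes: int, line_rate_bps: int) -> float:
    """Max queuing delay = buffer length (in bits) / line card speed."""
    return buffer_bytes * 8 / line_rate_bps * 1000

# A hypothetical 155 Mb/s (OC-3) card with a 2 MB buffer:
old = max_quuing = max_queuing_delay_ms(2 * 1024 * 1024, 155_000_000)

# The same 2 MB buffer on a hypothetical 10 Gb/s card drains much faster,
# so keeping the same delay target would allow a proportionally larger
# buffer -- or, equivalently, the same buffer gives far less delay:
new = max_queuing_delay_ms(2 * 1024 * 1024, 10_000_000_000)

print(f"{old:.1f} ms vs {new:.2f} ms")
```

This is just the quoted formula evaluated at two line rates; it shows why a buffer that was tolerable at old line speeds implies a much smaller worst-case delay at today's rates, and why fixed delay targets constrain buffer length.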
Reducing network latency is always an admirable goal. This is especially
important for new applications such as VoIP. But latency and throughput
often impose conflicting design tradeoffs.
One solution is to have routers examine the TOS byte and schedule
latency-sensitive packets ahead of throughput-oriented packets, so they
don't suffer excessive delay behind a large queue.
>
>>
>> A perhaps oversimplified comparison can be made between conforming
>> traffic using 9K jumbo frames and a non-conforming traffic burst of
>> 13 back-to-back regular Ethernet frames. Both require the same
>> amount of memory bandwidth in the forwarding path (2*9K ~= 13*1500).
>> If all the forwarding optimization in today's routers has made the
>> switching speed of 13 consecutive Ethernet "cells" from the same flow
>> similar to that of 2 back-to-back jumbo frames, why would the former
>> be any more evil than the latter?
>
>
>The problem is that in today's routers the buffer sizes are typically
>accounted in packets, not in bytes. So looking at your example, and
>supposing that the line card buffer is limited to 100 packets, a burst
>of small frames will instantly consume 13% of available buffer slots,
>while jumbos will only use 2%. This is of course no problem if the line
>card can transmit all those frames instantly, but what if it cannot?
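The buffer-accounting point above reduces to simple arithmetic: nearly equal byte counts, very different slot counts. The 100-slot limit comes from the quoted example:

```python
# Hedged sketch of packet-accounted vs byte-accounted buffer usage,
# using the numbers from the discussion above.
BUFFER_SLOTS = 100          # per-line-card limit, in packets (from the example)

burst_small = 13            # 13 back-to-back 1500-byte Ethernet frames
burst_jumbo = 2             # 2 back-to-back 9000-byte jumbo frames

# Roughly equal byte counts (13*1500 = 19500 ~= 2*9000 = 18000) ...
bytes_small = burst_small * 1500
bytes_jumbo = burst_jumbo * 9000

# ... but very different slot usage when the buffer counts packets:
pct_small = 100 * burst_small / BUFFER_SLOTS   # 13% of the buffer
pct_jumbo = 100 * burst_jumbo / BUFFER_SLOTS   # 2% of the buffer
print(pct_small, pct_jumbo)
```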
>
>
>>
>> >> BTW, I asked a few transport folks at the Minneapolis IETF about
>> >> how "evil" traffic bursts are in today's environment, but did not
>> >> get any concrete answer. Perhaps this topic should be discussed in
>> >> tsvwg or tcpm.
>> >
>> >Because queues in today's routers have finite maximum lengths, and
>> > this model is unlikely to change in the foreseeable future,
>> > excessive traffic bursts will be more likely subject to drop-tail
>> > policing than other kinds of more smoothly shaped traffic. More
>> > than that, the bursty traffic will not only have less chance of
>> > reaching its target with all fragments in place, but it will also
>> > most probably do much harm to
>>
>> Note that these are not IP fragments. Dropping one of them from a
>> burst is no different from dropping one from any other random flow.
>
>
>It _is_ different, because with spiky bursts you won't be occasionally
>losing only a packet or two; you can expect to lose the entire burst
>(or most of it) at once.
Many new TCP features and algorithms developed in recent years allow the
sender to recover quickly from just about any drop pattern except a
complete tail drop. One would hope that with RED, tail drops will be
rare.
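The reason RED makes tail drops rare is that it begins dropping probabilistically well before the queue fills. A minimal sketch of RED's early-drop ramp, with assumed thresholds (the real algorithm also averages the queue length with an EWMA, omitted here for brevity):

```python
# Hedged sketch of RED's drop decision. Thresholds are illustrative
# assumptions; real RED operates on an EWMA of the queue length.
import random

MIN_TH, MAX_TH, MAX_P = 20.0, 80.0, 0.1   # assumed thresholds (in packets)

def red_drop_probability(avg_queue: float) -> float:
    """Probability of an early (pre-tail-drop) discard at this queue depth."""
    if avg_queue < MIN_TH:
        return 0.0                          # queue short: never drop
    if avg_queue >= MAX_TH:
        return 1.0                          # queue full: forced (tail) drop
    # Linear ramp between the thresholds.
    return MAX_P * (avg_queue - MIN_TH) / (MAX_TH - MIN_TH)

def red_enqueue(avg_queue: float) -> bool:
    """True if the packet is accepted into the queue."""
    return random.random() >= red_drop_probability(avg_queue)

print(red_drop_probability(50))
```

The ramp spreads drops thinly across many flows as the queue builds, signaling senders to slow down before the buffer overflows, which is exactly what keeps whole-burst tail drops rare.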
Jerry
>
>Marko
>