[Tsvwg] Re: [e2e] Are you interested in TOEs and related issues
H.K. Jerry Chu
Jerry.Chu at eng.sun.com
Sat Mar 6 08:48:14 PST 2004
...
<snip>
>> It's mainly the TCP receive window, not the PMTU that dictates how
>> much buffering will be needed. Routers that do cut-through need not
>> buffer the whole pkt. More significant is the drop of the DRAM price
>> and increase of the DRAM size making memory a non-issue in many cases
>> today.
>
>
>If the DRAM price alone were the only factor preventing vendors from
>implementing unlimited packet queues, we would already have seen tons
>of such devices on the market over the past 10 years or so. Excessive
>queuing delays on routers are bad, especially for high-speed traffic
>managed by TCP-like congestion control schemes. The queue lengths on
>routers have always presented, and will always present, a tradeoff:
>buffer enough to absorb traffic bursts on the one hand, and keep
>queueing delays as low as possible on the other.
Assuming pkt switching speed has scaled up too, I don't see why today's
much faster routers, even with larger max queue lengths, would cause the
average queuing delay to be any longer than that of routers 10 years ago.
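A back-of-the-envelope check of that scaling argument (with hypothetical
numbers, not measurements from any particular router): if both the link
rate and the maximum queue length scale up 10x, the worst-case drain
time of a full drop-tail queue is unchanged.

```python
# Worst-case drop-tail queuing delay = queue capacity / link rate.
# The router sizes below are illustrative assumptions only.

def queuing_delay_ms(queue_bytes, link_bps):
    """Time to drain a completely full queue onto the wire, in ms."""
    return queue_bytes * 8 / link_bps * 1000

# A hypothetical 100 Mb/s router with a 125 KB queue...
old = queuing_delay_ms(queue_bytes=125_000, link_bps=100e6)
# ...vs. one scaled up 10x in both link rate and queue length.
new = queuing_delay_ms(queue_bytes=1_250_000, link_bps=1e9)

print(old, new)  # both 10.0 ms: delay is unchanged when both scale together
```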
A perhaps oversimplified comparison can be made between conforming
traffic using 9K jumbo frames and a non-conforming burst of 13
back-to-back regular Ethernet frames. Both require the same amount of
memory bandwidth in the forwarding path (2*9K ~= 13*1500). If all
the forwarding optimization in today's routers has made the switching
speed of 13 consecutive Ethernet "cells" from the same flow similar to
that of 2 back-to-back jumbo frames, why would the former be any more
evil than the latter?
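The arithmetic behind that comparison, as a quick sanity check (the only
numbers used are the frame sizes already given above):

```python
# Payload bytes moved through the forwarding path in each case.
jumbo = 2 * 9000        # two back-to-back 9K jumbo frames
regular = 13 * 1500     # thirteen back-to-back standard Ethernet frames

print(jumbo, regular)   # 18000 vs. 19500 bytes: roughly equal memory bandwidth

# Per-packet (not per-byte) costs are where the two cases differ: the
# 13-frame burst incurs 6.5x as many header-processing operations.
print(regular // 1500, "headers vs.", jumbo // 9000)
```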
>
>
>>
>> Over the past decade many components involved in providing high-speed
>> networking have scaled up an order of magnitude. This includes link
>> bandwidth, CPU speed, I/O bus, memory size..., but not the Ethernet
>> MTU and certain TCP parameters (such as the every-other-pkt acking
>> policy). This is really hurting the throughput performance of the
>> hosts. IMHO the amount of burstiness from TCP over the WAN should be
>> allowed to scale up an order of magnitude too. If stretch ACKs are
>> fully adopted into the TCP algorithm (see RFC 2525 for a number of
>> issues with stretch ACKs), one can use LSO on the transmit side, and
>> per-flow pkt coalescing on the receive side, to provide effectively a
>> simple, stateless AAL5 layer for the Ethernet "cells" without
>> requiring jumbo frames or a complex TOE engine.
>>
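The receive-side idea quoted above, per-flow coalescing of back-to-back
"cells" into one large segment before the TCP stack sees them, might look
roughly like this. This is a simplified, hypothetical sketch; the flow
key and segment fields are illustrative and do not correspond to any real
stack's API.

```python
# Hypothetical sketch of per-flow receive coalescing: consecutive,
# in-order TCP segments from the same flow are merged into one large
# segment, so the host pays per-packet costs once per "super-frame"
# instead of once per wire-size frame.

from dataclasses import dataclass

@dataclass
class Segment:
    flow: tuple      # (src_ip, src_port, dst_ip, dst_port) -- illustrative key
    seq: int         # TCP sequence number of the first payload byte
    payload: bytes

def coalesce(segments):
    """Merge runs of contiguous, in-order segments from the same flow."""
    merged = []
    for seg in segments:
        last = merged[-1] if merged else None
        if (last is not None
                and last.flow == seg.flow
                and last.seq + len(last.payload) == seg.seq):
            last.payload += seg.payload      # contiguous: extend the run
        else:
            merged.append(Segment(seg.flow, seg.seq, bytes(seg.payload)))
    return merged

# 13 back-to-back 1500-byte "cells" from one flow collapse into a single
# 19500-byte segment, matching the jumbo-frame comparison made earlier.
flow = ("10.0.0.1", 1234, "10.0.0.2", 80)
burst = [Segment(flow, 1 + i * 1500, b"x" * 1500) for i in range(13)]
out = coalesce(burst)
print(len(out), len(out[0].payload))  # 1 19500
```

Any out-of-order or cross-flow packet simply starts a new run, so the
sketch stays stateless across flows in the spirit of the AAL5 analogy.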
>> BTW, I asked a few transport folks at the Minneapolis IETF about how
>> "evil" traffic bursts are in today's environment, but did not get any
>> concrete answer. Perhaps this topic should be discussed in tsvwg or
>> tcpm.
>
>
>Because queues in today's routers have finite maximum lengths, and this
>model is unlikely to change in the foreseeable future, excessive traffic
>bursts will be more likely subject to drop-tail policing than other,
>more smoothly shaped kinds of traffic. Worse, the bursty
>traffic will not only have less chance of reaching its target with all
>fragments in place, but it will also most probably do much harm to
Note that these are not IP fragments. Dropping one of them from a burst
is no different from dropping one from any other random flow.
Also, don't forget the end-to-end path includes the two host endpoints,
so we must consider the host-side requirements too. This looks like a
tug of war between the host side favoring large bursts and the network
side favoring small bursts. Perhaps TOE can be a good bridge between the two.
Rgds,
Jerry
>other innocent and otherwise well-behaving flows as well.
>
>Marko
>
>
>>
>> Jerry
>>
>> Sr. Staff Engineer
>> Solaris Networking & Security Technology
>> Sun Microsystems, Inc.
>>
>> >I don't have a good answer but going much higher than 16Kbytes MTUs
>> >seems unlikely... and at 10Gig this is still close to 100kpps.
>> >
>> > cheers
>> > luigi
>
More information about the end2end-interest mailing list