[e2e] TCP in outer space
J. Noel Chiappa
jnc at ginger.lcs.mit.edu
Thu Apr 12 18:53:41 PDT 2001
> From: Alex Cannara <cannara at attglobal.net>
<Re-organized to put the important points up front...>
>> 2) ignore the evidence ... that the TCP/IP protocol design has
>> successfully coped with new situations that were not imagined in 1979.
>> These include ... a huge range of dynamic performance.
> Using transport-layer software to partially address network-layer
> congestion is not really something to crow about in any case.
Oh, really? OK, I'll bite.
So how do you propose to do congestion control/management below the transport
layer? Some sort of explicit notification (a la ICMP Source Quench)? Been
there, done that. Having the switches get involved in managing individual
traffic streams to provide back-pressure (which presumably means inter-switch
flow control)? Been there too.
ECN is about the only thing left that hasn't been tried extensively, and I'll
wager that the reason it hasn't been done is that the stuff we *do* have
(VJCC, i.e. Van Jacobson's congestion control, etc.) "works well enough" -
i.e. well enough that people are more interested in flaying *other*
alligators which *are* biting their a##.
Deploying ECN is going to be a lot of work, and maybe people just made a
value judgement that the benefit/cost ratio of working on it was lower than
that of the alternatives (e.g. deploying Web caches, or whatever).
You seem to be blithely ignoring the fact that missing ACKs are a *very*
cheap way of finding out that packets have gone missing - and it's a
mechanism that inherently *is not available below the transport level*,
because that's the layer that knows what has and has not gone missing.
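To make that concrete, here's a rough sketch (Python; the 3-duplicate-ACK
threshold is the standard fast-retransmit trigger, but the class and field
names are just made up for illustration) of a sender inferring loss from
nothing but the ACK stream it already has:

    DUP_ACK_THRESHOLD = 3  # standard fast-retransmit trigger

    class AckLossDetector:
        """Infer loss purely from cumulative ACKs - no router help."""
        def __init__(self):
            self.last_ack = None   # highest cumulative ACK seen so far
            self.dup_count = 0     # consecutive duplicates of that ACK

        def on_ack(self, ack_seq):
            """Return a sequence number to retransmit, or None."""
            if ack_seq == self.last_ack:
                self.dup_count += 1
                if self.dup_count == DUP_ACK_THRESHOLD:
                    # Receiver keeps ACKing the same point; the segment
                    # starting there has very likely been lost.
                    return ack_seq
            elif self.last_ack is None or ack_seq > self.last_ack:
                self.last_ack = ack_seq
                self.dup_count = 0
            return None

    d = AckLossDetector()
    for ack in [1000, 2000, 2000, 2000, 2000]:
        lost = d.on_ack(ack)
        if lost is not None:
            print("retransmit from", lost)   # -> retransmit from 2000

All the state involved is state the sender has to keep anyway; no switch or
router has to lift a finger.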
And may I also point out that even ECN is a feedback mechanism, i.e. one
which will work well with elephants (long-lived bulk flows), but not mice
(short flows that are over before any feedback can arrive). If you're
dealing with elephants, even if you're not using TCP, you've generally got
some equivalent to missing ACKs which you can use.
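A back-of-the-envelope sketch (Python; the flow sizes are illustrative, the
doubling-per-RTT is standard slow start) of why feedback misses the mice:
any signal takes at least a full RTT to round-trip back to the sender, and
that's a big chunk of a mouse's whole lifetime:

    def slow_start_rtts(flow_pkts, init_cwnd=1):
        """RTTs to send flow_pkts packets if cwnd doubles each RTT."""
        sent, cwnd, rtts = 0, init_cwnd, 0
        while sent < flow_pkts:
            sent += cwnd
            cwnd *= 2
            rtts += 1
        return rtts

    for pkts in [10, 100, 100_000]:
        kind = "mouse" if pkts <= 100 else "elephant"
        print(f"{kind:8s} ({pkts} pkts): done in ~{slow_start_rtts(pkts)} RTTs")
    # mouse    (10 pkts): done in ~4 RTTs
    # mouse    (100 pkts): done in ~7 RTTs
    # elephant (100000 pkts): done in ~17 RTTs

A flow that lives for many RTTs can be steered by per-RTT feedback; a mouse
that's done in a handful of them largely can't.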
Yes, it would be nice to have a mechanism like ECN available for those cases
where you have elephants but no ACKs; but again, resources are limited, and
other things (e.g. routing meltdown) are probably more critical.
> Or, one can even try altering existing TCP stacks in compatible ways,
> so that a sender knows what packet is being acknowledged, thus reducing
> silly events like unnecessary retransmissions, etc.
Well, some things to help (with unnecessary slowdowns as well as unnecessary
retransmissions) *are* being deployed, like SACK - not to mention a variety
of tweaks to the basic VJCC algorithms.
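For the curious, here's a sketch (Python; the hole-finding logic follows the
semantics of SACK blocks per RFC 2018, but the function and its data are
made up) of what SACK tells a sender that a plain cumulative ACK can't:

    def holes(cum_ack, sack_blocks, highest_sent):
        """Return (start, end) byte ranges still unaccounted for."""
        gaps, edge = [], cum_ack
        for start, end in sorted(sack_blocks):
            if start > edge:
                gaps.append((edge, start))   # a hole before this block
            edge = max(edge, end)
        if edge < highest_sent:
            gaps.append((edge, highest_sent))
        return gaps

    # Receiver got [0,1000) plus islands [2000,3000) and [4000,5000) out
    # of [0,6000) sent; only the actual holes need retransmitting:
    print(holes(1000, [(2000, 3000), (4000, 5000)], 6000))
    # -> [(1000, 2000), (3000, 4000), (5000, 6000)]

With only the cumulative ACK (1000), the sender would have to guess about
everything above it.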
> The goal, of course, is not to make future TCPs do this, but to
> demonstrate effectiveness in real situations
Again, whether a particular mechanism works is *not* the only question.
There's a limited pool of brainpower/time/energy available, and the other
question is "will the improvement from deploying this particular thing be
larger than the improvements we would get from putting that much effort in
somewhere else"?
> where a <1% errored loss causes a >15% slowdown in throughput, simply
> because current TCP knows nothing but to assume all losses are
> congestive (a poor design decision).
It's far from a poor design decision. In *most* environments on the Internet,
the vast majority of losses *are* congestive drops.
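For what it's worth, the size of the effect Alex describes is easy to
ballpark. The well-known Mathis et al. (1997) model bounds steady-state TCP
throughput at roughly (MSS/RTT) * C/sqrt(p), where p is the loss rate TCP
reacts to, congestive or not. A quick sketch (Python; the link parameters
are made-up, vaguely GEO-satellite numbers to fit the thread's subject):

    from math import sqrt

    def mathis_throughput_bps(mss_bytes, rtt_s, loss_rate, c=1.22):
        """Approximate TCP throughput bound in bits/sec (Mathis model)."""
        return (mss_bytes * 8 / rtt_s) * (c / sqrt(loss_rate))

    mss, rtt = 1460, 0.55            # assumed GEO-satellite round trip
    for p in [0.0001, 0.001, 0.01]:
        bps = mathis_throughput_bps(mss, rtt, p)
        print(f"loss {p:.2%}: ~{bps / 1e6:.2f} Mbit/s")
    # loss 0.01%: ~2.59 Mbit/s
    # loss 0.10%: ~0.82 Mbit/s
    # loss 1.00%: ~0.26 Mbit/s

Since throughput falls as 1/sqrt(p), TCP pays that price for *every* loss it
sees, whether the loss was congestive or a bit error.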
Yes, there are some places where the mechanisms based on that assumption
produce poorer performance than a more sophisticated mechanism would have.
But it's far from clear that the overall performance of the whole system,
for the user base as a whole, would have been better if the effort had
instead gone into finding/deploying a mechanism that i) worked as well as
the current one in the vast majority of the network (where congestive loss
is the main reason for loss), and ii) worked better in those regions of the
network where that isn't true - rather than into whatever other things that
effort *was* put into.
> There is also a set of suggestions that have been made regarding
> network-layer admission and flow management.
You're thinking of router-to-router communication to pass back-pressure
along to the entry point? Or something based less on feedback from a
particular traffic stream? In either case, I can't see it being that good.
>> 1) ignore the evidence that the fundamental design of the Internet
>> protocols has been highly successful.
> On 1), "highly successful" means what? ... Providing the
> business-support parameters that make the current economics of the
> Internet so economically important?
Until recently, being business-friendly was not a design goal, so it's hard
to lambaste the Internet architecture for not being ultra-successful along
that axis.
> Or, simply, having permeated the end-system/router space, due to
> applications unforeseen by official Internet folks (e.g., WWW). We all
> know the last of these is rarely a good measure of good design, having
> more to do with accident of market and subsidy.
First, the Internet was well on the way to dominating large-scale networking
even before the WWW appeared (although more because it had the installed
base, I will readily concede); this isn't a counter-argument to the basic
point, just a note that that particular piece of evidence is incorrect.
Second, I think it's a measure of the success of the basic Internet
architecture that something as radically different as the WWW *could* deploy
rapidly without any change to the existing structure.
Noel