[e2e] Is a non-TCP solution dead?
Cannara
cannara at attglobal.net
Tue Apr 1 10:30:14 PST 2003
Good questions, Rich. These are among the first to cover in any networking
class (along with what oft-misused terms like bandwidth, throughput, etc.,
really mean)! It's intriguing that the title of this group is "end-to-end",
yet we're given a protocol (TCP) at the transport level that's been patched to
avoid embarrassment at the network level -- which is not end-to-end.
A transport is, by definition, reliable. This has been done in all transports
(TCP, SPP, SPX, DDP...) by four mechanisms (sketched in code after the list):
1) Acknowledgement of data received.
2) Retransmission by the sender after a specified time without a receiver's Ack.
3) Sequencing pkts individually for the receiver's benefit.
4) Windowing -- allowing receiver to control sender's rate (pkt, byte...).
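To make the list concrete, here is a toy sliding-window transfer in Python --
a sketch under arbitrary assumptions of mine (loss rate, window size), not any
real stack's code:

import random

def lossy(queue, pkt, loss=0.2):
    # A toy unreliable link: drops pkts at random, as a datagram
    # network is permitted to do.
    if random.random() > loss:
        queue.append(pkt)

def reliable_transfer(data, window=4, max_rounds=10000):
    # Only the two ends appear: (1) Acks, (2) retransmission when no
    # Ack arrives, (3) sequence numbers, (4) a receiver-granted window.
    wire_data, wire_acks = [], []   # the two directions of the path
    snd_base = 0                    # sender: oldest unAcked seq
    rcv_next, rcv_buf = 0, {}       # receiver: next expected seq, buffer
    for _ in range(max_rounds):
        if snd_base == len(data):
            break                   # everything Acked; done
        # Sender: (re)send whatever the granted window permits; resending
        # unAcked pkts each round stands in for a retransmission timer.
        for seq in range(snd_base, min(snd_base + window, len(data))):
            lossy(wire_data, (seq, data[seq]))        # (3) sequencing
        # Receiver: buffer arrivals, then Ack cumulatively.
        for seq, item in wire_data:
            rcv_buf[seq] = item
        wire_data.clear()
        while rcv_next in rcv_buf:
            rcv_next += 1
        lossy(wire_acks, rcv_next)                    # (1) the Ack
        # Sender: a lost Ack just means one more retransmission round.
        for ack in wire_acks:                         # (2) recovery
            snd_base = max(snd_base, ack)
        wire_acks.clear()
    return [rcv_buf[i] for i in range(len(data))]

assert reliable_transfer(list(range(100))) == list(range(100))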
Note that there's no mention of anything but receiver and sender in the
protocol. No mention of a congested network-layer node that must be
compensated for. No mention of anything but the "ends". This is an
"end-to-end" transport. This is what it must do, and all it must do. This is
what TCP originally did, albeit on a rather inefficient byte basis, because of
its founding in a byte-transmission world.
Why flow control end-to-end? Because the receiver may be a TRS-80 with a GbE
interface and the sender an array of whatever's fastest today, running GbE FDX
into the poor Trash 80. The speed/buffer differentials between the ends should
have no effect on the reliability of the transport. This is appropriate
transport behavior.
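A quick calculation, with hypothetical numbers of my own, shows the window
doing that job by itself -- no network-layer help required:

# My hypothetical numbers: the slow receiver grants a 64 kB window on
# a 100 ms RTT path.  However fast the sender's interface, it may not
# keep more than the granted window in flight, so:
rwnd_bytes = 64 * 1024
rtt_s = 0.100
max_bps = rwnd_bytes * 8 / rtt_s
print(f"window-limited ceiling: {max_bps / 1e6:.1f} Mb/s")  # ~5.2, not 1000

The Trash 80 survives because it granted only what it can buffer, and
reliability is untouched.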
What is not appropriate is when algorithms are added to the sender's
'transport' to make up for imagined problems within the network path that the
network layer is doing nothing about because of its design. The telco nets
have always done resource allocation, flow control, path/error recovery, etc.
So IP running over such networks was always ok. IP running over reliable LANs
is ok. IP running over compound, variable-technology, non-managed paths is
not so ok, because IP was designed assuming the mysterious link its packets
were being sent over was the good old reliable telco system.
Some of the 'fixes' to TCP have been especially naive, like the "every other
pkt Ack" trick. Imagine the additive delays in a large file transfer when
blocking (e.g., SMB) results in an odd number of pkts. How someone could
imagine that saving every other Ack was worth, say, a 150 ms penalty every
30 kB is unfathomable, unless they just never really thought out the
implications. No one designing TCP/IP ever thought that individuals would be
going to chain stores and buying PCs to connect to a global network, nor that
these buyers should be taught parameter optimization. Yet the design choices
effectively demand that consumers become so enlightened!
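The arithmetic, with illustrative numbers (the 150 ms and 30 kB figures are
from the text above; the 10 MB file is my own assumption):

# Illustrative only: if every 30 kB block ends on an odd pkt and the
# receiver's delayed-Ack timer holds the lone final Ack ~150 ms, a
# large transfer accumulates the penalty block by block.
file_bytes  = 10 * 1024 * 1024        # a 10 MB file
block_bytes = 30 * 1024               # e.g., an SMB block
stall_s     = 0.150                   # delayed-Ack timer penalty
blocks = file_bytes // block_bytes
print(f"{blocks} blocks -> {blocks * stall_s:.1f} s of added delay")  # ~51 s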
Since IP was not ready for mixed Layer 2s, nor for things like congestion
control in networks built of non-telco components (Bay or Cisco routers...),
its vulnerability led to the near collapses of the Internet in the '80s.
"What to do?" angsted (a verb?) the Internet establishment (at meetings in
sunny locales). Well, lots of traffic goes via TCP, so let's see what we can
do there to control the flows seen by the network-layer components. And we
all know the result -- Slow Start, RTT gymnastics, yadda, yadda.
One can go down the list of what went into the TCP we know today, and of the
additional kludges that have been suggested. Take slow start: a transport
needs none, because it has a window statement from the receiver to work with
immediately (the sketch below shows what ramping up anyway costs). Doing
fancy, publishable things with RTT helps IP too, but a transport only needs
RTT if it wants to dynamically optimize both the receiver's and the sender's
windows.
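A hypothetical comparison, with numbers of my own choosing (64 kB grant,
1460-byte MSS, 150 ms RTT):

import math

# My hypothetical numbers: the receiver grants 64 kB up front; MSS is
# 1460 bytes; RTT is 150 ms.  Classic slow start doubles from one
# segment each RTT, so merely reaching the already-granted window costs:
rwnd, mss, rtt = 64 * 1024, 1460, 0.150
segments = rwnd / mss                    # ~44.9 segments fit in the grant
rtts = math.ceil(math.log2(segments))    # doublings needed: 6
print(f"~{rtts} RTTs ({rtts * rtt:.2f} s) spent ramping toward a "
      f"window the receiver had already granted")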
I've crassly suggested a kludge of my own -- communicate the last IP
sequence # for the packet being Acked, so the sender knows whether a
retransmission was actually needed. There's even room in unused TCP fields to
do this, and it's so clearly a melding of both layers that folks would then
have to admit to the sin of layer violation.
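One way to picture it, as a sketch only -- every name and field here is
invented for illustration, and this is no claim about real TCP/IP header
layouts:

# Each transmission of a segment carries a distinct network-layer
# identifier (a bare counter below stands in for the IP datagram's
# number).  The receiver echoes the identifier of the copy that
# triggered its Ack, so the sender can tell whether the original or
# the retransmission arrived -- i.e., whether retransmitting was
# actually needed.  Purely illustrative; all names are invented.
sent = {}           # seq -> ids used for each transmission of that seq
next_ip_id = 0

def transmit(seq):
    global next_ip_id
    next_ip_id += 1
    sent.setdefault(seq, []).append(next_ip_id)
    return next_ip_id

def on_ack(seq, echoed_id):
    if echoed_id == sent[seq][0]:
        return "original arrived; the retransmission was unnecessary"
    return "a retransmission was genuinely needed"

first = transmit(10)       # original send of segment 10
transmit(10)               # timeout; retransmit the same segment
print(on_ack(10, first))   # the Ack echoes the original copy's id

For what it's worth, TCP's later timestamp option and the Eifel algorithm
attack the same ambiguity.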
However, like all kludges, mine and the 'fixes' to TCP since the '80s are
misplaced. Just as Ethernet is responsible for a 15-retry "best effort" pkt
transmission onto a CSMA/CD LAN segment, just as the telco add/drop
multiplexors, rings and switches are responsible for their own system
management, an IP network is responsible for handling everything needed by
datagrams, no matter how many are fed into a given choke point, etc. The real
problem, which exposes the folly of kludging, is that TCP will not dominate
pkt counts from here on out. So the usefulness of the flow-control kludging
of TCP is diminishing and, in fact, working counter to its original purpose --
TCP use is penalized versus non-TCP traffic. Will Rogers used
to say that "When Congress makes a joke, it's a law." We want to avoid past
Internet jokes becoming laws in the future, especially since we all paid for
them out of our taxes.
I like the statements about work on choosing multiple paths for RF pkts,
based on available capacity. This is responsible network-layer design. It is
part of "best effort", just as all the work on items like optical and
electrical link FEC is. IP, as currently implemented, is hardly best effort.
At some point every system will top out and real limits will hurt the services
above, but that's no reason to kludge the higher service in such a way that a
simple lower-level event, like 0.4% bit loss, causes extreme performance loss.
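To see how extreme, take the well-known simplified throughput model of Mathis
et al., rate ~ MSS / (RTT * sqrt(p)). The numbers below are mine, and the
model wants a packet-loss probability, which a bit-loss figure only
approximates:

import math

# Simplified Mathis et al. model: BW ~ MSS / (RTT * sqrt(p)).
# Illustrative numbers: 1460-byte segments, 100 ms RTT, and 0.4% loss
# treated as packet loss (an approximation of the bit-loss figure).
mss_bytes, rtt_s, p = 1460, 0.100, 0.004
bw_bps = (mss_bytes * 8) / (rtt_s * math.sqrt(p))
print(f"~{bw_bps / 1e6:.1f} Mb/s")   # ~1.8 Mb/s, even on a GbE path

Under two megabits per second across a path whose links may each run at a
gigabit -- exactly the disproportion complained about above.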
Alex
Richard Carlson wrote:
>
> Alex;
>
> One point I don't see in this discussion is that TCP runs over IP and IP is
> an unreliable datagram service. Thus TCP needs control functions to
> guarantee the reliable, in-order, delivery of packets to the receiving
> application. Is your argument that congestion control is not a reliability
> issue (dealt with at the transport level in the TCP/IP stack)? If so, what
> functions do you see required for reliability and what functions are
> required for network health?
>
> Rich
>
> At 07:03 PM 3/31/03 -0800, Cannara wrote:
> >I think these statements illustrate the ingrained nature of the problem:
> >"
> > > Optimizing transport protocols for particular link technologies may
> > > seem a good thing in the short term, but it harms the future. It's
> > > hard enough to get TCP right without link-layer dependencies in there.
> > > And it's harder still if you have to optimize for arbitrary
> > > concatenated sequences of link-layers.
> > >
> > > On the other hand, if we can identify useful commonality between
> > > link-layers, and if we can pass up hints to the transport to make
> > > things work better without sacrificing generality or security, then
> > > this seems a reasonable thing to me. This has however proved rather
> > > difficult every time it's been raised before.
> >"
> >The Transport should not be used to make up for behavioral variations in
> >segments or links in a path. It should be able to depend on the Network to
> >provide datagram service and to handle congestion, errors, etc. on the
> >Network's own. This is why we have great efforts going on in link
> >optimizations, forward error correction, etc. None of this is a Transport
> >responsibility. It was a mistake to attempt such corrections in the TCP
> >transport, even if it was alleged to avoid the embarrassments of Internet
> >collapse in the '80s.
> >
> >Alex
> >
> >Mark Handley wrote:
> > >
> > > >If I had to choose between
> > > >i) optimise a L2 protocol for a particular transport and
> > > >ii) optimise a transport protocol in order to cope with different L2
> > > >protocols (not simultaneously)
> > > >
> > > >I would almost instinctively choose option ii) (probably because I have
> > > >convinced myself that if it is e2e it must be good :-)
> > > >without suggesting that L2 protocols should not be "well-designed"
> > (whatever
> > > >this means)
> > >
> > > It's not that e2e must be good - it's an evolvability issue.
> > >
> > > There are a vast number of end-systems out there. To a first
> > > approximation, they all speak more or less the same end-to-end
> > > transport protocols, and this is necessary for interoperability. For
> > > this reason, plus the more recent proliferation of firewalls, NATs,
> > > etc, it's likely that the number of end-to-end transport protocols
> > > will remain small, with most of the service evolution happening above
> > > the transport layer. While transport protocol evolution does happen,
> > > I'd bet money on the set of widely used transport protocols being
> > > similar in ten years time.
> > >
> > > There are many different link technologies out there. Almost none of
> > > them are the same link technologies that were around ten years ago.
> > >
> > > Optimizing transport protocols for particular link technologies may
> > > seem a good thing in the short term, but it harms the future. It's
> > > hard enough to get TCP right without link-layer dependencies in there.
> > > And it's harder still if you have to optimize for arbitrary
> > > concatenated sequences of link-layers.
> > >
> > > On the other hand, if we can identify useful commonality between
> > > link-layers, and if we can pass up hints to the transport to make
> > > things work better without sacrificing generality or security, then
> > > this seems a reasonable thing to me. This has however proved rather
> > > difficult every time it's been raised before.
> > >
> > > - Mark
>
> ------------------------------------
>
> Richard A. Carlson e-mail: RACarlson at anl.gov
> Network Research Section phone: (630) 252-7289
> Argonne National Laboratory fax: (630) 252-4021
> 9700 Cass Ave. S.
> Argonne, IL 60439