[e2e] Open the floodgate
David P. Reed
dpreed at reed.com
Wed Apr 21 11:32:43 PDT 2004
I'd like to offer the following comment, which is intended as constructive
technical criticism.
The Internet's value lies in solving a subtle problem: allowing many
different sets of technical requirements to coexist in a single, common
networking infrastructure. As such, the measure of "optimality" for the
Internet is its ability to adapt to as wide a set of uses as possible.
There is no clear evidence that delivering massive files between two points
is either the best and highest economic use of the Internet or a
representative sample of the Internet's future.
It is well known how to maximize the speed of an unlimited dragster (a
specialized automobile) on the Bonneville Salt Flats. Such techniques
contribute to human knowledge. They do not, however, address many of the
key requirements of engineering a universal transportation system; in
particular, focusing on raw speed does not optimize for maneuverability,
low lifetime maintenance cost, minimal greenhouse-gas emissions, and so on.
Now, getting to TCP flow control and congestion control: I am as concerned
as anyone that the current TCP algorithms are not evolving to meet new
situations. However, the situations I believe we must take into account are
the newly emerging classes of applications, whatever they may be - not just
the applications that benchmark well. Though supercomputer file transfers
are one such case, there are many other, far more diverse operating points
that it is desirable for the network to support concurrently. Such
operating points include very high burst rates, where the flow lifetime is
too short for flow-based congestion control to act, and very high rates of
reconfiguration and mobility during the lifetime of a "connection". But
even those are quite simple.
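As a rough illustration of the short-flow point (a back-of-the-envelope
sketch of my own, assuming a 1460-byte MSS and an initial congestion window
of one segment, not a measurement of anything):

# Estimate how many round trips a transfer spends in TCP slow start.
# If the whole flow fits in those first few round trips, loss-driven
# congestion control never gets a chance to act on it.

MSS = 1460          # assumed maximum segment size, in bytes
INIT_WINDOW = 1     # assumed initial congestion window, in segments

def rtts_in_slow_start(flow_bytes: int) -> int:
    """Round trips needed to send flow_bytes if cwnd doubles each RTT."""
    cwnd = INIT_WINDOW
    sent = 0
    rtts = 0
    while sent < flow_bytes:
        sent += cwnd * MSS   # one window's worth of data per round trip
        cwnd *= 2            # slow start: exponential window growth
        rtts += 1
    return rtts

for size in (10_000, 100_000, 1_000_000):
    print(f"{size:>9} bytes: {rtts_in_slow_start(size)} RTTs")

A 10-kilobyte web object finishes in about three round trips - gone before
any feedback loop keyed to losses observed during the flow's own lifetime
can meaningfully engage.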
At the same time, the potential to congest the network is not going
away. The solution to congestion is coordination algorithms that make
reasonable and fair decisions along at least two independent dimensions:
how to obtain additional capacity where it is needed, and which traffic to
restrain and how to restrain it. (We almost always neglect the former,
since we are all poor engineers who live on limited budgets and are not
used to making investment decisions to deploy new network capacity - except
in our own homes, where it is cheaper to assume that by the time we need a
gigabit LAN, it will have come down to the price of today's 100-megabit
LAN. In the wider arena of bit-transport systems, that turns out to be true
as well: we are nowhere near the physical limits of our ability to get bits
between two points on the earth, and we are deploying capacity at an
exponential rate.) We almost certainly need such coordination algorithms to
be completely decentralized and "future proof", in the sense that they can
be adapted to innovative new uses of the network.
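To ground "decentralized coordination" in something concrete: the one such
rule we have deployed at scale is TCP's AIMD (additive increase,
multiplicative decrease). The toy simulation below - my own sketch, with
made-up constants and idealized, synchronized feedback - shows why even
that simple rule counts as coordination: two flows acting only on local
loss signals converge toward equal shares of a link, the classic Chiu-Jain
result.

CAPACITY = 100.0   # assumed shared link capacity (arbitrary units)
ALPHA = 1.0        # additive increase per round when the link has room
BETA = 0.5         # multiplicative decrease factor on congestion

def step(rates):
    """One synchronized AIMD round for all flows sharing the link."""
    if sum(rates) > CAPACITY:             # congestion signal (e.g. loss)
        return [r * BETA for r in rates]  # everyone backs off
    return [r + ALPHA for r in rates]     # everyone probes for more

rates = [5.0, 80.0]  # deliberately unfair starting point
for _ in range(200):
    rates = step(rates)
print(f"after 200 rounds: {rates[0]:.1f} vs {rates[1]:.1f}")

The two rates end up nearly equal without any central controller: restraint
(the BETA cut) plus probing (the ALPHA increase) is enough. The open
research question is how to generalize that kind of decision rule beyond
long-lived, per-flow state.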
The problem of congestion is not a simple academic-theory problem that can
be solved either by benchmarking Internet2 drag races or by writing papers
about session-level flow control as if that were all that matters - as if
the only connections that matter were FTP or WWW transfers. Treating
today's applications as if they represent the mid- or long-term future is a
major research-strategy error.
The real research problems around congestion and its control are much less
obvious and much more important.
So improving the startup time of an individual TCP connection (or a few) is
nice, useful, and worth doing. But if we let it get in the way of seeing
through to the really hard problems of managing interactions in an ever
more complex Internet, then a whole community of researchers is wasting its
time. That kind of tuning is what advanced development is for; it is not
systems research.