[e2e] Clarifying the End-to-End Principle
Arnaud Legout
legout at castify.net
Mon Mar 4 02:58:07 PST 2002
Hi,
To add a comment to your interesting discussion on the end-to-end
argument: the interpretation of this argument in the context of
congestion control is interesting but subjective.
I studied how to improve congestion control in the Internet, in order
to improve, let us say for short, the overall satisfaction of the end
users (for a more formal discussion you can refer to:
A. Legout, and E. W. Biersack. Revisiting the Fair Queueing Paradigm
for End-to-End Congestion Control. To appear in IEEE Network Magazine,
2002
http://www.eurecom.fr/~btroup/FormerPages/legout_htdoc/Research/research.html
Adding Fair Queueing (FQ) in the network significantly improves TCP
performance and largely facilitates the design of new congestion
control protocols. The question is whether adding FQ to improve
congestion control breaks the end-to-end argument. Since FQ can be
considered an element of the congestion control protocols, we can view
it as a scheduling mechanism that is added in the network just to
improve the performance of end-to-end congestion control protocols
(e.g. TCP). But, as FQ is of broad utility, it does not break the
end-to-end argument. Indeed, as the end-to-end argument is a design
principle, we can add (as you said in your email) mechanisms inside
the network without breaking the end-to-end argument. The main point
here is that these mechanisms must be of broad utility, or at least
must not be harmful. A minimal sketch of the FQ idea follows below.
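
To make concrete what "a scheduling mechanism added in the network"
means here, the sketch below (mine, not taken from the paper above)
shows the per-flow round-robin idea behind FQ. The (src, dst) flow key
and packet-level rather than byte-level fairness are simplifying
assumptions; deployed variants such as Deficit Round Robin are more
careful.

from collections import OrderedDict, deque

class FairQueue:
    """Per-flow round-robin scheduler (packet-granularity fairness)."""
    def __init__(self):
        self.queues = OrderedDict()  # flow id -> FIFO of queued packets

    def enqueue(self, packet):
        # Classify on (src, dst); a real router would use a finer flow key.
        flow = (packet["src"], packet["dst"])
        self.queues.setdefault(flow, deque()).append(packet)

    def dequeue(self):
        # Serve the flow at the head of the rotation, then move it to the
        # back, so every backlogged flow gets roughly the same link share.
        if not self.queues:
            return None
        flow, q = next(iter(self.queues.items()))
        pkt = q.popleft()
        if q:
            self.queues.move_to_end(flow)
        else:
            del self.queues[flow]
        return pkt

With such a scheduler, a flow that does not back off mostly penalizes
itself rather than the well-behaved flows, which is what makes the
design of new end-to-end congestion control protocols easier.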
The interpretation of "broad utility" and "not harmful" is largely
subjective. How can we imagine the applications of tomorrow? The
current success of the Internet was enabled by the end-to-end design
principle. In 1974 Leonard Kleinrock [1] wrote: "First let me say
that this is the most exciting time to be conducting research in the
field of Computer Communications. The area has certainly come of age,
the applications have clearly been identified, the technology exists
to satisfy those needs and the public may even be ready for the
revolution."
Of course Kleinrock was right; yet nobody at that time could imagine
how the Internet would actually evolve. How clever they were not to
add specific mechanisms in the network to help the well-identified
applications like email or file transfer. Such mechanisms might well
have precluded the development of the WWW (for instance by introducing
delays).
As a general matter, once there are strong financial interests, the
inertia against innovation is very high. There is room for innovations
that bring immediate benefits, even if the mid-term and long-term
benefits vanish (NAT is an example). But I believe that new
architectural choices, innovations, etc. that bring large benefits
only in the mid or long term will be very hard to deploy.
I agree with you that the end-to-end argument needs to be clarified in
the current context, in order to decide whether new
protocols/mechanisms/architectures follow or break this argument, as
this argument has undoubtedly proven to be a clever architectural
choice that enabled (and hopefully will continue to enable) the
deployment of new services.
My two cents for the morning.
Regards,
Arnaud.
[1] L. Kleinrock, Research Areas in Computer Communication, ACM SIGCOMM
Computer Communication Review, volume 4, July 1974.
--
----------------------------------------------------------------------
Arnaud Legout, Ph.D.
Castify Networks Phone : 00.33.4.92.94.20.91
2229, route des Cretes Fax : 00.33.4.92.94.20.88
06560 Sophia Antipolis E-mail: legout at castify.net
FRANCE Web : http://www.castify.net
----------------------------------------------------------------------