[e2e] Are we doing sliding window in the Internet?
Lars Eggert
lars.eggert at nokia.com
Mon Jan 7 02:46:47 PST 2008
Hi,
On 2008-1-6, at 21:51, ext Christian Huitema wrote:
>> It's important to remember the two reasons for congestion control
>> from Sally's RFC 2914: preventing congestion collapse and
>> establishing some degree of fairness.
>
> Wait a minute. This is equivalent to saying that the continued
> stability of the Internet depends on the benevolent cooperation of
> all Internet users. The implementation of slow start in TCP did
> indeed prevent the Internet from collapsing at a crucial time in its
> evolution. But that was then. I don't think we can extrapolate the
> 1988 fix into an everlasting principle, not with a billion hosts on
> the Internet.
this is taking us pretty far away from the context in which I made the
statement you quoted above (what we learn from the current CUBIC
deployment), but anyway:
I'm not saying that end-system transport-layer congestion control is
all we'll ever need, especially as end systems become selfish. But I
do think that the stable operation of the Internet has depended, and
probably still depends, on the majority of traffic being sent over
congestion-controlled transport protocols.
If that changes (my nightmare scenario is that the BitTorrent guys
realize that they don't need the in-kernel TCP stack; all they need
is the TCP packet format), then yes, we do need something else, as
you say below.
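For concreteness, here is a toy sketch of the slow-start/AIMD window dynamics that "congestion-controlled transport" refers to. This is my own simplification for illustration (round-trip granularity, segments as units, a fast-recovery-style halving on loss), not any real TCP stack; the nightmare scenario above is a sender that keeps the TCP wire format but simply skips this loop.

```python
# Toy model of TCP-style congestion control (slow start + AIMD).
# Illustrative only: real stacks (Reno, CUBIC, ...) are far more involved.

def aimd_windows(rounds, ssthresh=16, loss_rounds=frozenset()):
    """Return the congestion window (in segments) for each round trip.

    - Below ssthresh the window doubles per RTT (slow start).
    - At or above ssthresh it grows by one segment per RTT
      (congestion avoidance, additive increase).
    - On a round marked as lossy, the window is halved
      (multiplicative decrease) and ssthresh is reset.
    """
    cwnd = 1.0
    history = []
    for r in range(rounds):
        history.append(cwnd)
        if r in loss_rounds:
            ssthresh = max(cwnd / 2, 2.0)
            cwnd = ssthresh          # halve on loss
        elif cwnd < ssthresh:
            cwnd *= 2                # slow start: exponential growth
        else:
            cwnd += 1                # congestion avoidance
    return history

if __name__ == "__main__":
    # Loss in round 6: window climbs 1,2,4,8,16,17,18 then halves to 9.
    print(aimd_windows(10, ssthresh=16, loss_rounds={6}))
```

The point of the sketch: the backoff on loss is purely voluntary. Delete the `loss_rounds` branch and the sender still speaks valid TCP on the wire while taking an unfair share of the bottleneck.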
> In that research on "network based" mechanisms, we should accept
> that end systems will be primarily motivated by their self interest.
> They are certainly not motivated by a desire to be fair with others.
> The desire of fairness is a social contract, and I don't think we
> can assume such a contract when the Internet covers the entire
> world. If we could, that would indeed be a good thing, we would also
> have worldwide peace and all that kind of thing. So, we had better
> assume that end systems will try to maximize their individual
> satisfaction, rather than looking for the common good. If we cannot
> rely on the benevolent sum of individual behaviors, we need to build
> mechanisms in the network that help it guarantee its stability.
>
> In fact, ISPs are already attempting to build these "stabilization
> tools" into their networks. We see various forms of traffic shaping
> implemented at bottleneck points. We see various tools used to
> perform "traffic engineering". ISPs need to do that if they have any
> hope of providing some kind of service guarantee. Many on this
> list will find those tools crude, or possibly harmful. Fine, but the
> reaction cannot be to retreat into the ivory tower and leer at those
> lowly network engineers. Instead of clinging to the illusion that we
> can entirely solve the problem in an end-to-end fashion, that all
> end systems will follow the dictate of the E2E group, maybe we
> should actually address the problem. What are the best mechanisms to
> deploy in the Internet to make it immune to variations in end-to-end
> algorithms?
I agree with you that there will need to be something that protects
the network and other users from selfish end systems, and that it
will need to be a mechanism that doesn't rely solely on the
cooperation of those end systems.
But I'm also not convinced that this functionality should move
entirely into the network, which is what I think ISPs are currently
attempting. An architecture that gives end systems incentives to
behave correctly, rather than controlling everything network-side,
appears more viable to me. (And we've just received EU funding for
the next three years to look at what such a system would look like
in detail.)
Lars