[e2e] TCP un-friendly congestion control
Jon Crowcroft
Jon.Crowcroft at cl.cam.ac.uk
Sun Jun 8 02:20:54 PDT 2003
surely in the end we are not looking at just a TCP problem, but an IP
architectural one? and currently ECN is our best solution for the
basic part of the problem - it can be incrementally deployed - we
have almost all TCP implementations capable - what prevented
deployment in the early days was ill-educated firewall default
configurations - this is probably already mitigated on the sort of links
that are deploying Gbps TCP, and if we can use "end user pressure" on
ISPs to put some demand here, it could be enabled (or unblocked)
elsewhere...
many quite interesting other experiments (e.g. novel service
structures) are enabled by this simplest of changes, which is well
documented, fairly well understood, fairly harmless at worst, and
potentially incredibly useful at best...
we need a high energy "back the bit ECN" campaign....
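For context, ECN (RFC 3168) signals congestion by marking rather than dropping: a router sets the CE codepoint in the two low-order bits of the IP TOS/Traffic Class byte instead of discarding the packet. A minimal sketch of decoding that field:

```python
# Decode the ECN field (RFC 3168): the two least-significant
# bits of the IP TOS / Traffic Class byte.
ECN_CODEPOINTS = {
    0b00: "Not-ECT",  # transport is not ECN-capable
    0b01: "ECT(1)",   # ECN-capable transport
    0b10: "ECT(0)",   # ECN-capable transport
    0b11: "CE",       # congestion experienced (set by a router)
}

def ecn_codepoint(tos_byte: int) -> str:
    """Return the ECN codepoint name for an IP TOS/Traffic Class byte."""
    return ECN_CODEPOINTS[tos_byte & 0b11]
```

On Linux of that era, endpoint negotiation was toggled with the net.ipv4.tcp_ecn sysctl; firewalls that dropped or cleared packets with these bits set were the deployment blocker mentioned above.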
In missive <LPBBIOAJBODIBMICIEBNEEIODCAA.rhee at eos.ncsu.edu>, Injong Rhee typed:
>>
>>
>>I didn't want to comment on Craig's last point, but it seems it is taking
>>on a life of its own.
>>
>>I know some folks are quick to jump to blame TCP for everything without
>>giving specifics. That is clearly a mistake. But my original post was not
>>unclear; it was simply pointing out a limitation of TCP (regardless of its
>>implementation). Specifically, its increase and decrease policy does not
>>scale to high bandwidth-delay product environments.
>>
>>There are many ways to tweak TCP to make it perform better, and
>>application-level approaches like multiple TCP connections have been tried
>>before (for discussion of this, have a look at the link below). However,
>>there is a limit to such tweaks, which is more fundamental to the window
>>adjustment policy. When the bandwidth-delay product becomes larger than
>>a certain number, TCP can't use the bandwidth. As Guglielmo Morandin
>>[gmorandi at cisco.com] points out, in such environments you need virtually
>>zero loss rates for TCP to achieve the full bandwidth (or even 75% of it).
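To put numbers on this regime, here is a back-of-the-envelope sketch using the standard TCP throughput approximation, rate ≈ (MSS/RTT)·sqrt(1.5/p), with 1500-byte segments assumed:

```python
def tcp_numbers(bandwidth_bps, rtt_s, mss_bytes=1500):
    """Window (in segments) needed to fill the pipe, and the largest loss
    probability the standard TCP throughput approximation tolerates."""
    window = bandwidth_bps * rtt_s / (mss_bytes * 8)  # segments in flight
    max_loss = 1.5 / window ** 2                      # from rate = (MSS/RTT)*sqrt(1.5/p)
    return window, max_loss

# 1 Gbps with a 50 ms RTT, the environment under discussion:
w, p = tcp_numbers(1e9, 0.05)
# roughly 4200 segments must stay in flight, and the loss rate must be
# below about 1e-7, i.e. less than one drop per ten million packets
```

This is the "virtually zero loss rates" point made quantitative: sustaining such a window with standard AIMD leaves essentially no room for loss.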
>>
>>The networking research community at large should take a hard look at this
>>problem and come up with a better congestion control algorithm for this
>>regime. TCP has gone through too much tweaking and we need some
>>alternatives. However, that is not to say we should all abandon TCP. What I
>>am trying to say is that we need to modify TCP in more fundamental ways
>>than simple tweaks.
>>
>>There are several research activities (e.g., FAST, HSTCP, Scalable TCP,
>>etc.) -- for more info, have a look at
>>http://datatag.web.cern.ch/datatag/pfldnet2003/program.html (at least this
>>link is not a self-serving one :-) It gives pointers to various research
>>activities going on to overcome this problem. Hope this helps.
>>
>>Injong
>>
>>
>>
>>-----Original Message-----
>>From: end2end-interest-admin at postel.org
>>[mailto:end2end-interest-admin at postel.org]On Behalf Of Constantine
>>Dovrolis
>>Sent: Saturday, June 07, 2003 3:24 PM
>>To: Craig Partridge
>>Cc: end2end-interest at postel.org; Ravi Shanker Prasad; Manish Jain
>>Subject: Re: [e2e] TCP un-friendly congestion control
>>
>>
>>
>>taking Craig's last point one step further: many people
>>argue today that TCP cannot saturate network paths with
>>a high bandwidth-delay product, and that a new version
>>of TCP (or a new transport protocol) is needed.
>>That may not necessarily be true, however.
>>
>>We recently designed a socket buffer sizing technique
>>that aims to drive a bulk TCP transfer to its maximum
>>feasible throughput. The basic idea is that if the socket
>>buffer size is appropriately limited, the connection
>>can saturate its path but without causing network buffer
>>overflows and subsequent window reductions. An important point
>>about this technique is that it does not require any
>>changes in TCP; all the work is done at the application-layer,
>>through socket buffer sizing, receive-rate measurements,
>>and out-of-band RTT measurements. The technique (called
>>SOBAS) also does not require any prior knowledge
>>of the path's bandwidth/buffering characteristics.
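The core idea can be sketched in a few lines. This is a simplification of that idea, not the actual SOBAS algorithm from the paper; the function and parameter names here are illustrative:

```python
import socket

def limited_send_buffer(measured_recv_rate_bps, rtt_s):
    """Socket buffer just large enough to sustain the measured receive
    rate over one RTT -- capping the window at the application layer
    keeps the connection from overflowing router buffers and suffering
    the subsequent multiplicative window reductions."""
    return int(measured_recv_rate_bps / 8 * rtt_s)

# e.g. a path delivering ~900 Mbps, with a 50 ms out-of-band RTT estimate
buf = limited_send_buffer(900e6, 0.05)

# applying it (sketch): the application would re-apply this as its
# rate and RTT measurements are updated during the transfer
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, buf)
```

The design point is that the cap is computed from measurements, not from prior knowledge of the path, which matches the no-prior-knowledge claim above.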
>>
>>If you're interested in the whole story, the paper is
>>available at:
>>
>>http://www.cc.gatech.edu/~dovrolis/Papers/sobas.pdf
>>
>>Random losses, i.e., losses that can occur independently
>>of our connection's window, are still a problem.
>>One way to deal with them, assuming again that we can't
>>change TCP, is to use a few parallel TCP connections.
>>This may not be, strictly speaking, "TCP friendly",
>>but it is a pragmatic approach to avoid large
>>window reductions upon the occurrence of random losses.
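The arithmetic behind that pragmatism, as a sketch: with N equal connections sharing the path, a single random loss halves only one connection's share of the aggregate window.

```python
def window_retained_after_one_loss(n_connections):
    """Fraction of the aggregate window kept when one of N equal
    parallel TCP connections halves its cwnd after a random loss."""
    return 1 - 1 / (2 * n_connections)

# one connection: a single loss costs half the aggregate window;
# four connections: the aggregate keeps 87.5% of it
for n in (1, 2, 4, 8):
    print(n, window_retained_after_one_loss(n))
```

This is why a few parallel connections blunt random losses, and also why they are not strictly "TCP friendly": they back off less than a single standard connection would.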
>>
>>
>>Constantinos
>>
>>--------------------------------------------------------------
>>Constantinos Dovrolis | 218 GCATT | 404-385-4205
>>Assistant Professor | Networking and Telecommunications Group
>>College of Computing | Georgia Institute of Technology
>>dovrolis at cc.gatech.edu
>>http://www.cc.gatech.edu/fac/Constantinos.Dovrolis/
>>
>>On Fri, 6 Jun 2003, Craig Partridge wrote:
>>
>>> OK, let me get on my high horse here for a moment.
>>>
>>> The original poster asserted that in an environment where the network
>>> went at 1 Gbps and had 50ms of delay, TCP was hopeless.
>>>
>>> The point I was trying to drive home is that it is not hopeless: you
>>> have to define the environment far more carefully before you assert
>>> that TCP can or cannot do the job. One of my frustrations these days is
>>> people who fail to be careful. I was trying to encourage care in the
>>> problem statement.
>>>
>>> Thanks!
>>>
>>> Craig
>>>
>>>
>>
>>
>>
cheers
jon