[e2e] Why do we need TCP flow control (rwnd)?
Detlef Bosau
detlef.bosau at web.de
Mon Jun 30 07:56:39 PDT 2008
Michael Scharf wrote:
> On Fri, 27 Jun 2008 at 12:01:30, Saverio Mascolo wrote:
>
>> you are missing main points here:
>>
>> 1. Flow control is aimed at avoiding overflow of the receiver buffer. The receiver buffer is assigned on a per-flow basis, i.e. it is not a shared resource. This makes flow control a mechanism that is 100% perfect from the point of view of control; I mean that all the feedback required for perfect control is available;
>>
>> 2. Congestion control does not know the buffer space available at routers, because router buffers are shared; this is the reason you need a probing mechanism to estimate cwnd. You do not need this probing mechanism with the receiver buffer, since the advertised window tells you the exact available buffer.
>>
>> This is why we need flow control. Moreover, capping cwnd at the receiver-buffer size (e.g. 64 KB) prevents any single flow from congesting the network through probing.
>>
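Saverio's two points can be sketched in a few lines (illustrative Python, not any stack's actual code): the sender is limited by min(cwnd, rwnd), and only the cwnd half of that minimum has to be probed for.

```python
def usable_window(cwnd, rwnd, bytes_in_flight):
    """Sender-side window computation (simplified sketch).

    The sender may never have more unacknowledged data in flight than
    min(cwnd, rwnd): cwnd limits it for the network's sake (congestion
    control, estimated by probing), rwnd for the receiver's sake (flow
    control, known exactly from the advertised window).
    """
    send_window = min(cwnd, rwnd)
    return max(0, send_window - bytes_in_flight)

# A 64 KB receiver buffer caps the flow no matter how large cwnd grows:
print(usable_window(cwnd=1_000_000, rwnd=65_535, bytes_in_flight=20_000))  # 45535
```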
>
> Very generally speaking, memory _is_ a shared resource on a host.
>
> On the one hand, you are probably right, since most network stacks
> will have a buffer allocation strategy that somehow ensures that the
> free buffer space, which is signaled in the receiver advertised
> window, is indeed available.
One would hope that every stack does that. I take a very critical view of
overcommitment, particularly when it comes to kernel memory.
> But since memory allocation in an
> operating system is a rather complex issue, I am not sure whether
> there is a 100% guarantee that the receive buffer has really (at
> least) the announced size.
I beg your pardon?
> Note that modern TCP stacks announce
> windows much larger than 64K (e. g., up to 16MB), and this could be an
> incentive to somehow share buffer space if there are many parallel
> connections.
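For reference, the ~16 MB figure follows from the window-scale option of RFC 7323: the 16-bit advertised window is left-shifted by a negotiated count of up to 14. A small sketch (the helper name is mine):

```python
def max_advertisable_window(shift):
    """Largest window, in bytes, that the 16-bit TCP window field can
    announce with a given RFC 7323 window-scale shift (valid: 0..14)."""
    assert 0 <= shift <= 14, "RFC 7323 limits the shift count to 14"
    return 65535 << shift

print(max_advertisable_window(0))   # 65535    -> the classic 64 KB limit
print(max_advertisable_window(8))   # 16776960 -> the ~16 MB mentioned above
```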
>
>
Michael, I have lost count of the times even _this_ year that I had to
rescue my computer here in my room from thrashing and memory
overcommitment - by power cycling it.
And I really hate a computer "going mad" when I want to work.
> On the other hand, the flow control is not 100% perfect, because of
> the inherent delay of feedback signals.
Where is the problem? You always announce the amount of buffer space
that is actually available.
Actually, I think I do understand what you mean. Some years ago I
thought about this problem for several weeks and painted dozens of
sketches and scenarios - until I convinced myself that the "delay" is
in fact not a problem here.
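A sketch of the receiver behaviour I mean (illustrative names, not real stack code): the advertised window is simply the space still free in the per-flow buffer, so a conforming sender can never overrun it.

```python
class ReceiveBuffer:
    """Receiver-side sketch: rwnd is the space still free per flow."""

    def __init__(self, size):
        self.size = size
        self.buffered = 0  # bytes received but not yet read by the app

    def advertised_window(self):
        # Announce exactly what is actually available, nothing more.
        return self.size - self.buffered

    def on_segment(self, nbytes):
        # A well-behaved sender stays within the last advertised window.
        assert nbytes <= self.advertised_window(), "sender overran rwnd"
        self.buffered += nbytes

    def on_app_read(self, nbytes):
        self.buffered -= min(nbytes, self.buffered)

buf = ReceiveBuffer(65535)
buf.on_segment(40000)
print(buf.advertised_window())  # 25535
```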
> For instance, many TCP stacks
> use receive window auto-tuning and dynamically increase their buffer
> size during the lifetime of a connection.
Could you give a concrete example of "many"? And is this behaviour RFC
conformant?
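For the record, receive-window auto-tuning in the spirit Michael describes (e.g. "dynamic right-sizing") works roughly as below; this sketch and its names are mine, and real stacks are considerably more involved: grow the buffer whenever the sender could have filled it within one round-trip time.

```python
def autotune_rcvbuf(cur_bufsize, bytes_per_rtt, max_bufsize=16 * 1024 * 1024):
    """Simplified receive-buffer auto-tuning (my sketch, not any stack's
    actual algorithm).  If the sender delivered close to a full buffer
    of data in the last RTT, grow the buffer toward twice that amount
    so the advertised window does not become the bottleneck."""
    target = 2 * bytes_per_rtt
    if target > cur_bufsize:
        cur_bufsize = min(target, max_bufsize)
    return cur_bufsize

print(autotune_rcvbuf(65535, 100_000))  # 200000: buffer grows
print(autotune_rcvbuf(65535, 20_000))   # 65535: slow flow, no change
```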
In particular, you will remember the "use it or lose it" principle,
which causes a sender to _de_crease its window size when a flow is
inactive for a period of time.
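That principle is specified in RFC 2861 (congestion window validation); the decay loop below is a simplified sketch of it: after an idle period, the sender halves cwnd once per RTO elapsed, down to the restart window.

```python
def cwnd_after_idle(cwnd, idle_time, rto, restart_window):
    """RFC 2861 congestion window validation, simplified sketch:
    halve cwnd for every RTO the connection sat idle, but never
    decay below the restart window."""
    while idle_time >= rto and cwnd > restart_window:
        cwnd = max(cwnd // 2, restart_window)
        idle_time -= rto
    return cwnd

print(cwnd_after_idle(64, 3, 1, 4))    # 8: three RTOs idle, three halvings
print(cwnd_after_idle(64, 100, 1, 4))  # 4: long idle decays to the floor
```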
> This means that, at a given
> point in time, there might be more buffer space allocated in the
> receiver than the sender is aware of.
>
I don't think that this is a problem.
> BTW, if window scaling is negotiated and receiver window auto-tuning
> is enabled, single TCP flows should be able to fill almost any
> pipe. And this is probably just what an app expects from TCP...
>
Definitely not. At least not me.
I don't see a justification for "auto-tuning" (what you wrote sounds
highly questionable to me), and I do not expect TCP to fill pipes; I
do expect TCP to be well behaved and not to cause problems through
weird window experiments.
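For scale, the window a single flow needs to "fill a pipe" is the bandwidth-delay product - a simple sketch of the arithmetic behind Michael's claim:

```python
def window_to_fill_pipe(bandwidth_bps, rtt_ms):
    """Bandwidth-delay product: the window (in bytes) a single flow
    needs in flight to keep a pipe of the given bandwidth and
    round-trip time full."""
    return bandwidth_bps * rtt_ms // (8 * 1000)

# 1 Gbit/s at 100 ms RTT needs a 12.5 MB window -- far beyond the
# classic 64 KB limit, hence window scaling:
print(window_to_fill_pipe(1_000_000_000, 100))  # 12500000
```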
--
Detlef Bosau Mail: detlef.bosau at web.de
Galileistrasse 30 Web: http://www.detlef-bosau.de
70565 Stuttgart Skype: detlef.bosau
Mobile: +49 172 681 9937
More information about the end2end-interest mailing list