[e2e] end2end-interest Digest, Vol 18, Issue 9
Detlef Bosau
detlef.bosau at web.de
Fri Aug 19 14:05:45 PDT 2005
Alok wrote:
>
>
> A flow must not have more packets in transit than the congestion window
> allows (the "equilibrium window"), and a packet must not be sent into the
> network until some other packet has been taken away.
>
> Alok=> ahh!! and how do we "know that"??
A sender knows this from the acknowledgements.
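This packet-conservation rule ("ACK clocking") can be sketched as a toy
simulation. This is purely my illustration - the tick granularity, names
and parameters are invented for the sketch, not taken from any paper:

```python
from collections import deque

def self_clocked_transfer(total_packets, cwnd, link_delay=3):
    """Toy model of a self-clocked sender: at most `cwnd` packets are
    ever in flight, and a new packet enters the network only after an
    acknowledgement says another one has left it."""
    in_flight = deque()          # entries: (packet_id, tick when ACK returns)
    sent = acked = tick = 0
    while acked < total_packets:
        # ACKs arriving this tick free slots in the window
        while in_flight and in_flight[0][1] <= tick:
            in_flight.popleft()
            acked += 1
        # send only while the window permits - never more than cwnd in flight
        while sent < total_packets and len(in_flight) < cwnd:
            in_flight.append((sent, tick + link_delay))
            sent += 1
        tick += 1
    return tick
```

With a window of 4 and an RTT of 3 ticks, 12 packets drain in three
window-sized rounds; the sender never outruns the acknowledgements.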
>
> _This_ and nothing else limits the "energy" put into the network (the
> analogy to physics is obvious: we talk about energy conservation and
> momentum conservation; sometimes I think Van Jacobson and Sir Isaac are
> best friends :-)) and hence bursts, oscillation etc. are limited.
>
> Alok=> ? so?
>
>
> Recall the Tacoma bridge disaster. Make the wind stop blowing - the
> Tacoma bridge may oscillate to eternity, but at least it was still there.
>
> Alok=> :-) if you can find the freq, it will still beat!
>
>
So what? As long as it does not _break_, it may beat!
As long as we have no congestion collapse, there is no problem with
queue oscillation.
Of course, there may be a problem with RTT estimation, which was the
original topic of this thread. However, if we have small queues and
queueing delays turn out to be negligible compared to propagation
delays, RTT estimation becomes easier than it is today.
>
> The more complex answer can be found e.g. in Jain's "Delay" paper:
> limited queues with a carefully chosen length can improve
> network performance.
>
>
> Alok=> My ability to read is limited.
I apologize.
Perhaps we should send you posts in mp3 format? =8-)
I admit, I often write overly long posts. However, the issue is extremely
difficult, so I can't put it too briefly. (Recall Sireen's signature and the
Einstein quote.)
>
>
> Not quite. Think of RED.
>
>
> Alok==> how so?
Some RED disciplines randomly discard packets even when there is no
actual queue overrun, in order to limit oscillation and increase stability.
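The classic RED scheme drops with a probability that rises linearly between
two average-queue-length thresholds. A minimal sketch - the threshold and
max_p values here are illustrative defaults of mine, not recommended
settings:

```python
import random

def red_drop(avg_queue, min_th=5, max_th=15, max_p=0.1):
    """Decide whether to discard an arriving packet, RED-style:
    below min_th never drop; at or above max_th always drop;
    in between, drop with probability rising linearly toward max_p."""
    if avg_queue < min_th:
        return False
    if avg_queue >= max_th:
        return True
    p = max_p * (avg_queue - min_th) / (max_th - min_th)
    return random.random() < p
```

The point is exactly the one above: packets can be discarded *before* the
queue actually overflows, which damps oscillation of the queue length.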
>
> Even that would not _require_ a queue. Think of Ethernet. What else is
> "congestion" but a "collision" when there is no queueing at the router?
>
> Alok=> depends. A collision is the inability to send something due to a media
> limitation, and *note*, the end host *originating* the packet experiences it
> in the case of a collision
Not quite. Recall David's recent post. In 802.11 ad hoc nets a collision
results in a silent "discard", exactly as a congestion drop does.
This makes perfect sense: both a media limitation and a queue
limitation are limitations. Some part of the network cannot convey the
incoming packet.
>
>
> So, if we had no queues, the Internet would still run. Perhaps the throughput
> could be somewhat higher, perhaps the way the Internet runs would be
> more similar to a turtle than to Achilles - but who cares? Isn't there
> still snail mail delivered by soldiers who served with General Custer?
>
> However, too large a queue can have the same effect.
>
> Alok=> define "too large"
>
That's the million dollar question, especially as a TCP window is
limited to 64 kBytes by default. However, if one followed the
"advice" of some "bright" network consultant I read recently, we should
play around with window scaling in LANs to improve performance (God in
Heaven!). Imagine a TCP sender scaled to AWND units of 1 Megabyte (we
will _really_ improve performance). So imagine a TCP sender with an
actual window of 2 Megabytes and a router that would support this.
We would introduce a single-trip e2e latency of nearly one second here -
from one floor in a building to the other.
This is not really what we want to do.
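The arithmetic behind that latency figure is easy to check: the window's
worth of data parks in the bottleneck queue and must drain over the link.
A small helper - the 16 Mbit/s bottleneck rate is my assumption, chosen
only to make the numbers concrete:

```python
def queueing_delay_s(window_bytes, link_bps):
    """Time for `window_bytes` of queued data to drain over a link
    of rate `link_bps` (bits per second)."""
    return window_bytes * 8 / link_bps

# A 2 MB window sitting in a router queue in front of a 16 Mbit/s
# bottleneck adds about a second of one-way delay - between two
# floors of the same building.
delay = queueing_delay_s(2 * 2**20, 16_000_000)
```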
In addition, in practical networks the vast majority of flows are
short-lived, so a router's memory is not filled up, because there is not
enough data in the flow.
However, theoretically (refer e.g. to Jain's paper), too large a buffer can
simply bring down a flow's throughput to _zero_. This is extremely hard
to imagine: a sender's window may increase beyond all limits, so does a
bottleneck queue, and so the time a packet stays in the queue may
increase beyond all limits as well.
I must correct the above: it's not the infinite queueing space which
brings the flow to the ground but the _window_ size.
But this exactly results from unlimited queueing space if you don't put
an upper limit on a TCP sender's window.
To put Jain and Nagle briefly: they investigated the behaviour of packet
switching networks with unlimited queues - and came to the advice: make
the queues short.
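The divergence argument can be caricatured in a toy model - a sketch of
mine, with all parameters illustrative, not Jain's or Nagle's actual
analysis: with infinite buffers and an uncapped window, the standing queue,
and hence the RTT, grows every round instead of settling:

```python
def rtt_growth(rounds, base_rtt=0.1, capacity_pps=100, window=10):
    """Toy model: each round the uncapped window grows, the excess
    beyond the pipe's capacity parks in the infinite queue, and the
    RTT seen in the next round is larger. Returns the per-round RTTs."""
    queue = 0.0                      # packets standing in the queue
    rtts = []
    for _ in range(rounds):
        rtt = base_rtt + queue / capacity_pps
        rtts.append(rtt)
        window *= 2                                     # no upper limit
        queue += max(0.0, window - rtt * capacity_pps)  # excess is queued
    return rtts
```

Each round's RTT exceeds the last; the delay diverges, and with it any
retransmission timer, which is exactly why the advice is: make the
queues short.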
>
> I got a paper submission rejected this year with the enlightening comment
> "overqueueing is bad, refer to Reiner Ludwig's PhD dissertation".
> I know Reiner Ludwig's PhD dissertation.
> When he claims overqueueing is bad, he is perfectly right, as were all the
> researchers before him. It's really an old story.
>
>
> Alok=> yep................ but wrap around the window..right?
I lost you.
BTW: I am not talking about "Nagle's algorithm" here, but primarily about
papers like "On Packet Switches With Infinite Storage" from 1987.
So basically, we are not even talking about TCP here.
>
> However, when service times oscillate from milliseconds to _minutes_(!)
> at the last mile (refer to the relevant ETSI/ITU standards for GPRS
> before calling me nuts), traffic might happen to be a little bursty if
> not equalized by queues and appropriate techniques.
>
>
> Alok=> My inability to read does wonders... ;-)
I see. But my posts are good practice. ITU standards are _much_ longer :-)
Detlef
--
Detlef Bosau
Galileistrasse 30
70565 Stuttgart
Mail: detlef.bosau at web.de
Web: http://www.detlef-bosau.de
Mobile: +49 172 681 9937