[e2e] Latency Variation and Contention.
Detlef Bosau
detlef.bosau at web.de
Tue Aug 16 01:54:36 PDT 2005
Hi to all.
Recently, I found the following paper by Sherif M. ElRakabawy, Alexander
Klemm and Christoph Lindemann:
http://mobicom.cs.uni-dortmund.de/publications/TCP-AP_MobiHoc05.pdf
The paper proposes a congestion control algorithm for ad hoc networks.
Perhaps this paper is interesting in the context of our latency
discussion.
However, I'm not yet convinced by this work.
If I leave aside a number of pages, some simulations and many words,
the paper basically assumes that in an ad hoc network a TCP sender can
measure the degree of network contention using the variance of recently
seen round trip times:
- If the variance is close to zero, the network is hardly loaded.
- If the variance is "high" (of course, "high" is still to be defined),
there is a high degree of contention in the network.
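For concreteness, here is a minimal sketch (my own code, not the authors')
of such a statistic over a sliding window of RTT samples. I normalize the
standard deviation by the mean (a coefficient of variation) so that "high"
is at least scale free, but the window size and the normalization are my
choices and not necessarily the paper's:

from collections import deque
from statistics import mean, pstdev

class RTTContentionEstimate:
    """Variability of recently seen RTT samples over a sliding window."""

    def __init__(self, window=50):           # window size is an arbitrary choice
        self.samples = deque(maxlen=window)

    def add_rtt(self, rtt_seconds):
        self.samples.append(rtt_seconds)

    def coefficient_of_variation(self):
        """Std. deviation of the recent RTTs, normalized by their mean.
        Close to zero when the RTTs are nearly constant; "high" when they
        fluctuate strongly (whatever "high" is taken to mean)."""
        if len(self.samples) < 2:
            return 0.0
        m = mean(self.samples)
        return pstdev(self.samples) / m if m > 0 else 0.0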
Afterwards, the authors propose a sender pacing scheme in which a TCP
flow's rate is decreased according to this measured "degree of
contention".
What I do not yet understand is the basic assumption: variance close to
zero <=> no load; high variance <=> heavy load.
Perhaps the main difficulty is that I believed this myself for years,
and it was an admittedly difficult task to convince me that I was wrong %-)
However,
@article{martin,
  author  = "Jim Martin and Arne Nilsson and Injong Rhee",
  title   = "Delay-Based Congestion Avoidance for TCP",
  journal = "IEEE/ACM Transactions on Networking",
  volume  = "11",
  number  = "3",
  month   = "June",
  year    = "2003",
}
eventually did the job.
More precisely, I looked at the latencies themselves, not the variances.
Let's consider a simple example:

A ------ network ------ B

where "network" is some shared-media packet-switching network. Let's
place a TCP sender on A and the corresponding sink on B.
The simple question is (and I thought about this years ago without ever
really reaching a conclusion - I'm afraid I didn't want to):
Is a variance close to zero really equivalent to a low-load situation?
And does increasing variance indicate increasing load?
Isn't it possible that a variance close to zero is a consequence of a
fully loaded network, and that _decreasing_ the load in that situation
would cause the latencies to vary?
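To make the question concrete, here is a toy single-bottleneck experiment
(my own, with made-up parameters, and of course nothing like a multi-hop
ad hoc network): a FIFO link with deterministic service time and a finite
buffer, fed by Poisson arrivals. In this toy model the delay variance is
small at light load, grows towards saturation, and shrinks again in
overload, when the queue is almost always full - which is exactly the
ambiguity I am worried about.

import random
from statistics import mean, pvariance

def simulate(load, n_packets=20000, service=1.0, buffer_packets=20, seed=1):
    """One FIFO link, deterministic service, finite buffer, Poisson arrivals."""
    random.seed(seed)
    t = 0.0                # arrival clock
    link_free_at = 0.0     # time at which the link becomes idle again
    delays = []
    for _ in range(n_packets):
        t += random.expovariate(load / service)     # next Poisson arrival
        backlog = max(0.0, link_free_at - t)        # unfinished work in the queue
        if backlog / service >= buffer_packets:
            continue                                # buffer full: drop the packet
        start = max(t, link_free_at)
        link_free_at = start + service
        delays.append(link_free_at - t)             # queueing delay + service time
    return mean(delays), pvariance(delays)

for rho in (0.3, 0.7, 0.95, 1.5):
    m, v = simulate(rho)
    print(f"offered load {rho:4.2f}: mean delay {m:6.2f}, delay variance {v:6.2f}")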
If we could reliably identify a low-load situation from a variance close
to zero, we could use the latencies themselves as a load indicator: we
could reliably determine a "no load" latency and thus detect imminent
congestion by latency observation.
One could even think of a "latency-congestion scale", calibrated first
by variance observation to obtain the "unloaded" mark, and second by
drop observation and some loss differentiation technique to obtain the
"imminent congestion" mark.
To my knowledge, this kind of delay-based congestion avoidance was
extensively discussed in the literature - until Martin, Nilsson and Rhee
arrived at the results mentioned above.
Now, back to my example and the basic question: does the assumption that
latency variations indicate the degree of contention in an ad hoc
network really hold?
I admit, I personally do not yet see any evidence for this.
Detlef
--
Detlef Bosau
Galileistrasse 30
70565 Stuttgart
Mail: detlef.bosau at web.de
Web: http://www.detlef-bosau.de
Mobile: +49 172 681 9937