[e2e] Some simple, perhaps stupid, idea. was: Re: Agility of RTO Estimates, stability, vulnerabilities
Detlef Bosau
detlef.bosau at web.de
Thu Jul 28 12:14:40 PDT 2005
Detlef Bosau wrote:
>
> Lucky me, I found Raj Jain's paper on the divergence of timeout
> algorithms online. And now I'm trying to obtain the paper by Lixia
> Zhang, "Why TCP Timers Don't Work Well". And above all, of course, I
> want to get and read the paper on Edge's algorithm. I think that's my
> next step: understanding the rationale behind Edge's algorithm, which
> is still in use today, AFAIK.
>
...
>
> The simple question is: When I hide a network's last mile, e.g. a
> mobile wireless network, behind some kind of PEP and thus provide the
> sender with a "changed RTT behaviour", which any spoofing or splitting
> PEP does, what is the ideal RTT behaviour? That is, an RTT behaviour
> which makes Edge's algorithm work perfectly?
Eventually, I got Edge's paper, as a PDF file. Perhaps we should turn
away from computers and go back to good old books. However, the longer
I study the literature on TCP, the more I get the impression: the older
it is, the better it is. This may be a stupid prejudice, but many of
the older papers are really carefully thought through, and perhaps do
not blindly follow the "publish or perish" principle that seems
fashionable nowadays.
In this post, I simply want to share a very spontaneous idea.
Presumably it's stone-aged, but it is quite simple and clear, and it
illustrates my line of thought.
Edge poses rather weak requirements on the RTT process: e.g., the
individual random variables T(n) must share a common expectation and
variance, and there is some requirement concerning the covariance. This
is far from requiring them to be i.i.d., memoryless, Poissonian, or
anything like that.
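Spelled out, these assumptions are (roughly) what one would call weak
stationarity today: with T(n) the n-th round trip sample,

    E[T(n)]   = mu        for all n,
    Var[T(n)] = sigma^2   for all n,

plus some condition on the covariances Cov(T(n), T(m)), which I won't
try to reproduce from memory here.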
In particular, there is no assumption that the T(n) obey any specific
distribution function. The rationale for the RTO itself is based on
Chebyshev's inequality and is therefore completely distribution-free.
However, Edge does want the T(n) to share a common expectation and
variance.
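For readers without the paper at hand, the Chebyshev step is roughly
this (my paraphrase, not Edge's exact notation): for a random variable
T with expectation E[T] and standard deviation s,

    P( T > E[T] + k*s )  <=  P( |T - E[T]| >= k*s )  <=  1 / k^2

so an RTO of the form E[T] + k*s bounds the probability of a premature
timeout by 1/k^2, whatever the distribution of T is.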
Now, when I think of RED strategies, I remember one with two thresholds
a, b, a < b, for the queue length q. If q < a, packets are accepted. If
q > b, packets are rejected. If a <= q <= b, packets are rejected
randomly with a probability p that increases linearly from p=0 at q=a
to p=1 at q=b.
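For concreteness, here is that acceptance rule in a few lines of Python
(the function names are mine; this is just the rule above restated as
code):

    import random

    def drop_probability(q, a, b):
        # p = 0 below the lower threshold, p = 1 above the upper one,
        # linear interpolation in between (assumes a < b).
        if q < a:
            return 0.0
        if q > b:
            return 1.0
        return (q - a) / float(b - a)

    def accept_packet(q, a, b):
        # Reject randomly with probability drop_probability(q, a, b).
        return random.random() >= drop_probability(q, a, b)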
Question: Would it make sense to choose a and b in such a way that
i) q has a constant expectation and
ii) q has a constant variance
for certain periods of time?
Expectation and variance could well be chosen appropriately for the
load situation.
When I consider a network as a sequence of links and queues (I know...
but I will do this for the moment), the varying part of the RTT is the
queueing delay, as long as the path for a connection does not change.
So, if every router on the path tried to maintain a constant
expectation and variance of its queue length, the queueing delays would
have a constant expectation and variance.
Therefore, the observed T(n) would have a constant expectation and
variance, at least over short periods of time.
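The step from the per-queue statistics to the end-to-end RTT is
elementary: with d_i the queueing delay at hop i and c the constant
propagation and transmission part of the path,

    T    = c + d_1 + d_2 + ... + d_m
    E[T] = c + E[d_1] + ... + E[d_m]

and, if the d_i are at least approximately independent, which is
admittedly a strong assumption,

    Var[T] = Var[d_1] + ... + Var[d_m].

So constant per-hop expectations and variances would indeed give a
constant expectation and variance for the T(n).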
Would it be possible to achieve this by managing the thresholds a and
b? If so, each router could do it individually. As a consequence, at
least the requirement of a common expectation and variance of the T(n)
would be met.
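Just to make "management of the thresholds" concrete: a toy control
loop could periodically compare the observed mean and variance of q
against the targets and nudge a and b accordingly. The control law
below is invented for illustration only (gains, clamping and all), not
taken from any paper:

    def adjust_thresholds(a, b, samples, target_mean, target_var, gain=0.1):
        # Invented heuristic: shift the band [a, b] to steer the mean
        # of q, widen or narrow it to steer the variance.
        n = len(samples)
        m = sum(samples) / float(n)
        v = sum((x - m) ** 2 for x in samples) / float(n)
        shift = gain * (target_mean - m)   # mean too low -> raise both thresholds
        spread = gain * (target_var - v)   # variance too low -> widen the band
        a = max(0.0, a + shift - spread / 2.0)
        b = b + shift + spread / 2.0
        if b <= a:                         # keep a < b, as the scheme requires
            b = a + 1.0
        return a, b

Each router would call this every now and then with its recent queue
length samples; whether such a loop actually converges is exactly the
open question here.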
So far, so good. It's spontaneous, and it's perhaps stupid. But it is
meant to illustrate my way of thinking: it may be reasonable to make
the network meet a protocol's requirements instead of always making the
protocol fit the network. However, I expect that someone has discussed
this before; it's just too simple.
Detlef
--
Detlef Bosau
Galileistrasse 30
70565 Stuttgart
Mail: detlef.bosau at web.de
Web: http://www.detlef-bosau.de
Mobile: +49 172 681 9937