[e2e] Agility of RTO Estimates, stability, vulnerabilities
David P. Reed
dpreed at reed.com
Tue Jul 26 11:10:09 PDT 2005
Sireen Habib Malik wrote:
>
> What "network" should one consider when estimating RTO's (as given in
> the topic)?
>
A discursive answer - from the abstract to the concrete - follows:
That depends on why you are doing the measurement - in particular, what
you intend to use the measurement for... a corollary of my rant is
that measurements acquire their meaning partly from their context of use.
So, if you want to measure RTO as part of an adaptive control algorithm,
the relevant evaluation of measurement approach is the one that is most
suited for the purposes of that algorithm. (e.g. tuning a tight control
loop or deciding when to plan for more capacity are quite different
contexts).
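For concreteness, one such measurement-in-an-algorithm pairing is TCP's standard retransmission timer (RFC 6298, the Jacobson/Karels estimator), which turns raw RTT samples into an RTO via a smoothed mean and a smoothed deviation. A minimal sketch in Python, with the clock-granularity term omitted:

```python
class RtoEstimator:
    """RFC 6298 (Jacobson/Karels) retransmission-timeout estimator."""
    ALPHA, BETA = 1 / 8, 1 / 4   # gains from the RFC
    MIN_RTO = 1.0                # RFC 6298 lower bound, in seconds

    def __init__(self):
        self.srtt = None         # smoothed round-trip time
        self.rttvar = None       # smoothed RTT deviation

    def update(self, rtt):
        """Feed one RTT sample (seconds); return the new RTO."""
        if self.srtt is None:
            # First measurement: SRTT = R, RTTVAR = R/2.
            self.srtt, self.rttvar = rtt, rtt / 2
        else:
            # RTTVAR is updated before SRTT, per the RFC.
            self.rttvar = ((1 - self.BETA) * self.rttvar
                           + self.BETA * abs(self.srtt - rtt))
            self.srtt = (1 - self.ALPHA) * self.srtt + self.ALPHA * rtt
        return max(self.MIN_RTO, self.srtt + 4 * self.rttvar)
```

Feeding it a first sample of 0.5 s gives SRTT = 0.5, RTTVAR = 0.25, so RTO = 0.5 + 4 × 0.25 = 1.5 s. Whether that particular smoothing is the "right" measurement is exactly the question at issue here: it is judged by how well the retransmission machinery behaves, not by closeness to some platonic RTO.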
Of course, the algorithm is presumably correct based on an argument
expressed in terms of assumptions about what the measurement measures.
The logic is essentially meta-circular. We won't go into that
recursion, but will instead assume that one or more non-trivial fixed
points exist.
In less abstract terms, there is a large class of candidates that could
be plugged into the slot in your algorithm called "RTO measurement",
and some subset of those measurements give outputs that make your
algorithm work well. One supposes that the candidate measurement
algorithms that do work give answer sequences in the near neighborhood
of an ideal "correct" answer. But that "correct" answer is an
idealization that may not even be well defined: the actual RTO exists
only when a bit experiences that actual RTO value, so the "correct" RTO
is an extension of the actual, realized RTOs over a domain of time
instants where its value is not even of interest (does a network have
an RTO when no packet is actually being sent?).
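A toy illustration of that "near neighborhood": two candidate measurement algorithms (here, hypothetical EWMA smoothers with different gains; the RTT samples are invented) fed the same observations emit distinct but neighboring answer sequences, and nothing intrinsic to either sequence says which one is "correct":

```python
def ewma_series(samples, gain):
    """One candidate 'measurement algorithm': an EWMA over RTT samples."""
    est = samples[0]
    out = []
    for s in samples:
        est += gain * (s - est)   # exponentially weighted moving average
        out.append(est)
    return out

rtt = [100, 120, 95, 400, 110, 105, 98]   # ms; invented samples, one spike
a = ewma_series(rtt, 1 / 8)   # lightly weighted candidate
b = ewma_series(rtt, 1 / 4)   # more heavily weighted candidate
# Both track the same underlying behavior, yet they disagree at every
# step; only the consuming algorithm's behavior can arbitrate.
```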
Consider for example a control problem involving noisy and missing data
(such as using RTO measurements to control congestion). It's been shown
in some cases, for some control algorithms, that the control system can
still work quite well with very "large" errors, whereas trying to
correct the errors by some strategy that smooths or delays the arrival
of measurements actually results in far worse control.
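A toy simulation of that effect (the plant, gains, and noise model are all invented for illustration, not taken from any published result): a simple proportional controller tracking a step change accumulates more total squared error when its noisy measurements are first heavily smoothed, and therefore delayed, than when it consumes the raw noisy samples directly:

```python
import random

def simulate(smooth_gain, steps=200, seed=1):
    """Track a step change through noisy measurements.

    smooth_gain=1.0 means the controller sees raw samples; smaller
    gains smooth (and therefore delay) the measurement stream.
    Returns the total squared tracking error.
    """
    rng = random.Random(seed)
    est = filt = 0.0
    total = 0.0
    for t in range(steps):
        target = 10.0 if t >= 50 else 0.0     # step change at t = 50
        meas = target + rng.gauss(0.0, 1.0)   # noisy observation
        filt += smooth_gain * (meas - filt)   # EWMA pre-smoothing
        est += 0.5 * (filt - est)             # proportional controller
        total += (est - target) ** 2
    return total

raw_error = simulate(smooth_gain=1.0)        # raw, noisy samples
smoothed_error = simulate(smooth_gain=0.05)  # heavy smoothing, long lag
# The heavily smoothed variant lags the step for dozens of iterations,
# and that lag costs far more than the noise it removes.
```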
So "accuracy" need not be something calculated by numerically comparing
measurement results as if there were some well-ordered, invariant frame
of reference. Confidence intervals are usually defined in terms of
numerical quantities, not in terms of their effect on a larger system.
On the other hand if the use of the RTO measurement is getting a paper
published, accuracy is best calculated as whatever it takes to get peer
reviewers to nominate your paper for publication. One hopes that peer
reviewers are quite familiar with the normal needs for which such
measurements are done. But in new fields, and in "mature fields"
where theory has gone its separate way from practice, peers may be just
as limited in perspective as the author. This is the "danger" I refer
to.