[e2e] [Fwd: RED-->ECN]
Steven Low
slow at caltech.edu
Thu Feb 1 14:14:36 PST 2001
Julo,
I should indeed qualify my statements, but it's hard to be precise
without getting too involved. Let me nonetheless try to give a
rough sketch.
All statements are made within the context of a simple deterministic
fluid optimization model of flow control first proposed by Frank Kelly,
in which the goal of flow control is to optimally allocate source rates
so as to maximize aggregate utility subject to capacity constraints.
The issue is how to solve this convex program in a distributed manner.
Different approaches to solving it lead to different flow control
schemes; in other words, flow control can be thought of as a distributed
algorithm to solve this convex program.
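In symbols, this is roughly the following (a sketch; the exact notation
varies across the papers): let x_s be the rate of source s, U_s its
concave utility function, c_l the capacity of link l, and S(l) the set
of sources using link l. The convex program is

    \max_{x \ge 0} \sum_s U_s(x_s)
    \text{subject to } \sum_{s \in S(l)} x_s \le c_l \text{ for every link } l.

The Lagrange multiplier p_l attached to the capacity constraint at link l
plays the role of a "price" for that link, and the end-to-end congestion
measure seen by source s is the sum of the p_l along its path.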
It turns out that one can interpret various flow control algorithms,
such as TCP AIMD, Vegas, and RED, in this framework. The "congestion
measure" in my previous emails is the Lagrange multiplier. One view is
to think of source rates as primal variables and congestion measures as
dual variables, and of the various algorithms as different primal-dual
(Lagrange) methods to solve the optimization program. The various
schemes differ in:
a) what is the corresponding dual variable (Lagrange multiplier) in the
model (loss rate for AIMD/DropTail, queue length for RED, queueing delay
for Vegas)?
b) how is the dual variable updated?
c) how is the primal variable updated?
TCP source algorithms (AIMD, Vegas) are particular ways to do c), and
AQM schemes (RED, REM) are particular ways to do a) and b).
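To make b) and c) concrete, below is a minimal sketch (in Python, purely
for illustration; it is not code from any of the papers) of the basic
dual-gradient algorithm in the "Optimization Flow Control" paper, for a
single bottleneck link shared by two sources with log utilities. The
capacity, utility weights, and step size are made-up numbers; the point
is only that the dual variable (price) is adjusted by the excess demand
at the link, while each source picks the rate maximizing U_s(x_s) - q_s x_s.

    # Two sources share one link of capacity c.  The link price p is the
    # dual variable: it rises when aggregate demand exceeds capacity and
    # falls (never below zero) when the link is underutilized.
    c = 10.0               # link capacity (illustrative)
    w = [1.0, 2.0]         # utility weights: U_s(x) = w_s * log(x)
    gamma = 0.05           # price step size (illustrative)
    p = 1.0                # initial price

    for t in range(200):
        # c) primal update: maximize U_s(x) - p*x, which gives x = w_s / p
        x = [w_s / p for w_s in w]
        # b) dual update: move the price in the direction of excess demand
        y = sum(x)                           # aggregate input rate at the link
        p = max(p + gamma * (y - c), 1e-6)   # keep the price positive

    print("price =", round(p, 3))               # converges to (1.0 + 2.0)/10 = 0.3
    print("rates =", [round(r, 2) for r in x])  # about [3.33, 6.67], summing to c

At the equilibrium, the prices are exactly the Lagrange multipliers of the
convex program above, which is why the different schemes can be compared
by asking what quantity plays the role of p and how it is updated.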
Now, given a problem instance, the *equilibrium value* of the Lagrange
multiplier is fixed and is independent of the algorithm used to reach it
(i.e., the flow control process). Therefore, if queue length is used as
the congestion measure, then we have no control over its equilibrium
value; it will be high if, say, the number of sources is high.
If, instead, we compute some number as the congestion measure to which
source rates react, then we can update that congestion measure in a way
that drives the queue to zero and the aggregate input rate to link
capacity.
How can you do this in networks? Examples are Frank Kelly and Srikant's
virtual queue, the "price" in REM, or the probability in Misra's PI
controller (which is equivalent to REM).
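For instance, the REM price update at link l has roughly the form (a
sketch; see the REM paper for the exact version)

    p_l(t+1) = [ p_l(t) + \gamma ( \alpha_l b_l(t) + y_l(t) - c_l ) ]^+

where b_l is the backlog, y_l the aggregate input rate, c_l the capacity,
and \gamma, \alpha_l are small positive constants. The price is
stationary only when \alpha_l b_l + y_l - c_l = 0; a persistent positive
backlog would require y_l < c_l, but then the backlog would drain, so in
equilibrium b_l = 0 and y_l = c_l: high utilization together with a
(nearly) empty queue.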
Is this unconventional? Not at all. In economic systems, we set prices
to control demand (and supply). When the post office is overloaded, we
raise prices to hire more workers; we do not throw away mail to curb
demand.
There are two limitations to such a simple framework. First, these
optimization models conveniently describe only the equilibrium
situation. Yet this already gives great insight into how various schemes
behave: even though the real network may always be in a transient state,
the model tells us the direction in which it is moving. Second,
stochastic effects are not captured (but see Kelly's papers).
I hope I have not made everything much more confusing by being half
precise.
To really clear things up, I'd refer you to the following papers:
A Duality Model of TCP Flow Controls
Optimization Flow Control, I: Basic Algorithm and Convergence
Understanding Vegas: A Duality Model
REM: Active Queue Management
on our website: netlab.caltech.edu
and to various papers by Kelly, Srikant, Misra, Walrand and Anantharam.
Steven
julian_satran at il.ibm.com wrote:
>
> I would be very interested to understand how you see the "virtual-measure",
> you are suggesting exists, relate to accepted theory. It is usually
> thought that high utilization rates and long queues are related (i.e., you
> have long queues when utilization goes up) and this is the reason queue
> length is used as a measure of congestion. And this is true for all
> accepted queueing models. Do you have in mind some "quantum statistics"
> for networks that we are all unaware of? Do those have some tunnel effects
> that allow you to dream about high utilization AND short queues? Is there
> some mathematics that can support your statements?
>
> Julo
>
> Julian Satran
> IBM Research Lab. at Haifa