[e2e] "congestion" avoidance...
Jon Crowcroft
J.Crowcroft at cs.ucl.ac.uk
Tue Apr 17 04:38:21 PDT 2001
In message <5.0.2.1.2.20010417065036.03058760 at mail.reed.com>, "David P. Reed" typed:
>>I think I was trying to "napsterize".
yes, i agree with this, but i was going further - that we don't need
to have global per source/dest agreement - it can be on arbitrary
aggregations (a la RSVP wildcard matching type stuff) and then within
those aggregations, the tradeoffs could be agreed differently -
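
(to make "arbitrary aggregations" concrete, here is a toy sketch - the key
styles and names are invented for illustration, not any real RSVP API: flows
get grouped under a wildcard-style key, and each aggregate can then agree its
own tradeoff.)

    # toy grouping of flows under wildcard-style aggregate keys (python),
    # so a whole aggregate, not each src/dest pair, shares one policy
    from collections import defaultdict

    def aggregate_key(src, dst, style="wildcard"):
        # "wildcard" ~ (*, dst); "fixed" ~ (src, dst) - illustrative names only
        return ("*", dst) if style == "wildcard" else (src, dst)

    flows = [("a", "x"), ("b", "x"), ("c", "y")]
    groups = defaultdict(list)
    for src, dst in flows:
        groups[aggregate_key(src, dst)].append((src, dst))

    for key, members in groups.items():
        print(key, "->", members)   # each group could agree its own tradeoffs
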
>>The seed of the idea is that you get all kinds of signals as a result of
>>congestion that can be used in a control loop, so why not add a simple
>>signal that can be modulated on the basis of source/dest intent, and which
>>is shared with other participants in the congestion. (It's like money in
>>the sense that its scarcity means you only want to use it at a rate you can
>>afford.)
>>
>>Besides your packet rate (which directly affects other flows in the
>>"flux"), having a bit per packet input that is combined into the other
>>flows' flags would seem to be a powerful differential signalling channel.
>>Let's say that a normal flow has about 50% of its packets with "urgency"
>>on, 50% off.
>>Say when you turn on ECN in a packet with the "urgency" flag set, you turn
>>off the next "urgency" flag you see on a packet arriving into the
>>queue. Then the source/dest of the other flow sees that it is interfering
>>with an urgent flow because it sees that its own urgency rate has
>>dropped. [This is the sketch of how a cross-flow signalling scheme might
>>provide information from one flow to others - the algorithm is not intended
>>to work, of course.]
>>
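
(purely to make that sketch concrete, here is a toy rendering of the
urgency-echo idea in python - field and class names are invented, and like
the prose above it is not claimed to work, just to show the mechanics:)

    # when the queue ECN-marks a packet that carried urgency=1, it clears the
    # urgency bit on the next arriving urgent packet, so the other flow can
    # notice its own urgency rate dropping and infer it is interfering
    class Packet:
        def __init__(self, flow, urgency):
            self.flow = flow
            self.urgency = urgency   # source sets ~50% of packets normally
            self.ecn = 0

    class ToyQueue:
        def __init__(self):
            self.owed_clears = 0     # urgency bits still to be cleared

        def enqueue(self, pkt, congested):
            if self.owed_clears and pkt.urgency:
                pkt.urgency = 0      # echo the interference onto another flow
                self.owed_clears -= 1
            if congested:
                pkt.ecn = 1
                if pkt.urgency:
                    self.owed_clears += 1
            return pkt
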
>>At 11:20 AM 4/17/01 +0100, Jon Crowcroft wrote:
>>
>>
>>>so the problem i have with the current "ECN pricing" thinking is that
>>>it ignores users' preferences for stability and predictability over
>>>cheapness (and we have a LOT of evidence gathered from mobile phone
>>>contracts, web and traditional telephony as well as airline pricing
>>>and so on i can cite)
>>>
>>>the shadow price for a packet (smart market) is one model, but leads
>>>to potential rapid fluctuation in price around flash crowd periods,
>>>which are all too common in IP networks
>>>
>>>the simple alternative is a shadow price for a virtual circuit - this
>>>gives a stable price for the throughput over the "lifetime" of a
>>>session.....
>>>
>>>
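
(to put rough numbers on the contrast: if the marking/loss probability at a
bottleneck jumps from, say, 1% to 20% during a flash crowd, a per-packet
smart-market price tracks that twenty-fold swing within a few RTTs, while a
per-VC shadow price fixed at session setup only moves when the session is
renegotiated - the figures are illustrative, not from any measurement.)
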
>>>if you mix this, you can get into nasty arbitrage situations....and i
>>>don't buy into the story that you can offer users a choice via risk
>>>brokers - for the very reason that the traffic is highly dynamic...
>>>
>>>also, risk brokers form markets themselves.....
>>>
>>>what i was thinking was to "democratise" (disintermediate) the risk
>>>broker and let users form their +own+ cartels dynamically...
>>>
>>>i.e. we napsterise congestion pricing for packets and flows...
>>>
>>>In message <5.0.2.1.2.20010417060401.03063510 at mail.reed.com>, "David P.
>>>Reed" typed:
>>>
>>> >>Interesting thoughts. However, money or something like it needs to enter
>>> >>into the thinking. I.e. some notion of sharing responsibility for costs
>>> >>imposed on others.
>>> >>
>>> >>IE: At a point of congestion, the "indirect channels" among competing
>>> >>flows provide a way of signalling (at some bitrate) for a bargaining scheme.
>>> >>
>>> >>What range of bargaining schemes can be piggybacked on this signalling
>>> >>channel?
>>> >>
>>> >>For example, what if a single (urgency) bit per packet (like the ECN flag,
>>> >>but provided by the source to the congested queue) could be modulated at
>>> >>the source, tracked in a state variable at a router queue, and coupled into
>>> >>a bit in each outgoing packet that controls rate like ECN.
>>> >>
>>> >>
>>> >>At 10:06 AM 4/17/01 +0100, Jon Crowcroft wrote:
>>> >>
>>> >>
>>> >>
>>> >>>there have been some steps recently to look at a range of rate and
>>> >>>window based mechanisms for sharing the net amongst a set of sources (or
>>> >>>sinks if we include receiver based multicast schemes) - i was looking
>>> >>>at these and wondering if it isn't time to revisit some of the
>>> >>>congestion control and avoidance thinking
>>> >>>
>>> >>>some schemes have been proposed that smooth the adjustment so that
>>> >>>over an RTT we creep up to the operating rate, and creep down, on a
>>> >>>packet by packet (inter-packet delay adjustment) basis
>>> >>>(RAP from Handley et al)
>>> >>>
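
(a minimal sketch of that style of adjustment, assuming a rate-based AIMD
applied as an inter-packet gap - constants and names are illustrative, not
RAP's actual parameters:)

    # rate is adjusted once per RTT, but applied smoothly as a gap between
    # packets rather than as a window jump
    def update_rate(rate_pps, loss_seen, alpha_pps=10.0, beta=0.5):
        if loss_seen:
            return max(1.0, rate_pps * beta)   # multiplicative decrease
        return rate_pps + alpha_pps            # additive increase per RTT

    def inter_packet_gap(rate_pps):
        return 1.0 / rate_pps                  # seconds between packets
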
>>> >>>other schemes have proposed different powers for the increase/decrease
>>> >>>function (and assert that so long as we decrease x^n, and increase
>>> >>>x^(n-1), we ought to be "ok" for some definition of ok)
>>> >>>(binomial adjustment etc from
>>> >>>
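
(the binomial family is usually written roughly as below - a sketch only; the
parameter names and defaults are mine, with k + l = 1 as the usual
TCP-compatibility condition, k=0, l=1 recovering AIMD, and k=1, l=0 giving the
inverse-increase / additive-decrease member:)

    # per-RTT increase ~ alpha / w**k; on congestion, decrease ~ beta * w**l
    def binomial_update(w, loss_seen, k=0.0, l=1.0, alpha=1.0, beta=0.5):
        if loss_seen:
            return max(1.0, w - beta * w**l)
        return w + alpha / w**k
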
>>> >>>the TCP AIMD with fast retransmit scheme has several motivating factors,
>>> >>>some intended, some lucky happenstance (serendipitous)...
>>> >>>
>>> >>>1/ sampling network conditions and eliminating noise:
>>> >>>
>>> >>>currently, this operates over the RTT timescale, but is memoryless
>>> >>>after that....estimates for loss are effectively implicit in the AIMD
>>> >>>operation, but the noise filter (number of dupacks) is somewhat
>>> >>>rigid...
>>> >>>
>>> >>>2/ safe/stable operation:
>>> >>>given a feedback controller, it's reasonable to operate this over
>>> >>>the RTT timescale - packet conservation/self-clocking makes it smoother
>>> >>>
>>> >>>3/ relating end system rate adjustment timescale to buffering provisioning
>>> >>>the AIMD scheme has the bandwidth/delay product worth of network
>>> >>>buffering as a necessary side effect - other adjustment schemes might
>>> >>>need less (some might need more but that almost certainly means they
>>> >>>are trouble:-)
>>> >>>
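
(for scale: a 10 Mbit/s bottleneck with a 100 ms round trip needs roughly
10e6 * 0.1 = 1 Mbit, i.e. about 125 kbytes, of buffering to keep an AIMD
sender busy across a window halving - a back-of-envelope illustration, not a
provisioning rule.)
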
>>> >>>4/ social coupling - we have a target operating point which will be
>>> >>>some fraction of a bottleneck link
>>> >>>if we take a flow f, and a flux (sum of flows into a bottleneck) F,
>>> >>>then the idea is that we get a share proportional to the _resource_ we
>>> >>>use, which (approximately) includes 1/RTT as a factor (kelly et al, le
>>> >>>boudec et al). the idea is that a set of f's in an F are coupled by the
>>> >>>loss or ECN feedback function, and by some reaction period being at
>>> >>>least in the same ballpark....
>>> >>>
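
(the back-of-envelope form of that coupling, e.g. the mathis et al
approximation, is rate ~ (MSS/RTT) * sqrt(3/(2p)) for loss/mark probability p,
which is where the 1/RTT factor in the share shows up explicitly.)
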
>>> >>>in fact, though, we don't have to have smooth functions at all, nor do
>>> >>>we have to sample only the average loss rate, nor choose the sample
>>> >>>rate to be an RTT - the RTT is a way of _loosely_ coupling things,
>>> >>>but is perhaps too strong
>>> >>>
>>> >>>what if someone wanted a _rate_ that persisted for all (or a larger
>>> >>>part) of a connection? how could we work out some sort of congestion
>>> >>>model that accommodated both packet and connection timescales?
>>> >>>
>>> >>>at least one factor seems missing, and that is some estimate of the
>>> >>>number (and rate of change of number) of flows....if we alter the
>>> >>>sample period, and sample both the congestion feedback Mean, AND its
>>> >>>variance, we might be able to (assuming the social coupling function
>>> >>>was still "social") estimate this
>>> >>>
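
(one very rough way to get at such an estimate, assuming every flow through
the bottleneck is roughly TCP-friendly - the names, the capacity/RTT inputs
and the use of variance as a "churn" hint are all assumptions for
illustration, not a worked-out protocol:)

    # long-run loss/mark rate p pins down a per-flow TCP-friendly rate, so
    # capacity divided by that rate crudely estimates the number of flows;
    # variance over a longer window hints that the population is changing
    from math import sqrt
    from statistics import mean, variance

    def estimate_flow_count(loss_samples, capacity_pps, rtt_s):
        p = mean(loss_samples)
        if p <= 0:
            return None, 0.0
        per_flow_pps = (1.0 / rtt_s) * sqrt(3.0 / (2.0 * p))
        churn_hint = variance(loss_samples) if len(loss_samples) > 1 else 0.0
        return capacity_pps / per_flow_pps, churn_hint
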
>>> >>>obviously if people want to they can behave anti-socially (but that is
>>> >>>and will always be true until we do pricing or other forms of
>>> >>>admission control) - let's assume they behave "nicely"....
>>> >>>
>>> >>>could someone choose to operate a "very slow" congestion control
>>> >>>scheme? why not? let's say i run a connection that takes 1/10 of the
>>> >>>capacity, but there are 5 other connections, then why should i react
>>> >>>to loss unless my longer term loss (or ecn) rate tells me that
>>> >>>there's now 9+ other flows? currently, if i run any adjustment
>>> >>>scheme based just on average, i have a chance of adjusting wrongly...
>>> >>>
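
(paired with an estimator like the one sketched earlier, the "very slow"
controller's decision could be as blunt as this - the threshold is
illustrative:)

    # i have agreed to take my_share of the bottleneck, so i only back off
    # when the long-window estimate says more than 1/my_share flows are present
    def should_back_off(estimated_flows, my_share=0.1):
        return estimated_flows is not None and estimated_flows > 1.0 / my_share
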
>>> >>>more importantly, maybe
>>> >>>secondly, how about re-examining the social coupling function? why
>>> >>>shouldn't ten people _agree_ a different congestion partition function
>>> >>>(e.g. they have an application that requires n sources)
>>> >>>
>>> >>>i guess this could be implemented via the Congestion Manager type API,
>>> >>>but i am interested in the general family of functions that fit this
>>> >>>more general model - for example, it seems to me that you can have
>>> >>>radically different increase/decrease if you have
>>> >>>a) a different sample period and a more accurate description of the
>>> >>>evolution of the loss/load process over time (e.g. some sort of fancy
>>> >>>bayesian predictor)
>>> >>>b) a different share/social function - e.g. if we have 10 sources
>>> >>>agreeing on a different load, then how do they distribute this
>>> >>>information and how do we make sure they aren't penalized by any
>>> >>>extra fancy stuff people might later add!
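
(a sketch of the "agree a different partition" idea, congestion-manager-ish
in spirit: n cooperating sources treat themselves as one aggregate and split
an agreed share proportionally - how they coordinate is assumed, not
specified, and the function is illustrative only:)

    # scale everyone down proportionally when the group's demands exceed the
    # aggregate rate the "cartel" has agreed to present to the network
    def split_aggregate(aggregate_rate_pps, demands_pps):
        total = sum(demands_pps)
        if total <= aggregate_rate_pps:
            return list(demands_pps)
        scale = aggregate_rate_pps / total
        return [d * scale for d in demands_pps]
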
>>> >>>
>>> >>>j.
>>> >>
>>>
>>> cheers
>>>
>>> jon
>>
>>- David
>>--------------------------------------------
>>WWW Page: http://www.reed.com/dpr.html
>>
>>
cheers
jon