[e2e] admission control vs congestion control
Jon Crowcroft
Jon.Crowcroft at cl.cam.ac.uk
Sun Apr 23 04:56:11 PDT 2006
congestion control and admission control are just two points in a large design space
for deferred gratification. whatever you do to control the operating point of a network
for users, individually and collectively, you will
a) not run a net at 100%, and
b) not let all the users do all they want all the time.
1. call blocking versus rate adaption myth:
what you have to ascertain are
i) overheads of different mechanisms (reservation, AQM, loss-based feedback etc)
ii) loss of customer base and customer satisfaction due to either rate limits or call blocking.
it turns out no-one has _ever_ done this in a single public network, so we actually don't know how to trade off
users' willingness to pay against users' experience/satisfaction with how long we defer gratification
during congested periods - which i define as periods when some users' traffic will displace other users' traffic.
downloads go slower with rate adaption, but
realtime services like voice move into a
regime of being (nearly or completely) unusable - calls are either blocked, or
the users try, and give up.
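to make the trade-off in (ii) concrete, here is a toy python sketch of the two operating points being compared - classic Erlang-B call blocking versus everyone's rate shrinking under rate adaption. the link size and offered load are made-up numbers for illustration, not measurements from any real network:

```python
def erlang_b(servers: int, offered_load: float) -> float:
    """Erlang-B blocking probability: the fraction of arriving calls an
    admission-controlled link with `servers` circuits rejects when offered
    `offered_load` erlangs (iterative recurrence, numerically stable)."""
    b = 1.0
    for n in range(1, servers + 1):
        b = (offered_load * b) / (n + offered_load * b)
    return b

# hypothetical link: 100 circuits' worth of capacity, 110 erlangs offered
blocked = erlang_b(100, 110.0)    # admission control: some users get nothing at all
fair_share = 100 / 110.0          # rate adaption: everyone gets ~91% of their rate
print(f"blocking prob ~ {blocked:.2f}, fair share ~ {fair_share:.2f}")
```

the point of the sketch is the shape of the choice, not the numbers: one mechanism concentrates all the pain on a few blocked users, the other spreads a smaller amount of pain over everyone - and nobody has measured which one customers of a real public network will actually pay for.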
2. elastic versus inelastic myth:
but are these a simple pair of opposites?
no they aren't. you cannot tell how _important_ it is that an activity be completed by a certain point in time,
whether that activity is to hold a voice conversation, watch a movie, or just to download the train timetable,
election result, or some file to execute or print - it isn't _intrinsic_ in the data, it's _extrinsic_ (in the users'
requirements).
3. overprovisioning versus user controls myth
you can currently overprovision the core, so really we are mainly looking at these mechanisms operating at the
edges - so why would you need network admission control at the edge? you don't (except perhaps in overloaded servers,
which _already_ do it).
4. devil in the details non-myth
of course, at the congested knee there _is_ a problem of jitter and loss _today_ in the current
internet: large buffers cause potentially large delay and delay variance, which means that
just before the system starts to displace significant amounts of other users' traffic, it has already moved the operating point
for voice/video to where it's unpleasant - but there's a LOT of useful debate right now on smaller buffers and on
better edge control mechanisms.
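the buffer arithmetic behind that is simple enough to sketch. worst-case queueing delay is just the time to drain a full buffer at the link rate; the link speed and buffer sizes below are illustrative picks, not a claim about any particular access technology:

```python
def worst_case_queue_delay_ms(buffer_bytes: int, link_rate_bps: int) -> float:
    """Worst-case queueing delay when the buffer fills: time to drain
    `buffer_bytes` over a link running at `link_rate_bps`."""
    return buffer_bytes * 8 / link_rate_bps * 1000

# hypothetical 1 Mbit/s edge link with a generous 64 KB buffer
big = worst_case_queue_delay_ms(64 * 1024, 1_000_000)   # ~524 ms - unusable for voice
# the "smaller buffers" argument: size the buffer for ~20 ms of drain time instead
small = worst_case_queue_delay_ms(2500, 1_000_000)      # 20 ms
print(f"{big:.0f} ms vs {small:.0f} ms")
```

half a second of standing queue is well past what interactive voice tolerates, which is exactly the "operating point moved to where it's unpleasant" effect above.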
5. the problem of adapting multimedia and user satisfaction myth
so we spent a long time in 1985-1995 trying to do congestion control for multimedia flows - there's a whole pile of
code (and papers) dedicated to adaptive audio and loss-tolerant audio (and video) - it turns out this is
not a good idea, according to later papers on user tolerance for adaption. BUT it is a good idea to choose the right
codec up front, and _collectively, in aggregate_, this is good enough to survive the knee effect above - but of
course, then you are probably supercritical in terms of failure if any more load is added. of course, most
streaming tools today (and some interactive audio tools) make the initial codec (and parameter) selection pretty well...
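the "pick the codec up front, don't adapt mid-call" idea reduces to a one-shot table lookup. a minimal python sketch - the codec names and bit rates here are made up for illustration and don't come from any real tool:

```python
# hypothetical codec table, best quality first: (name, bitrate in kbit/s)
CODECS = [
    ("wideband", 64.0),
    ("narrowband", 13.0),
    ("low-rate", 5.3),
]

def pick_codec(available_kbps: float) -> str:
    """Pick the highest-quality codec whose rate fits the bandwidth measured
    before the call starts, instead of adapting once the call is under way;
    fall back to the lowest-rate codec if nothing fits."""
    for name, rate in CODECS:
        if rate <= available_kbps:
            return name
    return CODECS[-1][0]

print(pick_codec(20.0))   # narrowband fits, wideband doesn't
```

the design point is that the decision happens once, at call setup, which is what keeps the per-call quality stable - at the cost of the supercriticality noted above if load later grows.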
Putting this all together is quite an interesting challenge, and as time moves on things change - for example, core
nets may not be fast enough soon :) then we can have this debate all over again...
my three cybercents
cheers
j.