[e2e] Resource Fairness or Throughput Fairness, was Re: Opportunistic Scheduling.
Detlef Bosau
detlef.bosau at web.de
Thu Jul 12 07:12:37 PDT 2007
There is one important point which is perhaps missing from our whole
discussion. (One of the "advantages" of being unemployed for quite a
long time is that you often cannot sleep at night because the whole
situation simply does not leave you alone. And during the last night I
thought about our discussion. It's better than thinking about how to
find employment when you cannot sleep. Sorry for being somewhat off
topic, but I cannot always hide my situation to the degree I would like to.)
O.k. The result of last night is perhaps trivial, and some months ago,
when I played around with very rough simulations of opportunistic
scheduling, they pointed exactly in that direction. It also matches the
mixup of concepts around the term "rate" observed here:
Do we talk about resource fairness?
Or do we talk about throughput fairness?
To my understanding of, e.g., TCP, we shall talk about resource fairness.
And to my understanding of, e.g., the congavoid paper and all the work
based upon it, we share network _resources_ among network participants.
Resource fairness is typically an end-to-end issue: e.g. the major
burden of end-to-end congestion control lies on the terminals. They may
be assisted by nodes in between, e.g. by RED or ECN; however, it is up to
the terminals to identify congestion, to relieve the network from too
much load and - as a side effect - to assign a fair share of
_resources_ to each flow.
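As a minimal sketch of what I mean by the terminals carrying the burden
(a caricature of congavoid-style AIMD; the function name, the constants
and the congestion signal are only illustrative, not anybody's actual
implementation):

    # Minimal AIMD sketch (illustrative only): the sender, not the network,
    # adapts its load from an end-to-end congestion signal (loss or ECN mark).
    def on_ack(cwnd, congestion_signal, mss=1.0):
        if congestion_signal:            # loss detected or ECN mark seen
            return max(cwnd / 2.0, mss)  # multiplicative decrease: back off
        return cwnd + mss * mss / cwnd   # additive increase: probe for spare capacity

The point is simply that the fair sharing of _resources_ emerges from the
terminals' reactions, not from any per-link throughput target.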
When I look at opportunistic scheduling (OS), there is some local
decision how the optimal rates for a link are to be set, and then the
flows' average rates are made to achieve / follow these rates. It is
obvious that this approach will cause the competing flows to achieve
equal throughputs or a predefined throughput vector.
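For concreteness, this is roughly what a proportional fair (PF) scheduler
does per slot (a sketch only; the variable names and the smoothing
constant are mine, not taken from any particular standard):

    # Per-slot PF sketch: pick the user with the best ratio of instantaneous
    # achievable rate to smoothed average throughput, then update the
    # averages with an EWMA of time constant t_c.
    def pf_schedule_slot(inst_rate, avg_thru, t_c=1000.0):
        chosen = max(inst_rate, key=lambda u: inst_rate[u] / max(avg_thru[u], 1e-9))
        for u in avg_thru:
            served = inst_rate[u] if u == chosen else 0.0
            avg_thru[u] = (1.0 - 1.0 / t_c) * avg_thru[u] + served / t_c
        return chosen

Note that a flow which joins late starts with a near-zero average and
therefore wins almost every slot until its average catches up - which is
the scheduling jitter I describe next.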
And of course, this may cause unwanted consequences, e.g. scheduling
jitter: when PF scheduling aims at equal average throughputs for a
number of flows, a flow which starts with some delay will cause all
other flows to stall, because its average throughput is far less than
that of its competitors.
And of course, the whole approach requires infinitely backlogged queues.
And of course, this assumes greedy sources. And of course, there are
numerous approaches to alleviate jitter etc. when, e.g., sources are not
greedy.
However, I come to the conclusion that it is exactly this mixup of "rate"
and "resources" in HSDPA and the like, and the goal of achieving
throughput fairness instead of resource fairness, that is the very reason
for potential problems.
A note on throughput fairness: What is throughput fairness? What is
throughput? The rates we're talking about in this context are code rates
or service rates resulting from a certain MCS/PS. We don't talk about
block error rates. We don't talk about necessary retransmissions, be it
on layer 2 or end to end. We don't take into account whether the
application is error tolerant or not. So we don't have an idea what
"throughput" means for the user and in terms of the application. So, any
fair distribution of throughput on layer 2 is necessarily somewhat
arbitrary. Some "self-made fairness goal", which hopefully matches the
end-to-end goals.
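A hypothetical example with made-up numbers: suppose the scheduler grants
a user a nominal 1 Mbit/s from the chosen MCS, but the block error rate
is 30 % and every corrupted block is retransmitted on layer 2. Each block
then needs on average 1/(1 - 0.3), roughly 1.43, transmissions, so the
goodput seen above layer 2 is only about 0.7 Mbit/s - and whether even
that figure is the relevant one depends on what the application can
tolerate.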
I well remember that we discussed some economic aspects here on the
list some time ago. And it is exactly the economic view that leads to
the basic criterion: when two users pay the same price, they shall get
the same service. And for a base station in a cellular network that
means: they shall get the same amount of sending time.
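A sketch of what I mean by equal sending time (a plain round robin over
backlogged users; the names and the per-slot rate model are only
illustrative):

    # Resource-fair (equal air time) sketch: every backlogged user gets the
    # same number of slots; the bits actually delivered in a slot depend
    # only on that user's own channel quality.
    def airtime_fair(backlogged_users, inst_rate, n_slots, slot_time):
        delivered = {u: 0.0 for u in backlogged_users}
        for slot in range(n_slots):
            u = backlogged_users[slot % len(backlogged_users)]  # equal time share
            delivered[u] += inst_rate[u] * slot_time            # better channel -> more bits
        return delivered

Both users receive the same share of slots; only the bits carried per
slot differ.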
When one user places his mobile directly next to the base station
antenna and the other hides behind a wall of reinforced concrete, then
the user with the mobile placed at the antenna will of course receive a
better TCP _good_put than our nearly hidden terminal, which perhaps will
not achieve any goodput at all.
But is this the network's responsibility? Definitely not! When both users
pay the same price, the network will spend the same effort for both
users to deliver any pending packets.
When I buy a new watch and I pay for a watch that is not water resistant,
I will _get_ a watch that is not water resistant. And when I afterwards
go diving to a depth of 50 meters with this wonderful new watch on, I can
hardly hold the watchmaker liable when the watch breaks.
When I now take into account the work by Frank Kelly, and if I understand
it correctly, this work gives users the opportunity to get better
service than others - when they are willing to pay more than others. Of
course, Kelly discusses proportional fairness as an example, but this is
not the crux of the paper. To my understanding, users may define their
own utility functions as long as these are strictly concave, and then we
learn from Frank Kelly how to find an optimal schedule even for those
utility functions _and_ we find a way to charge users appropriately. So,
when a user definitely wants only service with high rates, he is free
to define his utility function accordingly. Consequently he is served
accordingly - and charged accordingly.
In some sense, the utility function of elastic traffic defines the
traffic's QoS requirements. And the optimization problem discussed by
Kelly is to negotiate a trade-off between the users' requirements and
the network's actual capacity.
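For reference, and as far as I understand it, the underlying optimization
problem in Kelly's work is roughly:

    maximize    sum over all users r of  U_r(x_r)
    subject to  sum of x_r over the users r sharing link j  <=  C_j   for every link j,
                x_r >= 0,

with every U_r strictly concave. Proportional fairness is then the
special case U_r(x_r) = w_r * log(x_r), where w_r is what user r is
willing to pay per unit time.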
So we have _no_ best-effort network here, but a network with QoS
requirements instead.
And of course, Kelly may talk about (service) rates, because in Kelly's
model rate and service time are reciprocally proportional.
When we talk about code rates, i.e. coding schemes or puncturing
schemes, where the service time for a block remains the same and only
the payload / redundancy ratio varies from slot to slot, Kelly's model
will definitely fail.
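A purely illustrative calculation (the symbol count is invented, the
principle is not): suppose a slot carries 960 channel symbols. With QPSK
(2 bits per symbol) and code rate 1/3 the slot carries 1920 * 1/3 = 640
payload bits; with 16-QAM (4 bits per symbol) and code rate 3/4 it
carries 3840 * 3/4 = 2880 payload bits. The "rate" differs by a factor
of 4.5, yet in both cases the flow consumes exactly one slot of the
shared channel - the resource spent is identical, which is precisely
what a reciprocal rate/service-time relationship cannot capture.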
So, to make a long story short, I see three concerns here:
1.: In the literature dealing with HSDPA and the like we have a mixup
between the terms service rate and code rate.
2.: PF scheduling pursues throughput fairness, whereas in best-effort
networks we want to pursue resource fairness instead.
3.: Kelly's "recipe" for elastic traffic simply does not apply here.
Elastic traffic is not best-effort traffic. Elastic traffic comes with a
utility function while best-effort traffic does not. And because
best-effort traffic is equally charged on a per-resource basis, I don't
see Kelly's work as applicable here.
O.k., and now I take criticism :-)
(And admittedly, I like the idea of exploiting periods of good channel
conditions, and perhaps I have some vague idea in mind how this can be
done in a reasonable way which is 1. simple and which 2. pursues
resource fairness instead of throughput fairness. And perhaps I will
eventually write it down.)
Detlef
--
Detlef Bosau Mail: detlef.bosau at web.de
Galileistrasse 30 Web: http://www.detlef-bosau.de
70565 Stuttgart Skype: detlef.bosau
Mobile: +49 172 681 9937