[e2e] Resource Fairness or Throughput Fairness, was Re: Opportunistic Scheduling.
Dave Eckhardt
davide+e2e at cs.cmu.edu
Mon Jul 23 13:04:16 PDT 2007
> One difficulty was that there is no serious possibility of making a
> long-term or even medium-term prediction of a wireless channel's
> properties. Over the last few years I have come to see this as perhaps
> the most basic reason why adaptation of multimedia documents in mobile
> networks is doomed to fail before it even starts.
As far as I can tell, this is indeed fiendishly difficult. A couple of
times people asked for my bit-level traces in order to fit some sort of
model to them, but nobody who did so was ever heard from again... this
is one reason why my scheduling approach was essentially reactive rather
than predictive, and works without needing to measure error rates. It
would be easy enough to plug in an oracle if one were available, of course.
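To make "reactive rather than predictive" concrete, here is a rough Python
sketch of the idea (a toy, not the actual ELF scheduler; the oracle hook and
all names are purely hypothetical): a flow earns credit when its transmissions
fail and loses it when something gets through, so nothing ever has to estimate
an error rate.

    # Toy sketch of a reactive link scheduler: it never estimates or predicts
    # channel error rates; it only reacts to per-transmission success/failure.
    # Not the ELF algorithm itself; names are illustrative.
    from collections import deque

    class ReactiveScheduler:
        def __init__(self, flows, oracle=None):
            self.queues = {f: deque() for f in flows}   # per-flow packet queues
            self.credit = {f: 0 for f in flows}         # credit earned on failures
            self.oracle = oracle                        # optional channel oracle (hypothetical)

        def enqueue(self, flow, packet):
            self.queues[flow].append(packet)

        def next_flow(self):
            backlogged = [f for f, q in self.queues.items() if q]
            if not backlogged:
                return None
            if self.oracle is not None:                 # plug in prediction if one exists
                predicted_good = [f for f in backlogged if self.oracle(f)]
                backlogged = predicted_good or backlogged
            # Prefer the backlogged flow that has been failed the most lately.
            return max(backlogged, key=lambda f: self.credit[f])

        def report(self, flow, success):
            # Reaction, not prediction: a failed attempt earns the flow extra
            # credit so it is not starved just because its channel is bad now.
            self.credit[flow] = 0 if success else self.credit[flow] + 1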
> But when I read your paper, I saw two TCP flows, one audio flow, and
> one video flow. And then I saw something about throughput, which in that
> case necessarily compares apples and oranges, because no one is interested
> in TCP throughput. One is typically interested in TCP _goodput_. And that
> has to take TCP retransmissions into account, of course, and can differ
> "slightly" from any kind of L2 throughput in faulty networks.
I have never been a fan of the word "goodput". One layer's "goodput" is
just the "throughput" of the next layer up, after all--if the higher layer
is thrashing, your "goodput" isn't any good, and you have no way of knowing
that. Since there are pre-existing words for "effort" and "outcome", it
makes sense to me to use them.
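To see why I say that, consider a back-of-the-envelope example (Python, every
number invented for the illustration): each layer's "goodput" is exactly what
the layer above it would call "throughput", so the word by itself never tells
you whether the bytes were ultimately any good.

    # Illustrative only: one layer's "goodput" is the next layer's "throughput".
    # All numbers are made up.
    interval_s = 10.0

    link_bytes_sent      = 1_000_000   # everything the radio transmitted ("effort" at L2)
    link_bytes_delivered =   800_000   # frames that survived the channel
    tcp_new_bytes_acked  =   700_000   # excludes retransmissions of the same data
    app_useful_bytes     =   650_000   # what the application actually made use of

    l2_throughput = link_bytes_sent / interval_s
    l2_goodput    = link_bytes_delivered / interval_s   # == TCP-level "throughput"
    tcp_goodput   = tcp_new_bytes_acked / interval_s    # == application-level "throughput"
    app_goodput   = app_useful_bytes / interval_s       # only the application knows this

    print(l2_throughput, l2_goodput, tcp_goodput, app_goodput)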
Anyway, rest assured that the authors of the ELF scheduling paper know
about "goodput" and gave the matter due treatment--but, due to space
constraints, not in that paper.
> As a consequence, I'm not quite sure whether it makes sense to handle TCP
> and media flows with the same kind of scheduler at all. To put it more
> bluntly: I'm strongly convinced that this is simply nonsense.
What we were trying to accomplish was conceptualizing the scheduling of
high-error wireless links in terms of effort-fair vs. outcome-fair,
arguing that a hybrid is frequently desirable, and demonstrating a basic
implementation.
It's fine with me if you wish to argue that for data outcome should be
measured as "100%-correct packet bytes with latency below 250 ms" but
that for voice outcome should be measured in terms of "85%-correct
packet bytes with latency below 50 ms". And I wouldn't object if you
wanted to argue that effort should be measured in watt-Hz-seconds or
some other measure of how much spectrum resource is expended.
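In code, those example definitions might look like this (a sketch only; the
thresholds are just the ones from the previous paragraph, and watt-Hz-seconds
is merely one candidate effort measure):

    # Sketch of per-class "outcome" tests and a spectrum-resource "effort"
    # measure, using the example thresholds quoted above.  Illustrative only.

    def data_outcome_ok(fraction_correct, latency_ms):
        # Data: only 100%-correct packet bytes within 250 ms count as outcome.
        return fraction_correct == 1.0 and latency_ms <= 250.0

    def voice_outcome_ok(fraction_correct, latency_ms):
        # Voice: 85%-correct packet bytes within 50 ms are still useful.
        return fraction_correct >= 0.85 and latency_ms <= 50.0

    def effort_watt_hz_s(tx_power_watts, bandwidth_hz, airtime_s):
        # One way to measure how much spectrum resource was expended.
        return tx_power_watts * bandwidth_hz * airtime_s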
But I believe that in a high-error environment it *does* make sense to
integrate scheduling of disparate flow types according to a tradeoff
between effort and outcome (and we were arguing for a particular model
very different from utility curves).
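Just to illustrate what "tunable hybrid" means, here is a strawman in Python.
To be clear, this weighted blend is not the model from the paper; it is only
the simplest thing I can write down that interpolates between effort-fairness
and outcome-fairness.

    # Strawman only: NOT the ELF paper's model.  alpha = 0.0 gives pure
    # effort-fairness (equalize resource spent per flow); alpha = 1.0 gives
    # pure outcome-fairness (equalize outcome delivered per flow).

    def hybrid_weights(effort_used, outcome_achieved, alpha=0.5):
        """Per-flow scheduling weights from dicts of effort and outcome so far."""
        n = len(effort_used)
        total_effort = sum(effort_used.values()) or 1.0
        total_outcome = sum(outcome_achieved.values()) or 1.0
        weights = {}
        for f in effort_used:
            effort_deficit = 1.0 / n - effort_used[f] / total_effort
            outcome_deficit = 1.0 / n - outcome_achieved[f] / total_outcome
            # A flow short of its fair share (of effort or outcome, depending
            # on alpha) gets a larger weight in the next scheduling round.
            weights[f] = (1 - alpha) * effort_deficit + alpha * outcome_deficit
        return weights

    # Example: flow "b" has used little effort and achieved almost no outcome,
    # so a mostly-outcome-fair setting (alpha = 0.8) favors it heavily.
    print(hybrid_weights({"a": 6.0, "b": 4.0}, {"a": 5.5, "b": 0.5}, alpha=0.8))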
Note that a couple messages back my motivating example for cell phones
was that an operator may be able to very slightly degrade the voice quality
of some customers in order to "unfairly" boost the experience of another
customer in a "dead spot", and that this might keep the customer talking
instead of hanging up. No part of that example depends on TCP, "goodput",
persistent ARQ, etc. The key issue is the notion of fairness.
I don't think we know "the story" on running voice over data-centric networks
versus running data over voice-centric networks or whether there is a neutral
ground. Last time I looked Real Audio was mostly running over TCP, not UDP...
let alone anything involving link-level options to deliver partially-mangled
packets. And initially GSM was kind of dubious for data because of the
voice-centric deep interleaving, right? I think there are plenty of open
questions.
But I haven't yet seen anything to convince me that the concepts of effort-fair
and outcome-fair don't make sense or that either one is better than a tunable
hybrid.
Dave Eckhardt