[e2e] Question the other way round:
Detlef Bosau
detlef.bosau at web.de
Sat Nov 16 09:28:01 PST 2013
Am 16.11.2013 17:43, schrieb Vimal:
> I am not sure I entirely followed your previous email, but you seem to
> point out that buffers are inevitable. Yes, I think buffers are
> necessary for good utilisation as arrivals are hard to predict. In an
> ideal world the buffer would be sized just about right to fully
> utilise link (assuming that is something we care about).
Basically: Yes, buffers are nearly inevitable in an asynchronous system.
However, I think we should not focus too much on link utilization.
Sender ----(1 Gbit/s link)---- Switch with buffer ----(100 Mbit/s long-distance link)---- Receiver
In a scenario like that, if we added 16 MByte of buffer memory to the
switch, a "greedy source" (which is rare, thank God) would blast 16
MByte of data into the net - just to fill the buffer. And if the long
distance link is long enough and we used BIC, we would even blast that
data into the net extremely fast.
And afterwards, we would complain about buffer bloat problems and
unsatisfactory RTT.
Yes, we would utilize the buffer then ;-)
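To put a rough number on the buffer bloat complaint, here is a
back-of-the-envelope sketch (the 16 MByte and 100 Mbit/s figures are the
ones from the scenario above; a full buffer drains at the bottleneck
rate, so every queued byte adds delay):

```python
# Standing queue delay of a full switch buffer draining at the
# bottleneck link rate (numbers taken from the example scenario).
BUFFER_BYTES = 16 * 10**6        # 16 MByte of buffered data
BOTTLENECK_BPS = 100 * 10**6     # 100 Mbit/s long-distance link

queueing_delay_s = (BUFFER_BYTES * 8) / BOTTLENECK_BPS
print(f"A full buffer adds {queueing_delay_s:.2f} s of queueing delay")
# That delay is added on top of the path's propagation RTT.
```

So a persistently full 16 MByte buffer in front of a 100 Mbit/s link
adds well over a second of queueing delay - which is exactly the
"unsatisfactory RTT" complaint.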
>
> As you pointed out rightly -- so far, again as far as I am aware -- we
> have designed congestion control algorithms for a specific objective
> (which seems hard enough). From an optimisation perspective, the
> moment you have two objectives, it is not clear, and not meaningful to
> talk about "optimising" anything. It exposes a tradeoff -- I don't
> think there is hope of finding a universal scheduling algorithm that
> works best for all objectives. What is the 'right' tradeoff? I have
> no idea.
Me neither. So that's another concern (I'm still expecting flames) with
a pure end-to-end way of thinking: that we use THE ONE scheduling
algorithm, which is essentially the self-clocking / self-scheduling
algorithm used by VJCC - although alternatives could make sense in some
cases.
>
> You mentioned buffer sizing for low RTT and high throughput. I think
> achieving a particular objective might also need cooperation from
> end-hosts.
Maybe.
> Also, instead of one size fits all, you can have a hierarchical
> scheduler setup:
!!!!! (looking for the thumbs-up emoticon :-))
>
> - At the top level, divide bandwidth in some fashion between class A
> and B (say equally)
or defined appropriately.
> - Class A has small buffers.
> - Class B has large buffers.
And now for something completely different *eg*: how is that achieved
in the approach by Ford and Iyengar that Manu Lochin pointed to?
(Nasty question, I know.)
> - Flows that need low delay are directed to class A's queues.
> - Flows that need high throughput are directed to class B's queues.
>
Absolutely. The good old DiffServ ID.
Or (some people would claim those gave me my first grey hairs) the
good ol' TOS bits.
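The two-class setup above can be sketched in a few lines. This is only
an illustration, assuming a DiffServ-style classifier; the DSCP value
(46, i.e. EF) and the queue limits are my own example numbers, not from
any standard configuration:

```python
from collections import deque

# Class A: shallow queue for low delay; class B: deep queue for
# throughput. Limits are illustrative, in packets.
QUEUE_LIMIT = {"A": 4, "B": 64}
queues = {"A": deque(), "B": deque()}

def classify(dscp):
    # EF-marked traffic goes to the low-delay class A, the rest to B.
    return "A" if dscp == 46 else "B"

def enqueue(pkt, dscp):
    cls = classify(dscp)
    if len(queues[cls]) >= QUEUE_LIMIT[cls]:
        return False        # tail drop: this class's buffer is full
    queues[cls].append(pkt)
    return True

# A top-level scheduler would then serve queues["A"] and queues["B"]
# with the agreed bandwidth shares (e.g. weighted round-robin).
```

The point is that the buffer-sizing tradeoff is made per class, while
the bandwidth split between the classes is a separate scheduling
decision.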
> This way you can get a "bit" of both objectives while ensuring each
> class gets a certain bandwidth guarantee.
>
I'm not thinking in guarantees here. In my opinion, the success of the
Internet is mainly due to the best effort concept.
However, what is "best effort" all about?
--
------------------------------------------------------------------
Detlef Bosau
Galileistraße 30
70565 Stuttgart
Tel.: +49 711 5208031
mobile: +49 172 6819937
skype: detlef.bosau
ICQ: 566129673
detlef.bosau at web.de http://www.detlef-bosau.de