[e2e] Hiccups and scheduling in mobile networks
Detlef Bosau
detlef.bosau at web.de
Sun Jan 7 16:05:44 PST 2007
Joe Touch wrote:
>> Variations in delivery times can be handled via PEPs that don't spoof
>> ACKs, e.g., ones that pace the data and/or ACK paths, but don't actively
>> participate in the communication.
And my humble comment was:
> Really? I agree with you for the Remote Socket Architecture
> (Schlager/Wolisz) because that architecture actually does not split
> the connection but places the PEP mechanism at the application/socket
> interface.
>
> Otherwise the problem is: When the bandwidth on the sender-splitter path
> equals, e.g., the average rate of the splitter-receiver link but is far
> less than that link's maximum rate, then a simple router would hardly
> store any data and thus would hardly equalize the rate / delivery times.
> Thierry describes delay spikes of several seconds. If we think about
> UMTS, we can imagine a wireless link where nothing happens for up to
> several seconds - thus no data at all is clocked out from the sender -
> and then we have about 2 Mbps throughput for a short time - which is
> perhaps much more than the actual Internet path can carry. In such a
> scenario we want the router / splitter / PEP / whateverbox to
> buffer the data and equalize the rate variations. Can this be achieved
> by pure pacing in one direction or the other?
>
> Detlef
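To put rough numbers on the scenario quoted above, here is a small
back-of-the-envelope sketch (plain Python, nothing NS2-specific; all
rates and durations are assumed example values, not measurements):

# How much data piles up at the splitter/BS during a wireless stall,
# and how long the 2 Mbps burst needs to drain it again.
R_IN    = 500e3   # assumed Internet-path rate towards the splitter: 500 kbps
R_BURST = 2e6     # UMTS burst rate after the stall: 2 Mbps
T_STALL = 2.0     # assumed stall duration: 2 seconds

backlog = R_IN * T_STALL              # bits buffered during the stall
drain   = backlog / (R_BURST - R_IN)  # drain time while data keeps arriving

print("backlog: %.0f kbit (%.1f KB)" % (backlog / 1e3, backlog / 8e3))
print("drain time at 2 Mbps: %.2f s" % drain)
# -> backlog: 1000 kbit (125.0 KB), drained in ~0.67 s. That is more
#    than a simple router with small buffers would be willing to hold.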
O.k. So, I see: Splitting is unsellable :-) So the question is whether
we really need it.
So, this weekend I spent some time adding hiccups to a quite complex
network scenario:
Sender-----(internet)--------BS----(mobile net)-------Receiver
And the mobile net suffers from hiccups :-)
What I would like to know (and AFAIK Andreas dealt with questions like
these, therefore I put him on the cc: list) is "how bad" these hiccups may
become. As I said before, Thierry Klein published a paper at Globecom
2004 on this issue. There, he observed delay spikes of up to two seconds.
For the moment, I simply model the wireless link as a link with a
constant high bandwidth (e.g. 10 Mbps), which reflects its _physical_ rate,
and I add hiccup times to the serialization delay (i.e. txtime in NS2).
These are drawn from a two-point distribution: either the hiccup time is
zero or it is 1 second. The probabilities are chosen such that a
given average throughput is achieved.
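To make the parameter choice concrete, here is a minimal sketch (plain
Python, not NS2 code; packet size and target throughput are assumed
example values) of how the hiccup probability follows from the desired
average throughput:

# Two-point hiccup model: per-packet service time is
#   txtime = S/C + X,  X = T_HICCUP with probability p, else 0.
# Average throughput:  R_avg = S / (S/C + p * T_HICCUP)
# Solved for p:        p = (S/R_avg - S/C) / T_HICCUP

S        = 1500 * 8   # packet size in bits (assumed example value)
C        = 10e6       # constant physical link rate: 10 Mbps
T_HICCUP = 1.0        # hiccup duration: 1 second

def hiccup_probability(target_rate):
    """p such that the long-run average throughput equals target_rate."""
    return (S / target_rate - S / C) / T_HICCUP

p = hiccup_probability(100e3)              # assumed target: 100 kbps
print("hiccup probability p = %.4f" % p)   # ~0.1188

With these example numbers roughly one packet in eight is hit by a
spike, and the spikes - not the 10 Mbps serialization - dominate the
average rate.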
Of course that's extremely simplified. However: Is this reasonable as a
first approach? I would appreciate any comment on this one.
I would like to study different pacing techniques in this scenario,
_intentionally_ without splitting.
AFAIK, there is a variety of scheduling algorithms available for
networks like GPRS or UMTS. So, the question is whether we have an,
albeit extremely rough, "worst case model" to get a feeling for what TCP
has to cope with. The idea of my model above is to insert constant, say
1 second, delay spikes randomly into the flow, just in a way that lets me
control the average throughput on the link.
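As a sanity check for this, a short Monte Carlo sketch (again plain
Python, reusing the assumed numbers from above) that inserts the spikes
randomly and measures the resulting average throughput:

import random

# Draw per-packet hiccups from the two-point distribution and
# measure the long-run average throughput.
S, C, T_HICCUP = 1500 * 8, 10e6, 1.0   # same assumed values as above
p = (S / 100e3 - S / C) / T_HICCUP     # targets 100 kbps, as above
N = 100000                             # number of simulated packets

total_time = 0.0
for _ in range(N):
    txtime = S / C                     # plain serialization delay
    if random.random() < p:            # spike hits this packet
        txtime += T_HICCUP             # constant 1-second delay spike
    total_time += txtime

print("average throughput: %.1f kbps" % (N * S / total_time / 1e3))
# prints ~100 kbps: the random spikes alone set the average rate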
Is this completely weird? Or does it sound reasonable?
Thanks
Detlef