[e2e] RES: Why Buffering?
Detlef Bosau
detlef.bosau at web.de
Sat Jun 27 15:42:03 PDT 2009
Hi Lachlan,
Lachlan Andrew wrote:
> If you think of what I said in terms of "number of events divided by
> time interval", I think you'll find it makes sense. If not, feel free
> to point out an error.
>
Perhaps the difficulty is less the definition of the term "rate" than
the knowledge of an actual rate.
I just had a _very_ first glance at the Golestani paper Keshav pointed to.
The term (r,T)-smoothness reminds me of a "latency throughput product"
for a certain period of time.
And I think that's the vicious circle we may be trapped in: How large
is a "latency throughput product", even for a well defined period of
time? Basically, we define the rate as "customers served / service
time". Fine. So, if 5 customers are served within 1 second, we have a
rate of 5 customers/s.
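As a trivial sketch of that definition (just the arithmetic, nothing
more; the variable names are mine):

served = 5        # customers (packets) counted as served in the interval
interval = 1.0    # seconds
rate = served / interval
print(rate, "customers/s")   # -> 5.0 customers/s

The division is the easy part; the numerator is not.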
The problem is: How do we know how many customers are served within this
period of time?
Particularly as customers are packets, and therefore the Declaration of
Human Rights does not apply? I.e., packets may be killed; the death
penalty is neither abolished nor does it depend on any judgement.
Packets may be made to stay in some buffer in some intermediate system
for an unknown time.
Packets may get crippled or corrupted; o.k., L2 may provide some kind
of euthanasia then...
After my first attempts to work with wireless networks, it took me years
to understand that exactly this is the problem: When we send a number of
packets over a wireless link, we actually have no idea how many packets
will reach the receiver.
Basically, the term "rate" is just another twist on "customers served" -
btw: do we mean "served, no matter how"? Or do we mean "successfully
served"?
If you remember my paper submission this year, which was rejected: this
was exactly the point I tried to address.
From a user's perspective, I want to know the number of successfully
served packets within a certain time.
From the congestion control perspective, I want to know how much system
capacity is occupied or available, respectively.
Be it in computer networks or in some kind of public transport system.
(Stop-and-go queueing reminds me of a typical form of "Swabian Train
Queuing": when you go by train here in the city of Stuttgart, every few
moments the train stops and an announcer says: "Dear passengers!
Unfortunately, the route section ahead of us is busy at the moment.
We'll continue our journey shortly.") (There's some rumour of a British
alternative, sometimes called "London Collision Queuing", but it gained
only limited acceptance because of some minor shortcomings... ;-))
If the departure rate of our packets / trains is known and there are no
problems along the route, things are quite easy.
However, when we don't know in advance how many packets will be served,
and when even ex post we generally don't know at the link layer whether
a packet has been _successfully_ served, it is in fact moot whether we
ask for a rate or for a number of served packets. Neither is known.
And you're of course right: This does not depend on the length of the
period of time we talk about.
In this context, it is extremely important (and to the best of my
knowledge, this distinction is hardly ever made at the moment) to
distinguish between service at all and _successful_ service (i.e.,
intact delivery).
Is this modeled in the literature? I.e., does the TCP literature model
the (actually unknown!) loss processes on a line?
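One loss process that does appear in the literature is the
Gilbert-Elliott two-state channel: a "good" and a "bad" state with
different loss probabilities. A minimal sketch, with all parameters
chosen arbitrarily just to show the shape of the model:

import random

def gilbert_elliott(n, p_gb=0.01, p_bg=0.3, loss_good=0.001, loss_bad=0.5):
    # Two-state Markov loss process: good <-> bad.
    # p_gb: P(good -> bad), p_bg: P(bad -> good);
    # each state has its own per-packet loss probability.
    # Returns a list of booleans: True = packet lost.
    state_bad = False
    losses = []
    for _ in range(n):
        if state_bad:
            state_bad = random.random() >= p_bg   # recover with prob. p_bg
        else:
            state_bad = random.random() < p_gb    # degrade with prob. p_gb
        loss_p = loss_bad if state_bad else loss_good
        losses.append(random.random() < loss_p)
    return losses

random.seed(2)
trace = gilbert_elliott(10_000)
print(f"overall loss rate: {sum(trace) / len(trace):.3f}")

Of course, fitting such a model presumes you can observe the losses -
which is exactly what is in question here.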
In a nursing home, there is no difference: No matter whether a resident
moves away or passes away, the apartment is available afterwards.
However, the local relocation company and the local undertaker are
likely to see different arrival rates here...
Formally speaking: There are several kinds of "death processes" here
or, put differently, several kinds of servers.
Some packets are delivered without errors, others are dropped, and
others are delivered partially correct - with acceptable errors. The
latter is no problem for telcos; they take an SNMP attitude here and
define noise and the like to be the customers' problem. Formally
speaking: For voice transfer with cell phones, there is no CRC check.
Either the listener understands what I'm talking about - or it's bad
luck. So a telco is like a nursing home then. For packet transfer, the
situation is different. And perhaps the nastiest thing here is
partially correct packets. Apart from UDP Lite, we have hardly talked
about this issue in TCP/IP.
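As a toy sketch of these three "servers" (the split rule is merely
UDP-Lite-inspired - only the sensitive part is checksum-protected - and
all probabilities are invented):

import enum
import random

class Outcome(enum.Enum):
    INTACT = "delivered without errors"
    PARTIAL = "delivered, payload errors tolerated"
    DROPPED = "lost, or errors in the protected part"

def classify(lost, header_corrupt, payload_corrupt):
    # UDP-Lite-style view: only the "sensitive" part (here: the header)
    # is checksum-protected; payload damage is tolerated.
    if lost or header_corrupt:
        return Outcome.DROPPED
    return Outcome.PARTIAL if payload_corrupt else Outcome.INTACT

random.seed(3)
counts = {o: 0 for o in Outcome}
for _ in range(10_000):
    outcome = classify(lost=random.random() < 0.05,
                       header_corrupt=random.random() < 0.01,
                       payload_corrupt=random.random() < 0.10)
    counts[outcome] += 1
print({o.name: c for o, c in counts.items()})

Each observer - like the relocation company and the undertaker - sees
only one of these sub-processes.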
Just a thought that comes to my mind at the moment: There is of course
a difference between _congestion_ control and _flow_ control, depending
on whether a packet merely leaves the path or is successfully enqueued
for delivery to the application.
So, the scenarios I'm thinking about are lossy links and links where
the service time is unknown or hard to predict.
Detlef
>
>
>>> Why? *Long term* rates are meaningful, even if there is short-term
>>> fluctuation in the spacing between adjacent pairs of packets.
>>>
>> Not only in the spacing between adjacent pairs of packets.
>>
>> I'm still thinking of WWAN. And in WWAN, even the time to convey a packet
>> from a base station to a mobile or vice versa is subject to short-term
>> fluctuation.
>>
>
> In that case, we need to distinguish between the rate of *starting* to
> send packets and the rate of *completing* packets. However,
> in the "long term" the two will still be roughly equal, where "long
> term" means "a time much longer than the time to send an individual
> packet". If a packet can take up to 3 seconds to send, then the two
> rates will roughly agree on timescales of 30s or more.
>
>
>>>> One problem is that packets don't travel all parts of a path with the
>>>> same speed. TCP traffic may be bursty, perhaps links are temporarily
>>>> unavailable.
>>>>
>>> True. Buffers get their name from providing protection against (short
>>> timescale) fluctuation in rate.
>>>
>> Is this their main objective?
>>
>
> It was. Buffers in different places have different purposes. I've
> said many times that I think the current main objective of buffers on
> WAN interfaces of routers is to achieve high TCP throughput. (Saying
> it again doesn't make it more or less right, but nothing in this
> thread seems a compelling argument against it.)
>
>
>>>> I once was told that a guy could drastically improve his throughput by
>>>> enabling window scaling.....
>>>> On a path from the US to Germany.
>>>>
>>>> I'm not quite sure whether the other users of the path were all amused
>>>> about
>>>> the one guy who enabled window scaling ;-)
>>>>
>>> Yes, enabling window scaling does allow TCP to behave as it was
>>> intended on large BDP paths. If the others weren't amused, they could
>>> also configure their systems correctly.
>>>
>> However: Cui bono? If the only consequence of window scaling is an end of
>> the commercial crisis, at least for DRAM manufacturers, at the cost of
>> extremely long round trip times, we should rather avoid it ;-)
>>
>
> But that isn't all it does. On a high BDP link, if you don't use
> window scaling a single flow can get much less throughput than the 75%
> of capacity which is possible with window scaling and without
> significant buffering.
>
>
>> The problem is: Buffering shall provide for work conservation, as Jon
>> pointed out. As soon as buffers "overcompensate" idle times and don't avoid
>> idle times but introduce latency by themselves, the design is not really
>> helpful.
>>
>
> True. A buffer which never empties is too big (for that situation).
>
> Cheers,
> Lachlan
>
>
--
Detlef Bosau Galileistraße 30 70565 Stuttgart
phone: +49 711 5208031 mobile: +49 172 6819937 skype: detlef.bosau
ICQ: 566129673 http://detlef.bosau@web.de