[Tsvwg] Re: [e2e] Are you interested in TOEs and related issues
Craig Partridge
craig at bbn.com
Mon Mar 8 13:50:43 PST 2004
In message <8052E2EA753D144EB906B7A7AA399714022A05B7 at mailserv.hatteras.com>,
"Stephen Suryaputra" writes:
>I think Marko is referring to the situation when there is congestion.
>Because of that, the queue builds up, and if the queue space is in terms
>of the number of packets, then a bunch of small packets can potentially
>dominate the queue and leave little room for large packets, even though
>there is space in the buffer to accommodate the large ones.
Well, I tried to nail this down by setting it up as a queueing system,
but that's awfully hard.  Here's a cut at it:
1. Basically you have a general arrival process delivering pieces of work
of variable size, at arrival rate A, subject to the condition that
if there's an event of size w1 at time t1, then the next event cannot
arrive sooner than time t1+w1 [i.e., t2 - t1 >= w1].
2. You then have a series of queues Q1-Qn attached to servers S1-Sn that
serve events at a fixed rate R, where R <= A (i.e., we can handle
the maximum arrival rate).
3. You then have a departure queue D1 and a departure server DS, which has
a service rate D, where D can be defined such that D(w) <= w (that is,
the output is at least as fast as the input) or D(w) >= w (the output
is slower). A rough simulation sketch of one instance of this setup
follows below.
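To make that concrete, here is a rough, minimal simulation sketch of one
instance of the model. Everything specific in it is my own illustration
(two inputs, 1500-byte maximum and 40-byte minimum packets, and a
departure service time of D(w) = w); the model itself fixes none of those.
It plays out the worst case I have in mind: one maximum-size packet from
one input, with minimum-size packets arriving back to back from another
input, all sharing the single departure server.

    # Illustrative sketch only; sizes, inputs, and D(w) = w are assumptions.
    MAX_W, MIN_W = 1500, 40      # assumed largest/smallest packet sizes (bytes)

    def worst_case(n_small=200):
        # Arrivals as (time, size): input A sends one maximum-size packet at
        # t=0; input B sends a minimum-size packet every MIN_W time units
        # starting at t=1, which satisfies condition 1 (gap >= packet size).
        arrivals = [(0.0, MAX_W)] + [(1.0 + i * MIN_W, MIN_W)
                                     for i in range(n_small)]
        arrivals.sort()

        free_at = 0.0            # time when the departure server next goes idle
        in_system = []           # (departure_time, size) of packets still buffered
        max_pkts = max_bytes = 0

        for t, w in arrivals:
            # purge packets that have fully departed by this arrival time
            in_system = [(d, s) for (d, s) in in_system if d > t]
            start = max(t, free_at)      # FIFO: wait for the server to free up
            free_at = start + w          # D(w) = w: output exactly as fast as input
            in_system.append((free_at, w))
            max_pkts = max(max_pkts, len(in_system))
            max_bytes = max(max_bytes, sum(s for _, s in in_system))
        return max_pkts, max_bytes

    pkts, byts = worst_case()
    print("peak occupancy: %d packets, %d bytes" % (pkts, byts))

It reports the peak queue occupancy both in packets and in bytes; the
packet figure is the one the rest of this note is about.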
If D(w) <= w, then I think one can argue simply by inspection that the total
work-unit buffering in the system is [max(w)/min(w)]-n. That is, you
simply need enough queueing to handle the queue that develops because a big
packet causes a bunch of little packets to queue behind it (because they
arrive while the big packet is going out). See Comment (**) below.
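To put illustrative numbers on that (a 40-byte minimum and a 1500-byte
maximum packet are my choice, not part of the model): while one
maximum-size packet is being serialized on the output, roughly

    max(w)/min(w) = 1500/40 ~= 37

minimum-size packets can arrive and sit behind it, even though the output
was never oversubscribed for longer than that one serialization time.
That is where the max(w)/min(w) term comes from.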
I can't find an easy solution for D(w) >= w, because, in general, the queue
can grow without bound.
** If you believe this model, what it says is that if you size buffers
in packets (rather than bytes), and the ratio between the smallest
possible packet and the largest possible packet is big, then you can find
your queues growing quite large, due not to classic congestion (where
there is too much demand fighting for one output link) but rather to
serialization delays.
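Running the little worst-case sketch above with those same illustrative
numbers shows the contrast directly: the departure queue peaks at about
39 packets but only about 3,000 bytes, i.e. roughly two maximum-size
packets' worth of memory. Counted in bytes the buffering demand is
modest; counted in packets it is inflated by the full max(w)/min(w)
ratio, which is the serialization effect rather than classic congestion.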
Craig