I agree with this argument.
Burstiness will happen in networks, and buffering is there to deal with the
burstiness and prevent packet loss. Of course the buffer size should not be
too large; holding tens of thousands of packets is of no use.
Actually, in a paper on router design from Bell Labs (1998?), the authors
discussed this problem and proposed using a certain amount of buffering to
prevent loss and guarantee performance. I think their results were collected
from router prototypes and should be reliable.

Regards,

Yan Wu

Telecommunications Research Center
Electrical Engineering Department
Arizona State University

----- Original Message -----
From: "Vos, E.W." <E.W.Vos@kpn.com>
To: "End2End" <end2end-interest@postel.org>
Cc: "Neil Spring" <nspring@zarathustra.saavie.org>; "'Tan Koan-Sin'"
<freedom@csie.nctu.edu.tw>; "Jeong-woo Cho" <ggumdol@comis.kaist.ac.kr>
Sent: Friday, August 24, 2001 9:21 AM
Subject: RE: [e2e] Fundamental Questions about Router Queue in High Speed IP
Networks

> Thanks for this question. I was also wondering why buffer sizes are so
> small (as I understand, Cisco advises 50 packets, independent of link speed).
>
> I don't see why buffering is so wrong. Actually, a buffer should be large
> enough to accommodate incidental bursts of traffic. Aren't the effects of
> dropping a packet much worse than delaying one a little bit?
>
> Of course, we don't need buffers which hold tens of thousands of packets.
> But a queue of 50 packets on an STM-1 (155 Mbps) only adds (assuming 1500 B
> packets) 4 ms extra delay. Max! And this is only a slow link! Also, as link
> speed grows, the queue size needed to accommodate bursts does not grow
> proportionally. So why not allow a few tens of ms of delay on STM-1s?
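A quick back-of-the-envelope check of the 4 ms figure, as a rough Python
sketch (the 50-packet, 1500-byte and 155 Mbit/s values come from the
paragraph above; the function name is just illustrative):

    # Worst-case extra queueing delay added by a drop-tail buffer of
    # buffer_pkts packets on a link of rate link_bps bits/s.
    def max_queueing_delay_s(buffer_pkts, pkt_bytes, link_bps):
        return buffer_pkts * pkt_bytes * 8 / link_bps

    # 50 packets of 1500 bytes on an STM-1 (~155 Mbit/s):
    print(max_queueing_delay_s(50, 1500, 155e6) * 1000)  # ~3.9 ms

The same 50-packet queue on a 2.5 Gbit/s link would add only about 0.24 ms,
which is the scaling point being made here.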
>
> Remember, this delay should not constantly occur, and is not meant to
> enlarge throughput, but to prevent loss (which, by the way, does enhance
> throughput and fairness). Of course, when link speeds are too slow, no
> solution (large buffers or loss) will work. But I actually believe these
> delay effects are much less harmful than loss.
>
> Esther
>
>
> > -----Original Message-----
> > From: Tan Koan-Sin [mailto:freedom@csie.nctu.edu.tw]
> > Sent: Thursday, 23 August 2001 10:32
> > To: Jeong-woo Cho
> > Cc: End2End; Neil Spring
> > Subject: Re: [e2e] Fundamental Questions about Router Queue in High
> > Speed IP Networks
> >
> >
> > FYI.
> >
> > Robert Morris studied TCP behavior with many flows, and proposed two
> > methods to cure the timeout problem in the many-flow situation [1].
> > The first one is that instead of having a buffer of about one
> > round-trip time as suggested in [2], the buffer at a router should be
> > proportional to the total number of active flows, so that TCP flows
> > can survive well. Both FRED [3] and FPQ [4] take this approach. The
> > second approach is to make TCP less aggressive and more adaptive when
> > its congestion window is small. Wu-chang Feng et al. proposed SUBTCP
> > in [5], but SUBTCP used a multiplicative-increase/multiplicative-decrease
> > algorithm, which will not converge to a fair point.
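To make the two sizing rules concrete, here is a rough Python sketch of the
buffer each rule implies. The link rate, RTT, flow count, and per-flow packet
allowance below are illustrative numbers, not values taken from [1] or [2]:

    # Rule 1: about one round-trip time of data, i.e. the bandwidth-delay
    # product, as suggested in [2].
    def bdp_buffer_bytes(link_bps, rtt_s):
        return link_bps * rtt_s / 8

    # Rule 2: proportional to the number of active flows, as in [1][3][4];
    # pkts_per_flow is an illustrative per-flow allowance.
    def per_flow_buffer_bytes(n_flows, pkts_per_flow=4, pkt_bytes=1500):
        return n_flows * pkts_per_flow * pkt_bytes

    link_bps, rtt_s, n_flows = 155e6, 0.1, 500      # example numbers only
    print(bdp_buffer_bytes(link_bps, rtt_s))        # ~1.9e6 bytes (one RTT)
    print(per_flow_buffer_bytes(n_flows))           # 3.0e6 bytes (500 flows)

The point of [1] is that the second number grows with the number of flows
while the first does not, so with many flows a one-RTT buffer leaves only a
handful of packets (or less) per flow.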
> >
> > When cwnd is below 4 packets, Limited Transmit [6] can help a
> > little bit.
> >
> > [1] Robert Morris, "TCP Behavior with Many Flows," ICNP '97.
> > [2] Curtis Villamizar and Cheng Song, "High Performance TCP in ANSNET,"
> >     ACM CCR, vol. 24, no. 5, Oct. 1994.
> > [3] Dong Lin and Robert Morris, "Dynamics of Random Early Detection,"
> >     SIGCOMM '97.
> > [4] Robert Morris, "Scalable TCP Congestion Control," Ph.D. thesis,
> >     Harvard University, 1999.
> > [5] Wu-chang Feng, Dilip D. Kandlur, Debanjan Saha, and Kang S. Shin,
> >     "Techniques for Eliminating Packet Loss in Congested TCP/IP Networks,"
> >     Tech. Rep., University of Michigan, 1997.
> > [6] Mark Allman, Hari Balakrishnan, and Sally Floyd, "Enhancing TCP's
> >     Loss Recovery Using Limited Transmit," RFC 3042, Jan. 2001.
> >
> > On Thu, Aug 23, 2001 at 04:24:26PM +0900, Jeong-woo Cho wrote:
> > > > 1. That the number of connections that can occupy a
> > > > router's buffers limits the number of connections that
> > > > can traverse a router. (it doesn't)
> > >
> > > I know it doesn't. A router's buffer DOES NOT LIMIT the number
> > > of connections that can traverse a router. But TCP needs
> > > buffering: TCP experiences coarse timeouts when cwnd (current
> > > window size) is smaller than 4 packets.
> > >
> > > >
> > > > 2. That increased buffering would yield increased
> > > > performance: if there's a bottleneck, queueing only adds
> > > > delay, not throughput. buffering -> delay, which hurts
> > > > everyone, particularly new tcp flows and those without
> > > > window scaling.
> > >
> > > I found that with a 160-kbyte buffer, RED can support only 100 flows
> > > with satisfactory fairness (standard deviation of each flow's share
> > > smaller than 0.2). Over 100 flows, RED induces TCP's coarse timeouts
> > > and fairness could not be achieved AT ALL.
> > >
> > > I agree that "BUFFERING only adds DELAY, not THROUGHPUT". But I want
> > > to stress that "BUFFERING improves FAIRNESS".
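For reference, a sketch of how the fairness measure quoted above (standard
deviation of each flow's share) can be computed; the per-flow throughputs
below are made up, not the data from the RED simulation described here:

    # Share of flow i = its throughput divided by an equal split of the
    # bottleneck; a perfectly fair allocation gives every share = 1.0 and a
    # standard deviation of 0.
    import statistics

    def share_stddev(throughputs_bps, bottleneck_bps):
        fair_share = bottleneck_bps / len(throughputs_bps)
        shares = [t / fair_share for t in throughputs_bps]
        return statistics.pstdev(shares)

    # Hypothetical 4-flow example on a 4 Mbit/s bottleneck:
    print(share_stddev([1.2e6, 0.9e6, 1.1e6, 0.8e6], 4e6))  # ~0.16

Note also that 160 kbytes shared by 100 flows is only about one 1500-byte
packet per flow, which fits the coarse-timeout behaviour described above.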
> > >
> > > >
> > > > 3. That tcp's retransmission timeout would be helped by
> > > > increased queue space: varying rtt would likely confuse the
> > > > retransmission estimator (increasing the retransmission
> > > > timeout), while causing spurious retransmissions. There
> > > > are plenty of ways to avoid timeout-based retransmission
> > > > (ecn, fast retransmit, sack), at least whenever the window
> > > > is larger than a few packets.
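The retransmission estimator referred to here is the standard one (RFC 2988):
it tracks a smoothed RTT and an RTT variance, and queueing delay that varies
from packet to packet inflates the variance term and hence the timeout. A
minimal Python sketch (the RTT samples are made up; the 1-second minimum RTO
and clock-granularity details are omitted):

    # RFC 2988 RTO update: SRTT and RTTVAR are smoothed estimates; the RTO
    # is SRTT plus four times the variance, so jittery queueing delay
    # raises it.
    ALPHA, BETA, K = 1/8, 1/4, 4

    def update_rto(srtt, rttvar, sample):
        rttvar = (1 - BETA) * rttvar + BETA * abs(srtt - sample)
        srtt = (1 - ALPHA) * srtt + ALPHA * sample
        return srtt, rttvar, srtt + K * rttvar

    # A 50 ms path whose queueing delay varies from packet to packet:
    srtt, rttvar = 0.050, 0.025                  # state after the first sample
    for sample in (0.050, 0.090, 0.050, 0.120):  # illustrative measurements
        srtt, rttvar, rto = update_rto(srtt, rttvar, sample)
        print(round(rto, 3))                     # RTO inflated by the jitter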
> > >
> > > ECN would have to be implemented in all routers on Earth, and I
> > > don't think that will be possible in a short time. Fast retransmit
> > > can operate well only when each TCP connection's cwnd (current
> > > window size) is larger than or equal to 4 packets.
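The 4-packet figure follows from fast retransmit needing three duplicate
ACKs: after a loss, only segments sent behind the lost one can generate
dupACKs. Limited Transmit [6] sends one new segment on each of the first two
dupACKs, and those segments generate further dupACKs. A toy sketch (assuming
new data is available, the receiver's window permits, and one dupACK per
delivered segment; delayed ACKs are ignored):

    # DupACK count after a single loss: each of the (cwnd - 1) segments sent
    # behind the lost one triggers a duplicate ACK.  Fast retransmit fires
    # only after 3 dupACKs, so plain TCP needs cwnd >= 4.  With Limited
    # Transmit [6], the first two dupACKs each trigger a new segment, and
    # each new segment produces one more dupACK.
    def dupacks_after_single_loss(cwnd, limited_transmit=False):
        dupacks = cwnd - 1
        if limited_transmit and dupacks >= 1:
            dupacks += 2
        return dupacks

    for cwnd in (2, 3, 4):
        plain = dupacks_after_single_loss(cwnd)
        with_lt = dupacks_after_single_loss(cwnd, limited_transmit=True)
        print(cwnd, plain >= 3, with_lt >= 3)  # can fast retransmit trigger?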
> > >
> > > >
> > > > 4. That queues in core routers should be provisioned for
> > > > fairness instead of utilization and price. fairness is
> > > > a harder problem than this.
> > >
> > > Without guaranteeing fairness to a certain extent, what can
> > > a router guarantee for us? Fairness is more important for
> > > real-time applications. I found that fairness improves the
> > > smoothness of real-time applications' instantaneous sending rates.
> > >
> > > >
> > > > That said, I wouldn't be surprised if excessively large
> > > > queues exacerbate the performance difference between TCP
> > > > connections with varying RTTs, as the nearer sender is
> > > > better able to saturate the queue, defeating the original
> > > > fairness goal.
> > > >
> > > > -neil
> > > >
> > >
> >
>