[e2e] TCP Loss Differentiation
David P. Reed
dpreed at reed.com
Fri Feb 20 18:46:55 PST 2009
I think we have no serious debate on any of these things. I know
Cisco's products support ECN, it's really an endpoint stack problem, and
the word "lead" was meant to suggest use of Cisco's bully pulpit (white
papers, etc.).
And real practical "optimality" is hard to define. I strongly emphasize
the role of latency and jitter. You would de-emphasize it a bit. This
is nuance.
Fred Baker wrote:
> On Feb 19, 2009, at 9:55 PM, David P. Reed wrote:
>> Fred, you are right. Let's get ECN done. Get your company to take
>> the lead.
>
> ECN has been in the field, in some products, for the better part of a
> decade. Next step: get ISPs to turn it on. The products that don't
> support it lack it because our customers tell us they don't need it
> (nobody is paying them to turn it on) or simply aren't asking for it.
>
> That said, I'm not at all convinced that the end system can't do this
> effectively for itself. Setting Vegas and Caltech FAST aside (Vegas
> has problems, FAST has IPR that gets in the way, and neither actually
> tunes to the knee; they try to keep alpha in the bottleneck queue,
> for some definition of alpha), there are some reasonably good
> delay-based algorithms around. The guys at the Hamilton Institute
> have one (not HSTCP, their other one) that actually tunes to Jain's
> "knee" and appears to be fairly effective in preliminary work.
>
>> The ideal state in a steady state (if there is an ideal) would be
>> that along any path, there would be essentially a single packet
>> waiting on each "bottleneck" link between the source and the
>> destination.
>
> I'll dispute that a little; the ideal state is that the amount of
> traffic that the end system is keeping in the network is the minimum
> that will maintain its maximum throughput rate, what Jain would call
> the "knee". That might mean several or even many segments in the same
> lambda, as one can often maintain a number of packets in a lambda due
> to speed of light issues. And on access interfaces, it can mean three
> or four packets in the same queue at times. There, it's not uncommon
> to see a burst of, say, three packets arrive and play out, and while
> the third packet is playing out, get the Ack back that triggers the
> next couple of packets. In such a case, "the minimum that will
> maintain the maximum rate" turns out to be a cwnd of 3-4 packets. I
> have a great capture of an upload to Picasa that would demonstrate
> this; between my Mac (BSD) and Picasa's Linux system, what we
> actually see is queues being built up to a 1130 ms RTT when 3
> packets (92 ms RTT when Linux is acking every other packet) would do
> the job. And as a result, I have to get into the queues in the
> router and play QoS games to make my VoIP at all useful.
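>
> To put rough numbers on that (a back-of-the-envelope sketch in
> Python; the 384 kbit/s uplink rate, 1500-byte MSS, and 80 ms base
> RTT below are assumptions for illustration, not measurements from
> that capture):
>
>   # How many segments does it take to keep an access link busy, and
>   # what does an overgrown cwnd cost in queueing delay?
>   MSS = 1500 * 8            # segment size, bits (assumed)
>   UPLINK = 384000           # assumed uplink rate, bits per second
>   BASE_RTT = 0.080          # assumed RTT with empty queues, seconds
>
>   # Bandwidth-delay product: the cwnd that just fills the pipe.
>   bdp_segments = UPLINK * BASE_RTT / MSS          # ~2.6 segments
>
>   # Every segment beyond that sits in the access queue, adding delay.
>   def rtt_for_cwnd(cwnd):
>       excess = max(0.0, cwnd - bdp_segments)
>       return BASE_RTT + excess * MSS / UPLINK
>
>   print("RTT at cwnd=3:  %.0f ms" % (rtt_for_cwnd(3) * 1000))   # ~94 ms
>   print("RTT at cwnd=36: %.0f ms" % (rtt_for_cwnd(36) * 1000))  # ~1.1 s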
>
>> Any more packets in queues along the way would be (as you say, Fred)
>> harmful, because the end-to-end latency would be bigger than needed
>> for full utilization. And latency matters a lot.
>>
>> In contrast, if there are fewer packets in flight, there would be
>> underutilization, and adding a packet enqueued along the path would
>> make all users happier, until latency gets above that minimum.
>>
>> So the control loop in each TCP sharing a path tries to "lock into"
>> that optimal state (or it should), using AIMD, triggered by the
>> best congestion signals it can get. Prefer non-loss congestion
>> signaling such as ECN, then RED, then queue-overflow-triggered packet
>> dropping. Shortening signaling delay would suggest (and the
>> literature bears out) that "head drops" or "head marking" is better
>> than "tail drops" for minimizing latency, but the desire to eke out
>> a few percent of improved throughput for FTPs has argued for tail
>> drops and long queues on all output links. (The bias of the theory
>> community toward throughput measures rather than latency measures is
>> wrong, IMO.)
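>>
>> In pseudocode, the loop I have in mind looks roughly like this (an
>> illustrative sketch only, not any particular stack; the constants
>> and the once-per-RTT structure are assumptions):
>>
>>   # AIMD that prefers the explicit (non-loss) congestion signal
>>   # when one is available, falling back to loss.
>>   cwnd = 2.0          # congestion window, in segments
>>   BETA = 0.5          # multiplicative-decrease factor (assumed)
>>   ALPHA = 1.0         # additive increase per RTT (assumed)
>>
>>   def on_rtt_feedback(ecn_marked, loss_detected):
>>       """Called once per RTT with the congestion signals seen."""
>>       global cwnd
>>       if ecn_marked or loss_detected:
>>           # Back off on any congestion signal; ECN lets us do this
>>           # without having lost (and retransmitted) anything.
>>           cwnd = max(2.0, cwnd * BETA)
>>       else:
>>           # No congestion indication: probe for more capacity.
>>           cwnd += ALPHA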
>>
>> What makes it complex is that during a flow, many competing flows
>> may arise and die as "cross traffic", which makes any path unstable.
>> Increasing utilization under such probabilistic transients requires
>> longer queues. But longer queues lead to more latency and increased
>> jitter (higher moments of the delay statistics).
>
> Well, yes and no. The bottleneck link is almost invariably the access
> link at one end or the other; in the core of the network the ISPs try
> pretty hard to stay ahead of the curve. Cross traffic happens, but I
> think the case is less obvious than it might appear.
>
>> Good control response and stability are best achieved by minimizing
>> queueing in any path, so that control is more responsive to transient
>> queue buildup.
>>
>> Most traffic in Internet apps (the kind that requires QoS to make
>> users happier) cares about end-to-end latency or jitter or both, not
>> maximal throughput. Maximal throughput is what the operator cares
>> about if their users don't care about QoS; only bulk FTP users care
>> about the last few percent of optimal throughput vs. minimizing
>> latency/delay.
>>
>>
>> Fred Baker wrote:
>>> Which raises the question: why are we tuning to loss in the first
>>> place? Once you have filled the data path enough to achieve your
>>> "fair share" of the capacity, filling the queue further doesn't
>>> improve your speed, and it hurts everyone around you. As your cwnd
>>> grows, your mean RTT grows with it, so that the ratio cwnd/RTT
>>> remains equal to the capacity of the bottleneck.
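>>>
>>> A toy calculation makes the point (the 10 Mbit/s bottleneck and
>>> 50 ms base RTT are assumed numbers, for illustration only):
>>>
>>>   # Once cwnd exceeds the bandwidth-delay product, throughput stops
>>>   # growing; only the RTT grows.
>>>   MSS = 1500 * 8                       # bits per segment (assumed)
>>>   BOTTLENECK = 10e6                    # assumed bottleneck, bits/s
>>>   BASE_RTT = 0.050                     # assumed uncongested RTT, s
>>>   BDP = BOTTLENECK * BASE_RTT / MSS    # ~41.7 segments
>>>
>>>   for cwnd in (20, 40, 80, 160):
>>>       queued = max(0.0, cwnd - BDP)
>>>       rtt = BASE_RTT + queued * MSS / BOTTLENECK
>>>       throughput = cwnd * MSS / rtt    # = cwnd/RTT, bits per second
>>>       print("cwnd=%4d  RTT=%5.1f ms  rate=%4.1f Mbit/s"
>>>             % (cwnd, rtt * 1000, throughput / 1e6))
>>>   # Past the BDP, the printed rate pins at ~10 Mbit/s while the
>>>   # RTT keeps climbing with cwnd.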
>>>
>>> It seems pointless and selfish, the kind of thing we discipline our
>>> children for doing.
>>>
>>> On Feb 19, 2009, at 7:07 PM, Injong Rhee wrote:
>>>
>>>> Perhaps I might add to this thread. Yes, I agree that it is not so
>>>> clear that we have a model for non-congestion-related losses. The
>>>> motivation for this differentiation is, I guess, to disregard
>>>> non-congestion-related losses for TCP window control. So the
>>>> motivation is valid. But maybe we should look at the problem from a
>>>> different perspective. Instead of trying to detect non-congestion
>>>> losses, why not try to detect congestion losses? Well, congestion
>>>> losses are easier to detect because they are typically associated
>>>> with some patterns of delay. So the scheme would be "reduce the
>>>> congestion window ONLY when it is certain with high probability
>>>> that losses are from congestion". This scheme would be different
>>>> from "reduce whenever any indication of congestion occurs". Well,
>>>> my view could be too dangerous. But given that there are protocols
>>>> out there, e.g., DCCP, that react to congestion much more slowly
>>>> than TCP, this type of protocol may not be so bad...
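>>>>
>>>> As a sketch of what I mean (the window size and delay threshold
>>>> here are hypothetical; this is an illustration of the idea, not an
>>>> implemented or validated scheme):
>>>>
>>>>   # Classify a loss as "congestion" only when the RTT samples
>>>>   # around it show a clear queueing build-up.
>>>>   from collections import deque
>>>>
>>>>   class LossClassifier:
>>>>       def __init__(self, window=16, delay_factor=1.5):
>>>>           self.rtt_samples = deque(maxlen=window)
>>>>           self.min_rtt = float("inf")
>>>>           self.delay_factor = delay_factor  # hypothetical threshold
>>>>
>>>>       def on_rtt_sample(self, rtt):
>>>>           self.rtt_samples.append(rtt)
>>>>           self.min_rtt = min(self.min_rtt, rtt)
>>>>
>>>>       def loss_is_congestion(self):
>>>>           # True only when recent delay suggests a standing queue,
>>>>           # i.e. we are fairly sure the loss came from congestion
>>>>           # rather than, say, radio interference.
>>>>           if not self.rtt_samples:
>>>>               return True   # no evidence; stay conservative
>>>>           recent = sum(self.rtt_samples) / len(self.rtt_samples)
>>>>           return recent > self.delay_factor * self.min_rtt
>>>>
>>>> The window would then be reduced only when loss_is_congestion()
>>>> returns True, and the loss simply repaired otherwise.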
>>>>
>>>>
>>>> ----- Original Message ----- From: "Fred Baker" <fred at cisco.com>
>>>> To: "David P. Reed" <dpreed at reed.com>
>>>> Cc: "end2end-interest list" <end2end-interest at postel.org>
>>>> Sent: Wednesday, February 11, 2009 5:07 PM
>>>> Subject: Re: [e2e] TCP Loss Differentiation
>>>>
>>>>
>>>>> Copying the specific communicants in this thread as my postings to
>>>>> end2end-interest require moderator approval (I guess I'm not an
>>>>> acceptable person for some reason, and the moderator has told me
>>>>> that he will not tell me what rule prevents me from posting
>>>>> without moderation).
>>>>>
>>>>> I think you're communicating just fine. I understood, and agreed
>>>>> with, your comment.
>>>>>
>>>>> I actually think that a more important model is not loss
>>>>> processes, which as you describe are both congestion-related and
>>>>> related to other underlying issues, but a combination of several
>>>>> underlying and fundamentally different kinds of processes. One is
>>>>> perhaps "delay processes" (of which loss is the extreme case, and
>>>>> to which L2 retransmission is a partially understood and poorly
>>>>> modeled contributor). Another might be interference processes
>>>>> (such as radio interference in 802.11/802.16 networks) that cause
>>>>> end-to-end packet loss for other reasons. In mobile networks, it
>>>>> might be worthwhile to distinguish the processes of network
>>>>> change: from the perspective of an endpoint that is in motion,
>>>>> its route, and therefore its next hop, is constantly changing and
>>>>> might at times not exist.
>>>>>
>>>>> Looking at it from a TCP/SCTP perspective, we can only really
>>>>> discuss it in terms of how we can best manage to use a certain
>>>>> share of the capacity the network provides, how much use is
>>>>> counterproductive, when to retransmit, and all that. But
>>>>> understanding the underlying issues will contribute heavily to
>>>>> that model.
>>>>>
>>>>> On Feb 11, 2009, at 7:20 AM, David P. Reed wrote:
>>>>>
>>>>>> I don't understand how what I wrote could be interpreted as "a
>>>>>> congestion-based loss process cannot be modeled or predicted".
>>>>>>
>>>>>> I was speaking about *non-congestion-based* "connectivity loss
>>>>>> related loss process", and I *said* that it is not a single-
>>>>>> parameter, memoryless loss process.
>>>>>>
>>>>>> I said nothing whatsoever about congestion-based loss processes,
>>>>>> having differentiated carefully the two types of loss (which
>>>>>> differentiation was what Detlef started this thread with).
>>>>>>
>>>>>> Clearly I am not communicating, despite using English and common
>>>>>> terms from systems modeling mathematics.
>>>>>>
>>>>>> Xai Xi wrote:
>>>>>>> Are you saying that a congestion-based loss process cannot be
>>>>>>> modeled or predicted? A tool, BADABING, from SIGCOMM '05, claims
>>>>>>> to be highly accurate in measuring end-to-end loss processes.
>>>>>>>
>>>>>>> David wrote:
>>>>>>>
>>>>>>>> A "loss process" would be a mathematically more sound term,
>>>>>>>> because it
>>>>>>> does not confuse> the listener into thinking that there is a
>>>>>>> simplistic, memoryless, one-parameter model that> can be
>>>>>>> "discovered" by TCP's control algorithms.
>>>>>>>
>>>>>>>> That said, I was encouraging a dichotomy where the world is far
>>>>>>>> more
>>>>>>> complicated:
>>>>>>>> congestion drops vs. connectivity drops. One *might* be
>>>>>>> able to make much practical
>>>>>>>> headway by building a model and a theory of
>>>>>>> "connectivity drops".
>>>>>>>
>>>>>>>
>>>>>
>>>>>
>>>>
>>>
>>>
>
>