[e2e] TCP Loss Differentiation
David P. Reed
dpreed at reed.com
Thu Feb 19 21:55:05 PST 2009
Fred, you are right. Let's get ECN done. Get your company to take the
lead.
The ideal steady state (if there is an ideal) would be that, along any
path, there is essentially a single packet waiting on each "bottleneck"
link between the source and the destination.
Any more packets in queues along the way would be (as you say, Fred)
harmful, because the end-to-end latency would be bigger than needed for
full utilization. And latency matters a lot.
In contrast, if there are fewer packets in flight, there is
underutilization, and adding another packet to the queue along the path
would make all users happier, until latency rises above that minimum.
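To put rough numbers on this (the figures below are purely illustrative,
not any particular link), here is a back-of-the-envelope sketch in Python:

    # Hypothetical bottleneck: 10 Mb/s, 50 ms base RTT, 1500-byte packets.
    link_rate_bps = 10e6
    base_rtt_s = 0.050
    packet_bits = 1500 * 8

    # Packets in flight needed to keep the bottleneck busy
    # (the bandwidth-delay product).
    bdp_packets = link_rate_bps * base_rtt_s / packet_bits
    print("packets in flight to fill the path: %.1f" % bdp_packets)

    # Every additional packet sitting in the bottleneck queue adds one
    # serialization time to everyone's delay, with no throughput gain.
    per_packet_ms = packet_bits / link_rate_bps * 1000
    for extra in (0, 10, 100):
        rtt_ms = base_rtt_s * 1000 + extra * per_packet_ms
        print("%4d extra queued packets -> RTT ~ %.1f ms" % (extra, rtt_ms))

Beyond roughly that bandwidth-delay product's worth of packets, extra
window only inflates everyone's RTT.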
So the control loop in each TCP sharing a path tries to "lock into" that
optimal state (or it should), using AIMD, triggered by the best
congestion signals it can get. Prefer non-loss congestion signalling
such as ECN, over RED, over queue-overflow-triggered packet dropping.
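In toy form, the loop is nothing more than the sketch below (not any
particular TCP variant; the point is only that an ECN mark and a drop
trigger the same backoff, but the mark costs no lost packet):

    # Toy AIMD update, applied once per RTT. ECN marks and losses are
    # treated as the same congestion signal here.
    def aimd_update(cwnd, congestion_signaled, mss=1.0, beta=0.5):
        if congestion_signaled:            # ECN-Echo seen, or loss detected
            return max(mss, cwnd * beta)   # multiplicative decrease
        return cwnd + mss                  # additive increase

    cwnd = 1.0
    signals = [False] * 10 + [True] + [False] * 5
    for rtt, marked in enumerate(signals):
        cwnd = aimd_update(cwnd, marked)
        print("RTT %2d: cwnd = %4.1f segments" % (rtt, cwnd))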
Shortening the signalling delay would suggest (and the literature bears
out) that "head drops" or "head marking" are better than "tail drops"
for minimizing latency, but the desire to eke out a few percent of
improved throughput for FTPs has argued for tail drops and long queues
on all output links. (The theory community's bias toward throughput
measures rather than latency measures is wrong, IMO.)
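The difference in signalling delay is easy to estimate (again with
made-up numbers): a mark or drop at the head of the queue is seen by the
receiver at once, while one at the tail waits behind the whole backlog.

    # Extra feedback delay of tail drop vs. head drop/mark: roughly one
    # full queue drain time. Hypothetical 10 Mb/s link, 1500-byte packets.
    link_rate_bps = 10e6
    packet_bits = 1500 * 8
    queue_len_pkts = 100          # a long, throughput-tuned queue

    drain_ms = queue_len_pkts * packet_bits / link_rate_bps * 1000
    print("tail drop signals congestion ~%.0f ms later than head drop"
          % drain_ms)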
What makes this complex is that during a flow's lifetime, many competing
flows may arise and die as "cross traffic", making any path unstable.
Increasing utilization under such probabilistic transients requires
longer queues, but longer queues lead to more latency and increased
jitter (higher moments of the delay statistics).
Good control response and stability are best achieved by minimizing
queueing along the path, so that the control loop responds quickly to
transient queue buildup.
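A toy simulation (all parameters invented) shows the direction of the
tradeoff: a bigger buffer absorbs more of each burst, but the mean delay
and its spread grow along with it.

    import random, statistics

    def simulate(buffer_pkts, slots=200_000, seed=1):
        """Slotted queue: one packet served per slot, bursty arrivals."""
        rng = random.Random(seed)
        q, delays = 0, []
        for _ in range(slots):
            # Bursty cross traffic: occasional 20-packet bursts on top
            # of a sub-unity average load (~0.94).
            arrivals = 20 if rng.random() < 0.02 else \
                       (1 if rng.random() < 0.55 else 0)
            for _ in range(arrivals):
                if q < buffer_pkts:       # tail drop beyond the buffer
                    q += 1
                    delays.append(q)      # this packet's delay, in slots
            if q:
                q -= 1                    # one departure per slot
        return statistics.mean(delays), statistics.pstdev(delays)

    for buf in (10, 100, 1000):
        mean_d, jitter = simulate(buf)
        print("buffer %5d pkts: mean delay %7.1f slots, "
              "jitter (std dev) %7.1f slots" % (buf, mean_d, jitter))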
Most traffic from Internet apps (the ones that need QoS to make users
happier) cares about end-to-end latency, jitter, or both, not maximal
throughput. Maximal throughput is what the operator cares about when its
users don't care about QoS; only bulk FTP users care about the last few
percent of optimal throughput versus minimizing latency/delay.
Fred Baker wrote:
> Which begs the question - why are we tuning to loss in the first
> place? Once you have filled the data path enough to achieve your "fair
> share" of the capacity, filling the queue more doesn't improve your
> speed and it hurts everyone around you. As your cwnd grows, your mean
> RTT grows with it so that the ratio of cwnd/rtt remains equal to the
> capacity of the bottleneck.
>
> Seems pointless and selfish, the kind of thing we discipline our
> children if they do.
>
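[A quick worked example of Fred's point, with made-up numbers: once cwnd
exceeds the bandwidth-delay product, the surplus just sits in the
bottleneck queue, RTT grows in lockstep, and cwnd/RTT stays pinned at
the link rate.]

    # Illustrative only: 10 Mb/s bottleneck, 50 ms base RTT, 1500-byte
    # packets.
    rate_pkts = 10e6 / (1500 * 8)      # ~833 packets/s
    base_rtt = 0.050
    bdp = rate_pkts * base_rtt         # ~42 packets fill the pipe

    for cwnd in (20, 42, 80, 160):
        queued = max(0.0, cwnd - bdp)  # surplus cwnd sits in the queue
        rtt = base_rtt + queued / rate_pkts
        print("cwnd %4d pkts: RTT %5.1f ms, cwnd/RTT = %3.0f pkts/s"
              % (cwnd, rtt * 1000, cwnd / rtt))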
> On Feb 19, 2009, at 7:07 PM, Injong Rhee wrote:
>
>> Perhaps I might add on this thread. Yes. I agree that it is not so
>> clear that we have a model for non-congestion related losses. The
>> motivation for this differentiation is, I guess, to disregard
>> non-congestion related losses for TCP window control. So the
>> motivation is valid. But maybe we should look at the problem from a
>> different perspective. Instead of trying to detect non-congestion
>> losses, why not try to detect congestion losses? Well..congestion
>> signals are definitely easy to detect because losses are typically
>> associated with some patterns of delays. So the scheme would be
>> "reduce the congestion window ONLY when it is certain with high
>> probability that losses are from congestion". This scheme would be
>> different from "reduce whenever any indication of congestion occurs".
>> Well my view could be too dangerous. But given that there are
>> protocols out there, e.g., DCCP, that react to congestion much more
>> slowly than TCP, this type of protocols may not be so bad...
>>
>>
>> ----- Original Message ----- From: "Fred Baker" <fred at cisco.com>
>> To: "David P. Reed" <dpreed at reed.com>
>> Cc: "end2end-interest list" <end2end-interest at postel.org>
>> Sent: Wednesday, February 11, 2009 5:07 PM
>> Subject: Re: [e2e] TCP Loss Differentiation
>>
>>
>>> Copying the specific communicants in this thread as my postings to
>>> end2end-interest require moderator approval (I guess I'm not an
>>> acceptable person for some reason, and the moderator has told me
>>> that he will not tell me what rule prevents me from posting
>>> without moderation).
>>>
>>> I think you're communicating just fine. I understood, and agreed
>>> with, your comment.
>>>
>>> I actually think that a more important model is not loss processes,
>>> which as you describe are both congestion-related and related to
>>> other underlying issues, but a combination of several underlying and
>>> fundamentally different kinds of processes. One is perhaps "delay
>>> processes" (of which loss is the extreme case and L2 retransmission
>>> is a partially-understood and poorly modeled contributor to).
>>> Another might be interference processes (such as radio interference
>>> in 802.11/802.16 networks) that cause end to end packet loss for
>>> other reasons. In mobile networks, it might be worthwhile to
>>> distinguish the processes of network change - from the perspective
>>> of an endpoint that is in motion, its route, and therefore its next
>>> hop, is constantly changing and might at times not exist.
>>>
>>> Looking at it from a TCP/SCTP perspective, we can only really
>>> discuss it as how we can best manage to use a certain share of the
>>> capacity the network provides, how much use is counterproductive,
>>> when to retransmit, and all that. But understanding the underlying
>>> issues will contribute heavily to that model.
>>>
>>> On Feb 11, 2009, at 7:20 AM, David P. Reed wrote:
>>>
>>>> I don't understand how what I wrote could be interpreted as "a
>>>> congestion-based loss process cannot be modeled or predicted".
>>>>
>>>> I was speaking about *non-congestion-based* "connectivity loss
>>>> related loss process", and I *said* that it is not a single-
>>>> parameter, memoryless loss process.
>>>>
>>>> I said nothing whatsoever about congestion-based loss processes,
>>>> having differentiated carefully the two types of loss (which
>>>> differentiation was what Detlef started this thread with).
>>>>
>>>> Clearly I am not communicating, despite using English and common
>>>> terms from systems modeling mathematics.
>>>>
>>>> Xai Xi wrote:
>>>>> are you saying that a congestion-based loss process cannot be
>>>>> modeled or predicted? a tool, badabing, from sigcomm'05, claims
>>>>> to be highly accurate in measuring end-to-end loss processes.
>>>>>
>>>>> David wrote:
>>>>>
>>>>>> A "loss process" would be a mathematically more sound term,
>>>>>> because it
>>>>> does not confuse> the listener into thinking that there is a
>>>>> simplistic, memoryless, one-parameter model that> can be
>>>>> "discovered" by TCP's control algorithms.
>>>>>
>>>>>> That said, I was encouraging a dichotomy where the world is far more
>>>>> complicated:
>>>>>> congestion drops vs. connectivity drops. One *might* be
>>>>> able to make much practical
>>>>>> headway by building a model and a theory of
>>>>> "connectivity drops".
>>>>>
>>>>>
>>>
>>>
>>
>
>