[e2e] High Packet Loss and TCP
Vishal Misra
misra at cs.columbia.edu
Thu May 1 15:31:28 PDT 2003
Take an idealization of TCP, additive increase-multiplicative decrease.
Numerous papers have shown that the relationship between loss probability
"p" and average window size "W" is
(W^2)*p = k
where k is some constant between 1.5 and 2 (the "square root p formula").
At loss rates above 10% (p > 0.1), the average window size drops down to 3
or below. Any packet loss then, in a real implementation of TCP, results
in (multiple) timeouts with high probability (the window is not large
enough to generate triple duplicate acks).
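As a quick illustration (not from the original post), here is a sketch of
the square-root-p relationship, solving W = sqrt(k/p) for a few loss rates
with k = 1.5, the low end of the constant's range:

```python
import math

def avg_window(p, k=1.5):
    """Average congestion window (in segments) from (W^2)*p = k."""
    return math.sqrt(k / p)

for p in (0.01, 0.05, 0.10, 0.25):
    print(f"p = {p:.2f}  ->  W ~ {avg_window(p):.1f} segments")
```

At p = 0.10 this gives W of roughly 3.9 segments, and at p = 0.25 roughly
2.4 — below the three in-flight segments needed after a loss to produce
triple duplicate acks, which is why timeouts take over.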
This is a handwaving explanation. The Padhye et al. paper does a detailed
performance analysis (where they show that the drop in performance is in
fact worse).
-Vishal
On Thu, 1 May 2003, Jonathan Stone wrote:
>
> My experience was that TCP collapses at around 25%-30%, in a very
> special sense of "collapse", namely that "HTTP pages never finish
> downloading, faster to kill the connection and start from scratch".
> I'd thought this was fairly well-known.
>
> Someone at Stanford once asked me why this phenomenon happened. I
> took packet traces from some sites in Europe traversing a badly
> overloaded link. A tcptrace-like tool showed packet loss rates were
> so high that fast recovery/fast retransmit never got 3 dupacks (due to
> drop, the receiver got at most two segments after a hole, so the
> sender got at most 2 dupacks.)
>
> At that point not only has throughput gone to hell, but the
> time-constant on whatever goodput can be gotten has shifted from (a
> constant factor of) network RTT to (some factor of) the slow-retransmit
> timeout. That's quite a different regime from the 3% to 5% loss
> over (for example) overloaded CDMA nets, where (with careful tuning)
> even soft-real-time flows can be quite doable.
>
--
http://www.cs.columbia.edu/~misra