[e2e] 10% packet loss stops TCP flow
Jonathan Stone
jonathan at dsg.stanford.edu
Fri Feb 25 16:08:04 PST 2005
In message <20050225230810.1244E86AE5 at mercury.lcs.mit.edu>,
Noel Chiappa writes:
> > From: Jonathan Stone <jonathan at dsg.stanford.edu>
> > Are you deliberately tempting your readers to fall into the fallacy of
> > assuming packet loss is statistically independent, and thus of assuming
> > Poisson behaviour of packets in the Internet?
>
>Well, it depends on what's causing the packet loss, doesn't it?
Yes, very much so. And calculating a back-of-the-envelope
approximation (like Craig's) is clearly much better than asking here.
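
(For anyone following along, here is the independent-loss arithmetic in
a few lines of Python. It is purely illustrative, not Craig's actual
numbers: the loss rate, the retransmission limit, and the transfer
sizes below are values I'm plugging in for the sake of the example.)

    # Under independent loss with probability p, a segment "dies" if its
    # original transmission and all R retransmissions are lost.
    def p_segment_dies(p, retransmits):
        return p ** (retransmits + 1)

    # Probability that at least one of n segments dies, i.e. that an
    # n-segment transfer stalls somewhere along the way.
    def p_transfer_stalls(p, retransmits, n_segments):
        return 1.0 - (1.0 - p_segment_dies(p, retransmits)) ** n_segments

    if __name__ == "__main__":
        p, r = 0.10, 6        # 10% loss, 6 retransmissions (assumed)
        for n in (100, 10000, 1000000):
            print(n, p_transfer_stalls(p, r, n))
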
>E.g. if it's
>congestion, yes there is a chance it will be non-random (although it will
>depend on what drop algorithm the switches/routers are using). However, if
>it's a lossy radio link, it might be fairly well random.
Maybe, maybe not. I used to use RF loss with Metricom radios as a
very similar example. But Stuart Cheshire was keen to remind me that
in the environments where we saw loss (think lecture halls full of
students with laptops, and a growing fraction of wireless users), those
lossy RF
links may well be lossy due to congestion. And (at least with
Metricoms), not all radios are equal. Nor all lecture halls.
I've recently been debugging a TCP SACK implementation, involving
several hours staring at tcptrace graphs from my home LAN. I have two
802.11 cards, one of which shows significantly higher drops than the
other. Wifi-to-wifi drop looks persistently bursty, and the rate of
"hardware duplicates" (same IP-id, same TCP segment size and sequence
number) is over 1%, which I find disturbingly high. I have no idea
how representative that is.
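
(If anyone wants to repeat the check, the duplicate count is nothing
fancier than keying packets on IP-id, TCP sequence number, and payload
length. A rough sketch with scapy follows; the capture filename is made
up, and this is a reconstruction of the idea, not the script I actually
used:)

    # Count "hardware duplicates": packets sharing the same IP-id, TCP
    # sequence number, and TCP payload length.
    from collections import Counter
    from scapy.all import rdpcap, IP, TCP

    packets = rdpcap("wifi-capture.pcap")   # hypothetical capture file
    seen = Counter()
    total = 0
    for pkt in packets:
        if pkt.haslayer(IP) and pkt.haslayer(TCP):
            total += 1
            seen[(pkt[IP].id, pkt[TCP].seq, len(pkt[TCP].payload))] += 1

    dups = sum(count - 1 for count in seen.values() if count > 1)
    print("duplicate rate: %.2f%%" % (100.0 * dups / max(total, 1)))
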
>May I also note that Pc calculated by the simplistic formula I point toward is
>actually something of an *upper* bound, and any deviation away from perfect
>randomness in packet dropping will *reduce* it. (If the losses are not evenly
>randomly distributed, then the probability of the loss of the T retransmission
>of a given packet needed to kill the connection is even *higher* than the Pp^T
>you get with a random model, no?)
I think it depends. Reduce in aggregate, when summed over all users?
I'd buy that. But there could well be some poor soul who consistently
loses to other users, due to RF/building geometry, or (FM) capture
effect, or whatever other reasons apply. And those poor souls can be
consistently worse off than they would be with statistically-independent
loss, rather like my former coax neighbour whose Ethernet card
persistently garbled outbound packets.
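
(To put a toy number on the "poor soul" intuition: the sketch below
compares a two-state Gilbert-Elliott channel, tuned to roughly the same
10% long-run loss rate, against independent loss, and asks how often a
segment plus its next few retransmissions are all lost. All the channel
parameters are invented, and it crudely treats retransmissions as the
channel's very next transmissions, ignoring RTO backoff, so take it as
illustration only. With these made-up parameters the bursty channel's
dead-segment probability comes out well above the independent-loss
figure.)

    import random

    # Two-state Gilbert-Elliott channel: a "good" state that rarely
    # drops and a "bad" state that drops heavily. Parameters are
    # invented; they give a long-run average loss rate of roughly 10%.
    P_GOOD_TO_BAD = 0.02
    P_BAD_TO_GOOD = 0.10
    LOSS_GOOD = 0.01
    LOSS_BAD = 0.55

    def gilbert_losses(n, rng):
        # Return a list of n booleans, True where a transmission is lost.
        bad = False
        out = []
        for _ in range(n):
            if bad and rng.random() < P_BAD_TO_GOOD:
                bad = False
            elif not bad and rng.random() < P_GOOD_TO_BAD:
                bad = True
            out.append(rng.random() < (LOSS_BAD if bad else LOSS_GOOD))
        return out

    def frac_dead_runs(losses, tries):
        # Fraction of positions where a segment and all its retries
        # (modelled crudely as the next 'tries' transmissions) are lost.
        n = len(losses) - tries
        return sum(all(losses[i:i + tries]) for i in range(n)) / n

    rng = random.Random(1)
    tries = 4                  # original send plus 3 retransmissions
    losses = gilbert_losses(2000000, rng)
    avg = sum(losses) / len(losses)
    print("average loss rate      :", avg)
    print("bursty  P(dead segment):", frac_dead_runs(losses, tries))
    print("independent avg**tries :", avg ** tries)
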
If backbone utilization is really on the order of 10%, I wonder what
fraction of aggregate end-to-end drop is due to domestic or office
wireless congestion, or ether-over-cable-TV adoption rates (some might
call it oversubscription); and what those drop distributions look like.