[e2e] Is a non-TCP solution dead?
Reiner Ludwig
Reiner.Ludwig at eed.ericsson.se
Tue Apr 1 01:55:59 PST 2003
There is so much *myth* about how "bad" TCP performs over wireless, and I
get so tired of reviewing papers that start with "... wireless links are
characterized by high packet loss rates caused by transmission errors ...".
Having worked on this subject for a couple of years, we have learned that
TCP performs *great* most of the time (yes, rare corner cases exist) *if*
the wireless subnetwork is designed right. On this subject, I can only
recommend reading draft-ietf-pilc-link-design-13.txt, which is a
soon-to-be-BCP RFC. RFC3366 provides additional detail on the subject.
The globally deployed (E)GPRS- and WCDMA-based wide-area wireless access
networks have been (mostly) designed with these recommendations in mind.
There are no problems with transmission errors here. However, TCP over
WCDMA with e2e RTTs of 250-500 ms and bit rates of 16-384 kb/s is close to
the regime of high bandwidth-delay product (BxD) TCP connections. Hence, we
can see slow-start and short file transfers causing lower-than-possible
link utilization and lower-than-possible e2e throughput. That, and only
that, might be a motivation to deploy split-proxies in such a wireless
subnetwork.
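As a rough illustration of why these connections sit near the high-BxD regime, here is a back-of-the-envelope sketch using the WCDMA figures above (the 1500-byte packet size is my assumption, not from this post):

```python
# Rough bandwidth-delay product (BxD) estimate for the WCDMA figures
# above. The 1500-byte MTU is an assumption for illustration only.

def bdp(rate_kbps, rtt_ms, mtu_bytes=1500):
    """Return the bandwidth-delay product in bytes and in MTU-sized packets."""
    bdp_bytes = (rate_kbps * 1000 / 8) * (rtt_ms / 1000)
    return bdp_bytes, bdp_bytes / mtu_bytes

for rate, rtt in [(16, 250), (384, 500)]:
    nbytes, pkts = bdp(rate, rtt)
    print(f"{rate} kb/s, {rtt} ms RTT -> BxD = {nbytes:.0f} bytes (~{pkts:.1f} packets)")
```

At 384 kb/s and 500 ms this works out to 24000 bytes, i.e. roughly 16 full-sized packets; slow-start needs several RTTs (each half a second here) to open the window that far, so a short file transfer can finish before the link is ever fully utilized.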
More comments to Mark inline ...
At 21:47 31.03.2003, Mark Handley wrote:
>This raises a higher level issue: to what extent is a wireless link
>error a sign of congestion?
>
>Probably it isn't congestion in the sense of overflowing a router
>queue. However, if the link layer is well-designed, and attempts
>limited retransmissions (or similar techniques), and the packet still
>doesn't go through, then at that moment in time the link has zero
>bandwidth. Thus this is a form of congestion.
>
>[...]
>
>So perhaps it becomes an issue of timescales. If link
>congestion/corruption is coming and going on timescales a lot less
>than an end-to-end RTT, then using *any* end-to-end congestion control
>is going to be pretty inefficient (unless you get some really
>predictable average conditions). If link corruption is coming and
>going on timescales of an RTT or greater, then theoretically an
>end-to-end congestion control mechanism can in principle do OK.
Exactly! If you follow draft-ietf-pilc-link-design-13.txt and RFC3366, the
use of persistent link layer ARQ will translate transmission errors on the
wireless link into congestion, i.e., queueing delays at the wireless link.
For most *wide-area* wireless links, the packet transmission delays across
the link often dominate the e2e RTT. Thus, the queueing delays caused by
transmission errors can produce large and sudden RTT spikes on the order
of the e2e RTT. But as you say, TCP's congestion control loop is mostly
doing fine here; a number of publications confirm that. An open issue,
though, is the spurious timeouts that such RTT spikes can cause, but that
is being addressed in the IETF (TSV WG) with the Eifel response algorithm.
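For readers unfamiliar with the problem: a spurious timeout can be detected with TCP timestamps, which is the core idea behind Eifel detection. A minimal sketch (the function and variable names are mine, illustrative only, not the actual TCP implementation):

```python
# Sketch of Eifel-style spurious timeout detection using TCP timestamps.
# Names and structure are illustrative, not real TCP stack code.

def timeout_was_spurious(ts_echo_in_ack, ts_of_retransmission):
    """A timeout was spurious if the first ACK arriving after the
    retransmission echoes a timestamp *older* than the retransmission's
    own timestamp: that ACK must have been triggered by the original
    (merely delayed) segment, not by the retransmission."""
    return ts_echo_in_ack < ts_of_retransmission

# Example: segment sent with timestamp 100; link-layer ARQ delays it;
# the RTO fires and the segment is retransmitted with timestamp 400.
# The ACK that then arrives echoes 100 -> the original got through,
# so the sender should respond with Eifel rather than slow-start.
print(timeout_was_spurious(ts_echo_in_ack=100, ts_of_retransmission=400))
```

On a spurious timeout the retransmission was unnecessary, and the Eifel response restores the congestion state instead of collapsing the window as for a genuine loss.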
>[...] whether TCP does OK depends a lot on buffering and
>queue management at the wireless hop.
Exactly! And large queues, as suggested by someone before, are certainly
not the answer. Instead, the queue size (or the AQM thresholds) should be
dynamically adapted as the capacity of the wireless link changes, i.e., as
per-mobile-host bit rates are switched up and down (on the timescales of an
e2e RTT). This approach leads to high link utilization, high e2e
throughput, and low e2e delays. We have recently presented a paper on that
subject:
Mats Sågfors, Reiner Ludwig, Michael Meyer and Janne Peisa, "Queue
Management for TCP Traffic over 3G Links", IEEE WCNC 2003.
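The adaptation described above can be sketched as tying the queue limit to the link's current bit rate. This is only a rule-of-thumb illustration; the BxD-sized buffer, the target RTT, and the function names are my assumptions, not details from the WCNC paper:

```python
# Sketch: adapt the queue limit to the wireless link's current capacity.
# Rule of thumb assumed here: about one bandwidth-delay product of
# buffering. The factor and the RTT value are illustrative assumptions.

def queue_limit_bytes(link_rate_kbps, target_rtt_ms=300, bdp_factor=1.0):
    """Queue limit = factor * rate * RTT. The limit shrinks when the
    per-mobile bit rate is switched down, so queued data never adds
    much more than one RTT of extra delay."""
    return bdp_factor * (link_rate_kbps * 1000 / 8) * (target_rtt_ms / 1000)

# When the radio network switches a mobile from 384 down to 64 kb/s,
# the queue limit drops proportionally instead of letting delay balloon:
print(queue_limit_bytes(384))  # 14400.0 bytes
print(queue_limit_bytes(64))   # 2400.0 bytes
```

With a static queue sized for the highest rate, a rate switch-down would turn the same amount of buffered data into several seconds of queueing delay; scaling the limit with the rate keeps the delay bounded.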
///Reiner