[e2e] First rule of networking: don't make end-to-end
promises you can't keep.
David P. Reed
dpreed at reed.com
Fri Apr 23 16:45:19 PDT 2004
At 01:38 PM 4/23/2004, Naidu, Venkata wrote:
> I got exactly the opposite supportive argument when I read
> RFC 2884 (http://www.ietf.org/rfc/rfc2884.txt). This RFC
> clearly states that the explicit congestion signal is efficient
> in most of the situations.
That happens to be exactly what I was saying, *not* the opposite. Note that
ECN (which some called EARLY congestion notification, because it signals
congestion at a lower threshold than buffer exhaustion) is not what Alex
Cannara is referring to - a signal that a packet was "dropped because of
congestion". ECN signals impending congestion before things get bad enough
that packets must be dropped, thus shortening the end-to-end control loop
when the signal succeeds in getting through.
But ECN's effectiveness as a control system depends on the fallback that
packets *will* be dropped if congestion builds faster than ECN can slow the
source, or if ECN packets are lost to errors - and on the fact that, when
packets are dropped, no floods of congestion-amplifying packets are
delivered to the target or the source. If you have only ECN, but don't
allow packets to be dropped on congestion, the network will still go into
congestion collapse.
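To make the distinction concrete, here is a minimal Python sketch of the
queue behavior described above - made-up thresholds and a toy Packet class,
not any real router's queue discipline: mark ECN-capable packets when the
queue crosses an early threshold, and fall back to dropping when the buffer
actually fills.

    from collections import deque
    from dataclasses import dataclass

    @dataclass
    class Packet:
        ect: bool = True     # sender negotiated an ECN-capable transport
        ce: bool = False     # "congestion experienced" mark

    class EcnRedQueue:
        def __init__(self, early_threshold=50, capacity=100):
            self.q = deque()
            self.early_threshold = early_threshold  # mark before exhaustion
            self.capacity = capacity                # hard buffer limit

        def enqueue(self, pkt):
            if len(self.q) >= self.capacity:
                return False        # buffer exhausted: drop, the fallback signal
            if len(self.q) >= self.early_threshold:
                if pkt.ect:
                    pkt.ce = True   # ECN: flag impending congestion early
                else:
                    return False    # non-ECN flow: drop early instead
            self.q.append(pkt)
            return True             # packet queued (possibly marked)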
ECN (and RED, and head-dropping) are elegant refinements of the basic rule
that packet drop = congestion ==> backoff.
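As a rough sketch of that sender-side rule - classic AIMD constants, not a
full TCP implementation - the response to a drop or to a CE mark is the
same backoff:

    def update_cwnd(cwnd, lost, ce_marked):
        """One congestion-control step: halve on any congestion signal,
        otherwise grow by one segment per round trip."""
        if lost or ce_marked:
            return max(1.0, cwnd / 2)   # multiplicative decrease
        return cwnd + 1.0               # additive increase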
Cannara's basic idea occurs to many sophomores - that packet errors should
cause the source to send with a *larger* window (that's why he keeps saying
it is bad to "back off" when a packet is lost). It's a strange idea, based
on the theory that keeping the network buffers as full as possible is a
"good thing" for goodput - even though doing so pessimizes end-to-end
latency through queueing delay, and thus lengthens the effective delay
through the control loop. End-to-end performance might go up if
you have errors but absolutely no congestion. But if you have congestion,
too, you don't get much value out of filling up the congested path - the
path is already delivering all it can.
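To put rough (and entirely made-up) numbers on the latency cost of a
standing queue at a saturated bottleneck:

    link_rate = 10e6 / 8        # 10 Mbit/s bottleneck, in bytes per second
    base_rtt = 0.050            # 50 ms propagation round trip
    queued_bytes = 64 * 1500    # 64 full-size packets sitting in the buffer

    queueing_delay = queued_bytes / link_rate    # ~0.077 s of added delay
    effective_rtt = base_rtt + queueing_delay    # control loop now ~0.127 s

    print("queueing delay: %.0f ms" % (queueing_delay * 1000))
    print("effective RTT:  %.0f ms" % (effective_rtt * 1000))
    # Goodput is still capped at link_rate; only the feedback loop got slower.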
As I mentioned before, it's simple to see (based on understanding how
pipelining works in pipelined processes) that the optimum steady-state
operating point for a network would occur if the slowest link on every
active end-to-end path has exactly one packet in its buffer ready to go
when it completes the packet in progress. That means that on every
end-to-end flow, there is an end-to-end window of at most one packet per
hop, and less than that if there is congestion. The problem is that the
TCP window also includes buffering for the target application, which may be
on an overloaded system, which wants to "batch" lots of bits when it is
ready to take them, and sometimes buffering for the source application (if
the API is designed so that it offers to suck up lots of bits from the
source so that they are ready to go while the source process is put to
sleep). Since the window includes the source and target buffering, it's
tempting to let that stuff fill up the network switch and router buffers -
where it causes congestion and screws up the control loop length so that
the network gets out of control.
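A rough sketch of that sizing argument, with assumed numbers: the window a
path actually needs is roughly the bandwidth-delay product of its
bottleneck, and anything granted on top of that for endpoint batching has
nowhere to live but the network's buffers.

    mss = 1500                  # bytes per packet
    bottleneck_rate = 10e6 / 8  # bytes per second on the slowest link
    path_rtt = 0.050            # seconds, with no standing queue

    # Roughly the window that keeps the slowest link busy, with one packet
    # ready to go as each one finishes: the bandwidth-delay product.
    bdp_packets = bottleneck_rate * path_rtt / mss       # ~42 packets

    # Extra window granted for application-side batching ends up as a
    # standing queue in switch and router buffers once the path is full.
    app_batch_packets = 200
    window_packets = bdp_packets + app_batch_packets
    excess_in_queues = window_packets - bdp_packets      # ~200 packets queued

    print("BDP: ~%.0f packets; excess queued in network: ~%d packets"
          % (bdp_packets, excess_in_queues))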