[e2e] How many TCP flows fit in the Internet?
Matt Mathis
mattmathis at google.com
Sun Mar 31 11:08:39 PDT 2013
You may have overlooked one additional important detail:
Linux TCP ignores the requirement in RFC 2018 that the SACK scoreboard
be cleared on a timeout. As a consequence, Linux TCP can attain total
goodput=throughput for a single forward path bottleneck with average
window size way below one segment. This is because it will not
retransmit segments that are delivered. This region has important
theoretical interest (it evades congestion collapse!) but is
irrelevant from an operational perspective (nobody wants to use a
network that is so congested that the average available capacity is
measured in bits per second).
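
To make that concrete, here is a minimal toy sketch in Python (not Linux
code; the function, the segment numbering and the loss pattern are all
invented for illustration) of why keeping the scoreboard across a timeout
means only the genuinely missing data gets resent:

# Toy model: after an RTO, which segments get retransmitted if the SACK
# scoreboard is cleared (as RFC 2018 requires) versus retained (as Linux does)?
def segments_to_retransmit(snd_una, snd_nxt, sacked, clear_scoreboard):
    """Segments the sender would (re)send after a retransmission timeout."""
    scoreboard = set() if clear_scoreboard else set(sacked)
    return [seq for seq in range(snd_una, snd_nxt) if seq not in scoreboard]

# Segments 0..9 outstanding; 0 was lost, 1..9 were delivered and SACKed
# before the timer fired.
sacked = set(range(1, 10))
print(segments_to_retransmit(0, 10, sacked, clear_scoreboard=True))   # [0, 1, ..., 9]
print(segments_to_retransmit(0, 10, sacked, clear_scoreboard=False))  # [0] only

With the scoreboard cleared, nine of the ten post-timeout transmissions
duplicate data the receiver already has; with it retained, every
transmission is useful, which is how goodput can stay equal to throughput
even when the average window is far below one segment.
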
There has been some work on multiple forward bottlenecks, which can
exhibit congestion collapse (e.g. dead vs zombie packets). I
believe that these cases are well understood in the "decongestion
control" work, etc.
I don't know how well the bidirectional high-loss case has been
studied, but it feels mathematically straightforward. It does matter
whether TCP loss probing is in effect (see Nandita's Internet Draft).
There is an erratum against RFC 2018 with the relevant details.
Thanks,
--MM--
The best way to predict the future is to create it. - Alan Kay
Privacy matters! We know from recent events that people are using our
services to speak in defiance of unjust governments. We treat
privacy and security as matters of life and death, because for some
users, they are.
On Sun, Mar 31, 2013 at 4:10 AM, Detlef Bosau <detlef.bosau at web.de> wrote:
> I got some criticism on my post yesterday, so I think I should elaborate at
> least on one point.
>
> However, a general remark at the outset. I have a strong focus on TCP
> here. Of course, TCP is neither the only protocol in the world nor the only
> one that may cause grief in some circumstances. The distinguishing property
> of TCP is responsiveness: TCP reacts to packet loss, which it takes as a
> load indicator, by reducing the load offered to the network. In TCP,
> responsiveness is (mainly) achieved by protocol means, while for other
> protocols, e.g. voice streaming, responsiveness is often left to the
> application. Hence it makes sense to look at TCP, understand how
> congestion, buffer bloat etc. can be handled, and take this as a model for
> other protocols.
>
> Back to the point.
>
> On 30.03.2013 15:29, Detlef Bosau wrote:
>>
>> ...
>>
>> Among all strategies of congestion control, in VJCC I miss a real
>> treatment of the two by far most obvious ones:
>> In case of congestion,
>> 1. reduce the rate of existing flows. (In VJCC and actually existing TCP
>> implementations we can hardly reduce a flow's rate below that of a CWND of
>> 1 MSS. Except perhaps by employing a pacing scheme, but I am not yet clear
>> about the possible consequences.)
>
>
> Yesterday, I was told TCP can well reduce its rate below that of a CWND of
> 1 MSS. Now, to my understanding this is not possible with the pure sliding
> window mechanism itself. A TCP socket must not send anything other than a
> complete TCP segment, with or without payload. It cannot send, say, "two
> bytes only". So a congestion window of, say, 3 bytes or less wouldn't make
> any sense.
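>
> To illustrate with made-up numbers (a 1460-byte MSS and a 20 ms RTT, both
> invented): the lowest rate the sliding window alone can express is one
> full segment per round trip, roughly MSS/RTT. In Python:
>
> MSS = 1460 * 8          # bits per segment (assumed)
> RTT = 0.020             # seconds (assumed)
> print(MSS / RTT)        # ~584 kbit/s; the window cannot go below 1 MSS,
>                         # so any further reduction must come from timeouts
>                         # (see below) or from pacing out the single segment.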
>
> Of course, the GOODPUT (and that's what I was pointed to yesterday) can be
> arbitrarily low: if a packet is not acknowledged in time, a sending socket
> does its usual timeout handling, including a timer backoff
>
> RTO *= 2;
>
> So, when a switch along the path cannot forward a packet due to insufficient
> queue capacity, the packet remains unacknowledged and is hence retransmitted.
> While the switch relieves the network of the "overload" imposed by this
> packet (the packet is dropped), the sender will repeat this packet over and
> over, until a user-defined timeout is exceeded or the packet is eventually
> acknowledged.
>
> This reduces GOODPUT.
>
> AND
>
> it causes additional network load through the retransmissions.
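>
> To put toy numbers to this (a 1460-byte MSS and an initial RTO of 1 s, both
> invented; real stacks clamp the RTO between a minimum and a maximum and
> eventually give up, which I ignore here):
>
> # Toy calculation of what the exponential timer backoff does when the
> # same segment keeps being lost.
> MSS = 1460 * 8              # bits per segment (assumed)
> RTO = 1.0                   # initial retransmission timeout in seconds (assumed)
>
> def backoff(consecutive_losses):
>     """Seconds spent waiting out timeouts before the copy that finally
>     gets through is sent, with the timer doubled after every loss (RTO *= 2)."""
>     waited, rto = 0.0, RTO
>     for _ in range(consecutive_losses):
>         waited += rto
>         rto *= 2
>     return waited
>
> for losses in range(1, 7):
>     t = backoff(losses)
>     # goodput over the stalled interval, plus the wasted (dropped) copies:
>     print(f"losses={losses}: {t:4.0f} s stalled, "
>           f"goodput ~{MSS / t:7.0f} bit/s, {losses} copies wasted")
>
> After six consecutive losses the sender has stalled for 63 seconds and the
> goodput over that interval is down to a few hundred bits per second, while
> six copies of the segment have been injected into the network for nothing.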
>
> So, we have the classical choice between Scylla and Charybdis here. Either
> we drop packets and cause retransmissions, or we add buffer space to the
> switch and allow for buffer bloat. In the first case, the perceived goodput
> is reduced by timer backoff; in the second case the rate CWND/RTT is
> reduced by inflating the RTT with queueing latency. Neither of these is a
> reasonable reduction of THROUGHPUT that relieves the network of load.
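>
> To put toy numbers to both horns of the dilemma (invented values: a
> 10 Mbit/s bottleneck, a 20 ms base RTT, a 100-segment standing queue; a
> sketch, not a measurement):
>
> # The sending rate of a window-limited flow is approximately CWND/RTT.
> MSS        = 1460 * 8        # bits per segment (assumed)
> BASE_RTT   = 0.020           # seconds (assumed)
> BOTTLENECK = 10e6            # bit/s (assumed)
>
> def rate(cwnd_segments, rtt):
>     return cwnd_segments * MSS / rtt     # window-limited sending rate
>
> # Case 1: shallow buffer, packet dropped. VJCC halves CWND (and repeated
> # timeouts additionally back off the timer, see above).
> print(rate(10, BASE_RTT) / 1e6)          # ~5.8 Mbit/s before the loss
> print(rate(5, BASE_RTT) / 1e6)           # ~2.9 Mbit/s after halving
>
> # Case 2: deep buffer, nothing dropped. CWND keeps growing, a standing
> # queue builds, the RTT inflates, and the rate merely saturates the
> # bottleneck while latency explodes (buffer bloat).
> pipe  = BOTTLENECK * BASE_RTT / MSS      # ~17 segments fill the pipe
> queue = 100                              # standing queue, in segments
> rtt_bloated = BASE_RTT + queue * MSS / BOTTLENECK
> print(rate(pipe + queue, rtt_bloated) / 1e6, rtt_bloated)  # 10 Mbit/s at ~137 ms RTT
>
> In the first case the rate collapses well below the available capacity; in
> the second it stays at the bottleneck rate, but only because the RTT has
> been inflated by the standing queue.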
>
> As I said above, I don't discuss ping, voice streams or online games here;
> resource sharing, load control and congestion control in non-responsive
> protocols are left to the application.
>
>
> --
> ------------------------------------------------------------------
> Detlef Bosau
> Galileistraße 30
> 70565 Stuttgart Tel.: +49 711 5208031
> mobile: +49 172 6819937
> skype: detlef.bosau
> ICQ: 566129673
> detlef.bosau at web.de http://www.detlef-bosau.de
>