[e2e] TCP "experiments"
Matt Mathis
mattmathis at google.com
Sat Jul 27 19:20:37 PDT 2013
The real issue is the diversity of implementations in the Internet that
claim to be standard IP, TCP, and NAT, but contain undocumented "features".
No level of simulation has any hope of predicting how well a new protocol
feature or congestion control algorithm will actually function in the real
Internet - you have to measure it.
Furthermore: given that Google gets most of its revenue from clicks, how
much might it cost us to "deploy" a protocol feature that caused a 0.01%
failure rate? If you were Google management, how large a sample size
would you want before you would be willing to actually deploy something
globally?
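To make the sample-size question concrete, here is a rough back-of-the-
envelope sketch, assuming a hypothetical 1% baseline failure rate and the
standard two-proportion z-test formula (illustrative numbers only, not
Google's actual figures):

    # Rough sample-size sketch: how many sessions per arm are needed to
    # detect an absolute 0.01% increase in failure rate over an assumed
    # 1% baseline, at 95% confidence and 80% power.
    from statistics import NormalDist

    def samples_per_arm(p_baseline, delta, alpha=0.05, power=0.80):
        z = NormalDist()
        z_alpha = z.inv_cdf(1 - alpha / 2)   # two-sided test
        z_beta = z.inv_cdf(power)
        p1, p2 = p_baseline, p_baseline + delta
        variance = p1 * (1 - p1) + p2 * (1 - p2)
        return (z_alpha + z_beta) ** 2 * variance / delta ** 2

    print(f"{samples_per_arm(0.01, 1e-4):.2e} sessions per arm")
    # => roughly 1.6e7, i.e. tens of millions of sessions before a
    #    0.01% effect is even statistically visible, let alone safe
    #    to ship.
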
Thanks,
--MM--
The best way to predict the future is to create it. - Alan Kay
Privacy matters! We know from recent events that people are using our
services to speak in defiance of unjust governments. We treat privacy and
security as matters of life and death, because for some users, they are.
On Sat, Jul 27, 2013 at 5:33 PM, Lachlan Andrew <lachlan.andrew at gmail.com> wrote:
> Greetings John,
>
> On 28 July 2013 04:36, John Day <jeanjour at comcast.net> wrote:
> > One never does experiments with a production network.
> >
> > An arbitrary network of several hundred nodes
> > or even a few thousand is not that big a deal.
>
> You are absolutely right that testbed experiments should be performed
> before "live" experiments. However, it is not so much the size of the
> network as the mix of applications running on it that makes the test
> representative. It is still very difficult to perform a test with a
> few thousand human users all doing their thing. That means that live
> experiments still have a place.
>
> Of course, that doesn't excuse unmonitored deployments such as occurred
> when Linux started using BIC as the default. To my mind, the solution
> would be for the IETF to provide more practical guidance on how to
> perform limited-scale, monitored tests on the real Internet. The
> process of getting a protocol "approved", even as an experimental RFC,
> is far too cumbersome for most researchers, especially since there is
> no way to police the use of non-approved protocols. The IETF will be
> most relevant if its processes reflect its power. We (or at least I)
> want the Internet to be inherited by those who try to play by the
> rules rather than those who flout them, but if the only way to make
> timely progress is by breaking the rules, then we won't achieve that
> (as we saw with CUBIC and NATs). Getting the balance right is
> difficult, but important.
>
> $0.02,
> Lachlan
>
> --
> Lachlan Andrew Centre for Advanced Internet Architectures (CAIA)
> Swinburne University of Technology, Melbourne, Australia
> <http://caia.swin.edu.au/cv/landrew>
> Ph +61 3 9214 4837
>