[e2e] Google seeks to tweak TCP
Daniel Havey
dhavey at yahoo.com
Mon Feb 6 11:29:41 PST 2012
> > Hmmm, not really. If a service provider drops non-TCP packets
> > then my alternative to TCP will never have a chance to get off
> > the ground. I could build it, but what is the point if nobody
> > will use it?
> >
> > I believe that any new solution must not only play nice with
> > TCP, but be indistinguishable from TCP. Otherwise the packets
> > may be dropped.
> >
>
> However, this is implementation and development and not
> primarily research.
>
IMHO, a complete research work would include the possibility of implementation, even though the process may be long and arduous.
> >> And whenever people are excited by simple recipes which save
> >> the world, we always should keep in mind RFC 1925.
> >>> (6) It is easier to move a problem around (for example, by
> >>> moving the problem to a different part of the overall network
> >>> architecture) than it is to solve it.
> > This is getting interesting. So the goal of Cubic (from the
> > paper) is to provide more RTT fairness. It does this.
>
> Oh, I had better not look at the Cubic paper. Why should we
> pursue "RTT fairness"? Particularly when wireless networks are
> included, this is no longer RTT fairness but fate sharing instead.
>
> > I don't think that queuing delay is the only cause of large
> > RTTs. Sometimes a link is just slow and sketchy. Happens all the
> > time. Maybe 802.11n has negotiated a slow rate or there are lots
> > of retransmissions. There will probably be a lot of packets in
> > the queue because of this, but the link is slow and that is why
> > the RTTs are large. I'm thinking of projects like "Wireless
> > Africa" where links are slow and lossy.
>
> In my experience, it is difficult to sell lossy links in papers.
> However, being lossy and being slow are often just two sides of
> the same mountain.
>
True, because the MAC will twist itself into a pretzel before allowing a packet to drop.
At the transport layer we will experience these losses as increased delay. We shouldn't ignore them because we can feel their effects, even if we don't see the actual loss.
As conditions become poor, you will see the actual loss combined with the delay from trying to prevent that loss.
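
To make that concrete, here is a toy back-of-the-envelope model (my own sketch, nothing measured; the retry limit and per-attempt time are assumed numbers). An 802.11-style MAC retries each lost frame up to some limit, so the transport layer mostly sees extra delay; residual loss only leaks through once the per-attempt loss rate gets really bad:

# Toy model -- assumed numbers, just to illustrate the point above:
# the MAC retries each lost frame, so TCP mostly sees extra delay;
# residual loss only appears when the channel gets really bad.

RETRY_LIMIT = 7        # assumed MAC retry limit
T_ATTEMPT_MS = 2.0     # assumed time per transmission attempt (ms)

def transport_view(p):
    """Expected extra delay (ms) and residual loss for per-attempt loss p."""
    # Expected number of attempts, truncated at RETRY_LIMIT + 1 tries.
    attempts = sum((k + 1) * (p ** k) * (1 - p)
                   for k in range(RETRY_LIMIT + 1))
    attempts += (RETRY_LIMIT + 1) * p ** (RETRY_LIMIT + 1)  # all tries fail
    residual_loss = p ** (RETRY_LIMIT + 1)
    return attempts * T_ATTEMPT_MS, residual_loss

for p in (0.01, 0.1, 0.3, 0.5, 0.7):
    delay, loss = transport_view(p)
    print("p=%.2f: extra delay ~%5.1f ms, loss seen by TCP ~%.4f"
          % (p, delay, loss))

With these made-up numbers, even a 50% per-attempt frame loss rate shows up to TCP as well under one percent loss, just with the per-frame delay roughly doubled (and, in practice, queues backing up behind the retries).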
IMHO, these packets are probably not worth so much trouble. Reliability is expensive, and 100% reliability is even more expensive. But this is the viewpoint of a person who just wants to stream video. I don't really need "all" of the packets, just an adequate number of them, and a few losses are really not a big deal for my applications.
It sounds like I need a non-TCP solution, but see above. I just take it as a starting point for my research that the transport must be TCP. Without this as a starting point, the work will not be "implementable".
>
> > Such links have difficulty even reaching their tiny capacity
> > with Reno. They do much better with Cubic.
>
> What's the very reason for this behaviour? Is it because
> Reno cannot deal well with losses?
>
Haha! Yeah, good point. I don't actually know. My experiments just used Cubic as a baseline because it is the default.
I suspect that Reno would have more difficulty with false congestion signals than Cubic, provided the false signals occurred on a time scale larger than the amount of time required for Cubic to return to its probing phase. Otherwise both protocols would be toast.
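
To get a rough sense of the time scales (my own sketch, not from my experiments; the W_max and RTT values are made-up, and the constants are the commonly quoted Cubic defaults): after a congestion signal, Reno needs roughly W_max/2 round trips to regain its old window, while Cubic regains it after K seconds regardless of the RTT.

# Back-of-the-envelope comparison -- assumed values throughout.
# Reno: after a loss, cwnd drops to W_max/2 and grows ~1 packet per RTT.
# Cubic: W(t) = C*(t - K)^3 + W_max, so the window is back at W_max when
# t = K = ((W_max * (1 - BETA)) / C) ** (1/3) seconds, regardless of RTT.

C = 0.4      # Cubic scaling constant (common default)
BETA = 0.7   # Cubic multiplicative decrease: window drops to BETA * W_max

def reno_recovery_s(w_max_pkts, rtt_s):
    return (w_max_pkts / 2.0) * rtt_s      # ~W_max/2 RTTs to climb back

def cubic_recovery_s(w_max_pkts):
    return (w_max_pkts * (1.0 - BETA) / C) ** (1.0 / 3.0)

for w_max, rtt in [(100, 0.05), (100, 0.5), (1000, 0.5)]:
    print("W_max=%4d pkts, RTT=%3.0f ms: Reno ~%6.1f s, Cubic ~%4.1f s"
          % (w_max, rtt * 1000,
             reno_recovery_s(w_max, rtt), cubic_recovery_s(w_max)))

If the false signals arrive more often than those recovery times, neither protocol ever gets back to the old window, which is the "both are toast" case; if they arrive less often, Cubic's real-time-based recovery gives it the edge, especially on long-RTT paths.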