[e2e] Is a control theoretic approach sound?
Yunhong Gu
ygu1 at cs.uic.edu
Wed Jul 30 09:31:43 PDT 2003
On Wed, 30 Jul 2003, Panos Gevros wrote:
> my assumption is that for any adaptive transport protocol the issue is to
> discover the (subjective) "target" operating point (as quickly as possible)
> and stay there (as close as possible) - tracking possible changes over time,
>
> so one optimisation plane is
> (smoothness, responsiveness)
> another optimisation plane is
> (capacity, traffic conditions)
> and I think it is a fair assumption that there is no single scheme that
> operates better than the rest over the entire space. (If there are claims to
> the contrary, any pointers would be greatly appreciated.)
>
> the question is at the boundaries of the ( Capacity, Traffic) space
> particularly at the (hi, low) end of this space
> *simple* (and appropriate) modifications to the existing TCP control mechanisms
> (i.e. no RTT measurements or retransmission schemes, a more aggressive slow start
> and/or more aggressive AI in congestion avoidance)
> could have the same effect on link utilisation and connection throughput.
> I believe that this is possible but the problem with this approach is that it
> is "TCP hostile".
Well, I think deciding how "aggressive" the AI should be is not that
*simple* a problem :) More aggressive is not always better (even if
per-flow throughput is the only objective), right?
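A toy model can show why. The sketch below is an assumed, simplified AIMD loop (not a real TCP implementation): a flow adds `ai` segments to its cwnd per RTT and halves the window whenever it overshoots a fixed capacity. A larger `ai` fills the pipe faster, but it also overshoots and loses more often, so a bigger additive-increase constant does not simply buy more throughput.

```python
# Toy AIMD sketch (assumed model): additive increase of `ai` segments
# per RTT, multiplicative decrease (halving) on each loss event, against
# a fixed bottleneck capacity measured in segments.

def aimd_run(ai, capacity=100, rtts=1000):
    """Return (average cwnd, number of loss events) over `rtts` rounds."""
    cwnd, losses, total = 1.0, 0, 0.0
    for _ in range(rtts):
        if cwnd > capacity:      # overshoot: count a loss event, halve cwnd
            losses += 1
            cwnd /= 2
        total += cwnd
        cwnd += ai               # additive increase each RTT
    return total / rtts, losses

for ai in (1, 4, 16):
    avg, losses = aimd_run(ai)
    print(f"ai={ai:2d}  avg cwnd={avg:6.1f}  loss events={losses}")
```

Running it, the more aggressive settings rack up many more loss events while the average window stays pinned near the same sawtooth band, which is the trade-off being argued about here.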
>
> Also my guess is that most of the complexity in "new" TCPs is because
> implementors attempt to be "better" (by some measure) while remaining
> "friendly" to the standard.
Yes, I agree, this is a real headache of a problem.
>
> I have seen a TCP implementation which, in the case of the remote endpoint
> being on the same network, allows a very high initial cwnd value at slow start -
> solving all performance problems (in the absence of "social" considerations,
> of course)
>
> Wouldn't this be a much simpler answer to the problems of the "demanding
> scientists who want to transfer huge data files across the world" (citing from
> the article in The Economist magazine)?
> ..in their case they know pretty much that the links they are using are in the
> gigabit range and that not many others are using these links at the same time.
>
But what if there are losses, especially continuous losses, during the bulk
data transfer? No matter how large the cwnd is initially, it can decrease
to 1 during the transfer, and then the problem arises again.
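The point can be made concrete with a rough sketch of the standard timeout reaction (a simplification of the RFC 2581 rules, not anyone's actual stack): a retransmission timeout sets ssthresh to half the window and resets cwnd to one segment, so however generous the initial window was, one bad loss episode throws it away.

```python
# Rough sketch (assumed simplification of RFC 2581 behaviour) of why a
# large initial cwnd does not survive losses: each retransmission
# timeout collapses cwnd back to 1 segment, and recovery restarts
# from slow start.

def on_timeout(cwnd, ssthresh):
    """Timeout reaction: halve the window into ssthresh, reset cwnd to 1."""
    ssthresh = max(cwnd // 2, 2)
    return 1, ssthresh

def on_ack(cwnd, ssthresh):
    """Slow start below ssthresh (double per RTT), else additive increase."""
    return (cwnd * 2 if cwnd < ssthresh else cwnd + 1), ssthresh

cwnd, ssthresh = 4096, 1 << 30     # generous initial window on a fast LAN
cwnd, ssthresh = on_timeout(cwnd, ssthresh)
print(cwnd)                        # back to 1: the initial value is irrelevant
```

Under this model, repeated timeouts also keep halving ssthresh, so after a run of continuous losses even slow start recovers only to a small window, which is exactly the scenario a large initial cwnd cannot fix.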
Gu