[e2e] How shall we deal with servers with different bandwidths and a common bottleneck to the client?
Agarwal, Anil
Anil.Agarwal at viasat.com
Mon Dec 25 15:38:44 PST 2006
Detlef,
In my earlier description, I had incorrectly assumed that link 2-3 was at 10 Mbps. The nature of the problem is similar whether link 2-3 is at 10 Mbps or 100 Mbps.
Here is a corrected description for your network scenario -
Take the case when both connections are active and the queue at router 3 remains non-empty.
Every T seconds, there will be a packet departure at router 3, resulting in the queue size decreasing by 1 packet.
At router 3, if a packet from node 1 departs at time n*T, then at time (n+1)*T + ta1 + t0, another packet will arrive from node 1.
ta1 is the time taken by the Ack to reach node 1 from node 4.
t0 is the transmission time of a packet at 100 Mbps.
At router 3, if a packet from node 0 departs at time n*T, then at time n*T + ta0 + 2 * t0, another packet will arrive from node 0.
ta0 is the time taken by the Ack to reach node 0 from node 4.
t0 is the transmission time of a packet at 100 Mbps.
Another packet (of a packet pair) from node 0 may arrive at time n*T + ta0 + 3 * t0.
In the scenario, ta0 << T, ta1 << T, t0 = T / 10, and ta0 + 2 * t0 > ta1 + t0. I am assuming that propagation delays were set to 0 in the simulations.
It can be seen that when a node 1 packet arrives at node 3, the queue is never full - a packet departure takes place ta1 + t0 seconds before its arrival, and no node 0 packets arrive during this interval.
No such property holds for node 0 packets - hence node 0 packets are selectively dropped.
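As a rough numeric check of these relations, here is a small Python calculation for the topology in Detlef's mail below, assuming 1500-byte data packets, 40-byte ACKs, zero propagation delay and store-and-forward at each hop (the packet sizes are assumptions, not values taken from the simulation):

# Rough numeric check of the timing relations above, for the topology
# links 0-2: 100 Mb/s, 1-2: 10 Mb/s, 2-3: 100 Mb/s, 3-4: 10 Mb/s.
# Assumptions: 1500-byte data packets, 40-byte ACKs, zero propagation delay.

DATA = 1500 * 8          # data packet size in bits (assumed)
ACK  = 40 * 8            # ACK size in bits (assumed)

T  = DATA / 10e6         # data packet transmission time at 10 Mb/s
t0 = DATA / 100e6        # data packet transmission time at 100 Mb/s

# ACK return paths, store-and-forward on every hop:
ta1 = ACK / 10e6 + ACK / 100e6 + ACK / 10e6    # node 4 -> 3 -> 2 -> 1
ta0 = ACK / 10e6 + ACK / 100e6 + ACK / 100e6   # node 4 -> 3 -> 2 -> 0

ms = 1e3
print(f"T = {T*ms:.3f} ms, t0 = {t0*ms:.3f} ms, "
      f"ta1 = {ta1*ms:.4f} ms, ta0 = {ta0*ms:.4f} ms")

# Offsets of the next arrivals at router 3, measured from the departure
# that ack-clocked them:
node1_arrival = ta1 + t0                       # single packet from node 1
node0_pair    = (ta0 + 2*t0, ta0 + 3*t0)       # back-to-back pair from node 0

print(f"node 1 packet: {node1_arrival*ms:.4f} ms after a departure")
print(f"node 0 pair:   {node0_pair[0]*ms:.4f} ms and "
      f"{node0_pair[1]*ms:.4f} ms after a departure")

# The node 1 packet arrives first and well within the T-second service slot,
# so the buffer slot freed by the departure is still available to it.
assert node1_arrival < node0_pair[0] < T

With these assumed sizes, T = 1.2 ms, t0 = 0.12 ms, and the node 1 packet arrives about 0.19 ms after a departure, while the node 0 pair arrives about 0.28 ms and 0.40 ms after it - consistent with the ordering claimed above.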
Changing bandwidths a bit or introducing real-life factors such as propagation delays, variable processing delays and/or variable Ethernet switch delays will probably break this synchronized relationship. RED will also help.
One can construct many other similar scenarios where one connection is selectively favored over another. Perhaps one more reason to use RED.
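For reference, a minimal sketch of RED's drop decision (omitting the count-based correction of the full algorithm; the parameter values are illustrative only) shows why it breaks this kind of phase lock:

import random

# Simplified RED drop decision: EWMA of the queue length plus a linear drop
# probability between min_th and max_th. The count-based correction of the
# full algorithm is omitted; parameter values are illustrative only.
class SimpleRED:
    def __init__(self, min_th=5, max_th=15, max_p=0.02, wq=0.002):
        self.min_th, self.max_th, self.max_p, self.wq = min_th, max_th, max_p, wq
        self.avg = 0.0

    def on_arrival(self, queue_len):
        """Return True if the arriving packet should be dropped."""
        self.avg = (1 - self.wq) * self.avg + self.wq * queue_len
        if self.avg < self.min_th:
            return False
        if self.avg >= self.max_th:
            return True
        p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        return random.random() < p

Because the drop decision depends on a smoothed queue average and a coin flip, it no longer matters whether a packet happens to arrive ta1 + t0 or ta0 + 2 * t0 seconds after a departure.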
Anil
________________________________
From: end2end-interest-bounces at postel.org on behalf of Agarwal, Anil
Sent: Mon 12/25/2006 11:35 AM
To: Detlef Bosau; end2end-interest at postel.org
Cc: Michael Kochte; Martin Reisslein; Frank Duerr; Daniel Minder
Subject: Re: [e2e] How shall we deal with servers with different bandwidths and a common bottleneck to the client?
Detlef,
Here is a possible explanation for the results in your scenario -
Take the case when both connections are active and the queue at router 2 remains non-empty.
Every T seconds, there will be a packet departure at router 2, resulting in the queue size decreasing by 1 packet.
If a packet from node 1 departs at time n*T, then at time (n+1)*T + ta1, another packet will arrive at router 2 from node 1.
ta1 is the time taken by the Ack to reach node 1.
If a packet from node 0 departs at time n*T, then at time n*T + ta0 + t0, another packet will arrive at router 2 from node 0.
ta0 is the time taken by the Ack to reach node 0.
t0 is the transmission time of a packet at 100 Mbps.
Another packet from node 0 may arrive at time n*T + ta0 + 2 * t0.
In the scenario, ta0 << T, ta1 << T, t0 = T / 10, and ta0 + t0 > ta1. I am assuming that propagation delays were set to 0 in the simulations.
It can be seen that when a node 1 packet arrives at node 2, the queue is never full - a packet departure takes place ta1 seconds before its arrival, and no node 0 packets arrive during those ta1 seconds.
No such property holds for node 0 packets - hence node 0 packets are selectively dropped.
Changing bandwidths a bit or introducing real-life factors such as propagation delays, variable processing delays and/or variable Ethernet switching delays will probably break this synchronized relationship.
Regards,
Anil
Anil Agarwal
ViaSat Inc.
Germantown, MD
________________________________
From: end2end-interest-bounces at postel.org on behalf of Detlef Bosau
Sent: Sun 12/24/2006 5:52 PM
To: end2end-interest at postel.org
Cc: Michael Kochte; Daniel Minder; Martin Reisslein; Frank Duerr
Subject: Re: [e2e] How shall we deal with servers with different bandwidths and a common bottleneck to the client?
Detlef Bosau wrote:
I apologize if this is a stupid question.
I admit, it was a very stupid question :-)
Because my ASCII art was terrible, I add a nam screenshot here (hopefully, I'm allowed to send this mail in HTML):
Links:
0-2: 100 Mbit/s, 1 ms
1-2: 10 Mbit/s, 1 ms
2-3: 100 Mbit/s, 10 ms
3-4: 10 Mbit/s, 1 ms
Sender: 0,1
Receiver: 4
My feeling is that the flow server 1 - client should achieve more throughput than the other. From what I see in a simulation, the ratio in the scenario above is roughly 2:1. (I did this simulation this evening, so admittedly there might be errors.)
Is there a general opinion how the throughput ratio should be in a scenario like this?
Obviously, my feeling is wrong. Perhaps, I should consider reality more than my feelings :-[
AIMD distributes the path capacity (i.e. "memory") in equal shares. So, in the case of two flows sharing a path, each flow is assigned an equal window. Hence, the rates should be equal, as they depend on the window (= estimate of path capacity) and RTT. (Well known rule of thumb: rate = cwnd/RTT)
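A quick back-of-the-envelope use of that rule of thumb (the segment size, window and RTT values below are made up for illustration, not taken from the simulation):

# rate ~= cwnd / RTT: with equal windows and equal RTTs, the two flows
# should get equal rates. Segment size, windows and RTT are illustrative only.
SEGMENT = 1500 * 8        # bits per segment (assumed)
rtt = 0.024               # seconds, assumed the same for both flows

for cwnd in (5, 10, 20):  # same cwnd for both flows -> same rate
    rate = cwnd * SEGMENT / rtt
    print(f"cwnd = {cwnd:2d} segments -> rate ~ {rate / 1e6:.2f} Mb/s per flow")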
However, the scenario depicted above is an interesting one: Apparently, the sender at node 1 is paced "ideally" by the link 1-2. So, packets sent by node 0 are dropped at node 3 unduly often. In consequence, the flow from 0 to 4 hardly achieves any throughput, whereas the flow from 1 to 4 runs as if there was no competitor.
If the bandwidth of link 1-2 is changed a little bit, the behaviour returns to the expected one.
I'm still not quite sure whether this behaviour matches reality or whether it is an NS2 artifact.
Detlef
(An HTML version of this mail and the nam screenshot bild.png were attached; the image is archived at http://mailman.postel.org/pipermail/end2end-interest/attachments/20061225/d26abde2/bild-0001.png)