[e2e] TCP Performance with Traffic Policing
Barry Constantine
Barry.Constantine at jdsu.com
Mon Aug 15 06:39:46 PDT 2011
Hi,
So I reran the 64KB and 128KB window tests from the Linux client, under the same test conditions, but tweaked the bc value of the policer.
The default value was 312,500 (the value used for the original tests) and I increased it to the maximum of 1,000,000.
For the 64KB window test, no packets were dropped, but for the 128KB test the results remained the same.
I also set be to 1,000,000, with the same result.
I also played with the policer PIR value, but no luck there.
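If I have the arithmetic right, no bc value can fix the 128KB case: a
window larger than the BDP offers a sustained rate above the policed rate,
so the token bucket drains at the difference and eventually empties no
matter how deep it is. Here is my rough Python sketch (my own numbers,
assuming TCP keeps the full window in flight):

    R   = 10_000_000 / 8       # policed rate in bytes/s
    RTT = 0.050                # round-trip time in seconds
    W   = 128 * 1024           # bytes in flight with a 128KB window
    offered = W / RTT          # ~2.6 MB/s, about 2x R
    bc = 1_000_000             # the largest conform burst I tried, in bytes
    print(bc / (offered - R))  # ~0.73 s until the bucket empties and drops begin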
Any other suggestions?
Thanks,
Barry
-----Original Message-----
From: Agarwal, Anil [mailto:Anil.Agarwal at viasat.com]
Sent: Saturday, August 13, 2011 11:42 AM
To: Barry Constantine; dart at es.net
Cc: Alexandre Grojsgold; end2end-interest at postel.org
Subject: RE: [e2e] TCP Performance with Traffic Policing
Barry,
You might want to set the "burst size" parameter of the policer to
a higher value -
e.g., equal to the bandwidth-delay-product at the policer rate or even
higher. This will have a *similar* effect to a buffer on a link of
equivalent rate.
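For your numbers (10 Mbps policer, 50 ms RTT), a quick Python check of
that bandwidth-delay product:

    rate_bps = 10_000_000           # policer rate R from your test
    rtt_s    = 0.050                # 50 ms round-trip time
    bdp = rate_bps * rtt_s / 8      # BDP in bytes
    print(bdp)                      # 62500.0 -> set burst size >= ~62.5 KB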
Also, testing with multiple TCP connections will result in higher
aggregate throughput, even at low burst size values.
Also, check if TCP SACK is enabled in all your test cases.
You should be able to achieve throughput close to the policer rate.
Note that a policer with rate R bps and burst size of x bytes is
not exactly equivalent to a link at rate R bps and x bytes of queue space.
On a R bps link, TCP packets get spaced out more evenly due to the
self-clocking nature of TCP and the transmission time of each packet
at R bps. With a policer, there is no "transmission time" effect at
the policer; packets in packet trains of a TCP connection tend to
get spaced more closely, which can drive a policer into a state
where it drops packets even when the average data rate (measured
over an RTT) is < R bps. Having multiple connections helps - their
packet trains tend to get staggered over time.
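Here is a toy token-bucket sketch in Python of that effect (illustrative
numbers of my own: a 1 Gbps sender and a deliberately small 32 KB bucket):

    R_BPS = 10_000_000 / 8   # policed rate in bytes/s
    BC    = 32_000           # bucket depth (burst size) in bytes
    MTU   = 1_500            # packet size in bytes

    def drops(arrivals):
        tokens, last_t, dropped = BC, 0.0, 0
        for t in arrivals:
            tokens = min(BC, tokens + (t - last_t) * R_BPS)  # refill at R
            last_t = t
            if tokens >= MTU:
                tokens -= MTU        # packet conforms
            else:
                dropped += 1         # policer drop
        return dropped

    # One connection: 40 packets back-to-back at 1 Gbps (1500 B every 12 us).
    # That is 60 KB per 50 ms RTT = 9.6 Mbps average, below R = 10 Mbps.
    train = [i * 12e-6 for i in range(40)]
    print(drops(train))      # 19 drops despite the sub-rate average

    # Four connections, 10 packets each, with trains staggered across the
    # RTT: same 60 KB total, but the bucket refills between trains.
    staggered = sorted(k * 0.0125 + i * 12e-6 for k in range(4) for i in range(10))
    print(drops(staggered))  # 0 drops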
Regards,
Anil
Anil Agarwal
ViaSat Inc.
-----Original Message-----
From: end2end-interest-bounces at postel.org [mailto:end2end-interest-bounces at postel.org] On Behalf Of Barry Constantine
Sent: Friday, August 12, 2011 3:17 PM
To: dart at es.net
Cc: Alexandre Grojsgold; end2end-interest at postel.org
Subject: Re: [e2e] TCP Performance with Traffic Policing
Thanks for answering this, Eli; very well said.
In my experience, the buffering of the slower link allows TCP to adapt more gracefully.
Also thanks to all on this list, my first time posting and the suggestions and information have been fantastic.
Barry
Sent from my iPhone
On Aug 12, 2011, at 2:44 PM, "Eli Dart" <dart at es.net> wrote:
>
>
> On 8/12/11 9:32 AM, Alexandre Grojsgold wrote:
>> Is there a reason to consider X Mbps policing different from having an X Mbps link
>> midway between source and destination?
>
> In my experience, policing at rate X behaves like an interface of rate X
> with no buffer. This means a policer must drop if there is any
> oversubscription at all, while an interface can provide some buffering.
>
> This means that TCP sees loss more easily in policed environments,
> especially if there is a large difference in bandwidth between the
> policed rate and the host interface rate (at any instant in time, the
> host is sending at wire-speed for its interface if it's got data to send
> and available window, regardless of average rate on the time scale of
> seconds).
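> As a back-of-the-envelope sketch in Python (assuming a 1 Gbps host NIC,
> which this thread doesn't actually specify):
>
>     window  = 64 * 1024           # bytes in one window's worth of data
>     nic_bps = 1_000_000_000       # assumed host interface rate
>     print(window * 8 / nic_bps)   # ~0.0005 s: the burst lasts about 0.5 ms
>     print(nic_bps / 10_000_000)   # 100.0: instantaneous rate is 100x the policed rate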
>
> Of course, different router vendors have different buffering defaults
> (and different hardware capabilities), and some policers can be
> configured with burst allowances. However, many policers don't behave
> in the ways that they say they do, even when configured with burst
> allowances. As another post indicated, it's quite a mess...
>
> --eli
>
>
>>
>> -- alg.
>>
>>
>>
>>
>> On 12-08-2011 12:48, rick jones wrote:
>>> On Aug 12, 2011, at 7:03 AM, Barry Constantine wrote:
>>>
>>>> Hi,
>>>>
>>>> I did some testing to compare various TCP stack behaviors in the midst of traffic policing.
>>>>
>>>> It is common practice for a network provider to police traffic to a subscriber's service level agreement (SLA).
>>>>
>>>> In the iperf testing I conducted, the following set-up was used:
>>>>
>>>> Client -> Delay (50ms RTT) -> Cisco (with 10M Policing) -> Server
>>>>
>>>> The delay was induced using hardware-based commercial gear.
>>>>
>>>> 50 msec RTT and bottleneck bandwidth = 10 Mbps, so the BDP was 62,500 bytes.
>>>>
>>>> Ran Linux, Windows XP, and Windows 7 clients at 32K, 64K, and 128K windows (knowing that policing would
>>>> kick in at 64K)
>>>>
>>>> Throughput for Window (Mbps)
>>>>
>>>> Platform    32K    64K    128K
>>>> ---------------------------------
>>>> Linux       4.9    7.5    3.8
>>>> XP          5.8    6.6    5.2
>>>> Win7        5.3    3.4    0.44
>>>>
>>> The folks in tcpm might be better able to help, but I'll point out one nit - "Linux" is not that much more specific than saying "Unix" - it would be good to get into the habit of including the kernel version. And identify the server, since it takes two to TCP...
>>>
>>> happy benchmarking,
>>>
>>> rick jones
>>> Wisdom teeth are impacted, people are affected by the effects of events
>>>
>>
>>
>> --
>>
>> _________________________________________________________________
>>
>>
>>
>> *Alexandre L. Grojsgold* <algold at rnp.br>
>> Director of Engineering and Operations
>> Rede Nacional de Ensino e Pesquisa
>> R. Lauro Muller 116 sala 1103
>> 22.290-906 - Rio de Janeiro RJ - Brasil
>> Tel: (21) 2102-9680 Cel: (21) 8136-2209
>>
>>
>>
>
> --
> Eli Dart NOC: (510) 486-7600
> ESnet Network Engineering Group (AS293) (800) 333-7638
> Lawrence Berkeley National Laboratory
> PGP Key fingerprint = C970 F8D3 CFDD 8FFF 5486 343A 2D31 4478 5F82 B2B3