<html>
At 01:08 PM 3/23/01 -0600, Richard Carlson wrote:<br>
<blockquote type=cite class=cite cite>David;<br>
<br>
Can you elaborate on your question? Are you asking if TCP stacks
are really a performance bottleneck, if bandwidth is a scarce resource,
or if we have any proof of this?</blockquote><br>
It was genuinely a question to clarify a press release and website that
are quite puzzling. Fixing a performance bottleneck is a good thing
to do; I just don't understand what the big hoopla is about, or why it
takes $3MM.<br>
<br>
So, none of the above. I include the press release here - I've also
looked at the website. Reading the press release and the website, I
get the idea that there is an answer that is already being disseminated
in the form of software (middleware?), and it has to do with TCP-MIBs and
autotuning.<br>
<br>
So with claims of first distribution of a "solution" implied in
the press release, it would be interesting to know if the researchers in
the TCP field have actually validated that this is the source of the
problem. Crappy application programs and TCP implementations could
be the problem as well, one might think. Or maybe the APIs
(Berkeley Sockets?) and file system buffering don't work very well.<br>
<br>
And then there's the mystery of why the project is called "WEB" 100.
We know that web protocols have too much handshaking and parsing to be
good bulk transfer vehicles. And what do supercomputer users have
to do with the Web?<br>
<br>
But what most puzzles me is that this is an NSF research project, not a
software development project, yet the press release talks about it as the
latter.<br>
<br>
I'm probably just confused. Maybe this is how science is done these
days, but I'd think that one grad student could have figured out where a
bottleneck is by just a few measurements, then passed the info off to the
community of developers to fix it. Since the project is "open
source" according to the website (but I, at least, can't look at the
source because I don't have a password), one might think that the fix
would simply be posted, at low cost.<br>
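<br>
As a rough sanity check on where such a bottleneck could live, the usual window-limit arithmetic seems worth writing down (a minimal sketch; the RTT and default-buffer numbers are assumptions I'm plugging in, not anything measured by Web100):<br>
<pre>
# Throughput of a single TCP stream is bounded by window_size / RTT,
# independent of link speed.  (The RTT and window sizes here are
# assumptions for illustration, not measurements.)

def tcp_ceiling_mbps(window_bytes, rtt_seconds):
    """Steady-state upper bound on one TCP connection, in Mbps."""
    return window_bytes * 8 / rtt_seconds / 1e6

rtt = 0.085  # assume an 85 ms cross-country round trip

for window in (32 * 1024, 64 * 1024):       # typical default socket buffers
    print(f"{window // 1024:3d} KB window -> {tcp_ceiling_mbps(window, rtt):.1f} Mbps")

# 32 KB / 85 ms is about 3 Mbps -- no matter how fat the pipe is.
</pre>
If that is the limiting factor, it would be consistent with the "rarely more than three Mbps" figure quoted below.<br>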
------------------------------------------------------------------------------<br>
FOR IMMEDIATE RELEASE <br>
Mar. 19, 2001<br>
Web100 Takes First Step Towards Improving Network Performance<br>
PITTSBURGH -- The Web100 Project has distributed the initial version of software that aims to bring data-transmission rates of 100 megabits per second to users of high-speed networks. Select researchers at universities and government laboratories are getting a sneak peek at the Web100 software to do real-world testing and provide feedback to developers.<br>
"Today's release of the Web100 software promises improved network performance at a time when bandwidth is increasingly precious," said Tom Greene, the Senior Program Director for Infrastructure in the National Science Foundation's Division of Advanced Networking Infrastructure and Research. "This type of middleware can help us use existing resources more efficiently."<br>
While most home users still connect to the Internet with a 56K modem, universities, research centers and some businesses today have connections capable of transmitting data at 100 megabits per second (Mbps) or higher. Research has shown, however, that users rarely see performance greater than three Mbps. Web100 researchers traced the problem to software that governs the Transmission Control Protocol (TCP) -- a "language" that computers use to communicate across networks. Networking experts are able to overcome this limit by fine tuning connections with adjustments to TCP.<br>
The Web100 software will eventually allow users to take full advantage of available network bandwidth without the help of a networking expert. Web100 programmers are refining TCP software in the Linux operating system to automatically achieve the highest possible transfer rate. "Our goal is to make it easier for everyone to move data across networks at 100 megabits per second or higher," said Matt Mathis, Pittsburgh Supercomputing Center network research coordinator and one of the principal investigators of Web100.<br>
Twenty-one researchers at ten institutions -- including Stanford Linear Accelerator Center, Oak Ridge National Laboratory, Lawrence Berkeley Laboratory and Argonne National Laboratory -- will test the initial release of Web100 software.<br>
At the University of Michigan, for example, Brian Athey will test the Web100 software for use with the Visible Human Project. Athey is working with Art Wetzel at PSC to develop applications that allow students to view large Visible Human data-sets over high-speed networks. "In situations of marginal bandwidth availability," said Athey, "tuning could make the difference between a choppy and unusable 500 Kbps to 1 Mbps stream to a perfectly useful 2 Mbps to 5 Mbps stream."<br>
The Web100 Project is a collaboration of Pittsburgh Supercomputing Center, the National Center for Atmospheric Research and the National Center for Supercomputing Applications. More information can be found at:
<a href="http://www.web100.org/">http://www.web100.org/</a><br>
<br>
# # # <br>
CONTACT: <br>
Sean Fulton <br>
sfulton@psc.edu <br>
Pittsburgh Supercomputing Center <br>
412-268-4960<br>
[R. Sean Fulton | Public Information Specialist | sfulton@psc.edu]<br>
[***** Pittsburgh Supercomputing Center | 412/268-7141 *****]<br>
-----------------------------------------------------------------------------<br>
<br>
<br>
<blockquote type=cite class=cite cite>From the DOE perspective, getting
access to high bandwidth pipes is not the major problem scientific
applications are running into. There is 'easy' access to OC-3 to
OC-48 links both within North America and around the globe. (Take a
look at the number of OC-3/12 links coming into the US from
Europe.) The problem is getting effective e2e throughput (goodput)
between 2 nodes (e.g., moving a GB of data from a storage system
at SLAC to a user's desktop at UTK). The BW*delay product requires
large windows on both end nodes and almost no loss over SLAC's campus
network, ESnet, Abilene, and UTK's campus network.<br>
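<br>
(Concretely, "large windows on both end nodes" amounts to something like the sketch below: both socket buffers sized to at least the bandwidth-delay product before the transfer starts. The 100 Mbps and 70 ms figures and the endpoint name are illustrative assumptions, not SLAC/UTK values.)<br>
<pre>
import socket

# Manual wide-area tuning: size the send and receive buffers to at least
# the bandwidth-delay product so TCP can keep the whole path full.
# (100 Mbps, a 70 ms RTT, and the endpoint below are assumptions for
# illustration; the kernel must also permit buffers this large.)
BANDWIDTH_BPS = 100e6
RTT_SECONDS = 0.070
BDP_BYTES = int(BANDWIDTH_BPS * RTT_SECONDS / 8)   # ~875 KB

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Set buffers before connect() so the TCP window scale option can be
# negotiated on the SYN.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, BDP_BYTES)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, BDP_BYTES)
sock.connect(("bulk-data-host.example.org", 5001))  # hypothetical endpoint
</pre>
The autotuning Web100 describes is, in effect, meant to make this kind of per-connection hand-tuning unnecessary.<br>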
<br>
The major problem DOE scientists have is determining why the goodput is
so low (e.g., 5 Mbps e2e over a 100 Mbps channel). The Web100
activities are designed to answer the question 'is the biggest problem in
the local host, the remote host, or the network?' Getting an
authoritative answer to this simple question would be of immense value to
the DOE scientific community and well worth the investment NSF is making
in funding the Web100 activities.<br>
<br>
Rich<br>
<br>
At 12:20 PM 3/23/01 -0500, David P. Reed wrote:<br>
<blockquote type=cite class=cite cite>So, I got a press release on
web100.org and its TCP improvement software.<br>
<br>
The press will probably get this completely wrong (the slant in the press
release is that TCP is *the big problem* and that scarce bandwidth is the
reason we can't use 100 Mbps pipes).<br>
<br>
Has anyone done any studies that would reasonably support the huge
investment here?<br>
<br>
- David<br>
--------------------------------------------<br>
WWW Page:
<a href="http://www.reed.com/dpr.html">http://www.reed.com/dpr.html</a><br>
</blockquote><br>
------------------------------------<br>
<br>
Richard A. Carlson        e-mail: RACarlson@anl.gov<br>
Network Research Section        phone: (630) 252-7289<br>
Argonne National Laboratory        fax: (630) 252-4021<br>
9700 Cass Ave. S.<br>
Argonne, IL 60439<br>
</blockquote>
<br>
- David<br>
--------------------------------------------<br>
WWW Page:
<a href="http://www.reed.com/dpr.html">http://www.reed.com/dpr.html</a><br>
<br>
</html>