[e2e] Compression of web pages
Vernon Schryver
vjs at calcite.rhyolite.com
Tue Aug 27 08:33:30 PDT 2002
> From: "Woojune Kim" <wkim at airvananet.com>
> I recently saw an article in the WSJ in the column by Mossberg,
I so often find what he says to be such silly nonsense that I try
not to read his column, and when I accidentally click on a link (via
my on-line subscription) that yields one of his columns, I grumble.
As hard as it is to believe, his columns are often less accurate
than the inter-ad filler in the trade press.
> describing Sprint PCS's wireless internet access services. It said
> that SprintPCS uses some sort of compression / decompression
> technology to make the user get the look / feel of a 400Kbps
> connection even though the actual physical data rate is only
> 50-70Kbps.
>
> From the description it looks like they have some sort of
> compression agent either in their access box or an external box.
> Something like the WAP servers or a specialized Web Proxy server.
> My guess is that they also had some special decompress software in
> their mobile handsets or laptops.
Are you referring to
http://online.wsj.com/article/0,,SB1029974251705867835,00.html ?
My guess is that it is baloney he was fed by Sprint salescritters and
that Mr. Mossberg wouldn't be able to tell the difference between
400 Kbit/sec and 50-70 miles/hour, and could not and would never admit it.
Recall that PPP compression doesn't do much for web surfing because
the stuff that takes the most time is generally already-compressed
pictures. No compression, no matter how magical, can significantly
compress or speed up the transmission of a .gif or .jpg (except perhaps
in weird, very unusual cases).
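A minimal way to see this for yourself with zlib (the filename is
just a placeholder; any .jpg or .gif will do):

    import zlib

    # an already-compressed image; "photo.jpg" is a placeholder name
    data = open("photo.jpg", "rb").read()
    packed = zlib.compress(data, 9)   # zlib's best general-purpose level
    print(len(data), len(packed), round(len(data) / len(packed), 2))

The printed ratio is normally barely above 1:1, because JPEG and GIF
have already removed the redundancy a dictionary coder would exploit.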
The other bits in a web page are usually too few for compression to
matter at link speeds above 30 Kbit/sec. The latencies of DNS resolution,
of the path across the Internet, and of the HTTP server itself are
generally greater than the time needed to transmit a few Kbytes of HTML.
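To put rough numbers on that: 4 Kbytes of HTML at 50 Kbit/sec takes

    (4 Kbytes * 8 bits/byte) / 50 Kbit/sec = 0.64 sec

which is comparable to a DNS lookup plus a round trip or two across
the country before the first byte of the page even arrives. (The
4 Kbyte figure is only an illustration.)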
400 Kbit/sec through a real link running 50-70 Kbit/sec implies a
compression ratio of 6:1 to 8:1 (400/70 is about 6:1; 400/50 is 8:1).
That is impossible except in special cases.
The data used by `ping` or lists of IP addresses or email addresses
can often be compressed (e.g. by LZW) by 8:1, 30:1, or even more, but
"average" compression rates for "typical," not already compressed
network data (e.g. not GIFs) using good compression schemes (plenty
of history, and not just per packet as in common uses of LZS) are 2:1
to less than 4:1.
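A rough sketch of that spread, again with zlib standing in for a
"good scheme" (the HTML filename is a placeholder; save any real
page to try it):

    import zlib

    # ping's default payload is a simple repeating byte pattern,
    # so it compresses far better than 8:1
    pinglike = bytes(range(8, 64)) * 200

    # some real page saved to disk; "index.html" is a placeholder
    html = open("index.html", "rb").read()

    for name, data in (("ping-like", pinglike), ("html", html)):
        ratio = len(data) / len(zlib.compress(data, 9))
        print(name, round(ratio, 1))

On real HTML the ratio usually lands in the 2:1 to 4:1 range quoted
above; on the repeating pattern it can run to 50:1 or more.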
> I was wondering though, wouldn't it be more efficient if the web
> client were able to request "compressed web pages" in the initial
> HTTP request ? So instead of having specialized proxy servers etc.
> the compression etc. would be done at the originating server....
> (This would not have to be a performance hit as compressed pages
> could be prepared offline etc.)
>
> Has this idea been floated around and killed already ? Or is it
> already out there in some form ?
You did mention WAP.
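HTTP/1.1 already has a hook of roughly that shape in its content
negotiation: a client can advertise that it accepts compressed
entities, and a server that has (or makes) a gzip'ed copy can answer
in kind. Schematically (headers abridged; RFC 2616 has the real rules):

    GET /index.html HTTP/1.1
    Host: www.example.com
    Accept-Encoding: gzip

    HTTP/1.1 200 OK
    Content-Type: text/html
    Content-Encoding: gzip

    ...gzip'ed body...

How widely clients and servers actually honor it in practice is
another question.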
There are also distributed caches such as Akamai's and the caching schemes
used (past tense?) by the satellite Internet providers. One recent
article in that general area is http://zdnet.com.com/2100-1105-955245.html
Vernon Schryver vjs at rhyolite.com