OPINION:
Peer to Peer: Analyzing the Network Traffic, Part II

Contributed by Adam Barr
osOpinion.com
December 6, 2000


Because Napster sends data in such small chunks, transfers wind up being limited not by bandwidth, as they should be, but by path length and round-trip latency.

In Part I of this article, I discussed the theoretical reasons why peer-to-peer transfers might be slower than downloads from Web sites.

The conclusion I reached was that the risk in a peer-to-peer transfer was that the bandwidth of the first few hops away from the machine serving the data would limit the speed of the transfer.

Well, enough theoretical babbling. Time to run some tests.

Testing Ground

I compared a few Napster transfers to Internet Explorer (IE) downloads from various Web sites (including a download of the latest Napster binaries from Napster.com). I picked Napster users who claimed to have cable/T1/DSL connections (although I also discovered that at least a few people who claim to have a 56K connection actually have much more, and presumably are advertising less to prevent people from downloading from their computers).

At first glance, it looks like peer-to-peer loses badly. The three Napster transfers came in at 13, 10.5 and 35 kilobytes per second. Meanwhile, the three downloads in IE all wound up running in the 40 to 45 kilobyte-per-second range, which is basically the limit of my 384 kilobit-per-second DSL line.

If you look at the captures on a packet sniffer, however, it turns out that the problem is not peer-to-peer networking: the problem is Napster.

Bad Timing

Napster appears to send data in 2K pieces, and wait for an acknowledgement from the other side before sending more (or from a programming perspective, it does a 2K send() over the socket, and waits for it to complete before calling the next send()).
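To make the pattern concrete, here is a minimal sketch of the two approaches over a plain TCP socket. This is purely illustrative, not Napster's actual code; the 2K chunk size and the one-byte application-level acknowledgement are assumptions based on the behavior described above.

import socket
# 'sock' in the functions below is assumed to be an already-connected socket.socket

CHUNK = 2048  # the roughly 2K pieces described above

def send_stop_and_wait(sock, data):
    # The slow pattern: push 2K, then block until the peer sends back an
    # application-level acknowledgement (assumed here to be a single byte)
    # before sending the next piece. Every chunk costs a full round trip.
    for i in range(0, len(data), CHUNK):
        sock.sendall(data[i:i + CHUNK])
        sock.recv(1)

def send_streaming(sock, data):
    # The fast pattern: hand TCP the whole buffer and let its own windowing
    # and acknowledgement machinery keep the pipe full.
    sock.sendall(data)

With the second version, TCP can keep many packets in flight at once, which is exactly the steady streaming described next.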

This is very, very bad, because it means that TCP never gets a chance to stream data at a steady clip.

Recall that in Part I, I stated that in a data transfer over the Internet, the limiting factor is the lowest-bandwidth part of the path, and that the length of the path -- the number of hops involved -- is not important. What I really meant was that the path length is not important if TCP is operating as it should.

"Operating as it should" means that TCP has a lot of data to send at a time. Napster is basically sending 2K of data, then waiting for the equivalent of a ping response, then sending 2K more, then waiting for a ping response, and so on.

So looking at a Napster capture in the sniffer, you see each 2K of data arriving just as fast as, if not faster than, it does in the IE transfer. But while IE appears to transfer the data in a single giant HTTP request and therefore just keeps receiving a new 1500-byte packet every 31 or 32 milliseconds (which is about the limit for a 384 kbps line if you calculate it out), Napster has to wait between 50 and 100 milliseconds after each 2K send.
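Working through the arithmetic (these are just the numbers already quoted above, not new measurements):

# One full-size 1500-byte packet at 384 kilobits per second:
print((1500 * 8) / 384000.0)        # ~0.031 s, the 31-32 ms between packets

# If only 2K can move per 50-100 millisecond send-and-wait cycle,
# throughput tops out at roughly:
print(2048 / 0.100, 2048 / 0.050)   # about 20,500 and 41,000 bytes per second

That ceiling of roughly 20 to 40 kilobytes per second is consistent with the Napster transfer rates measured earlier, while the IE downloads were free to run at the full speed of the line.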

This idea of sending the data in little pieces does make it easier to interrupt transfers and display status. Someone writing code like this probably realizes that things will run a little slower with the data split up. But they may not realize that it actually makes things run about five times slower than necessary.

Napster displays the "ping" time of each machine that is available to transfer data. In fact, the time it takes to ping a machine, which by default involves sending a small packet on a roundtrip to the machine and is therefore more a function of path length than bandwidth, is not a reliable indicator of how good that machine will be at serving large chunks of data to you.

Speed of Light

To put it another way, it takes light roughly 20 milliseconds to travel across the United States, so a ping to a machine across the country can never come back much faster than that. Meanwhile, a machine on a local Ethernet can be pinged in less than a millisecond.

But if you are sending a large amount of data over TCP, the ping delay just becomes noise compared to the bandwidth available at the narrowest point in the path. You are less interested in the result of a plain ping and more interested in the result of something like "ping -l 1400", which lets you specify that a large packet should be echoed during the ping.

But because Napster sends its data in small chunks, transfers wind up being limited not by bandwidth, as they should be, but by the path length and other factors that affect a short-packet ping. So I guess you could argue that the "ping" times Napster displays are relevant after all, but not for the right reasons!

Close Call

In fact, based on the time-to-live fields in the packets I received, the Napster machines I was transferring from were generally "closer" (fewer hops away) than the Web sites. Now with a properly designed app this should not matter nearly as much as bandwidth at the narrowest point, but it means that, if anything, Napster had the better chance of being faster.

Fewer hops in the path means fewer chances to hit a low-bandwidth link or router. A path through the Internet is made up of two kinds of elements, physical links and routers. Physical links generally have a known bandwidth at which they operate; the bandwidth of a router is simply the maximum amount of data it can pump through without losing packets.

Big Picture

Thus, my summary of these results would be that from a pure networking capacity perspective, peer-to-peer networking does not appear to be disadvantaged compared to serving data from central servers.

When designing a peer-to-peer program, (a) send data in big pieces, and (b) if attempting to automatically pick which server to use, base the decision on large-packet pings.
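As a rough illustration of point (b), here is a sketch of ranking candidate machines by a large-payload round trip instead of a plain ping. It is not anything Napster actually implements; it assumes a made-up arrangement in which each candidate runs a simple TCP echo service on a known port.

import socket
import time

PROBE = b"x" * 14000   # a much larger payload than a plain ping packet
ECHO_PORT = 7          # assumed echo port; any agreed-upon echo service works

def large_packet_ping(host, port=ECHO_PORT, timeout=5.0):
    # Return the seconds it takes to send the probe and read it all back,
    # or None on failure. Unlike a small ping, this time is dominated by
    # the narrowest link's bandwidth rather than by path length alone.
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            start = time.monotonic()
            sock.sendall(PROBE)
            received = 0
            while received < len(PROBE):
                data = sock.recv(65536)
                if not data:
                    return None
                received += len(data)
            return time.monotonic() - start
    except OSError:
        return None

def pick_best_peer(hosts):
    # Choose the candidate with the fastest large-probe round trip.
    results = [(large_packet_ping(h), h) for h in hosts]
    results = [(t, h) for t, h in results if t is not None]
    return min(results)[1] if results else None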

And memo to Napster: Fix your software!

Author's background:
Adam Barr worked at Microsoft for over ten years before leaving in April of 2000. He is working on a book about his time there. Barr lives in Redmond, Washington.
