[buug] Setting up torrent on wifi

Rick Moen rick at linuxmafia.com
Mon Mar 30 21:33:14 PDT 2009


Quoting Zeke Krahlin (pewterbot9 at gmail.com):

> I need to know how to test what one's upload speed is, in order to
> find out. I guess. Is there a Linux command or application for this?

Sure.  For example, good ol' wget.


The problem with a lot of canned tools often recommended for this
purpose is that either (1) you're not entirely sure what they're
measuring (and whether they're measuring it competently) or (2) what they
_are_ measuring isn't especially relevant.  This is particularly true
of tools that check in with a centralised server -- which was also the
key problem with the much-recommended "ShieldsUP!" firewall tester 
at Gibson Research.

So, a better tool, even though it suffers the huge disadvantage of
requiring you to _think_ a bit ;-> , is logic.  You start with the
knowledge that, between you and any Internet site of interest, there
will be some chain of routers.  You can see at least _most_ of those
routers, maybe all[1], using the traceroute and tcptraceroute
commands.[2] 

So, consider for example the download site linux.stanford.edu.  
I'm interested in some of the stuff here:
ftp://linux.stanford.edu/pub/mirrors/centos/5.2/isos/i386/ 

[rick at linuxmafia]
~ $ tcptraceroute linux.stanford.edu
Selected device eth1, address 198.144.195.186, port 59601 for outgoing packets
Tracing the path to linux.stanford.edu (171.64.64.37) on TCP port 80
(www), 30 hops max
 1  198.144.195.185  57.212 ms  57.032 ms  55.216 ms
 2  fe1-0.cr04-200p-sfo.unitedlayer.com (209.237.228.169)  55.162 ms 58.334 ms  57.139 ms
 3  Vlan501.br02-200p-sfo.unitedlayer.com (209.237.224.25)  54.765 ms 57.260 ms  58.840 ms
 4  Vlan902.br01-paix-pao.unitedlayer.com (207.7.129.73)  60.193 ms 68.455 ms  55.201 ms
 5  paix-px1--bungi-fe.cenic.net (198.32.251.57)  55.997 ms  59.144 ms 54.676 ms
 6  svl-dc1--sfo-px1-ge.cenic.net (198.32.251.224)  52.887 ms  53.588 ms 52.965 ms
 7  dc-svl-dc1--sfo-dc1-pos.cenic.net (137.164.22.34)  52.045 ms  51.142 ms  52.950 ms
 8  dc-svl-core1--svl-dc1-ge-1.cenic.net (137.164.46.208)  52.925 ms 53.053 ms  53.654 ms
 9  dc-svl-agg2--svl-core1-ge-1.cenic.net (137.164.46.200)  53.269 ms 52.747 ms  52.961 ms
10  dc-stanford--svl-agg2-ge.cenic.net (137.164.50.34)  52.838 ms 55.224 ms  54.225 ms
11  bbra-rtr.Stanford.EDU (171.64.1.134)  51.661 ms  54.434 ms  53.781 ms
12  * * *
13  linux.Stanford.EDU (171.64.64.37) [open]  54.053 ms  53.977 ms 56.639 ms
[rick at linuxmafia]
~ $



[rick at linuxmafia]
~ $ traceroute linux.stanford.edu
traceroute to linux.stanford.edu (171.64.64.37), 30 hops max, 40 byte
packets
 1  198.144.195.185 (198.144.195.185)  61.838 ms  66.276 ms  54.881 ms
 2  fe1-0.cr04-200p-sfo.unitedlayer.com (209.237.228.169)  57.556 ms 58.494 ms  56.917 ms
 3  Vlan501.br02-200p-sfo.unitedlayer.com (209.237.224.25)  58.380 ms 57.784 ms  55.594 ms
 4  Vlan902.br01-paix-pao.unitedlayer.com (207.7.129.73)  59.195 ms 58.626 ms  55.753 ms
 5  paix-px1--bungi-fe.cenic.net (198.32.251.57)  68.326 ms  63.371 ms 59.707 ms
 6  svl-dc1--sfo-px1-ge.cenic.net (198.32.251.224)  53.769 ms  53.242 ms 53.064 ms
 7  dc-svl-dc1--sfo-dc1-pos.cenic.net (137.164.22.34)  52.945 ms  73.101 ms  53.539 ms
 8  dc-svl-core1--svl-dc1-ge-1.cenic.net (137.164.46.208)  53.697 ms 53.765 ms  56.812 ms
 9  dc-svl-agg2--svl-core1-ge-1.cenic.net (137.164.46.200)  52.640 ms 52.571 ms  53.008 ms
10  dc-stanford--svl-agg2-ge.cenic.net (137.164.50.34)  54.243 ms 53.869 ms  52.983 ms
11  bbra-rtr.Stanford.EDU (171.64.1.134)  52.990 ms  52.167 ms  51.696 ms
12  * * *
13  linux.Stanford.EDU (171.64.64.37)  55.286 ms  54.179 ms  55.104 ms
[rick at linuxmafia]
~ $ 


Hey, pretty decent transit time -- about 50 milliseconds -- at each hop!

Sometimes, the delay on that very first step really stands out, e.g.,
1500 milliseconds or something like that -- sort of like living next to
a really fast, uncongested freeway but having a horribly potholed,
unpaved, long dirt driveway.
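One way to eyeball that first hop by itself is to average the three
round-trip times traceroute reports for it.  Here's a minimal sketch
using awk, with the sample line copied from the traceroute output shown
above (substitute your own first-hop line):

```shell
# Average the three RTT samples traceroute printed for hop 1.
# Fields: hop, IP, (IP), rtt1, "ms", rtt2, "ms", rtt3, "ms"
FIRST_HOP=' 1  198.144.195.185 (198.144.195.185)  61.838 ms  66.276 ms  54.881 ms'
AVG=$(echo "$FIRST_HOP" | awk '{ printf "%.1f", ($4 + $6 + $8) / 3 }')
echo "first hop average: $AVG ms"
```

If that number is in the tens of milliseconds, your driveway is fine;
if it's in the hundreds or thousands, the problem is local.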

Now, watch what I get when I use wget to retrieve a modest CD ISO of
known size:

[rick at linuxmafia]
/tmp $ wget ftp://linux.stanford.edu/pub/mirrors/centos/5.2/isos/i386/CentOS-5.2-i386-netinstall.iso
--21:10:19--
ftp://linux.stanford.edu/pub/mirrors/centos/5.2/isos/i386/CentOS-5.2-i386-netinstall.iso
           => `CentOS-5.2-i386-netinstall.iso'
Resolving linux.stanford.edu... 171.64.64.37
Connecting to linux.stanford.edu|171.64.64.37|:21... connected.
Logging in as anonymous ... Logged in!
==> SYST ... done.    ==> PWD ... done.
==> TYPE I ... done.  ==> CWD /pub/mirrors/centos/5.2/isos/i386 ...  done.
==> PASV ... done.    ==> RETR CentOS-5.2-i386-netinstall.iso ... done.
Length: 8,054,784 (7.7M) (unauthoritative)

100%[====================================>] 8,054,784    101.94K/s ETA 00:00

21:11:38 (101.50 KB/s) - `CentOS-5.2-i386-netinstall.iso' saved
[8054784]

[rick at linuxmafia]
/tmp $


You're seeing there, at the "100%" line, the final state of a progress
meter.  Before that, it had shown the various stages of fetching the
file, and kept updating the "ETA" in seconds accordingly.

You will notice that the average download speed thus achieved was about
102 kB/second.
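As a sanity check, you can recompute that average yourself from the file
size and the wall-clock time between wget's start and finish timestamps
(21:10:19 to 21:11:38, i.e., 79 seconds).  The result comes out slightly
lower than wget's own figure, presumably because wget's average covers
only the data transfer, not the FTP handshake:

```shell
# Recompute the average download rate from numbers in wget's output:
# 8,054,784 bytes transferred in roughly 79 wall-clock seconds.
BYTES=8054784
ELAPSED=79
KBPS=$(awk -v b="$BYTES" -v s="$ELAPSED" 'BEGIN { printf "%.2f", b / s / 1024 }')
echo "average: $KBPS KB/s"
```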

Now, of course, if you were paying attention to my point about there
being a chain of routers between my linuxmafia.com machine and
linux.stanford.edu, you might be wondering:  "Which of those hops was
the bottleneck?"  Good question.  The wget test doesn't, by itself,
identify which hop was the limiting factor.  I could be bottlenecked at
my local ADSL uplink to Raw Bandwidth Communications, my local IP
provider, _or_ I could be bottlenecked at any of the other roughly dozen
hops.  

If I were running this test tomorrow, the release date for CentOS 5.3,
the odds would increase that I'd be bottlenecked at or near the target
server -- as a result of the CentOS mirrors getting mobbed.

So, how do you tell where the bottleneck -- the limiting-factor router
hop -- is?  Good question.  One way of guesstimating is to _also_ try a
machine that you've determined to be a tiny number of network hops from
you -- the fewer the better.  For example, a little guesswork and poking
determines that Raw Bandwidth Communications (doing business as
Tsoft.net) operates a public ftp server, ftp.tsoft.net.  Let's see
whether it's network-close:

[rick at linuxmafia]
~ $ traceroute ftp.tsoft.net
traceroute to ftp.tsoft.net (198.144.192.42), 30 hops max, 40 byte packets
 1  198.144.195.185 (198.144.195.185)  55.220 ms  56.550 ms  57.398 ms
 2  shell.rawbw.com (198.144.192.42)  54.621 ms  57.051 ms  55.288 ms
[rick at linuxmafia]
~ $ 

[rick at linuxmafia]
~ $ tcptraceroute ftp.tsoft.net
Selected device eth1, address 198.144.195.186, port 59652 for outgoing packets
Tracing the path to ftp.tsoft.net (198.144.192.42) on TCP port 80 (www),
30 hops max
 1  198.144.195.185  92.442 ms  132.211 ms  56.450 ms
 2  shell.rawbw.com (198.144.192.42) [open]  54.705 ms  55.813 ms 57.228 ms
[rick at linuxmafia]
~ $


Wow, that's pretty darned close.  tcptraceroute and traceroute found
only _two_ hops to get there.  (They also reveal that ftp.tsoft.net 
is an alias for shell.rawbw.com.)

Let's poke around and find a downloadable file of more than trivial
length (so that the average download speed has some hope of being
meaningful).  Ah, ftp://ftp.tsoft.net/pub/tsoft/pktdrvrs/ has one.  So:

[rick at linuxmafia]
/tmp $ wget ftp://ftp.tsoft.net/pub/tsoft/pktdrvrs/pktd11.zip
--21:24:45--  ftp://ftp.tsoft.net/pub/tsoft/pktdrvrs/pktd11.zip
           => `pktd11.zip'
Resolving ftp.tsoft.net... 198.144.192.42
Connecting to ftp.tsoft.net|198.144.192.42|:21... connected.
Logging in as anonymous ... Logged in!
==> SYST ... done.    ==> PWD ... done.
==> TYPE I ... done.  ==> CWD /pub/tsoft/pktdrvrs ... done.
==> PASV ... done.    ==> RETR pktd11.zip ... done.
Length: 435,420 (425K) (unauthoritative)

100%[====================================>] 435,420      102.44K/s ETA 00:00

21:24:55 (98.64 KB/s) - `pktd11.zip' saved [435420]

[rick at linuxmafia]
/tmp $ 


OK, so, in _this_ case, at least, my local download speed from just on
the other side of the ADSL link was just about identical to that from a
dozen hops further away on the Stanford campus:  about 99 kB/second, as
compared to about 102 kB/second.

But tomorrow, CentOS release day, I'll bet there's a severe bog-down.

Now, sometimes you need to provide (or read) a figure in _bits_ per
second, instead of bytes.  There are eight bits to a byte
(conventionally).  There's a small amount of protocol overhead on top of
that, but not as much as in modem days, when serial signalling added
start and stop bits.  So, I'll leave it up to you what to multiply
wget's kB/second figures by.  Note that wget assumes "k" is 1024
and "m" is 1024x1024, for whatever that's worth.
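For example, taking the ~101.5 kB/second figure from the Stanford
download above, a back-of-the-envelope conversion (using wget's k = 1024
bytes, 8 bits per byte, and ignoring protocol overhead) looks like this:

```shell
# Convert wget's reported average (kB/s, where k = 1024) to bits per second.
KBPS=101.50
BPS=$(awk -v k="$KBPS" 'BEGIN { printf "%.0f", k * 1024 * 8 }')
echo "$BPS bits/s"
```

So a sustained ~100 kB/second download corresponds to roughly 0.8
megabits per second of line capacity.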
