How to test the performance of your network and hard drive in Linux
Spencer Stirling

Hard drive performance
Here I will try to address the (unnecessarily complicated) issue concerning what kind of performance you should expect from your hard drive.

Physical characteristics
First, here's a rundown of the physical characteristics of your hard drive. I'll start with the mechanical aspects, and then outline the electrical interface aspects.

Mechanical characteristics
Hard drives are made of several "platters" (or disks). Each platter surface is divided up into concentric circular "tracks" (the stack of tracks at the same radius across all the platters is called a "cylinder", and people often use the terms interchangeably), and each track is divided into little chunks called "sectors". Each sector traditionally holds 512 bytes worth of data.

Now, to read and write onto the platters, we have little devices called "heads" which scan back and forth across the platters (so they can go from track to track). As the platters spin underneath the heads, the heads can either read or write information to a given track.

Since the platters are double-sided, the number of heads is actually 2*(number of platters). So disk geometry is usually specified as a number of cylinders (tracks), heads, and sectors per track.

Old School Drive Geometry
In the oldest drives the BIOS would address the drive by specifying a cylinder-head-sector (CHS) triple. The BIOS interface was capable of addressing 1024 cylinders, 256 heads, and 63 sectors. However the ATA standard for hard drives allowed for 16,383 cylinders, 16 heads, and 63 sectors. Notice that the BIOS allows for more heads, and the ATA standard allows for more cylinders! DUH!!!

So we have to take the lowest common limits for these two standards to live together in harmony. That's 1024 cylinders, 16 heads, and 63 sectors. So old school drives were limited to (1024 cylinders)*(16 heads)*(63 sectors)*(512 bytes) = 528 MB (about 504 MiB). That SUCKS!!!

Please note the following caveat: when actually addressing with CHS numbers, cylinders and heads are counted starting from zero but sectors starting from one, i.e. the first sector is CHS=(0,0,1)... hence the maximum address is CHS=(1023,15,63), NOT CHS=(1024,16,64)!!!

A Bit Better: Logical Drive Geometry
Now it's pretty easy to see how to go beyond that. The BIOS allows for 16 TIMES more heads, but the ATA standard allows for 16 TIMES more cylinders. So we can just put in a little translator between the two. The BIOS still asks for up to 1024 cylinders, 256 heads, and 63 sectors. The translator multiplies the cylinder number by 16 (giving 16,384 possible values) and divides the head number by 16 (giving 16 possible values) and passes that on to the ATA controller. Sweet. So that allows for (1024 cylinders)*(256 heads)*(63 sectors)*(512 bytes) = about 8.4 GB. This scheme was usually called "Large" or ECHS (Extended CHS) translation, and the translated numbers are the drive's "logical geometry".

(Important aside: in the above examples we said that we could have up to 16 heads in the ATA standard. That equals 8 double-sided platters! Real hard drives usually have only 3 or 4 platters. That means that the actual "logical geometry" translation is even MORE complicated than what I indicated above, but whatever... the idea is the same.)

Modern Drive Geometry: Logical Block Addressing (LBA)
But today's drives are much bigger than 8.4GB! Now they use a different scheme called Logical Block Addressing (LBA), which just lets you ask for a specific sector (the sectors are labelled serially, starting from 0 up to some really big number). Using this, the above maximal drive (CHS=(0,0,1) through CHS=(1023,255,63)) has an LBA range from 0 to (1024 cyl)*(256 heads)*(63 sect)-1 = 16,515,071. But we can address MANY more sectors than that - this is just a small 8.4GB drive!
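
In case you're curious how a CHS triple maps onto an LBA number, the usual formula for a drive with a fixed logical geometry is LBA = (C*heads + H)*sectors_per_track + (S - 1). Here's a quick shell sanity check using the maximal logical geometry from above (the variable names are just mine):

C=1023; H=255; S=63       # last addressable sector in the translated scheme
HEADS=256; SPT=63         # logical geometry assumed above
echo $(( (C*HEADS + H)*SPT + (S - 1) ))     # prints 16515071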

So now it's up to the hard drive itself to figure out how to assign its "LBA" addresses. This is good, because it allows for a number of changes.

First, we can have as many cylinders, heads, and sectors per track as we want.

Second, we can now have a different number of sectors per track depending on the track (this is called zoned bit recording). This makes complete sense: the outside tracks are clearly much LONGER than the inside tracks, hence they should be capable of holding more sectors. Now they DO.

So here are the specifications for a "modern" (although quite old by now) drive, the 34.2 GB IBM Deskstar 34GXP (model DPTA-373420):

cylinders: 17,494
heads: 10 (= 5 platters)
sectors per track: 272 to 452

Important note: if you ASK this drive how many cylinders/heads/sectors it has, it will report CHS=(16383,16,63) - this is because you are asking it a stupid question. All modern hard drives report this same geometry, and so it is worthless.

Sample calculation of (theoretical) maximum physical reading performance
Now, to figure out the actual MAXIMUM READING PERFORMANCE of this drive, you just need to know how fast the platters are turning. Let's do a different example with a drive that I actually own. It is a 20GB Western Digital WD205BA (I know... an ancient beast). I am not quite certain what the exact drive geometry is, so I'll estimate. Let's just say it is something like

cylinders: 17,494 (same numbers as above)
heads: 6 (it has 3 platters, I'm pretty sure)
sectors per track: 272 to 452 (same numbers as above).

These numbers make sense since I have 6 heads instead of 10 heads, making the drive capacity only 60 percent of 34.2GB (which is about 20GB). Now the spindle speed is 7200 RPM = 120 RPS (revolutions per second). So this means that, on the OUTSIDE track (the biggest track), I expect to be able to pick up (120 RPS)*(452 sectors)*(512 bytes)=27.7MB/s. I would guess that this is my MAXIMUM drive reading rate. Average reading will be lower - factoring in head seek time, cylinders on the inside of the platters, etc. will significantly alter this number.
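
Spelled out as a quick shell calculation (nothing here but the numbers estimated above):

echo $((120*452*512))     # 27770880 bytes/sec, i.e. roughly 27.7 MB/s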

Electrical Interface characteristics
The old ATA interface used a 40-wire ribbon cable to connect your drive to the motherboard. You may still have some of these in your computer, although I recommend swapping them out for the newer 80-wire ribbons (same size and same 40-pin connectors, but twice as many conductors - the extra 40 are grounds that cut down on crosstalk and make the faster modes possible).

The data transfer rates on that old cable weren't very good - the fastest PIO and multi-word DMA modes topped out at about 16.6 MB/s, and Ultra DMA mode 2 (33.3 MB/s) was the best a 40-wire cable could do. That's clearly not going to be good enough for the drive I mentioned above, which can actually physically READ data at a rate of 27.7 MB/s. This is not to mention the onboard cache, which should be able to max out any kind of cable that you can come up with. (Of course, in Linux the hard drive's onboard cache is not nearly as important, since Linux keeps its own disk cache in memory. It is unlikely that the drive's puny 8MB cache will actually contain something that Linux doesn't already have in memory!)

The modern Parallel ATA standard (with the 80-wire ribbons - marketed as Ultra ATA) allows for much faster transfer rates (however SATA - see below - is better still). Furthermore, the standard allows for DMA (direct memory access), which lets the drive controller shoot its data directly into memory instead of making the CPU shuttle every word of it (as in the old PIO modes). The allowed interface speeds on my Western Digital drive given above are

66.6 MB/s (Mode 4 Ultra ATA)
33.3 MB/s (Mode 2 Ultra ATA)
16.6 MB/s (Mode 4 PIO)
16.6 MB/s (Mode 2 multi-word DMA)

Fortunately, with the 80-wire ribbon, the drive is operating in Mode 4, which allows for 66.6 MB/s to pass from the drive to the motherboard. This should be plenty since the drive can only read at a MAXIMUM RATE of 27.7 MB/s. (note: a striped RAID0 array, i.e. reading from multiple disks simultaneously, could however easily max this out). Obviously the cache will max out the 66.6 MB/s.
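
Incidentally, you don't have to guess which mode a drive has negotiated - hdparm will report it. Something like the following (run as root, with your own device name substituted) lists the supported modes, with the active one marked by a star:

hdparm -i /dev/hda     # look for the "UDMA modes:" line - the mode in use is starred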

Now more modern drives also support the following

100 MB/s (Mode 5 Ultra ATA)
133 MB/s (Mode 6 Ultra ATA)

I personally have two more drives which support the Mode 5 (100 MB/s) interface, and I'll compare their performance below. Note that these data rates are PER CHANNEL (meaning that each master/slave pair must share this bandwidth). To make a long story short, the 80GB drive can read at about 45 MB/s, and the 200GB drive can read at about 55 MB/s.

Serial ATA (SATA) hard drives have now almost completely replaced the fat Ultra ATA ribbon cables. Internally these drives are the same, but the interface has been VASTLY improved. For one, we get rid of the ugly and bulky ribbon cables connecting your drives to the motherboard - each drive gets its own sleek serial cable instead. The SATA1 interface supports 150 MB/s transfer speed, and SATA2 allows for 300 MB/s (that's PER drive - not shared)! Of course, it doesn't take a rocket scientist to ask the obvious question: if the drive itself can only physically read at a maximum rate of 55 MB/s (like my 200GB drive mentioned above), then why do I need greater transfer rates? The answer is twofold: 1) as drives get bigger, a single drive can max out the Ultra ATA interface - for example, a 1.5 terabyte Seagate now reads at around 120 MB/s; 2) a striped RAID0 array (or simply accessing the master and slave disks on one channel simultaneously - see below) can easily saturate Ultra ATA.

UPDATE: I have recently purchased 4 250GB SATA drives for my testing enjoyment. Please see below for several tests of these drives, including a test of two drives configured in Software RAID0 (striped) mode.

Testing with hdparm
The utility hdparm included with Linux is your friend. Here I will show you how to test the actual performance of your drive. I will test the following 3 drives (all Western Digital, and all 7200 RPM, I think)

/dev/hda (20GB)
/dev/hdc (80GB)
/dev/hdd (200GB)

Just to be sure, I tested each one of these on its OWN interface (no master/slave configuration). I'll compare performance to master/slave configuration below.

The 20GB drive is operating in Mode 4 (66 MB/s can be transmitted through the ribbon cable), and (as calculated above) the MAXIMUM theoretical read speed is 27.7 MB/s. The 80GB drive is operating in Mode 5 (100 MB/s through the ribbon cable), and it has a MAXIMUM physical theoretical reading speed of about 45 MB/s (the calculation is similar to the 20GB drive I performed above). The 200GB drive is also operating in Mode 5 (100 MB/s) and it has a MAXIMUM theoretical reading speed of 55 MB/s.

To test the 20GB drive (as root) I issue the command hdparm -tT /dev/hda. The output is as follows (test done several times)

/dev/hda:
Timing cached reads: 1132 MB in 2.00 seconds = 565.24 MB/sec
Timing buffered disk reads: 72 MB in 3.03 seconds = 23.79 MB/sec

The "Timing cached reads" is a bogus number. I said that the maximum throughput of the cable is 66 MB/s, and this says that it read 565 MB/s. This must be because of the Linux disk-cache in memory.

The important number is 23.79 MB/s. That says that I read off of the drive at 23.79 MB/s. NOT BAD! The maximum theoretical read was 27.7 MB/s. Good!
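
(Since the numbers bounce around a little from run to run, a one-line loop saves retyping the test:)

for i in 1 2 3; do hdparm -tT /dev/hda; done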

The same test was performed for /dev/hdc and /dev/hdd, with the following results

/dev/hdc:
Timing cached reads: 1144 MB in 2.01 seconds = 570.38 MB/sec
Timing buffered disk reads: 130 MB in 3.02 seconds = 42.98 MB/sec

/dev/hdd:
Timing cached reads: 1148 MB in 2.00 seconds = 573.51 MB/sec
Timing buffered disk reads: 156 MB in 3.01 seconds = 51.87 MB/sec

That's not bad! The 80GB disk read 42.98 MB/sec, and it had a maximum theoretical read of about 45 MB/s. The 200GB disk read 51.87 MB/sec, and it had a maximum theoretical read of 55 MB/s.

Now I'll test the 80GB drive and the 200GB drive in a master/slave configuration (so they are SHARING the same ribbon, meaning they are sharing the same Ultra ATA Mode 5 100 MB/s interface). This should cause some slowdown, since both of them together in theory max out the ribbon cable (45MB/s + 55 MB/s = 100 MB/s), and of course there's always overhead and collisions and such (you have to pay the tax man).

Here's the (not very) surprising result: if the hdparm test is performed on each drive separately (at different times), then there is NO NOTICEABLE SLOWDOWN! But if I perform the tests at precisely the same time (I opened up two shells and started the tests simultaneously), then I get the following results

/dev/hdc:
Timing cached reads: 556 MB in 2.10 seconds = 265.31 MB/sec
Timing buffered disk reads: 108 MB in 3.00 seconds = 35.96 MB/sec

/dev/hdd:
Timing cached reads: 788 MB in 2.00 seconds = 393.27 MB/sec
Timing buffered disk reads: 124 MB in 3.03 seconds = 40.96 MB/sec

So there IS a slowdown if I access both master and slave drives at the same time! The 80GB drive slowed from about 43 MB/s to 36 MB/s, and the 200GB drive slowed from about 52 MB/s to 41 MB/s.

But they DON'T slow down if I don't access them at the same time. This is good, because one of my drives is a Windows drive, and the other is a Linux drive. I set them up as master/slave, and I rarely access them simultaneously!

Actual performance from the filesystem
The above tests were physical tests of the hard drive. Now if I want to copy files around, I need to deal with the file system overhead. Here are a set of tests that I performed on the 200GB drive. Remember, it has a maximum theoretical read of about 55 MB/s. The drive is about half-full of data, and the filesystem is Reiser 3.6.

I started reading big files (700 MB) off of the drive using the command

time cat bigfile > /dev/null

This tells Linux to just read the file from the drive as quickly as possible and do nothing with it (and time the operation, as well). My results were surprising. The range was HUGE! Some files read off at about 53 MB/s, and some were as low as 11 MB/s. I assume that the difference has to do with the fragmentation of the file (forcing the heads to seek many different cylinders), where it is physically located on the hard drive, etc. It is important to read off many different files (and remember that the kernel caches files in memory, so reading off the same file immediately again will be MUCH faster, and won't tell you anything except how effective caching is).
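
To get a decent sample I just looped over a handful of different big files (these paths are made up - point it at whatever large files you have lying around), taking care not to time the same file twice in a row:

for f in /data/movie1.avi /data/movie2.avi /data/backup.tar; do
    time cat "$f" > /dev/null
done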

For writing performance testing, I used the following method (summed up in a few shell commands after the list):

1) Mount the filesystem with the "sync" option so that writes go straight to the drive instead of sitting in the kernel's write-back cache. (Strictly speaking this doesn't touch the drive's own onboard write cache - hdparm -W0 /dev/hdd will turn that off too if you want to be thorough.)

2) Read "bigfile" into the READ memory cache with the command "cat [bigfile] > /dev/null" (make sure that your file is not TOO big, otherwise it won't fit into the cache. I use 300-500 MB files). Do this several times until the command executes instantly.

3) Use the command "time cp [bigfile] [destination]" to specifically time how long it takes to copy a big file to the given filesystem.

4) Repeat this several times with several different files to get a good idea how consistent these results are.
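
Putting those four steps together, the procedure looks roughly like this (the mount point and file names are placeholders - adjust to your own setup):

mount -o remount,sync /mnt/data              # 1) make writes synchronous
cat /mnt/data/bigfile > /dev/null            # 2) repeat until this returns instantly (file is cached)
time cp /mnt/data/bigfile /mnt/data/copy1    # 3) time the write
rm /mnt/data/copy1                           # 4) clean up, then repeat with other files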

Using this method, the same 200 GB drive *actually* writes at about 40 MB/s. Obviously with the WRITE CACHE enabled (mounted "async") those numbers will be MUCH MUCH higher (you'll probably get "unphysical" results).

250GB Seagate SATA Drives
I recently (July 2005) purchased 4 identical 250GB SATA drives, and here are the results of testing on this interface. The computer is an AMD 64 with 1GB RAM (I could use more RAM, I know).

As far as hdparm is concerned, these drives have a maximum reading performance of 58 MB/s (although hdparm probably shouldn't be trusted with SATA, anyway). This is still a *little* slower than I would expect from my calculations above considering that the drive is 25% bigger. (remember that my old 200GB PATA drive discussed above had an hdparm of about 52 MB/s - the 250GB SATA *should* be faster).

Now let's read/write to the SATA drives with a Reiser 3.6 filesystem. Again, the drives are about half full. Using "cat" to read several big files as quickly as possible, I found that the drive reads files off at a maximum of 55 MB/s.

Using the "time cp" writing method I get about 47 MB/s. So I get a read/write rate of 55/47 MB/s for a 250GB SATA drive versus a 53/40 MB/s read/write rate for a 200GB PATA drive (although I expected the SATA drive to be faster just because of sheer SIZE). Notice that this drive does not even get close to maxing out the SATA1 bandwidth (150MB/s per drive), nor even the Ultra ATA modes 4 or better.

Software RAID0 (striped) performance
I set up two of my SATA 250GB drives in a Software RAID0 (striped) configuration and performed all of the above "cat" and "time cp" reading/writing tests on the resulting configuration. The claim is that, by splitting the data transfer across two drives, the transfer rate should be nearly double.

You can see my article on how I set up my RAID system. The configuration is very simple - I have no need to boot from RAID. On my old Athlon XP the results were disappointing: the RAID array read at a maximum rate of 52 MB/s (I don't trust the WRITING performance numbers, so I won't bother giving them - they were dismal). A single drive was reading/writing at 55/47 MB/s, so RAID0 was no improvement on THAT system (in fact it was much worse). I will point out, however, that I was using a PCI SATA controller card - plain PCI has a maximum system-wide bandwidth of 133 MB/s, and the bus could easily be maxed out by both drives in combination with all of the other peripherals communicating on it (sound card, network card, etc).
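
For reference, creating a two-disk striped array with the Linux md driver boils down to something like this (the device names and chunk size here are illustrative, not necessarily what I used - see my RAID article for the real setup):

mdadm --create /dev/md0 --level=0 --raid-devices=2 --chunk=64 /dev/sda1 /dev/sdb1
mkfs.reiserfs /dev/md0
mount /dev/md0 /mnt/raid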

Now with the Athlon 64 3000+ with 1 GB memory and PCI Express 1.0 (that's 250 MB/s per lane, and there can be many lanes) I am getting at least PART of the RAID0 throughput that I expect! With PCI Express I have achieved ACTUAL FILE READS (using "time cat [bigfile] > /dev/null") of UP TO 96 MB/s (as compared to 55 MB/s for a regular 250GB SATA drive as determined above)!!! Again, the SATA interface has nothing to do with it - with the *much* faster PCI Express system bus, the bottleneck on the striped RAID has been lifted (I would never have noticed it with a single drive).

Unfortunately my WRITE performance (without caching) takes a HUGE penalty - I'm routinely getting only about 16 MB/s, which is about a THIRD of the regular SATA writing rate of 47 MB/s. So software RAID0 seems like a tradeoff between faster reading and slower writing.

10,000 rpm drives are probably a waste of money
Using the above calculations it is easy to see that all of these so-called "top-end" awesome drives (like the Western Digital Raptor) are just a nice way to throw your money down the toilet. Here's my reasoning:

I just purchased 250GB (7200 rpm) drives (SATA or IDE, doesn't matter - see above) for about $100. However, I noticed that the Western Digital 74GB Raptor (10,000 rpm) was around $200. All of the gearheads out there seem to love this drive, so let's just perform some simple calculations. As far as sustained transfers go, the main difference is the spindle speed (seek times are somewhat better too, but that matters mostly for random access, not the sequential reads measured here). This means that, in theory, a 10,000 rpm drive should be about 10000/7200 = 1.4 times faster than a regular 7200 rpm drive of the SAME SIZE. (As mentioned above, bigger drive = faster drive.)

So let's see what the performance is taking the numbers from my calculations above. My 80GB 7200rpm drive had an hdparm read test of 43 MB/s. Taking this as a baseline number (I'm being generous because 80GB > 74GB) and multiplying by 1.4 I see that the Raptor should have an hdparm of around 60 MB/s. Now compare this to the hdparm for my 250GB drive (58 MB/s) and you see that the speed boost is very slight. Considering that you get over 3 times the disk space for half the price when you buy the bigger drive, there is really no choice. This is why these smaller "faster" drives are a waste of money - they SURE ARE SMALLER, but they're NOT REALLY FASTER (and they are extremely expensive).

Network performance
I felt that my home network was SO SLOW a couple of months back, so I decided to do some testing and troubleshooting to see if I could improve it.

Hardware
I figured that the problem had to be fundamentally a hardware issue. I have my laptop and my main computer sitting on the network, and both have 100 Mb/s cards in there (Mb/s = MegaBITS per second, MB/s = MegaBYTES per second). In "full-duplex mode" this means that the computer can send information at 100 Mb/s and receive information at 100 Mb/s at the same time. Now 100 Mb/s = 12.5 MB/s (8 bits = 1 byte), so I was thinking that I *should* be able to transfer files somewhere in the ballpark of this rate.

But it wasn't happening... 700 MB files were taking hours when they should be taking ONLY about 1 minute to copy from one computer to another.

First, I had a look at the hub that they were plugged in to. Behold, it was a crappy old hub - only 10 Mbps. So I went out and replaced it with a 1000/100/10 Mbps switch. Switches are fundamentally better than hubs. A hub is almost just a dumb way of putting wires together. Traffic is broadcast across ALL machines on the network, even if only two computers are talking. Furthermore, if there is one machine on the network that can only talk at 10 Mbps, then the WHOLE network will talk at 10 Mbps.

Switches, on the other hand, are smarter. Each interface is negotiated separately, so that different machines can talk at different speeds. Furthermore, the switch collects traffic and ONLY redirects it to its intended recipient. This serves to remove a LOT of the clutter on the network. I purchased one that allows for 1000 Mbps (so-called Gigabit) because I plan on putting a server on the network in the near future, and I want to be able to talk to it at lightning fast rates.

In the meantime, everything is configured to speak at 100 Mbps full duplex - that's just fine for now. Fortunately, my switch has indicator lights that tell you the speed that each connection has negotiated - in my case, all were blinking 100 Mbps.

Now I have DSL, so that's maybe 1 Mbps - that doesn't even make the old 10 Mbps hub break a sweat (let alone the 100 Mbps that I am using now, or the 1000 Mbps that I want to use in the future). So I really need to test between two machines on the local network, not on the internet.

Fortunately, if both machines are running Linux (of course, they are) then there is a utility called netperf that allows you to test your connection speed (the Debian package is called netperf as well). This MUST be installed on BOTH machines between which the test will be performed (the package also provides a daemon called "netserver", which has to be running on the machine you point netperf at).
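
On Debian the setup is as simple as something like this on each machine (the package normally starts the daemon for you, but it doesn't hurt to start it by hand and make sure):

apt-get install netperf     # do this on both machines
netserver                   # start the daemon by hand if it isn't already running (it listens on port 12865)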

So I walked over to "machine1" and ran netperf -H machine2. The results were as follows

TCP STREAM TEST to machine2
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 87380  16384  16384    10.01      94.12

So that's good... machine1 was talking to machine2 at a rate of 94.12 Mbps. The theoretical max is 100 Mbps, so that's great! Now I walked over to machine2 and ran netperf -H machine1. The results were that machine2 had "Throughput" to machine1 at about 0.08 Mbps. How terrible!!!

How to fix this wasn't easy. There are actually SEVERAL problems on my network (and not all of them are fixed permanently). The first was that I had a bad cable. I was able to figure this out by swapping cables around and pinging different machines on the network. 100 Mbps requires at least Cat5 cable, and mine were all rated Cat5e, so the cable category wasn't the problem (Cat5e is in fact rated for 1000 Mbps as well, although Cat6 gives you more headroom).

I basically took two machines, established that they could talk to each other at 100 Mbps, and then swapped every cable in there until I found the bad one (actually, it was far more annoying than that, but I don't care to talk about it). I was pinging between the machines, and I noticed that a fair amount of traffic would be lost (like 14% or something) - that's how I knew that I had a flaky cable.
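
A quick way to spot that kind of flakiness is to fire off a burst of pings and look at the loss figure in the summary (substitute whatever hostname or IP you're testing against):

ping -c 100 machine2     # the summary line reports "...% packet loss"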

Then I figured out that my laptop has a flaky network card built in. It works sometimes, but it often negotiates poorly, giving me the bad performance mentioned above. I have found that I often must disconnect the cable (sometimes several times) and force the network card to negotiate another connection with the switch. I have tried everything - turning off auto-negotiation, etc. (using the Linux utility ethtool), but still no success. I just have to live with it. As a result, when I try to transfer files to/from my laptop (or set up an NFS connection), I always test the connection to the other machine by using netperf in both directions. About half of the time I am forced to unplug the network cable from the laptop and try it again. C'est la vie.
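
For what it's worth, here is the sort of ethtool incantation I was fiddling with (eth0 here is just the laptop's interface name - it didn't cure my flaky card, but it's the right tool for poking at negotiation):

ethtool eth0                                        # show the current link speed/duplex/negotiation
ethtool -s eth0 speed 100 duplex full autoneg off   # force 100 Mbps full duplex
ethtool -s eth0 autoneg on                          # or turn auto-negotiation back on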

In the end, when all goes well, both machines "netperf" each other at about 90 Mbps - and that's good enough for me.

Actual file transfer
OK, let's try to figure out what kind of speeds I can expect. I said above that, on a 100 Mbit/s network, in theory I can expect to copy a file from one machine to another at about 12.5 MBytes/s.

Obviously, there's going to be a tax man associated with this, so I don't expect to get anywhere near that rate. If I use sftp or NFS to copy files over the network then I generally get performance in the range of 2 MB/s - 5.5 MB/s. That's not too bad, actually.

The actual performance seems to be heavily influenced by the speed of the hard drive and CPU. For example, it is much faster to copy from my laptop ONTO my main machine (which has a fast hard drive and faster processor) than to copy from my main machine ONTO my laptop (which has a slow drive).

It is worthwhile trying to figure out *exactly* how fast file transfer is without having the hard drive mucking things up. To do this, I ran the following test. First, I set up an NFS connection between my laptop and my main computer. Then, on my main computer, I ran the following command:

time cat [100MB file] > /dev/null

This read the file into the memory cache on the main computer (so any later access to it will be very fast, since it is already in memory). The "time" command verified that the process took about 2 seconds. This makes sense, since 100MB/2sec = 50MB/s (which is close to the maximum speed at which my hard drive can read files). Repeating the command, I saw that it now took only about 0.431s - much faster, verifying that the file was indeed in the memory cache.

Then I walked over to my laptop and browsed to the same file (over the network using NFS). Again (on the laptop this time) I repeated the command. What did this do? It read the file (across the network) and did nothing with it (did NOT write to the hard drive). Since the file was already in the main computer's memory, there was no hard drive latency in reading the file. This implies that neither hard drive can be blamed anymore for slow file transfer. The test results were conclusive - it took about 10 seconds to transfer the file. Hence, I managed to get 100MB/10sec=10MB/s. That's NOT BAD!!! Remember that the THEORETICAL maximum was 12.5MB/s. Furthermore, remember that my actual "netperf" results were not *quite* 100Mbit/s=12.5MB/s, but were instead about 90Mbit/s=11.25MB/s.
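
In commands, the laptop side of that test looked something like the following (the hostname, export path, and file name are placeholders, not my real setup):

mount -t nfs mainbox:/data /mnt/nfs         # mount the main computer's NFS export
time cat /mnt/nfs/bigfile > /dev/null       # pulls the file across the network, writes nothing to disk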

Note that I get similar results (11 MB/s) with FTP. I use the same technique on the main computer to read the file into the READ cache, and then within FTP (from the laptop) I use the command

get [100MB file] /dev/null

So, in conclusion, taking the hard drives OUT OF THE PICTURE, I managed to achieve about 90% efficiency over NFS (10MB/s divided by 11.25MB/s) and nearly 100% efficiency over FTP. Incidentally, SCP was much slower at 6.5 MB/s, and SFTP was the slowest at about 4 MB/s.

On the other hand, if I'm actually reading/writing files from/to the hard drives then FTP is STILL the best performer (10 MB/sec). SCP is now second (6.5 MB/s), and NFS and SFTP are about equal: 20-50% efficiency (2MB/s to 5.5MB/s). It is clear how much effect hard drive and CPU speed can have on network performance.

Gigabit
I have now tested a Gigabit network that I recently configured using two Athlon 64s with 1GB RAM each (separated by maybe 10 ft of Cat5e cable). For a long time the machines would only "netperf" each other at about 350 Mbit/s, but now they seem to have spontaneously started netperfing at 800 Mbit/s (shrug). Using a similar technique as above to "take the hard drives out of the picture" I can get 85 to 95 MB/s with FTP. That's great - FTP is almost 100% efficient once again. SFTP gives me 39 MB/s in the same scenario.

For ACTUAL file transfer (hard drive, CPU, and filesystem) FTP seems to give me around 40 MB/s (but this varies widely). It seems that the hard disks have become the bottleneck.

SFTP is obviously even slower for ACTUAL file transfer - anywhere from 10 to 29 MByte/s for large files (but hey... before I was getting 2 to 5.5 Mbyte/s on a 100 Mbps network!!!). Clearly the Gigabit network makes a HUGE difference.
