I am now the proud owner of an OpenIndiana server, and I’ve been moving files to it over gigabit Ethernet for the past few hours. Along the way, I’ve made some important realizations, and I figured I’d note them here for everyone’s benefit. My transfers started off at about 10MB/s sustained, which is right around 100Mbit/s – on what should be a gigabit network.
1. Ethernet Cables
Something we don’t think about too often these days is the type/quality of Ethernet cable we’re using in our homes. I certainly thought I was using CAT5e until I actually looked today and found my desktop machine was hooked up with a plain-Jane CAT5 cable. Yuck – that’s in the garbage now. After that change, I noticed a small improvement in sustained transfer speed, but still holding at around 12MB/s.
2. Jumbo Frames
If both of your gigabit cards and your network switch support it, you can get better speeds by increasing the maximum transmission unit (MTU) of your network card. In Linux, we do it like this:
hank☢barad-dur:~ % sudo ifconfig eth0 mtu 8000
In Solaris, or its derivatives, you do it like this:
root@nyu:~ # ifconfig e1000g0 mtu 8170
You also have to enable that MTU in /kernel/drv/e1000g.conf! I found that out thanks to this post. It’s quite easy – this is the relevant part of mine:
# 0 is for normal ethernet frames.
# 1 is for up to 4k size frames.
# 2 is for up to 8k size frames.
# 3 is for up to 16k size frames.
MaxFrameSize=2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0;
Each position in the MaxFrameSize list corresponds to an interface instance, so the first value applies to e1000g0 (set to 2 here, for 8k frames), the second to e1000g1, and so on. My switch only supports 9K jumbo frames, so this was fine.
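Once the MTU is raised on both ends, it’s worth confirming that jumbo frames actually make it across the switch before trusting the numbers. A don’t-fragment ping sized just under the MTU does the trick – this is just a sketch, and the host IP is an example, not necessarily my server:

```shell
# An IP header (20 bytes) plus ICMP header (8 bytes) consume 28 bytes
# of each packet, so the largest ping payload that fits is MTU - 28.
MTU=8000
PAYLOAD=$((MTU - 28))
echo $PAYLOAD    # 7972

# Linux: -M do sets the don't-fragment bit, so an oversized frame
# fails loudly instead of being silently fragmented along the way.
# ping -c 3 -M do -s $PAYLOAD 192.168.1.8

# Solaris/OpenIndiana: -D sets don't-fragment; size and count follow the host.
# ping -D -s 192.168.1.8 $PAYLOAD 3
```

If the don’t-fragment ping comes back, every hop between the two machines is passing jumbo frames.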
This got me a little more stability, but I was still basically capped at 100Mbit (13MB/s). Time to roll out the big guns!
3. Rsync compression
The -z option in rsync compresses data before it’s sent. I have nice beefy CPUs on both ends, so I figured it couldn’t hurt – I was completely wrong about this. For some reason it slows the transfer down by about 50% here. CPU usage is very low on both machines, so this is really confusing, but as a general rule: do not use compression when transferring files with rsync on a LAN. With it off, I’m up to about 200Mbit/s. Not bad, but we can do better!
4. Rsync method
So, when you run an rsync like this:
rsync -arxWh --progress . root@192.168.1.8:/diclonius/data
You’re telling rsync to log in (using rsh or ssh) to 192.168.1.8 as root. If rsh is selected, everything is peachy and you’ll get great rates. But if ssh is selected, you’ll pay for the encryption overhead, and your throughput will be reduced significantly (not to mention higher CPU usage). There’s a fix for this – run an rsync daemon on the destination system. The instructions to do so can be found all over, but these were helpful for me. I set up the rsyncd.conf and secrets file, and just ran rsync --daemon, which put itself in the background. I then executed this on the sending machine:
rsync -arxWh --progress . rsync://root@192.168.1.8/data
And immediately got another 10MB/s (!!) bump in speed. So now files are cruising over the network at around 300Mbit/s, which is good enough for now. If my host machine didn’t have a crappy onboard Marvell network interface and had a real gigabit card instead (I have a PCI-E one in the mail that will supposedly do full gigabit), this would be a lot faster. For now, I’ll just have to deal with 1/3 of its potential.
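For reference, the daemon setup above needs very little configuration. A minimal sketch of the sort of thing I mean – the module name matches the /diclonius/data path used above, but the auth user, password, and file locations here are placeholders, not my exact files:

```
# /etc/rsyncd.conf (location may vary by system)
uid = root
use chroot = yes

[data]
    path = /diclonius/data
    read only = no
    auth users = backup
    secrets file = /etc/rsyncd.secrets

# /etc/rsyncd.secrets -- must be mode 600, one "user:password" per line
# backup:somepassword
```

The [data] module name is what appears after the host in the rsync:// URL.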