Monthly Archives: August 2011

Hurricane Irene: MREs and RACES lessons learned

Last night, I participated in a RACES amateur radio net, manning 2 different fire stations over the course of 12 hours (2000 – 0830).  This was my first time doing something like this, so I brought what I thought I would need:

  • 2m/440 radio (IC-92AD)
  • 12v AGM battery (just in case)
  • Cellphone with unlimited 3g and tethering
  • Laptop
  • Power strip

All of these but the battery proved useful (and the battery is currently useful at home).  But, I found my setup was lacking quite a bit.  First of all, I need a better 2-meter antenna.  My rubber duck performed really well considering the circumstances, but had I been a bit further away, I wouldn’t have been able to get into the repeater.  So, my first purchase is going to be a 2-meter antenna, probably 5/8-wave or larger.  Another ham mentioned using speaker-stands as a tripod, and had a nice little setup of a plywood stand and some sandbags to anchor the tripod.  I’ll probably duplicate this somewhat.  I’ll need to get some long aluminum pipes for a mast, as well.

The power strip really came in handy, especially with these completely awesome wall-wart extensions with passthrough plugs.  They’re stackable, so you can even plug 3 wall-warts into the same single outlet and leave the power strip at home!  I still like having it – it gives me peace of mind.  I’m going to buy more of these extensions, though…

While I was setting up my station, I realized that if I needed to go 12v, many of the power adapters I had terminate in a cigarette lighter plug (yes, I know the correct term for it now is “auxiliary outlet,” but that’s a bit ambiguous IMHO).  I definitely need to incorporate one (or preferably more) of these into the power junction box I plan to build.  The other connector I desperately need to adopt is Anderson Power Poles.  I have about 30 of them and a crimper, so I’m setting to work today to start remedying that.  One thing I found out is there’s an ARES/RACES standard way to mate them (red on right, facing away).  But what if you super-glue your connectors that way and run into some nincompoop who did it backwards?  Well, I think creating a polarity switcher by crossing a wire is probably a good answer.  I’ll label it profusely once I make it, like one would (hopefully) do with a crossover cable.

Driving down the road, I reported several downed trees to the Howard County Emergency Operations Center.  But, I realized that I only had one road flare in the car.  Next time, I’ll bring at least 3 so I can help mark these kinds of hazards immediately.

Another thing that would be great to have is a configurable wall wart that supports quite a few DC ends, polarity reversing, and at least several common voltages.  This is really nice to have in the house, and I’d think it would be even nicer to have in an emergency.

I really need an SMA -> N-connector cable.  I want to standardize on N-connectors instead of PL-259/SO-239.  I’m looking to buy a really nice one of these.

Finally, I need to get another 2m mobile rig.  An HT works, but it’s too low-power to be a great solution.  Also, it’s nice to be able to have a base station set up, and carry around your HT as a backup if you need to walk away.  Hooking up an antenna is easier and more stable as well.

I highly recommend RACES/ARES operators get a phone with unlimited 3g internet.  I have the Virgin Mobile LG Optimus V, which allows tethering and only costs $25/month with no contract.  You could even keep it deactivated, and re-activate it right before an event if you wanted to save money.  I use it as my main phone, so I keep it paid up all the time.  This kept me online all night even when the fire station didn’t have wireless.  I was able to read and send email, watch radar and weather reports, etc.  If the power or phone lines had gone down, it wouldn’t have been a problem.

Now to talk about MREs – our power’s been out for 12 hours, so we tried some MREs today:

  • Italian Style Sandwich: 4/10.  This was pretty terrible.  I had it when I got home, then went to sleep.
  • Buffalo Chicken Entree: 8/10.  Very good.  Kind of saucy, but Kelsey and I (and Ruby) all liked it.
  • A-Pack Chicken Noodle Entree: 9/10.  This was my favorite.  It needs copious amounts of pepper, but it’s totally worth it.

Fill an LVM volume group completely with a single logical volume

I learned a cool LVM trick today – how to resize a logical volume to use a certain percentage of a volume group.  Since I just have one logical volume in the group, I did the following:

[root@nyu ~]# lvextend -l +100%FREE /dev/diclonius/vector
  Extending logical volume vector to 7.28 TiB
  Logical volume vector successfully resized

Thanks, Red Hat documentation!
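One follow-up step worth noting: lvextend grows the logical volume, but the filesystem inside it doesn’t grow on its own.  Assuming the volume holds an ext4 filesystem (an assumption on my part – substitute your filesystem’s own resize tool):

```shell
# grow the filesystem to fill the newly-extended LV (ext2/3/4; works online)
resize2fs /dev/diclonius/vector
```

Newer versions of lvextend can also do both steps at once with the -r/--resizefs flag.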

In other news, I ran a performance test on my 5-drive RAID-5 using HD204UI drives from Samsung:

[root@nyu tmp]# dd if=/dev/zero of=foo count=5 bs=$((1024*1024*1024))
5+0 records in
5+0 records out
5368709120 bytes (5.4 GB) copied, 18.1434 s, 296 MB/s
[root@nyu tmp]# dd if=foo of=/dev/null                         
10485760+0 records in
10485760+0 records out
5368709120 bytes (5.4 GB) copied, 15.2682 s, 352 MB/s

So, 352MB/s reads and 296MB/s writes! Not bad!!

Why RAID-Z isn’t appropriate for me (or for almost any home user)

So, ZFS is cool.  OpenSolaris derivatives are cool.  RAID-Z is cool.  But it lacks one simple feature that other software RAID solutions handle – the ability to grow a volume by adding disks to an existing array (increasing the stripe width).  For instance, let’s postulate that you have three 2TB hard disks in a RAID-5, and you want to add 2 more to make a 5-disk volume.  Well, with ZFS, you have 2 options:

  • Back up everything on the current volume, destroy it, and create a 5-drive RAID-Z from scratch
  • Buy a third 2TB drive, create a new RAID-Z vdev out of the 3 new drives, and add it to the zpool

Now, at first, the second option doesn’t sound too bad – until you realize that you’ve basically created a false RAID-Z2 (RAID-6), since you’ve got 2 parity disks.  It’s false because if 2 disks fail in the same vdev, you’re cooked, but you could lose one in each and be fine.  Also, you’re wasting money on an extra disk when you’re a simple home user who wants to scale in small increments.
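Concretely, the second option boils down to something like this (pool and device names here are made up for illustration):

```shell
# pool "tank" already contains one 3-disk raidz1 vdev
zpool add tank raidz c2t0d0 c2t1d0 c2t2d0

# "zpool status tank" will now show two raidz1 vdevs – 2 parity
# disks total, but each vdev can only survive a single failure
```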

Neither of these issues is a problem for larger deployments – they generally already have disk space for backups (or have all the data backed up in the first place), or are building the entire thing from scratch to store future data.  Buying extra disks isn’t a problem either – they have money.  Home users do not.

So, until this is possible, I’ll be using mdadm or a similar solution on OpenFiler or another Linux-based OS.  This is a real shame; I really wanted to start using OpenIndiana.

Maximizing rsync performance between Linux and Solaris

I am now the proud owner of an OpenIndiana server, and I’ve been moving files to it over gigabit ethernet for the past few hours.  During this time, I’ve made some important realizations, and I figured I’d note them here for everyone’s benefit.  My transfers started off at about 10MB/s sustained – right around 100Mbit/s speeds, but on a gigabit network.

1. Ethernet Cables

Something we don’t think about too often these days is the type/quality of Ethernet cable we’re using in our homes.  I certainly thought I was using CAT5e, until I actually looked today and found my desktop machine was hooked up with a plain-Jane CAT5 cable.  Yuck – that’s in the garbage now.  After that change, I noticed a small improvement in sustained transfer speed, though it still held at around 12MB/s.

2. MTU

If you have 2 gigabit cards that support it, and a network switch that supports it, you can get better speeds by increasing the maximum transmission unit of your network card.  In Linux, we do it like this:

hank☢barad-dur:~ % sudo ifconfig eth0 mtu 8000

In Solaris, or its derivatives, you do it like this:

root@nyu:~ # ifconfig e1000g0 mtu 8170

You also have to enable that mtu in /kernel/drv/e1000g.conf! I found that out thanks to this post. It’s quite easy – this is what mine looks like:

# 0 is for normal ethernet frames. 
# 1 is for upto 4k size frames. 
# 2 is for upto 8k size frames. 
# 3 is for upto 16k size frames.

Each position corresponds to an interface number, so the first value configures e1000g0 (set to 2 in this example, i.e. frames up to 8K), the second configures e1000g1, and so on.  My switch only supports 9K jumbo frames, so this was fine.
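For reference, the property those comments describe is MaxFrameSize; the actual line looks something like this (a sketch – your stock file may list entries for more interfaces):

```
MaxFrameSize=2,0,0,0;
```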

This got me a little more stability, but I was still basically capped at 100Mbit (13MB/s). Time to roll out the big guns!

3. Rsync compression

The -z option in rsync compresses files before they’re sent.  I have nice beefy CPUs on both ends, so I thought that wouldn’t hurt – I was completely wrong about this.  For some reason it slows down the transfer by about 50% here.  CPU usage is very low on both machines, so this is really confusing, but as a general rule, do not use compression when transferring files with rsync on a LAN.  So, now that it’s off, I’m up to about 200Mbit/s.  Not bad, but we can do better!

4. Rsync method

So, when you run an rsync like this:

rsync -arxWh --progress . root@

You’re telling rsync to log in to the destination (using rsh or ssh) as the root account.  Now, if rsh is selected, then everything is peachy and you’ll get great rates.  But, if ssh is selected, you’ll get encryption overhead, and your throughput will be reduced significantly (not to mention CPU usage will be higher).  There’s a fix for this – on the destination system, run an rsync daemon.  The instructions to do so can be found all over, but these were helpful for me.  I set up the rsyncd.conf and secrets file, and just ran rsync --daemon, which backgrounds itself.  I then executed this on the sending machine:

rsync -arxWh --progress . rsync://hank@

And immediately got another 10MB/s (!!) bump in speed. So, now files are cruising over the network at around 300Mbits/s, which is good enough for now. If I didn’t have a crappy Marvell onboard network interface on my host machine, and actually got a real gigabit card (I have a PCI-E one in the mail that will supposedly do full gigabit), this would be a lot faster. For now, I’ll just have to deal with 1/3 of its potential.
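For reference, the daemon side boils down to a small rsyncd.conf.  Here’s a minimal sketch (the module name and path are invented; only the hank user matches the command above):

```
# /etc/rsyncd.conf – minimal sketch
uid = root
use chroot = yes

[files]
    path = /tank/files
    read only = false
    auth users = hank
    secrets file = /etc/rsyncd.secrets
```

The secrets file is just username:password lines, and rsync will refuse to use it unless it’s unreadable by others (chmod 600).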

Telling bash to step aside for zsh

So, let’s say that you want to change your shell to zsh, but fall back to bash if it isn’t available on whatever system you’re using. This is useful if you use something like NIS or LDAP with home directory NFS, since you’ll be sshing around and bringing your .bashrc with you everywhere. The solution is pretty simple – just add this to the bottom of your .bashrc:

hash zsh 2>&- && exec zsh

Update: This breaks X-windows. Need to figure out why…
Update: My esteemed colleague informed me that I should check for an interactive shell first – a non-interactive shell is a dead give-away that something like X’s session scripts is calling bash:

if [ ! -z "$PS1" ]; then
  hash zsh 2>&- && exec zsh -l
fi
A zsh adventure

Oh yeah, zsh guys.  It’s awesome, and you should know that, but what makes one switch from good ol’ bash?  Generally, it’s prompt magic, but really there are some other nice features that make it worth considering.  But first, prompt magic.  I had some trouble getting the prompt to respect the width of my terminal, but it was resolved by escaping some things I had in there.  Here’s what I ended up with – you just need the following in your .zshrc in your home directory:

autoload -U colors && colors

# Prompt Customization
PS1="%{$fg[blue]%}%n%{$fg_bold[yellow]%}✯%{$fg[blue]%}%m%{$fg_bold[black]%}:%{$fg_no_bold[green]%}%~ %{$fg_bold[black]%}%#%{$reset_color%} "
RPS1="%{$fg_bold[black]%}%D{%Y%m%d %H:%M}%{$reset_color%}"

Yes, that’s a unicode character.  That’s perfectly legal, which is awesome.  Next, we have to get bash reverse history search functionality working in zsh, which is really easy with a simple keybinding for control-r:

## History search like bash
bindkey '\e[3~' delete-char
bindkey '^R' history-incremental-search-backward
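One related note: incremental search is only as useful as the history behind it, and zsh doesn’t save any history to disk by default.  Something like this in the same .zshrc helps (the sizes are arbitrary – tune to taste):

```
# keep a real history file so ^R has something to search
HISTFILE=~/.zsh_history
HISTSIZE=10000
SAVEHIST=10000
setopt inc_append_history   # write each command as it's run
```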

And we’re off to the races.  So far, I really like it.  I recommend that you *nix nerds try it.

OpenIndiana Jones

So, after a bunch of research about building a DIY NAS, I decided to buy a whole bunch of hardware to do so. But, the real question was which software to use. FreeNAS seems to be the most popular solution, and I heard it was better than something called OpenFiler. Then I stumbled across NexentaStor, which is free for any NAS under 18TB in size – fine for me. I was basically ready to go with that, but then I heard about OpenIndiana and the napp-it web GUI. Basically, OpenIndiana is the result of OpenSolaris getting closed by Oracle. Since Oracle shut down the openness, the last open version of the operating system has been “sporked” into OpenIndiana. I just installed it in a VM, and I’m impressed, especially with the pool management of ZFS.

But, being a hardcore Linux user for about 8 years, I’ve gotten used to certain things working a certain way. This post is just a little note to myself, and to others potentially, about what I didn’t like about the base install, and how I fixed it.


So, nicely, the machine comes with vim 7.2 installed, which is fantastic.  The problem is that it runs in vi-compatible mode by default.  Gotta shut that down.  Solaris apparently keeps the vimrc file hidden away in /usr, so we have to do this:

echo "set nocompatible" | sudo tee -a /usr/share/vim/vimrc

I also added the following lines for good measure to the same file using vim:

syntax on
set bg=dark
set ts=4
set sw=4

Now I have a real working copy of my favorite editor. That’s more than half-way to happiness for me. More to come.

Update: grep

So, now that I’m getting settled, I’ve been doing a bunch of shell work, and there’s something I noticed:

root@nyu:/etc # grep -R 2,2 *
grep: illegal option -- R 
Usage: grep -hblcnsviw pattern file . . .

That’s right – the default grep is crappy Solaris grep, not good old GNU grep! So, I checked it out, and the way to solve this is to use ggrep, which I will alias to grep, of course.

root@nyu:/etc # alias grep="ggrep"
root@nyu:/etc # grep 
Usage: ggrep [OPTION]... PATTERN [FILE]...
Try `ggrep --help' for more information.
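A broader fix than aliasing each tool individually is to put the GNU userland first in your PATH – OpenIndiana ships the GNU utilities in /usr/gnu/bin (worth verifying on your install):

```shell
# prefer GNU grep/sed/tar/etc. over the Solaris versions
export PATH=/usr/gnu/bin:$PATH
```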