Why RAID-Z isn’t appropriate for me (or for almost any home user)

So, ZFS is cool.  OpenSolaris derivatives are cool.  RAID-Z is cool.  But it lacks one simple feature that other software RAID solutions handle – the ability to grow a volume by adding disks to an existing array (widening the stripe).  For instance, let's just postulate that you have three 2 TB hard disks in a RAID-Z (ZFS's RAID-5 equivalent), and you want to add 2 more to make a 5-disk volume.  Well, with ZFS, you have 2 options:

  • Back up everything on the current volume, destroy it, and create a 5-drive RAID-Z from scratch
  • Buy a third new 2 TB drive, create a new RAID-Z vdev out of the 3 new drives, and add it to the zpool (see the sketch after this list)
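
To make the second option concrete, here's a rough sketch of the zpool commands involved; the pool name (tank) and the Solaris-style device names are just placeholders for this example:

    # Original pool: one 3-disk RAID-Z vdev
    zpool create tank raidz c1t0d0 c1t1d0 c1t2d0

    # A RAID-Z vdev can't be widened, so expanding the pool means adding a
    # second RAID-Z vdev, which is why a third new disk has to be bought
    zpool add tank raidz c1t3d0 c1t4d0 c1t5d0

    # The pool now stripes across two independent RAID-Z vdevs,
    # each carrying its own parity
    zpool status tank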

Now, at first, the second option doesn't sound too bad – until you realize that you've basically created a false RAID-Z2 (RAID-6): you're spending 2 disks on parity, but without the double-parity protection.  It's false because if 2 disks fail in the same vdev, you're cooked, but you could lose one in each and be fine.  Put concretely, of the 15 possible two-disk failures across the six disks, the 6 combinations that land in the same vdev destroy the pool, while a true 6-disk RAID-Z2 would survive any of them.  Also, you're wasting money on an extra disk when you're a simple home user who wants to scale in small increments.

Neither of these issues is a problem for larger deployments – they generally already have disk space for backups (or have all the data backed up elsewhere in the first place), or are building the entire thing from scratch to store future data.  Buying extra disks isn't a problem either – they have money.  Home users do not.

So, until this is possible, I'll be using mdadm or a similar solution on OpenFiler or another Linux-based OS, since mdadm can grow an array in place (sketched below).  This is a real shame; I really wanted to start using OpenIndiana.
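
For contrast, here's a rough sketch of what the same expansion looks like with mdadm; the array name (/dev/md0), the partition names, and the assumption that it holds an ext4 filesystem are all placeholders, and a reshape like this can run for many hours:

    # Add the two new disks to the existing 3-disk RAID-5
    mdadm --manage /dev/md0 --add /dev/sdd1 /dev/sde1

    # Reshape the array from 3 active devices to 5; data is restriped in place
    mdadm --grow /dev/md0 --raid-devices=5

    # Once the reshape finishes, grow the filesystem into the new space
    resize2fs /dev/md0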