Tag Archives: raid

Fill an LVM volume group completely with a single logical volume

I learned a cool LVM trick today – how to resize a logical volume to use a percentage of its volume group's free space.  Since I have just one logical volume in the group, I did the following:

[root@nyu ~]# lvextend -l +100%FREE /dev/diclonius/vector
  Extending logical volume vector to 7.28 TiB
  Logical volume vector successfully resized
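One caveat the transcript doesn't show: lvextend only grows the logical volume, not the filesystem sitting on it. A sketch of the follow-up step, assuming an ext4 filesystem on my LV (use xfs_growfs for XFS):

```shell
# Grow the filesystem to fill the newly extended LV (ext2/3/4).
resize2fs /dev/diclonius/vector

# Or do both in one shot: -r (--resizefs) has lvextend resize the
# filesystem for you after extending the LV.
lvextend -r -l +100%FREE /dev/diclonius/vector
```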

Thanks Redhat Documentation!

In other news, I ran a performance test on my 5-drive RAID-5 using HD204UI drives from Samsung:

[root@nyu tmp]# dd if=/dev/zero of=foo count=5 bs=$((1024*1024*1024))
5+0 records in
5+0 records out
5368709120 bytes (5.4 GB) copied, 18.1434 s, 296 MB/s
[root@nyu tmp]# dd if=foo of=/dev/null                         
10485760+0 records in
10485760+0 records out
5368709120 bytes (5.4 GB) copied, 15.2682 s, 352 MB/s

So, 352MB/sec reads and 296MB/sec writes! Not bad!!
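Those figures, and the 7.28 TiB from lvextend above, are easy to sanity-check with a little awk (dd reports decimal megabytes, 1 MB = 1,000,000 bytes, while lvextend reports binary TiB; the array has 4 data disks of 2 TB each):

```shell
# Throughput: bytes copied / elapsed seconds, in decimal MB/s as dd reports it.
awk 'BEGIN { printf "write: %.0f MB/s\n", 5368709120 / 18.1434 / 1e6 }'
awk 'BEGIN { printf "read:  %.0f MB/s\n", 5368709120 / 15.2682 / 1e6 }'
# Capacity: 5-drive RAID-5 = 4 data disks x 2 TB (decimal), shown in binary TiB.
awk 'BEGIN { printf "size:  %.2f TiB\n", 4 * 2e12 / 1024^4 }'
```

The numbers line up: 296 MB/s write, 352 MB/s read, and 7.28 TiB of usable space.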


Why RAID-Z isn’t appropriate for me (or for almost any home user)

So, ZFS is cool.  OpenSolaris derivatives are cool.  RAID-Z is cool.  But it lacks one simple feature that other software RAID solutions handle – the ability to grow an array by adding disks to an existing stripe.  For instance, suppose you have three 2TB hard disks in a RAID-5, and you want to add two more to make a 5-disk volume.  Well, with ZFS, you have two options:

  • Back up everything on the current volume, destroy it, and create a 5-drive RAID-Z from scratch
  • Buy another 2TB drive, create a new vdev out of the 3 new drives, and add it to the zpool
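The second option in zpool terms – a sketch only; the pool name and device names here are hypothetical:

```shell
# Pool "tank" already has one 3-disk raidz vdev.  Add a second 3-disk
# raidz vdev: the pool now stripes across both vdevs, but each vdev
# keeps its own single parity disk and must survive failures on its own.
zpool add tank raidz c0t3d0 c0t4d0 c0t5d0
```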

Now, at first, the second option doesn’t sound too bad – until you realize that you’ve basically created a false RAID-Z2 (RAID-6), since you’ve got two parity disks, one in each vdev.  It’s false because if two disks fail in the same vdev, you’re cooked – you could lose one in each and be fine, but a true RAID-Z2 survives any two failures.  Also, you’re wasting money on an extra disk when you’re a simple home user who wants to scale in small increments.

Neither of these issues is a problem for larger deployments – they generally already have disk space for backups (or have all the data backed up in the first place), or are building the entire thing from scratch to store future data.  Buying extra disks isn’t a problem either – they have money.  Home users do not.

So, until this is possible, I’ll be using mdadm or a similar solution on OpenFiler or another Linux-based OS.  This is a real shame; I really wanted to start using OpenIndiana.
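For contrast, here's the in-place reshape that mdadm supports and RAID-Z lacks, with hypothetical device names:

```shell
# Add two disks as spares, then grow the 3-disk RAID-5 to 5 disks in
# place; mdadm restripes the existing data online across all 5 drives.
mdadm /dev/md0 --add /dev/sdd /dev/sde
mdadm --grow /dev/md0 --raid-devices=5 --backup-file=/root/md0-grow.bak
```

The reshape takes a long time on multi-terabyte arrays, but the filesystem stays mounted and usable throughout – exactly the small-increment growth a home user wants.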