As I mentioned at the end of my RAID setup post, I want the storage space on my home NAS divided up into several fixed-size filesystems, each associated with a different purpose. Now, one approach here would have been to divide the physical disks up into several partitions and create several separate RAID arrays on top of those…but that seems a bit like overkill, and certainly isn’t very flexible if I later increase the size of the array. So, after a little research, I discovered a better solution: Logical Volume Management, or LVM.
Linux LVM allows you to create flexible logical volumes on top of an existing set of devices, either to join them together into one giant filesystem, or to separate them out by logical purpose. In my case, I wanted the final setup to look something like this:
| Purpose | Allocated Space | Mount Point |
|---|---|---|
| Operating System Backup | 8GB | /root/os-backup |
| Cacti RRD Files | 8GB | /var/lib/cacti |
| Extra Swap Space | 2GB | /root/swap |
I know, the math doesn’t quite work out right…but ignore that, you’ll enjoy this more.
Anyway, there are a couple of unusual details here that I’d like to explain. First…I broke my “Documents” storage up into two separate volumes, one labeled “Critical” and the other “Non-Critical.” The “Non-Critical” documents are things I’ve already backed up to DVDs, but might want immediate access to. The “Critical” documents are things I’m working on right now that I haven’t quite gotten backed up yet; everything on that volume is backed up nightly (using the duplicity command-line utility) to Amazon S3, so I needed to make sure it couldn’t get too large. I am not, after all, made of money. Meanwhile, the “Operating System Backup” volume is, as its name implies, a place to store a copy of everything on the CompactFlash card in case it fails and I need to put in a new one. Can’t be too careful.
Anyway, it’s reasonably simple to set up a logical volume structure like this. Make sure you’ve got the lvm2 package installed (apt-get install lvm2); then create an LVM “physical volume” out of the device (or devices) that you want LVM to manage. In our case, we’ll use the RAID array we created last time (/dev/md0):
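The command comes down to something like this:

```shell
# Register the RAID array as an LVM physical volume
pvcreate /dev/md0

# Confirm that LVM now knows about it
pvdisplay /dev/md0
```

pvcreate just writes a small LVM label onto the device; the array’s contents aren’t partitioned or otherwise touched beyond that.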
The next step is to create a “volume group”; volume groups collect one or more physical volumes so that they can be treated as a single unit. Since we’ve only got a single physical volume to worry about, this is easy:
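Something along these lines (the 32M extent size here is purely illustrative — substitute a value appropriate to your array):

```shell
# Create a volume group named "export" containing our one physical volume.
# -s sets the physical extent size; 32M is an example value, not a
# recommendation.
vgcreate -s 32M export /dev/md0
```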
The -s parameter there is the size of the “physical extents” that make up the volume. Because this device is so large, it was important to choose a reasonably large physical extent size; unfortunately I can’t remember exactly why. “export”, meanwhile, is the name I used for the volume group; we’ll be using that again in a moment.
To figure out the total number of physical extents in the new volume group, you can run vgdisplay export; in my case, it came to 29808. However, you don’t necessarily have to know this to get things working properly, since LVM also lets you specify logical volume sizes in ordinary byte units instead. It’s just useful to know it’s there.
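If you do want to think in extents, the conversion is simple arithmetic. Assuming a hypothetical 32MB extent size (again, substitute whatever you passed to vgcreate -s), a 25GB volume works out like so:

```shell
# 25GB expressed as a number of 32MB extents (the 32 here is an
# assumed, illustrative extent size)
echo $((25 * 1024 / 32))   # → 800
```

That 800 could be handed to lvcreate’s -l (extent count) option in place of the byte-size -L option used below.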
Anyway, creating each logical volume is pretty simple; for each volume you want, run something like this:
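For the 25GB “critical” volume described next, the command looks like this:

```shell
# Carve a 25GB logical volume named "critical" out of the "export" group
lvcreate -L 25G -n critical export
```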
The above command would give you a 25GB logical volume in the “export” group named “critical”. For each logical volume you set up, you’ll also need to create a new filesystem as follows:
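Something like this, with ext3 as an illustrative choice of filesystem — use whichever filesystem you prefer:

```shell
# Put a filesystem on the new logical volume (ext3 here is an assumption)
mkfs.ext3 /dev/export/critical
```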
See how the device filename works? The volume group is the second path component, and the logical volume name is the third.
Finally, once you’ve got your filesystems created, you just need to pick appropriate mount points and add entries to your /etc/fstab so they’re mounted automatically. I put pretty much everything in /export, since I’ll be “exporting” these filesystems via NFS later. One thing to note: it’s easiest if you don’t nest any of these mount points inside each other, since NFS will get a little confused by that. Keep things simple. So, for instance, to add a mount point for our new “critical” volume, we do the following:
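It boils down to three steps (the ext3 type and default mount options here are illustrative — match them to whatever filesystem you actually created):

```shell
# 1. Create the mount point
mkdir -p /export/critical

# 2. Add an fstab entry so it's mounted at boot
echo '/dev/export/critical  /export/critical  ext3  defaults  0  2' >> /etc/fstab

# 3. Mount it now, using the new fstab entry
mount /export/critical
```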
Easy enough. Next time I’ll show you how to get these things shared to other UNIX-based computers using NFS.