I noticed that Linux server distros use LVM by default. What is so good about LVM, and when should I use it? Is there a GUI for managing LVM volumes, like GParted, or is it terminal-only? And how does it differ from RAID when it comes to using multiple drives for one volume?
Some interesting facts about LVM:

- You can take a snapshot of a volume before a major change to the system (for example, an update).
- You can enable caching and use an HDD together with an SSD cache.
- You can build RAID 0, 1, or 5 directly on LVM (it still uses the kernel modules from mdraid).
- Even without RAID, you can extend a volume beyond one disk onto another, or migrate a volume from disk to disk (without even taking it offline).

However, all this is done from the console, and I don't know if there is a GUI.
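For illustration, a minimal console sketch of those operations, assuming a volume group `vg0` with a logical volume `root`, and an SSD partition `/dev/nvme0n1p1` (all names hypothetical):

```
# Snapshot the root LV before an update (reserve 5G for changed blocks)
lvcreate --snapshot --size 5G --name root_pre_update /dev/vg0/root

# Add an SSD partition to the VG and use it as a cache for the HDD-backed LV
vgextend vg0 /dev/nvme0n1p1
lvcreate --type cache-pool -L 50G -n cpool vg0 /dev/nvme0n1p1
lvconvert --type cache --cachepool vg0/cpool vg0/root

# Migrate data off a disk while the LV stays online, then drop the disk
pvmove /dev/sdb1
vgreduce vg0 /dev/sdb1
```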
LVM is just a way more flexible partition table. It gives you the possibility to grow partitions at a later date. You might think you can do that with MBR or GPT too. Well, yes, but only when the spare room is adjacent to the partition you want to grow. With LVM you can grow partitions even if the free space is somewhere else on the disk.
So you can grow any disk ‘partition’ at any time as long as you have some free space in the group.
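For example, growing a volume using free extents anywhere in the group is a one-liner (hypothetical `vg0`/`home` names):

```
# Grow the LV by 10 GiB and resize the filesystem on it in one step
lvextend --resizefs --size +10G /dev/vg0/home
```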
Another advantage is that you can encrypt logical volumes easily. Usually that’s supported when you install the OS.
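If the installer didn't set it up for you, encrypting a single LV by hand looks roughly like this (a sketch; `vg0/secret` is a hypothetical volume):

```
# Format the LV as a LUKS container, open it, and put a filesystem inside
cryptsetup luksFormat /dev/vg0/secret
cryptsetup open /dev/vg0/secret secret_plain
mkfs.ext4 /dev/mapper/secret_plain
```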
You can also stack LVM on top of a software RAID: create an mdadm array from partitions on several disks, then create a VG on that, with LVs to split it into pieces.
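A sketch of that stacking, with hypothetical device names:

```
# RAID1 mirror from two partitions, managed by mdadm
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

# LVM sits on top of the array
pvcreate /dev/md0
vgcreate vg0 /dev/md0
lvcreate -L 20G -n root vg0
lvcreate -L 100G -n data vg0
```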
I usually use LVM on every server. There's no reason not to, and it gives you options for the future.
LVM is a bit more complicated than just using a normal partition, but it adds a lot of functionality. If you need to make an LVM volume bigger, you can just add another disk to the volume group. You can also do RAID-like things with it. Live resizing of volumes is doable too.
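Adding a disk to an existing volume group is just a couple of commands (hypothetical names again):

```
# Register the new disk with LVM and add it to the volume group
pvcreate /dev/sdc1
vgextend vg0 /dev/sdc1

# Now any LV in vg0 can grow into the new space
lvextend -r -L +100G /dev/vg0/data
```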
I think some LVM stuff can be done in GNOME Disks, but I generally just use the command line. Smarter people: are there graphical LVM utilities I don't know about?
Can you stack btrfs on top of LVM? Is there any advantage to doing so?
Right now I have each Docker volume mapped to a btrfs volume, so that I can snapshot the volume and send it away.
Can I replicate the same thing with LVM and ext4, for example?
I'm mostly interested in the SSD-as-cache feature and the possibility of just adding more disks. Things that are not possible in my current setup.
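The closest LVM analogue I can think of is snapshotting the LV and streaming the whole block device somewhere, something like this sketch (hypothetical names), though it copies everything rather than just the deltas:

```
# Snapshot the LV, stream the device to another host, then drop the snapshot
lvcreate -s -L 2G -n dockervol_snap /dev/vg0/dockervol
dd if=/dev/vg0/dockervol_snap bs=4M | gzip | ssh backup 'cat > dockervol.img.gz'
lvremove -y /dev/vg0/dockervol_snap
```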
openSUSE can do this; well, put btrfs on LVM, that is. I found out with my Tumbleweed installs that if I use disk encryption and no LVM, I don't have the option to boot from btrfs snapshots. Also, with LUKS you need to type your password twice when booting if you don't use LVM.
Slightly off on a tangent, but if you think you might need LVM features (other than disk encryption), then it's worth looking into filesystems that have most of the functionality built in, like btrfs or OpenZFS.
I’m torn a bit, because architecturally/conceptually the split that LVM does is the correct way: have a generic layer that can bundle multiple block devices to look like one and let any old filesystem work on top of that. It’s neat, it’s clean, it’s unix-y.
But then I see what ZFS (and btrfs, but I don't use that personally) does while “breaking” that neat separation, and it's truly impressive. Sometimes tight integration between layers has serious advantages, and neat abstraction layers don't work quite as well.
Care to elaborate about these ZFS features?
ZFS combines the features of something like LVM (i.e. spanning multiple devices, caching, redundancy, …) with the functions of a traditional filesystem (think ext4 or similar).
Due to that combination it can tightly integrate the two systems and not treat the “block level” as an opaque layer. For example each data block in ZFS is stored with a checksum, so data corruption can be detected. If a block is stored on multiple devices (due to a mirroring setup or raid-z) then the filesystem layer will read multiple blocks when it detects such a data corruption and re-store the “correct” version to repair the damage.
First off, most filesystems (unfortunately, and almost surprisingly) don't do that kind of checksum for their data: when the HDD returns rubbish, they tend not to detect the corruption (unless the corruption is in their metadata, in which case they often fail badly via a crash).
Second: if the duplication was handled via something like LVM it couldn’t automatically repair errors in a mirror setup because LVM would have no idea which of the blocks is uncorrupted (if any).
ZFS has many other useful (and some arcane) features, but that’s the most important one related to its block-layer “LVM replacement”.
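As a sketch of what that looks like in practice (hypothetical pool and disk names):

```
# Two-disk mirror: every block is checksummed, and corruption on one disk
# can be repaired from the healthy copy on the other
zpool create tank mirror /dev/sda /dev/sdb
zfs create tank/data

# Read everything back, verify checksums, repair any bad copies
zpool scrub tank
zpool status -v tank   # reports checksum errors found and repaired
```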
Very interesting, thanks for the message. I might use it in my next NAS, but my workstation is staying on regular LVM; too much hassle to change, probably…
ZFS is nifty and I really like it on my homelab server/NAS. But it is definitely a “sysadmin's filesystem”. I probably wouldn't suggest it to anyone just for their workstation, as the learning curve is significant (and you can lock yourself into some bad decisions).