Hello, I’ve been a long-time Linux user, but I took a five-year break and I’m coming back to it now.
I’ve been trying several Linux distributions in the past week, installing packages and configuring them as I need, with varying degrees of success.
My last case was an Ubuntu installation that I was very happy with and pretty close to calling set up and done, until I installed VirtualBox and restarted the system, only to find it bricked.
Obviously I could try to drop into one of the terminals on Ctrl + Alt + Fx and fix it, but I wonder if I could be smarter about it and be more prepared for this kind of situation.
One of the starting points, I think, would be having a home partition separate from the rest of the system. I used to have one in the past and it was great.
But then what’s next? What are the best filesystems I could pick for each type of partition? A performant one to keep code and the package manager cache, a journaling/snapshot-based one for the system, another type for game data, etc.
What if I would like to have a snapshot of a working version of my system backed up somewhere, ready to restore as simply as possible?
How do you configure your systems so you can quickly recover from an unexpected bricking without growing more white hairs, while squeezing the right balance of performance vs. features out of each use case?
Well, having a dedicated /home partition is the very minimum and pretty much the default.

If you are interested in having a backup/restore solution for your system, you are looking for Btrfs, which uses subvolumes instead of primary partitions and is compatible with snapshot tools, those tools being Timeshift and Snapper. I do think Snapper is the superior solution, however it’s also more complex to set up and requires significantly more prep work. Imo totally worth it.
I currently use it on my main machine (Debian with Btrfs and Snapper) and couldn’t be happier.
I installed (well, compiled) Btrfs Assistant; it integrates with Snapper and btrfs maintenance. I had to create the monthly/daily cron jobs by hand, but the GUI is pretty nice.
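For anyone who wants roughly the same thing without the GUI, a manual sketch on Debian might look like this (hedged: package names are the Debian ones, and Snapper’s bundled systemd timers can replace the hand-rolled cron entries):

# install the tools
sudo apt install snapper btrfs-progs
# create a snapper config for the root subvolume
sudo snapper -c root create-config /
# take a one-off snapshot by hand
sudo snapper -c root create --description "baseline after install"
# example cron entries (crontab -e as root) if you prefer cron over the timers:
#   0 3 * * *  /usr/bin/snapper -c root create --cleanup-algorithm timeline --description daily
#   0 4 1 * *  /usr/sbin/btrfs scrub start -B /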
This came pre-installed and set up on my Garuda Linux install and it’s great. By default it takes a snapshot before and after every update or installation through the package manager, so if something goes wonky I can just select the previous snapshot in GRUB and be back up and running in seconds.
Also just recently installed Bazzite on a laptop and it appears to automatically do something similar, but I haven’t spent as much time with it to know the ins and outs yet.
What’s the appropriate size for a home partition, though?
That’s not the question that needs to be answered; first you must allocate enough space for your system partition, and whatever other system-related partitions you want to use (/var, /tmp, swap, you get the gist), then whatever space you have left is your home partition. This scenario being a default personal-use desktop, ofc.

If this is going to be a Btrfs system then it also doesn’t matter, since subvolumes share the total available space dynamically.
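You can see that shared pool directly if you’re curious; a quick check, assuming / is on Btrfs:

sudo btrfs filesystem usage /
# the "Free (estimated)" figure is the pool every subvolume draws from,
# so /home doesn't need its own size decision on Btrfs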
Also, if this is modern hardware, consider not having a swap partition and instead using zram.
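One common way to set that up is the systemd zram-generator; a minimal sketch, assuming the zram-generator package is installed (the package name varies between distros):

sudo tee /etc/systemd/zram-generator.conf >/dev/null <<'EOF'
[zram0]
zram-size = ram / 2
compression-algorithm = zstd
EOF
sudo systemctl daemon-reload
sudo systemctl start systemd-zram-setup@zram0.service
swapon --show   # /dev/zram0 should now be listed as swap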
On servers I like to have /var on its own partition. Partially as a habit from the olden days of using FreeBSD in the ’90s, but also because that means that / will mostly be left with things that don’t really change. I’ve had to clean out a clogged-up / too many times. So in effect, my partition schema for a typical production server looks like this:
/ ext4
/local xfs
/global usually beegfs or nfs, but sometimes a local xfs.
/var ext4
/home ext4

ELI5: what’s in /var and /local?
Is /global just personal storage, like /home?
I never manually make filesystems, but I really want to have a good setup like the OP was asking about.
Also do you know what’s up with /swap? Is it beneficial aside from getting the ability to hibernate?
Also… what happens when I reinstall an OS and my home is separate? If I had Kodi installed as a Flatpak and then reinstalled a similar distro, would Kodi still be set up and available with all my settings? Or would I have to reinstall it again?
…obviously I don’t know much about the fs
/var was originally for files of varying sizes, but today it’s more of a general-purpose storage area for the system, such as log files. It used to make sense to have this as its own partition, as read and write operations there were generally expected to be small but many, as opposed to few and large for the rest of the storage areas. With its own partition it’s easier to adjust the filesystem to accommodate that I/O pattern. Today it’s mostly used for logs.
/local used to be similar to /usr/local on some systems, but that’s not really the case anymore. It’s a directory we use at work for local stuff, as opposed to /global which is shared with the entire server cluster.
You can have any directory as its own partition, just make sure the mountpoint reflects it. /home is a very common example of this: using it as a mountpoint instead of just a normal directory named /home prevents regular users from filling up the root filesystem and borking normal operation.
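In practice that is just an fstab entry pointing a second partition at the directory; a hedged sketch (the UUID is a placeholder, run blkid to find the real one for your partition):

# /etc/fstab
UUID=replace-with-your-home-partition-uuid   /home   ext4   defaults   0 2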
Swap is what your PC uses when it runs out of RAM. It can be a partition, or it can be individual (large) files. As an example, I have a rather huge and demanding Factorio save which takes up more memory than I have on my laptop, so when I want to play it I have to add additional swap space. It’s similar to what Windows refers to as the pagefile. It’s slow compared to RAM, but it enables the PC to operate relatively normally despite being bogged down with loads of allocated memory.
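Adding that extra swap space on the fly is only a few commands; a rough sketch (the 8G size and the /swapfile path are arbitrary examples):

sudo fallocate -l 8G /swapfile    # or: dd if=/dev/zero of=/swapfile bs=1M count=8192
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile             # active until reboot; add it to /etc/fstab to keep it
sudo swapoff /swapfile            # and remove it again once the big save is done
# note: on Btrfs the file needs copy-on-write disabled before mkswap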
I don’t know the answers to the other questions, but yes, swap is important; without it, as soon as your system exceeds the available RAM it will freeze entirely.
Btrfs snapshots are great! My whole filesystem is Btrfs, with subvolumes for root, home and var.
I’m using the Fedora immutable distros on many computers, it’s great to be able to boot into a previous version of the system if issues arise. Not that issues arise, since most packages are installed as Flatpaks, in a toolbox or in a container. Makes the systems nearly unbrickable.
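On those rpm-ostree based systems the rollback is part of the normal tooling; roughly (a sketch, not specific to any one spin):

rpm-ostree status          # lists the current and previous deployments
sudo rpm-ostree rollback   # makes the previous deployment the default (add -r to reboot into it)
# or just pick the older deployment from the boot menu at startup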
Thanks for all your comments, a lot of interesting things here.
I went with Btrfs with Timeshift. It seems to have improved a lot in terms of performance; I barely noticed any difference compared to the previous installation with ext4, if any at all.
Unfortunately the current Ubuntu 23.10 installer doesn’t properly set up the btrfs subvolumes for @ and @home, and instead just throws the entire OS at the root of the filesystem, making it incompatible with Timeshift and causing snapshots to live inside the Linux directories, which in turn would cause future snapshots to contain snapshots. Not great…

Fortunately, migrating to a subvolume layout is possible, although it was quite painful following this outdated and not particularly well-written post: https://www.reddit.com/r/btrfs/s/qWi84tGJam
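For anyone hitting the same thing, the gist of that migration, done from a live USB, is roughly this (a hedged sketch; /dev/nvme0n1p2 is a placeholder for your root partition, and you still have to fix /etc/fstab and reinstall/update GRUB before rebooting):

# mount the top level of the btrfs filesystem (subvolume id 5)
sudo mount -o subvolid=5 /dev/nvme0n1p2 /mnt
# snapshot the flat installation into a new @ subvolume and add a separate @home
sudo btrfs subvolume snapshot /mnt /mnt/@
sudo btrfs subvolume create /mnt/@home
# then move the contents of /mnt/@/home into /mnt/@home, delete the old
# top-level copies (everything except @ and @home), and switch fstab to
# subvol=@ and subvol=@home mounts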
After successfully installing the system and setting up the btrfs layout and Timeshift, I created the first system snapshot, and I feel extremely confident about this solid system.
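For reference, that first snapshot can also be taken from the command line; a quick sketch (the comment text and tag are just examples):

sudo timeshift --create --comments "fresh install, subvolume layout fixed" --tags O
sudo timeshift --list       # confirm the snapshot shows up
# and later, if needed: sudo timeshift --restore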
Thanks again for sharing your experience!
Nothing, I just corrupted my hard drive with all my college work for tomorrow and I’m trying to save it. I’m feeling so stupid rn, I’m nothing but a failure at this point.
I hope that this link can help save your data. You might need an external HD to recover the data to, as well as a live USB you can install TestDisk on.
https://www.howtogeek.com/700310/how-to-recover-deleted-files-on-linux-with-testdisk/
Here is a video and there are plenty more on YouTube.
https://www.youtube.com/watch?v=3jbWfGePrqo&t=0
Maybe you can get an extension from your professor to buy yourself a bit more time to recover your data? Best of luck.
I’ve been using NixOS and I’ve never had to worry about my system, because even if I did break it I can just wipe it and reinstall from my config files and it’ll be almost exactly the same as before.
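The wipe-and-rebuild workflow is essentially this (a rough sketch, assuming plain /etc/nixos configuration files rather than a flake, and ~/backup/nixos is a hypothetical place the configs were kept):

# after a fresh minimal NixOS install, put the saved configs back
sudo cp -r ~/backup/nixos/* /etc/nixos/
# rebuild the entire system declaratively from configuration.nix
sudo nixos-rebuild switch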
MX Linux is good for this too with their ISO maker.
I started using MX and just use their image tool to make an installable live USB. But tbh I haven’t tried installing with it yet.
The only thing I’ve used for backup is Clonezilla, and I have actually recovered with it… just a regular full-disk image.
Such a cool distro; surprised I only just started using it. Fedora was my go-to for years before.
I just have a separate partition for /home. For snapshots, you could set a partition up as Btrfs and use btrfs snapshots.
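A bare-bones version of that, without Snapper or Timeshift, could be as simple as this (a sketch assuming the data lives on a Btrfs subvolume mounted at /data, with a .snapshots directory on the same filesystem and a second Btrfs disk at /mnt/backupdrive):

# read-only snapshot of the subvolume
sudo btrfs subvolume snapshot -r /data /data/.snapshots/data-$(date +%F)
# optionally ship it to the other disk for a real backup
sudo btrfs send /data/.snapshots/data-$(date +%F) | sudo btrfs receive /mnt/backupdrive/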
I image the whole system to a recovery image using Veeam. Partitions and file systems make no difference.
The new fad is immutable distros, as I see more and more of them. Each major distro seems to have a flavor that is immutable. You weren’t specific about your needs/use case though.
Wing everything without backups
Not really partition related but in terms of backups, state replication and reliability:
State of Systems: NixOS configs. Art: Borg + Borgbase. Code: Git + Sourcehut.
I need to get into NixOS, but I have a similar variation on servers: Ansible for the state of systems, Borg + Borgbase for data (kept in /srv), and code (including the Ansible playbooks) is in Git.
The separation between data and state is really great. You want to be able to go from a base install and only bring in what makes your setup different.
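For the Borg + Borgbase half of that, the day-to-day looks roughly like this (a sketch; the repo URL is a placeholder and assumes the repository was already created with borg init):

export BORG_REPO='ssh://user@repo.borgbase.com/./repo'   # placeholder remote
# archive /srv under a date-stamped name, with compression
borg create --stats --compression zstd ::'srv-{now:%Y-%m-%d}' /srv
# thin out old archives so the repo doesn't grow forever
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6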
Partitioning doesn’t affect backups. Any modern backup system supports both full images and file-level backups, so even if you take a whole-disk image, you can still restore just /home if that’s what you want.
I would just use whatever filesystem is the default for your distro. For the root partition, usually that’s ext4. That’s a perfectly good default.
Right now I’m using Garuda Linux; it takes a snapshot during major updates. Easily restored if something breaks.
Timeshift saved my butt a time or two in the past.
My strategy has always been to separate what should be persistent from what shouldn’t be.
On every system I deploy for home or work, I have a tree similar to the one below:
/storage/[local/remote]/[where it is, enclosure, backplane,etc]/[what it is]
E.g.
/storage/local/e1/raid/r6a/[this is my mount point]
/storage/remote/nfs4/oldserver/[this is my mount point]
I then build all of my workflows off the assumption that things go there. Docker containers have a subdirectory in r6a for persistent volumes, etc.
Even my containers themselves have a /storage/remote/persistent that I symlink anything I care about into.
On the desktop side, I tend to just physically mount a second drive or a second partition as a subdirectory of /storage. That way my assumption is always safe: if it’s a subdirectory of a mount, my data is safe; if it’s not, it isn’t. It’s also nonstandard, so I can be relatively certain I won’t have conflicts between different distributions.
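As an illustration, the desktop version of that is just a couple of fstab entries following the convention above (devices, exports and the exact /storage paths here are hypothetical):

# /etc/fstab
UUID=replace-with-data-partition-uuid   /storage/local/d1/data           ext4   defaults,nofail   0 2
oldserver:/export/shared                /storage/remote/nfs4/oldserver   nfs4   _netdev,nofail    0 0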
The main issue I have with submounting system directories like /home is that applications tend to put junk there, and old junk might not be compatible with a newer version of an application, or with a different distro. It can make for more effort than it’s worth.
IIRC, in MX Linux you can create a bootable USB snapshot of your system, i.e. a full setup copy of your system, in case of a hard crash.