Immutable distros were originally very focused on servers; more recently, distros for workstations have started gaining interest as the concept has matured.
With the advent of cloud computing, “immutable infrastructure” became more and more popular. The concept started out as someone grabbing a normal Linux distribution, installing all the bits necessary for the server purpose they needed, and then baking that into an image. Now you could launch new copies of that machine whenever you felt like it, and they would behave exactly the same. If any of them started doing something wonky, you just destroyed it and launched a new copy. This was very useful for software developers and operations people, who could now more easily reason about how things behaved, and be sure that a difference in behaviour wasn’t because someone forgot to enable a setting, install a tool, or skipped a step in the setup.
On the software development side, you also simultaneously saw more and more developers make use of functional programming methods, and along with those, immutable data structures. Fundamentally, instead of adding an item to a list, you make a new list with all the old items plus the new one. You never change the data after its creation. Each “change” is a new copy, with the difference already built in.
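A minimal Python sketch of the difference (with tuples standing in for immutable lists):

```python
# Mutable approach: the original list is changed in place.
items = ["a", "b"]
items.append("c")   # everyone holding a reference to `items` sees the change

# Immutable approach: every "change" produces a new object.
old = ("a", "b")    # tuples can't be modified after creation
new = old + ("c",)  # a brand-new tuple with the difference built in
print(old)          # ('a', 'b')  -- the original is untouched
print(new)          # ('a', 'b', 'c')
```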
Then containers started becoming popular, which allowed software developers to build a container image on their local computer and then ship that image to a server, where it behaved exactly as it did on their local machine. This also meant that the actual OS became less and less important, as everything needed by the container was already bundled inside it. Containers also worked as “immutable”, since anything you installed or changed within a container was immediately lost when the container was destroyed, and recreating it would give you exactly what was built into the image.
The advent of containerised workloads gave rise to a lot of different Linux distributions. Since the containers pretty much only needed the Linux kernel from the OS, it was pretty easy to make a container-centric operating system and in turn lock down everything else, even completely omitting a package manager. Stuff like CoreOS, Flatcar, RancherOS, and many others were immutable Linuxes that only catered to containers. I don’t know the exact mechanism for all of these, but at least the original CoreOS and Flatcar make the actual system read-only, and on top of that have two main partitions: one holds the currently running system, and the other is where updates are downloaded. Once an update was downloaded and ready, you just rebooted the machine, and it would be running off the updated partition. This also meant easy rollback if you got a broken update: you could just boot off the other, un-updated partition.
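As a toy sketch of that A/B flow (pure illustration in Python; the slot names and version strings are made up, and real implementations live in the bootloader and update daemon):

```python
# Toy model of an A/B update scheme: two slots, only one active at a time.
slots = {"A": "v1.0", "B": None}
active = "A"

def download_update(version):
    """Write the new system image to whichever slot is NOT running."""
    inactive = "B" if active == "A" else "A"
    slots[inactive] = version
    return inactive

def reboot_into(slot):
    """A reboot just flips which slot the bootloader picks."""
    global active
    active = slot

staged = download_update("v1.1")  # the running system is never touched
reboot_into(staged)               # now on v1.1; slot A still holds v1.0
# Broken update? reboot_into("A") and you're right back on the old system.
```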
Containers were however rather ill suited for desktop applications, as there was no good way to provide a GUI. You could serve up a web page, but native GUI apps were tricky.
That’s where Flatpak, Snap, and all that came in, essentially bringing the container mentality to normal desktop apps. This brought immutability to individual apps: they bundle their own dependencies and therefore don’t have to rely on the correct versions being available on the machine.
The logical next step was of course to add immutability to workstation distributions. This is where Fedora Silverblue, NixOS, and many others really started taking off in popularity.
I believe Fedora Silverblue uses ostree to make the system “immutable”. Of course you can still make changes to your system, but the system is built to be completely aware of the state before and the state after; this is what’s called “atomic”. There’s no such thing as a partially installed package: there is only the state before installing something, and the state where the thing is fully installed. You can roll back to any of the previous states to recover from a broken update or misconfiguration. This also makes trying out new things risk-free. Tried a new desktop environment and it broke your system? Just roll back. Accidentally uninstalled a critical package? Just roll back. Want to try out a new display manager? Just apply the config and roll back if you don’t like it.
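Conceptually it behaves like an append-only history of complete system states. Here's a minimal Python sketch of just the mental model (ostree's actual machinery is content-addressed file trees, not strings in a list):

```python
# Conceptual model: each deployment is a complete, self-contained state.
deployments = ["state-0"]
current = 0

def apply_change(description):
    """Changes never edit the current state; they append a new complete one."""
    global current
    deployments.append(f"{deployments[current]} + {description}")
    current = len(deployments) - 1

def rollback(steps=1):
    """Rolling back just points 'current' at an older, still-intact state."""
    global current
    current = max(0, current - steps)

apply_change("install new desktop environment")  # state-0 is left untouched
rollback()   # the change is gone; nothing is ever left half-applied
```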
SteamOS also does the thing with multiple partitions, and even allows you to turn off the immutability. Other distributions aren’t as lenient. There’s no way to turn off the immutability in NixOS or Fedora Silverblue.
I have a few cheap cameras that can handle both WiFi and Ethernet, support an SD card, and do continuous recording regardless of connection type.
I actually don’t know whether Timeshift can just run easily from a live USB, but I don’t see why not.
But of course that also requires you to have installed and set up Timeshift beforehand (which is obviously a good idea).
It’s quite a different deal when the whole operating system is built around a Timeshift-like concept.
Depends what you break. Sure, kernels are easy to fix like you mention, but what if you bork your display manager?
ZFS doesn’t really support mismatched disks. In OP’s case it would behave as if they were 4x 2TB disks, making 4TB of raw storage unusable; with 1 disk of parity that would yield 6TB of usable storage. In the future the 2x 2TB disks could be swapped for 4TB disks, and then ZFS would make use of all the storage, yielding 12TB usable.
BTRFS handles mismatched disks just fine; however, its RAID5 and RAID6 modes are still partially broken. RAID1 works fine, but results in half the storage being used for the mirrored copies, so this would again yield a total of 6TB usable with the current disks.
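As a quick sanity check on those numbers, here's a minimal Python sketch, assuming OP's disks are 2x 4TB plus 2x 2TB (which is what the figures above imply):

```python
disks = [4, 4, 2, 2]  # TB; assumed: 2x 4TB + 2x 2TB

# ZFS RAIDZ1 treats every disk as the size of the smallest one.
zfs_now = min(disks) * (len(disks) - 1)        # one disk's worth of parity
print(f"ZFS RAIDZ1 now:      {zfs_now} TB")    # 6 TB

# After swapping the 2TB disks for 4TB ones:
upgraded = [4, 4, 4, 4]
zfs_later = min(upgraded) * (len(upgraded) - 1)
print(f"ZFS RAIDZ1 upgraded: {zfs_later} TB")  # 12 TB

# BTRFS RAID1 keeps two copies of everything and copes with mismatched
# sizes, so usable space is roughly total capacity divided by two.
print(f"BTRFS RAID1 now:     {sum(disks) // 2} TB")  # 6 TB
```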
SSD longevity seems to be better than that of HDDs overall. The limiting factor is how many write cycles the SSD can handle, but in most cases the write endurance is so high that it’s unreachable by most home/NAS systems.
SSDs are however really bad for cold storage, as they will lose the charge stored in their cells if left unpowered for too long. While the SSD is powered, it will automatically refresh the cells in the background to ensure they don’t lose their charge.
Factorio. I saw transport belts in my dreams.
Since you are talking about pods, you are obviously emitting all your logs on stdout and stderr, and you have of course also labeled your pods nicely, so grepping all 36 pods is as easy as kubectl logs -l <label>=<value> | grep <pattern>
Nope, those steps are the steps needed to legally watch Netflix on Asahi Linux on an Apple Silicon device, because Google has not officially released the Widevine library for that platform.
But the author is actually using less data than expected, because he’s paying for 4K but is only able to watch up to 1080p.
Live service and single player are not incompatible… Unfortunately…
Look at Hitman (2016 and onward): all of them require an online connection to play, and they release new stuff monthly.
Many of Ubisoft’s games also require an online connection despite being fully single player, and you can even buy currency for the in-game single-player shop with real money… What used to be a cheat code is now a microtransaction.
/ : filling up the rest
/mnt/games
Since both my root and home are on the same BTRFS partition, they share space.
I have made sure to create subvolumes for the Steam and game install directories, to avoid taking snapshots of them.
Steam has 2 “libraries” registered: one in my home directory and one in /mnt/games.
I think he is referring to LVM
Are you profiting from running systemd?
Ghost in the Shell is rapidly becoming a documentary.
My Home Assistant installation alone is too much for my Raspberry Pi 3. It depends entirely on how much data it’s processing and needs to keep in memory.
OctoPrint needs to respond in a timely manner, so you will want the system to be mostly idle (at least below 60 percent CPU at all times); preferably OctoPrint should be the only thing running on the system unless it’s rather powerful.
If I were you, I would install OctoPrint exclusively on your Raspberry Pi 3, and then buy a Raspberry Pi 4 for the other services.
I’m running Pi-hole and a WireGuard VPN on an old Raspberry Pi 2, which is perfectly fine if you are not expecting gigabit speeds on the VPN.
Docker for webdev? You know that Docker is server-side, right?
Mittens take away too much dexterity for many things. But a 3-finger glove is the perfect compromise: https://www.snowsportprofessionals.com/wp-content/uploads/2017/11/8272aca90cb09ec2c85ef324e10933f57f500daf.jpg