• 0 Posts
  • 54 Comments
Joined 1 year ago
Cake day: June 17th, 2023




  • Immutable distros were originally very focused on servers; more recently, workstation distros have started gaining interest as the concept has matured.

    With the advent of cloud computing, “immutable infrastructure” started becoming more and more popular. The concept started out as someone grabbing a normal Linux distribution, installing all the necessary bits for the server purpose they needed, and then baking that into an image. Now you could launch new copies of that machine whenever you felt like it, and they would behave exactly the same. If any of them started doing something wonky, you just destroyed it and launched a new copy. This was very useful for software developers and operations people, who could now more easily reason about how things behaved, and be sure that a difference in behaviour wasn’t because someone forgot to enable a setting, install a tool, or skipped a step in the setup.

    On the software development side, you also simultaneously saw more and more developers make use of functional programming methods, and along with those, immutable data structures. Fundamentally, instead of adding an item to a list, you make a new list with all the old items plus the new one. You never change the data after its creation. Each “change” is a new copy, with the difference already built in.
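
    A tiny sketch of that idea, as a hedged Python example using tuples as the immutable list (the `append` helper is made up for illustration):

    ```python
    def append(items: tuple, new_item) -> tuple:
        # The original tuple is never modified; the "change" is a new tuple
        # that already contains the difference.
        return items + (new_item,)

    old = (1, 2, 3)
    new = append(old, 4)
    print(old)  # (1, 2, 3) -- unchanged
    print(new)  # (1, 2, 3, 4)
    ```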

    Then containers started becoming popular, which allowed software developers to build a container image on their local computer and then ship that image to a server, where it behaved exactly as it did on their local machine. This also meant that the actual OS became less and less important, as everything needed by the container was already bundled in the container. The containers also worked as “immutable”, since everything you installed or changed within a container would be immediately lost when the container was destroyed, and recreating it would give you exactly what was built into the image.

    The advent of containerised workloads gave rise to a lot of different Linux distributions. Since the containers pretty much only needed the Linux kernel from the OS, it was pretty easy to make a container-centric operating system, and in turn lock down everything else, even completely omitting a package manager. Stuff like CoreOS, Flatcar, Rancher OS, and many others were immutable Linuxes that only catered to containers. I don’t know the exact mechanism for all of these, but at least the original CoreOS and Flatcar made the actual system read-only, and on top of that had two main partitions: one would be the current system, and the other would be where updates were downloaded. Once an update was downloaded and ready, you just rebooted the machine, and it would be running off the updated partition. This also meant easy rollback if you got a broken update: you could just boot off the other, not-yet-updated partition.
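
    To make the dual-partition update flow concrete, here is a rough Python model of the A/B scheme described above; the names and bookkeeping are invented for illustration, not CoreOS or Flatcar code:

    ```python
    from dataclasses import dataclass

    @dataclass
    class Slot:
        name: str
        version: str

    class ABSystem:
        def __init__(self) -> None:
            self.slots = [Slot("A", "1.0"), Slot("B", "1.0")]
            self.active = 0  # index of the slot currently booted

        @property
        def inactive(self) -> int:
            return 1 - self.active

        def stage_update(self, new_version: str) -> None:
            # Updates are written only to the inactive slot; the running,
            # read-only system is never touched.
            self.slots[self.inactive].version = new_version

        def reboot(self) -> None:
            # Rebooting switches to the freshly updated slot.
            self.active = self.inactive

        def rollback(self) -> None:
            # A broken update? Boot the other, untouched slot again.
            self.active = self.inactive

    system = ABSystem()
    system.stage_update("2.0")
    system.reboot()
    print(system.slots[system.active].version)  # 2.0
    system.rollback()
    print(system.slots[system.active].version)  # 1.0
    ```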

    Containers were however rather ill suited for desktop applications, as there was no good way to provide a GUI. You could serve up a web page, but native GUI apps were tricky.

    That’s where Flatpak, Snap, and all that came in, essentially bringing the container mentality to normal desktop apps. This brought immutability to individual apps, as they bring their own dependencies and therefore don’t have to rely on the correct versions being available on the machine.

    The logical next step was of course to add immutability to workstation distributions. This is where Fedora Silverblue, NixOS, and many others really started gaining popularity.

    I believe Fedora Silverblue uses ostree to make the system “immutable”. You can of course still make changes to your system, but the system is built to be completely aware of the state before and the state after; this is what’s called “atomic”. There’s no such thing as a partially installed package: there is only the state before installing something, and the state when the thing is fully installed. You can roll back to any of the previous states to recover from a broken update or misconfiguration. This also makes trying out new things risk-free. Tried a new desktop environment and it broke your system? Just roll back. Accidentally uninstalled a critical package? Just roll back. Want to try out a new display manager? Just apply the config, and roll back if you don’t like it.
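
    As a rough model of what “atomic” buys you (this is only an illustration of the idea, not how ostree is actually implemented), every change produces a complete new system state, and rolling back just means booting an older one:

    ```python
    class AtomicSystem:
        def __init__(self, initial_packages: frozenset) -> None:
            self.deployments = [initial_packages]  # append-only history of complete states
            self.current = 0

        def install(self, package: str) -> None:
            # No "partially installed" state: a new deployment is built from the
            # old one plus the package, then atomically made the current one.
            new_state = self.deployments[self.current] | {package}
            self.deployments.append(new_state)
            self.current = len(self.deployments) - 1

        def rollback(self, steps: int = 1) -> None:
            # Any earlier deployment can be booted again, untouched.
            self.current = max(0, self.current - steps)

    state = AtomicSystem(frozenset({"kernel", "bash"}))
    state.install("gnome-shell")
    state.rollback()  # the desktop broke? boot the previous deployment
    print(sorted(state.deployments[state.current]))  # ['bash', 'kernel']
    ```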

    SteamOS also does the thing with multiple partitions, and even allows you to turn off the immutability. Other distributions aren’t as lenient. There’s no way to turn off the immutability in NixOS or Fedora Silverblue.





  • ZFS doesn’t really support mismatched disks. In OP’s case it would behave as if they were 4x 2 TB disks, making 4 TB of raw storage unusable; with 1 disk of parity that would yield 6 TB of usable storage. In the future the 2x 2 TB disks could be swapped with 4 TB disks, and then ZFS would make use of all the storage, yielding 12 TB of usable storage.

    BTRFS handles mismatched disks just fine, however its RAID5 and RAID6 modes are still partially broken. RAID1 works fine, but results in half the raw storage being used for the second copy, so this would again yield a total of 6 TB usable with the current disks.
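
    A quick back-of-the-envelope check of those numbers, assuming OP’s disks are 2x 4 TB plus 2x 2 TB (as the figures above imply):

    ```python
    disks_tb = [4, 4, 2, 2]

    # ZFS RAIDZ1: every disk is treated as the smallest one, and one disk
    # goes to parity, so usable = (disk count - 1) * smallest disk.
    zfs_usable = (len(disks_tb) - 1) * min(disks_tb)
    print(zfs_usable)  # 6 TB, with 2 + 2 = 4 TB of raw capacity unusable

    # BTRFS RAID1: every block is stored twice, so roughly half the raw
    # capacity is usable (assuming the disks can be paired evenly).
    btrfs_usable = sum(disks_tb) / 2
    print(btrfs_usable)  # 6.0 TB
    ```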








    • 1TB NVMe SSD
      • 512 MB EFI
      • BTRFS partition for / filling up the rest
    • Ancient 128 GB SATA SSD
      • Swap
    • 1TB SATA SSD
      • 500 GB Windows installation for VR games
      • 500 GB BTRFS partition mounted at /mnt/games

    Since both my root and home are on the same BTRFS partition, they share space.

    I have made sure to create subvolumes for the Steam and game install directories, to avoid taking snapshots of them.

    Steam has 2 “libraries” registered, one in my home directory and one in /mnt/games.





  • My Home Assistant installation alone is too much for my Raspberry Pi 3. It depends entirely on how much data it’s processing and needs to keep in memory.

    OctoPrint needs to respond in a timely manner, so you will want to keep the system mostly idle (below 60 percent CPU at all times); preferably OctoPrint should be the only thing running on the system, unless the system is rather powerful.

    If I were you, I would run OctoPrint exclusively on your Raspberry Pi 3, and then buy a Raspberry Pi 4 for the other services.

    I’m running Pi-hole and a WireGuard VPN on an old Raspberry Pi 2, which is perfectly fine if you are not expecting gigabit speeds on the VPN.