I have a trusty UnRaid server that has been running great for almost 3 years now, with some kinks and headaches here and there, but mostly very stable. Now I’m entertaining the idea of setting that box up with Proxmox and running UnRaid virtualized. The reason is that I want to use UnRaid exclusively as a NAS and then run all Docker containers and VMs on Proxmox (at least that’s how I’m picturing it). I would like to know your opinion on this idea. All I have is Nextcloud, Immich, Vaultwarden, Jellyfin, Calibre, Kavita and a Windows VM I use to update some hardware every now and then. I mainly want to do this for the per-instance backup capabilities in Proxmox. Storage is not a concern, and I have 64GB of ECC RAM in that box. What are the pros and cons, or is it even worth it to move all this to Proxmox?

  • youmaynotknow@lemmy.ml (OP) · 7 months ago

    I’m very inclined to use this method instead.

    I would like to ask for some suggestions on the initial process to migrate the data from UnRaid.

    Considering that:

    • My disk pool is made up of two 10TB disks, for a total of 20TB
    • It also has a 10TB parity disk
    • The pool is using just ~6TB of that storage

    The option I see is:

    • Get another 10TB disk
    • Clear the parity drive and copy the data from the pool onto that disk for the migration
    • Configure the pool disks as RAIDZ and, once that’s done, use the other two disks as a parity pool

    Or I bite the bullet and get brand-new 10TB disks, 12 of them, to make it RAIDZ2 and have a storage pool of 40TB (35 usable?). I’m thinking 4 groups of 3 disks each should do the trick. Then use the same method to migrate my data.

    With 64GB of ECC RAM, I should get pretty swift storage IOPS that way.

    • Pyrosis@lemmy.world · 7 months ago

      Another thing to keep in mind with ZFS: underlying VM disks will perform better if the pool is a mirror or a stripe of mirrors. RAIDZ1/RAIDZ2-type pools are better for media and files. VM disk IO will improve dramatically on the mirror-style layouts. Just passing along what I’ve learned over time optimizing systems.
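      For illustration, a minimal sketch of that layout at pool-creation time (the pool and device names are hypothetical; use stable /dev/disk/by-id paths on real hardware):

      ```
      # A stripe of two mirror vdevs (RAID10-style), which generally favors VM disk IO:
      zpool create -o ashift=12 vmpool \
        mirror /dev/disk/by-id/disk1 /dev/disk/by-id/disk2 \
        mirror /dev/disk/by-id/disk3 /dev/disk/by-id/disk4
      ```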

      • youmaynotknow@lemmy.ml (OP) · 6 months ago

        I’ll be studying that link you sent me deeply before I start my adventure here.

        I didn’t know this rabbit hole was so deep. Love it!

    • Pyrosis@lemmy.world · 7 months ago

      Bookmark this if you utilize zfs at all. It will serve you well.

      https://jrs-s.net/2018/08/17/zfs-tuning-cheat-sheet/

      You will be amazed at ZFS performance in Proxmox given all the tuning that is possible. If this is going to be an existing ZFS pool, keep in mind it’s easier to just install Proxmox with the ZFS option and let it create a ZFS rpool during setup. For the rpool, tweak a couple of options: make sure ashift is at least 12 during the install, or 13 if you are using some crazy-fast SSD as the Proxmox disk for the rpool.

      It needs to be 12 if it’s a modern-day spinner, and that’s probably a good setting for most SSDs too. Do not go over 12 if it’s a spinning disk.
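      To double-check what a pool ended up with after the install (the pool name rpool matches the Proxmox default):

      ```
      # ashift is baked in per vdev at creation time and cannot be changed later;
      # this dumps the cached pool config and filters for it:
      zdb -C rpool | grep ashift
      ```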

      Beyond that, assuming you have an existing ZFS pool, you can import it directly into Proxmox with a single import command.
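      Roughly like this, where the pool name tank and the storage ID tank-zfs are placeholders:

      ```
      # Scan for importable pools, then import the one you want by name:
      zpool import
      zpool import tank
      # Register it with Proxmox so VM disks and containers can live on it:
      pvesm add zfspool tank-zfs --pool tank
      ```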

      In this scenario, ZFS would be fully maintaining disk operations for both the rpool and a media pool.

      You should consider tweaking a couple of things to really improve performance via the guide I linked.

      Proxmox VMs/zvols live in their own dataset. Before you go too crazy creating VMs, make sure you are taking advantage of all the performance tweaks you can. By default, the record size for all datasets is 128K. qcow2, raw, and even zvols will benefit from a record size of 64K because it tends to improve the performance of underlying guest filesystems like ext4, XFS, even UFS. IMO it’s silly to create VM filesystems like btrfs if your VM is sitting on top of a CoW filesystem already.
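      As a sketch, assuming the stock Proxmox layout where VM disks live under rpool/data (recordsize applies to file-based qcow2/raw images; zvols take their block size from volblocksize at creation instead):

      ```
      # Drop the record size for the VM dataset from the 128K default to 64K.
      # Only newly written blocks are affected; existing data keeps its old size.
      zfs set recordsize=64k rpool/data
      ```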

      Another huge improvement is tweaking the compression algorithm. lz4 is blazing fast and should be your default go-to for ZFS. The newer one (zstd) is pretty good but can slow things down a bit for active operations like live VM disks. So make sure your default compression is lz4 for datasets with VM disks. Honestly, it’s just a good default for the entire pool; you can select other compression for datasets with more static data.
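      For example (compression is inherited by child datasets, so setting it at the pool root covers everything unless overridden; the archive dataset name is made up):

      ```
      # Fast default for the whole pool, including VM disk datasets:
      zfs set compression=lz4 rpool
      # Optionally trade CPU for ratio on mostly-static data:
      zfs set compression=zstd rpool/archive
      ```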

      If you have a media dataset full of files like music, vids, and pics, setting a record size of 1M will heavily improve disk IO.
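      Something like this, with a hypothetical media dataset:

      ```
      # Large records suit big sequential files; again, only newly written data is affected:
      zfs set recordsize=1M tank/media
      ```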

      In Proxmox, ZFS will default to grabbing half of your memory for the ARC. Make sure you change that after install. It’s a file that defines arc_max in bytes; set the max to something more reasonable if you have 64 gigs of memory. You can also define the arc_min.
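      A sketch of that file, with an assumed 16 GiB cap (both values are plain bytes; size them to your workload):

      ```
      # /etc/modprobe.d/zfs.conf
      options zfs zfs_arc_max=17179869184   # 16 GiB ceiling
      options zfs zfs_arc_min=4294967296    # 4 GiB floor
      ```

      After editing, run update-initramfs -u -k all and reboot so the module picks up the new limits.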

      Some other huge improvements? If you are using an SSD for your Proxmox install, I highly recommend installing log2ram on the hypervisor. It will stop all those constant log writes on your SSD, and it syncs the logs to disk on a timer and at shutdown/reboot. It’s also a huge performance and SSD-lifespan improvement to migrate /tmp and /var/tmp to tmpfs.
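      The tmpfs move is just two fstab lines (the sizes are assumptions; adjust to your RAM):

      ```
      # /etc/fstab — keep ephemeral temp files in RAM instead of on the SSD:
      tmpfs  /tmp      tmpfs  defaults,noatime,mode=1777,size=2g  0  0
      tmpfs  /var/tmp  tmpfs  defaults,noatime,mode=1777,size=1g  0  0
      ```

      Mount them (or reboot) and verify with findmnt /tmp.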

      So many knobs to turn. I hope you have fun playing with this.

      • youmaynotknow@lemmy.ml (OP) · 6 months ago

        Thanks so much.

        All this info brought me back to the drawing board.

        This led me to start searching for new components, as I’m pretty sure I will want to build a new rig and probably just donate my current box.

        Thank you, I really appreciate it. My bank account, not so much 🤣🤣