What are some best practices in mounting NAS shares that you all follow?

Currently I am mounting it via fstab to a mount point in my user’s home directory, with full rwx permissions, but that feels wrong.

I’ve read suggestions to use the /mnt directory or the /media directory, but opinions differ.

My main concern is I want to protect against inadvertently deleting the contents of the NAS with an errant rm command. And yes I have backups of my NAS too.

Edit: this is a home NAS with one user on this Linux PC (the other clients being Windows and Mac systems)

Would love to hear everyone’s philosophy! Thanks!

  • UntouchedWagons@lemmy.ca · 1 year ago

    I use systemd mount files instead of fstab; that way I can specify a network dependency on the off chance there’s no network connection. Plus I can have other services like Jellyfin depend on that mount file so they start after the share is available.

    • Rockslide0482@discuss.tchncs.de · 1 year ago

      In fstab, there’s a nofail option that I started using when mounting NFS shares and other disks that may be missing, so that they don’t kill my bootup.
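
      For example, a minimal sketch of such an entry (reusing the NAS address and mount point from the unit file further down the thread; the options beyond nofail are just common choices, adjust to taste):

      # /etc/fstab
      # nofail: don't fail the boot if the NAS is unreachable
      # _netdev: treat it as a network filesystem and wait for the network
      192.168.0.30:/mnt/tank/Media  /mnt/data  nfs4  nofail,_netdev  0  0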

    • steel_moose@lemmy.world · 1 year ago

      Dipping my toes into this as well. Would you care to share the contents of your .mount unit file?

      If I understand it correctly, systemd generates mount units at boot from fstab 🤔. Probably not possible to specify the network dependency in fstab, though.

      • UntouchedWagons@lemmy.ca · 1 year ago

        # cat /etc/systemd/system/mnt-data.mount
        [Unit]
        Description=nfs mount script
        
        [Mount]
        What=192.168.0.30:/mnt/tank/Media
        Where=/mnt/data
        Type=nfs4
        
        [Install]
        WantedBy=remote-fs.target
        

        The file name has to match the path where the share is mounted, with hyphens instead of forward slashes.
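
        You can also let systemd work that name out for you, and (as mentioned above) hang other services off the mount. A rough sketch, assuming Jellyfin runs as a unit called jellyfin.service:

        # derive the unit file name from the mount point
        $ systemd-escape -p --suffix=mount /mnt/data
        mnt-data.mount

        # /etc/systemd/system/jellyfin.service.d/wait-for-nas.conf (hypothetical drop-in)
        [Unit]
        RequiresMountsFor=/mnt/data

        After a systemctl daemon-reload, jellyfin.service will only start once /mnt/data is mounted. If you want to be explicit about the network dependency, adding After=network-online.target and Wants=network-online.target to the [Unit] section of the .mount file does that too.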

  • teawrecks@sopuli.xyz · 1 year ago

    The NAS should be regularly backed up/snapshotted, so that even if you/a bad process deletes everything, you can restore it all quickly and easily.

    • LastYearsPumpkin@feddit.ch · 1 year ago

      A backup is an emergency protection, not a primary plan. This attitude is dangerously close to making the backup a critical part of their uptime.

      • teawrecks@sopuli.xyz · 1 year ago

        Having something rm your entire NAS is an emergency, not something that should be happening regularly. If it is, you’ve got bigger problems.

  • tkf@infosec.pub · 1 year ago

    I’m curious, what file system do you use to mount your share? (SMB, SSHFS, WebDAV, NFS…?) I’ve never managed to get decent performance on a remote-mounted directory because of the latency, even on a local network, and this becomes an issue with large directories.

    • DefederateLemmyMl@feddit.nl · 1 year ago

      I’ve found that NFS gives me the best performance and the least issues. For my use cases, single user where throughput is more important than latency, it’s indistinguishable from a local disk. It basically goes as fast as my gigabit NIC allows, which is more or less the maximum throughput of the hard disks as well.

      A benefit of NFS over SMB is that you can just use Unix ownerships and permissions. I do make sure to synchronize UIDs and GIDs across my devices because I could never get idmapping to work with my NAS.
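
      A quick sketch of what synchronizing the IDs looks like in practice (the user name and numbers here are made up for illustration): check the numeric IDs on both ends and, if they differ, change one side to match.

      # on the client and on the NAS - the numeric UID/GID values are what matter
      $ id alice
      uid=1000(alice) gid=1000(alice) groups=1000(alice)

      # if the NAS side differs, align it there (and chown existing files afterwards)
      $ sudo usermod -u 1000 alice
      $ sudo groupmod -g 1000 alice

      With sec=sys (the default), NFS sends the numeric UID/GID over the wire, so as long as those numbers match, ownership and permissions line up without idmapping.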

      • 2xsaiko@discuss.tchncs.de · 1 year ago

        idmapping

        idmap only works with Kerberos auth, but IIRC I didn’t have to set anything up specifically for it. Though I’ve also never really had to test it, since my UIDs happen to match; I just checked with the nfsidmap command.
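
        For anyone poking at the same thing, the relevant knobs are the NFSv4 domain on both ends and the client keyring; the domain name below is just a placeholder:

        # /etc/idmapd.conf (client and server should agree)
        [General]
        Domain = home.lan

        # show the effective NFSv4 domain, and clear cached mappings
        $ nfsidmap -d
        $ sudo nfsidmap -c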

    • NotAnArdvark@lemmy.ca · 1 year ago

      Agreed on the latency issues. I tested SMB and NFS once and found them to be pretty much the same in that regard.

      I’m interested to test iSCSI, as for some reason I think it might be better designed for latency.

      • dan@upvote.au · 1 year ago

        If you want the lowest latency, you could try NBD. It’s a block protocol but with less overhead compared to iSCSI. https://github.com/NetworkBlockDevice/nbd/tree/master

        Like iSCSI, it exposes a disk image file, or a raw partition if you’d like (by using something like /dev/sda3 or /dev/mapper/foo as the file name). Unlike iSCSI, it’s a fairly basic protocol (the API is literally only 9 commands). iSCSI is essentially just regular SCSI over the network.

        NFS and SMB have to deal with file locks, multiple readers and writers concurrently accessing the same file, permissions, etc. That can add a little bit of overhead. iSCSI and NBD assume only one client is using the export (two clients can’t safely use the same disk image at the same time - it would just get corrupted), so they’re just reading and writing raw data.
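
        A rough sketch of what that looks like in practice (the export name and image path are made up; the server address is the NAS from earlier in the thread):

        # on the NAS: /etc/nbd-server/config
        [generic]
        [media]
            exportname = /mnt/tank/media.img

        # on the client
        $ sudo modprobe nbd
        $ sudo nbd-client 192.168.0.30 -N media /dev/nbd0
        $ sudo mount /dev/nbd0 /mnt/data

        Since only one client can safely use the image, this fits the “acts like a local disk” use case rather than a shared folder.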

      • Rockslide0482@discuss.tchncs.de · 1 year ago

        The main thing to note is that NFS is file-based storage (it acts like a share), whereas iSCSI is block-based (it acts like a disk). You’d really only use iSCSI for things like VM disks, 1:1 storage, etc. For home use cases, unless you’re self-hosting (and probably even then), you’re likely gonna be better off with NFS.

        If you were to do iSCSI, I would recommend giving it its own VLAN. NFS technically should be isolated too, but I currently run NFS over my main VLAN, so do what ya gotta do.

        • phx@lemmy.ca · 1 year ago

          Yeah, there are a few limitations to each. NFS, for example, doesn’t play nicely with certain options if you’re using a filesystem overlay (overlayfs), which can be annoying when using it for PXE environments. It does however allow several remote machines to mount it simultaneously, which I don’t think iSCSI would play nicely with.

          SMB, though, has user-based authentication built in, which can be quite handy, especially if you’re not into setting up a whole Kerberos stack in order to get that functionality with NFS.

  • dan@upvote.au · 1 year ago

    If you do this, make sure you use snapshots, ideally taken automatically. You wouldn’t want ransomware to overwrite the files on your NAS.
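
    For example, if the NAS pool is ZFS (the tank/Media path in the unit file above suggests it is), a simple cron-driven snapshot on the NAS could look like this; the schedule and retention are arbitrary:

    # /etc/crontab on the NAS - hourly read-only snapshot of the media dataset
    0 * * * *  root  /usr/sbin/zfs snapshot tank/Media@auto-$(date +\%Y\%m\%d-\%H\%M)

    Snapshots are read-only from the client’s point of view, so an errant rm (or ransomware writing over the share) can’t touch them, and most NAS distros ship a built-in periodic snapshot task that does the same thing.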

  • LastYearsPumpkin@feddit.ch · 1 year ago

    How many users are there?

    Is there a chance that the computer will boot without access to the NAS (aside from failure conditions)?

    Are you doing anything with ownership to prevent reading, or changing, sensitive files?

  • 0x4E4F@infosec.pub · 1 year ago

    Mounting it in fstab is a bad idea… mounting it into your home directory, even worse.

    Just make some desktop entries with the shares and that should be enough.
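
    A minimal sketch of such an entry (the SMB URL is an assumption; use whatever protocol and share name your NAS actually exposes):

    # ~/Desktop/nas-media.desktop
    [Desktop Entry]
    Type=Link
    Name=NAS Media
    URL=smb://192.168.0.30/Media
    Icon=folder-remote

    Opening it makes a GVfs-aware file manager mount the share on demand (roughly the equivalent of running gio mount smb://192.168.0.30/Media), so nothing is touched at boot and nothing sits permanently mounted for a stray rm to walk into.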

      • 0x4E4F@infosec.pub · 1 year ago

        Well, for one, it’s network attached storage. If it’s not present on the network for one reason or another, guess what, your OS doesn’t boot… or it throws errors during boot, depending on how the kernel was compiled and what parameters your bootloader passes to the kernel. Second, this is an easy way for malware to spread, especially if it’s set to run after user logon.