I am thinking of extending my storage and I don’t know if I should buy a JBOD (my current solution) or a RAID capable enclosure.
My “server” is just a small Intel NUC with an 8th gen i3. I am happy with the performance, but that might be impacted by a bigger software RAID setup. My current storage setup is a 4-bay JBOD with 4TB drives in RAID 5, and I am thinking of going to 6 x 8TB drives with RAID 6, which will probably be more work for my little CPU.
Normally I would say software, or rather a RAID-like filesystem such as btrfs or ZFS. But in your specific case of funneling it all through a single USB-C connection, it is probably better to keep using an external box that handles it all internally.
That said, the CPU load of software RAID is very small, so that isn’t really something to be concerned with, but USB connections are quite unstable and not good for directly connecting drives in a RAID.
I mean I’ve been running the setup this way for >4 years and never had any problem with the USB connection, so I cannot attest to “usb connections are quite unstable”…
I suppose that is because the JBOD box was handling the RAID internally, so short connection issues are not that problematic and can be recovered from automatically. But that wouldn’t be the case if you connected everything together with a USB hub and USB-to-SATA adapters and ran a software RAID on that.
I don’t know, USB-C in Thunderbolt mode has direct access to PCIe lanes.
I’ve been running a 4-disk RAIDZ1 on USB for 4 years now with zero failures on one machine, and one failure on another where it turned out the USB controller in one WD Elements was overheating. Attaching a small heatsink to it resolved the problem and it’s been stable under load for 2 years now. The USB devices have to be decent: AMD’s host controllers are okay, VIA hubs are okay, ASMedia USB-to-SATA bridges are okay. I’m using some enclosures with ASMedia and some off-the-shelf WD Elements that also use ASMedia. It’s likely easier to get a reliable system by installing disks internally, as the PSU and interconnects are much more regulated and just about any would work well, whereas with USB you have to be careful to select decent components.
USB hub.
The argument for hardware RAID has typically been about performance. But software RAID has been plenty performant for a very long time. Especially for home-use over USB…
Hardware RAID also requires you to use the same RAID controller to use your RAID. So if that card dies you likely need a replacement to use that RAID. A Linux software RAID can be mounted by any Linux system you like, so long as you get drive ordering correct.
There are two “general” categories of software RAID: the more “traditional” mdadm, and filesystems with built-in RAID-like features.
mdadm creates and manages the RAID in a very traditional way and provides a new, filesystem-agnostic block device, typically something like /dev/md0. You can then put whatever you like on top (ext4, btrfs, ZFS, or even LVM).
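For illustration, creating an array with mdadm and putting a filesystem on the resulting block device might look roughly like this (the device names are placeholders, so check `lsblk` for your own first):

```shell
# Create a 6-disk RAID6 array from six partitions (example device names)
mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[b-g]1

# The array is just a block device; format it with any filesystem you like
mkfs.ext4 /dev/md0

# Record the array definition so it assembles automatically at boot
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
```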
Newer filesystems like btrfs and ZFS implement RAID-like functionality themselves, with some advantages and disadvantages. You’ll want to do a bit of research here depending on the RAID level you wish to implement. btrfs, for example, doesn’t have a mature RAID5 implementation as far as I’m aware (since last I checked - double-check though).
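By contrast, ZFS builds the redundancy into the pool itself; a rough sketch of a raidz2 pool (two-disk parity, comparable to RAID6), with `tank` as an assumed pool name and example device names:

```shell
# One command creates the pool, the redundancy layout, and a mountable filesystem
zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg

# Datasets are created directly on the pool; there is no separate mkfs step
zfs create tank/media
```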
I’d also recommend thinking a bit about how to expand your RAID later. Run out of space? Want to add drives? Replace drives? The different implementations handle this differently. mdadm has rather strict requirements that all partitions be “the same size” (though you can use a disk bigger than the others and only use part of it). I think ZFS allows for different-size disks, which may make increasing the size of the RAID easier, as you can replace one disk at a time with a larger version pretty easily (it’s possible with mdadm, but more complex).
You may also wish to add more disks in the future and not all configurations support that.
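To make the expansion point concrete, here is a hedged sketch of the “replace one disk at a time with a larger one” approach in both systems (the device and pool names are assumptions):

```shell
# mdadm: swap each member for a bigger disk and let it resync, then grow
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
mdadm /dev/md0 --add /dev/sdh1            # the larger replacement disk
# ...repeat for every member, waiting for each resync to finish...
mdadm --grow /dev/md0 --size=max          # claim the newly available space

# ZFS: replace disks one by one; with autoexpand on, the pool grows by itself
zpool set autoexpand=on tank
zpool replace tank /dev/sdb /dev/sdh
```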
I run a RAID5 on mdadm with LVM and ext4 with no trouble. But I built my RAID when BTRFS and ZFS were a bit more experimental so I’m less sure about what they do and how stable they are. For what it’s worth my server is a Dell T110 from around 12 years ago. It’s a 2 core Intel G850 which isn’t breaking any speed records these days. I don’t notice any significant CPU usage with my setup.
I used to use mdadm, but ZFS mirrors (equivalent to RAID1) are quite nice. ZFS automatically stores checksums. If some data is corrupted on one drive (meaning the checksum doesn’t match), it automatically fixes it for you by getting the data off the mirror drive and overwriting the corrupted data. The read will only fail if the data is corrupted on both drives. This helps with bitrot.
ZFS has raidz1 and raidz2 which use one or two disks for parity, which also has the same advantages. I’ve only got two 20TB drives in my NAS though, so a mirror is fine.
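The self-healing described above happens on reads, but you can also walk the whole pool on demand; a minimal sketch, assuming a pool named `tank` and example device names:

```shell
# Create a two-disk mirror (the RAID1 equivalent)
zpool create tank mirror /dev/sdb /dev/sdc

# Verify every block against its checksum and repair from the healthy copy
zpool scrub tank
zpool status tank   # reports scrub progress and any repaired or corrupted data
```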
If I were to redo things today I would probably go with ZFS as well. It seems to be pretty robust and stable. In particular the flexibility in drive sizes when doing RAID. I’ve been bitten with mdadm by two drives of the “same size” that were off by a few blocks…
Don’t do a RAID enclosure, just get one that exposes the disks straight to the OS.
Problem for me is: there is no 6-bay enclosure, and the 8-bay enclosures cost as much as a RAID-capable one
I’d pay more money for a non-RAID enclosure.
I just set up my icy box drive bay with software raid. Works great, just remember in some cases you have to disable UAS for the enclosure
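Disabling UAS is done per device with a usb-storage quirk; a sketch assuming you look up the enclosure’s vendor:product ID with lsusb (the ID below is an example):

```shell
# Find the enclosure's vendor:product ID
lsusb
# e.g. "ID 174c:55aa ASMedia ..." (example ID, substitute your own)

# "u" tells the kernel to ignore UAS for that device and fall back to usb-storage
echo 'options usb-storage quirks=174c:55aa:u' > /etc/modprobe.d/disable-uas.conf
update-initramfs -u   # Debian/Ubuntu: rebuild the initramfs so it applies at boot
```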
My guy Wendell says that Hardware RAID is Dead and is a Bad Idea in 2022
He also made a follow up video a few days ago:
Here is an alternative Piped link(s):
https://www.piped.video/watch?v=Q_JOtEBFHDs
Piped is a privacy-respecting open-source alternative frontend to YouTube.
I’m open-source; check me out at GitHub.
There is an even more relevant video about using external storage through USB. He recommends using software RAID:
Here is an alternative Piped link(s):
https://piped.video/GmQdlLCw-5k?feature=shared&t=697
Very informative, thank you :)
Here is an alternative Piped link(s):
https://www.piped.video/watch?v=l55GfAwa8RI
I read somewhere, years ago, that RAID6 takes about 2 cores, on a working server.
That may have been a decade ago, and hardware’s improved significantly since then.
Bet on 1 core being saturated, min, with heavy use of a RAID6 or Z2 array, I suspect…
I’d go with software raid, not hardware: with hardware RAID, a dead array, due to a dead controller-card, means you need EXACTLY the same card, possibly the same firmware-revision, to be able to recover the RAID.
With mdadm, that simply isn’t a problem: mdadm can always understand mdadm RAIDs.
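For example, after moving the disks to any other Linux machine, assembly is usually a one-liner, since mdadm stores its metadata on the disks themselves (the explicit device names below are examples):

```shell
# Scan all disks for mdadm metadata and assemble whatever arrays are found
mdadm --assemble --scan

# Or name the members explicitly
mdadm --assemble /dev/md0 /dev/sd[b-g]1
```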
Software, software, software! ZFS, mdraid, etc. USB is fine even with hubs, so long as your hubs and USB controllers (USB-to-SATA) are decent and not overheating.
Since hardware RAID is not state of the art anymore, I will definitely stick with software RAID. I think I will just build a new server for the money, since an 8-bay USB enclosure costs around 600€ and for that amount of money I can build a new server with even better performance
While you’re at it you can get a PC case with plenty of drive slots… Check out Fractal Design.
Thats what I will be going for 😁
Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:
LVM: (Linux) Logical Volume Manager for filesystem mapping
NAS: Network-Attached Storage
NUC: Next Unit of Computing, a brand of Intel small computers
PCIe: Peripheral Component Interconnect Express
PSU: Power Supply Unit
RAID: Redundant Array of Independent Disks for mass storage
SAN: Storage Area Network
SATA: Serial AT Attachment interface for mass storage
ZFS: Solaris/Linux filesystem focusing on data integrity
[Thread #495 for this sub, first seen 8th Feb 2024, 12:25]
Your CPU should support VFIO, so you could pass through a PCIe SATA controller to TrueNAS
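A rough sketch of what that passthrough involves: binding the controller to vfio-pci so the host leaves it alone (the PCI address and IDs below are examples; find yours with lspci):

```shell
# Find the SATA controller's PCI address and vendor:device ID
lspci -nn | grep -i sata
# e.g. "02:00.0 SATA controller ... [1b21:0612]" (example IDs)

# Claim it with vfio-pci at boot instead of the ahci driver
echo 'options vfio-pci ids=1b21:0612' > /etc/modprobe.d/vfio.conf
# Then attach device 02:00.0 to the TrueNAS VM in your hypervisor (e.g. QEMU/virt-manager)
```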
How are those disks/the box connected to the NUC?
USB-C. It only has a single SATA connector inside
Just my 2 cents: if you’re going to do RAID, buy a thing that will do it…
A NAS or enclosure where the hardware does all the heavy lifting. Do not build a RAIDed system from a bunch of disks… I have had, and friends of mine have had, many failures over the years from those home-brew RAIDs failing in one way or another, and it’s usually the software that causes the RAID to go sideways… maybe shit’s better today than it was 10-20 years ago.
It’s just off my list. I bought a bunch of cheap NAS devices that handle the RAID, and then I mirror those devices for redundancy.
Y’all must be doing something wrong, because HW RAID has been hot garbage for at least 20 years. I’ve been using software RAID (mdadm, ZFS) since before 2000 and have never had a problem that could be attributed to the software RAID itself, while I’ve had all kinds of horrible things go wrong with HW RAID. And that holds true not just at home but professionally with enterprise-level systems as a sysadmin.
With the exception of the (now rare) bare-metal Windows server, or the most basic boot-drive mirroring for VMware (with important datastores on NAS/SAN, which use software RAID underneath, with at most some limited HW-assisted accelerators), hardly anyone has trusted hardware RAID for decades.
deleted by creator
Y’all must’ve been doing something wrong with your hardware raid to have so many problems. Anecdotally, as an admin for 20+ years, I’ve never had a significant issue with hardware raid. The exception might be the Sun 3500 arrays. Those were such a problem and we had dozens of them.
So what were you doing wrong to have so much trouble with the Sun 3500’s?
They were old by the time I got to administer them, and they were failing every so often. They were a pain to work with as I recall, but again, no lost data. They would beep with no issues sometimes. One thing that wasn’t their fault: a previous admin had set them up in threes with all but one disk in a RAID5 array, then wondered why performance was crap and one array would just drop every week like clockwork.
Took a while, but I mirrored (VVM) the data off on spare 3500s set up properly. They ran okay then. It was just ancient storage. Glad to work on netapps now. So much smoother and sophisticated storage compared to the olden days. lol