  • Yep, I think there are sound arguments for separating out your storage (NAS) and network (router/DNS/PiHole) infrastructure. After that, whatever suits your purpose. I virtualise all my serious services on one machine under Proxmox (mostly for ease of snapshots), then have another machine for things I’m fiddling with, usually also under Proxmox so they are easy to move to production when I’m happy with them.


  • My NAS and production server run 24/7; I’ve got a dev server that I turn off if I’m not expecting to use it for a week or so. Usually when I do that, I immediately need it for something while I’m away from home. I’ve chosen equipment to try to minimize energy use, to allow for constant running.

    My view on a UPS is that it’s a crucial part of getting your availability percentage up. As my home lab turned into hosting crucial services that replaced commercial cloud options, that became more important to me. Whether it is to you will depend on what you’re running and why.

    I’ve heard that one of the most likely times for a hard drive to fail is on power-up, and it also makes sense to me that the heating/cooling cycles would be bad for the magnetic coating. So my NAS is configured to keep the drives spinning, and it hasn’t been turned off since I last did a drive change.


  • I agree. Get a domain name and point it to the internal address of your Nginx Proxy Manager (or whichever certificate-managing reverse proxy you’re used to). A bit of work initially, then trivial to add services afterwards.

    I didn’t really need encryption for my internal services (although I guess that’s good), but I kept getting papercuts with browser warnings, not being able to save passwords, and some services (e.g. the container registry on Forgejo) just flat out refusing to trust an http connection.
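    To give a rough idea of the moving parts, here’s a minimal docker compose sketch for Nginx Proxy Manager (it mirrors the stock quick-start config; the service name, ports and volume paths here are just the defaults, so adjust to suit):

    ```yaml
    services:
      npm:
        image: 'jc21/nginx-proxy-manager:latest'
        restart: unless-stopped
        ports:
          - '80:80'    # HTTP traffic being proxied
          - '443:443'  # HTTPS traffic being proxied
          - '81:81'    # admin web UI
        volumes:
          - ./data:/data                    # app config and proxy hosts
          - ./letsencrypt:/etc/letsencrypt  # issued certificates
    ```

    From there, each new service is just another proxy host added in the web UI. For internal-only services you’ll generally want the DNS challenge when requesting the Let’s Encrypt certificate, since the HTTP challenge needs the host reachable from the internet.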


  • My step up from a Pi was to eBay HP 800 G1 minis, then G2s. They are really well made, there are full repair manuals available, and they are just a pleasure to swap bits in and out of. I’ve heard good things about the 1-liter Lenovos and would expect similar build quality.

    I agree that RAM, rather than the processor, is the likely constraint for self-hosting workloads - particularly in my case, as I’m on Proxmox and run all my Docker containers in separate LXCs. I run 32GB in the G2s, which was a straightforward upgrade (they take laptop-style memory). On some of them I’ve upgraded the SSDs; on the others I’ve added M.2 NVMe drives (the G2s have a slot for them).






  • Yes, a few. Signal (daily use), Let’s Encrypt & Certbot (EFF). It’s not enough.

    One day I decided I’d spend $x every January (when I do all my other donations) on the open source stuff I depend on, roughly in proportion to how much I depend on it. It quickly became impossible - I can’t just fund Debian (which I use a lot in VMs), I’d need to think of all its dependencies, and the same goes for NGINX, Node etc etc. The mind boggles.

    I need something like a Spotify subscription for open source, to assuage my guilt over the great value I extract from open source for personal use.



  • I started out more “homelab” than “selfhosted” at first - so I was just stuffing around playing with things, but then that seemed sort of pointless and I wanted to run real workloads. Then I discovered that was super useful and I loved extracting myself from commercial cloud services (Dropbox etc). The point of this story is that I sort of built most of the infrastructure before I was running services that I (or family) depended on - which is where it can become a source of stress rather than fun, and is where I’m guessing you find yourself.

    There’s no real way around this (the pressure you’re feeling): if you are running real services it is going to take some sysadmin work to get to the point where you feel relaxed that you can quickly deal with any problems. There’s lots of good advice elsewhere in this thread about bits and pieces to do this - the exact methods are going to vary according to your needs. Here’s mine (which is not perfect!):

    • I’m running on a single mini PC & a Synology NAS set up for RAID 5.
    • I’ve got a nearly identical spare mini PC, and swap over to it for a couple of weeks (originally every month, but stretched out when I’m busy). That tests my ability to recover from that hardware failure.
    • All my local workloads are in LXC containers or VMs on Proxmox, with automated snapshots that are my (bulky) backups but allow for restoration in minutes if needed.
    • The NAS is backed up locally to an external USB drive that’s not usually plugged in, and to a lower-specced similar setup 300km away.
    • All the workloads are dockerised, and I have a standard directory structure and compose approach, so if I need to upgrade or do other maintenance on something I don’t often touch, I know where everything is without looking back at the playbook.
    • I don’t use a script or Terraform to set those up; I’ve got a Proxmox template with Docker, Tailscale etc. installed that I use, so the only bit of unique infrastructure is the docker compose file, which is source-controlled on Forgejo.
    • Everything’s on UPSs
    • I have a bunch of Ansible playbooks for routine maintenance such as apt updates, also in source control.
    • All the VPS workloads are dockerised with the same directory structure, and sit behind Nginx Proxy Manager. I’ve gotten super comfortable with one VPS provider, so that’s a weakness; I should try moving them one day. They are mostly static websites, plus one important web app that I have a tested backup strategy for, but not an automated one, so that needs addressing.
    • I use a local and an external Uptime Kuma instance for monitoring, enhanced by running a tiny server on every instance that just exposes a disk-free and memory-free API that Uptime Kuma can consume (a rough sketch of that little server is just below this list).
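    For what that last point is worth, the “tiny server” really is tiny - here’s a minimal Python sketch of the idea (the port number, the monitored path and the field names are just illustrative choices, not anything Uptime Kuma requires):

    ```python
    # Minimal "disk free / memory free" endpoint for an HTTP monitor to poll.
    # Port, monitored path and field names are arbitrary - adjust per host.
    import json
    import shutil
    from http.server import BaseHTTPRequestHandler, HTTPServer

    PORT = 9111  # any free port on the host


    def mem_available_mb() -> int:
        """Read MemAvailable from /proc/meminfo (Linux only)."""
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith("MemAvailable:"):
                    return int(line.split()[1]) // 1024  # kB -> MB
        return 0


    class HealthHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            disk = shutil.disk_usage("/")
            body = json.dumps({
                "disk_free_gb": round(disk.free / 1024**3, 1),
                "mem_free_mb": mem_available_mb(),
            }).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

        def log_message(self, *args):  # keep request logging quiet
            pass


    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", PORT), HealthHandler).serve_forever()
    ```

    Uptime Kuma then just polls it with an ordinary HTTP(s) monitor and checks the response.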

    I still have lots of single points of failure - Tailscale, my internet provider, my domain provider etc. - but I think I’ve addressed the most common one, which would be hardware failure at home. My monitoring is also probably sub-par: I’m not really looking at logs unless I’m investigating a problem. Maybe there’s a Netdata or something in my future.

    You’ve mentioned that syncing to a remote server for backups is a step you don’t want to take. If what you mean is that managing your own remote server is a step you don’t want to take, then your options are a paid backup service like Backblaze, or physically shuffling external USB drives (or extra NASs) back and forth to somewhere else - depending on what downtime you can tolerate.






  • I switched from Copilot to Codeium after only a couple of months of Copilot use - purely based on cost, since I’m currently just a hobby coder.

    The main difference I’ve noticed is that Codeium doesn’t seem as smart about the local context as Copilot. Copilot would look at how I’m handling promises in a project, and stick to that, whereas Codeium would choose a strategy seemingly at random.

    A second, and maybe more telling, example is that I do my accounts using ‘plain text accounting’ in VS Code. This is a very niche approach to accounting software and I imagine it is hardly in the training sets at all - there certainly would not be a lot of plain-text accounts in the particular format I use (Beancount) sitting in public code repositories. Codeium doesn’t make any suggestions as I’m entering transactions, whereas Copilot would see that the account names I’m using are present in another file in the project and suggest them, and it very quickly figured out the formatting of transactions and suggested them correctly.
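    For anyone who hasn’t seen the format, a Beancount ledger is just plain text. Here’s a made-up example (the account names, dates and amounts are all invented), parsed with the beancount Python package to show it’s a valid entry - the payee line plus the indented postings is the “formatting of transactions” I mean:

    ```python
    # Illustrative only: an invented Beancount transaction, parsed with the
    # `beancount` package (pip install beancount). All names/amounts are made up.
    from beancount import loader

    LEDGER = """
    2023-01-01 open Assets:Bank:Checking     NZD
    2023-01-01 open Expenses:Food:Groceries  NZD

    2023-07-29 * "Local Grocer" "Weekly shop"
      Expenses:Food:Groceries    84.50 NZD
      Assets:Bank:Checking      -84.50 NZD
    """

    entries, errors, options = loader.load_string(LEDGER)
    print(f"{len(entries)} entries, {len(errors)} errors")  # expect 3 entries, 0 errors
    ```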