At least a weekly mysqlcheck + mysqldump and some form of periodic off-machine storage of the dumps is something I’ll surely take to heart after this lil’ fiasco ;-) Sound advice, thank you!
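Something along these lines is what I’m picturing - just a rough sketch, assuming credentials live in ~/.my.cnf and “backuphost” is a placeholder for wherever the off-machine copy should end up:

```
#!/bin/sh
# weekly check + full dump, e.g. dropped into /etc/cron.weekly/
set -e
mysqlcheck --all-databases --silent
mysqldump --all-databases --single-transaction \
  | gzip > /var/backups/mysql-all-$(date +%F).sql.gz
# periodic off-machine copy of the dumps
rsync -a /var/backups/ backuphost:/srv/backups/$(hostname)/
```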
Hear hear! You don’t own a backup if you’ve never restored it before. Words to live by both in corporate and self-hosting environments.
Ironically, if I had had more services running in Docker I might not have experienced such a fundamental outage. Since Docker services usually tend to spin up their own exclusive database engine, you kind of “roll the dice” on data corruption with each Docker service individually. Thing is, I don’t really believe in bleeding CPU cycles by running redundant database services. And since many of my services are already very long-serving, they’ve been set up from source and all funneled towards a single, central and busy database server - thus, if that one experiences a sudden outage (for instance a power failure), all kinds of corruption and despair can arise. ;-)
Guess I should really look into a small UPS and automated shutdown. On top of better backup management of course! Always the backups.
Excellent choice. I’m running a physical Routerboard and a virtual RouterOS inside my hypervisor for redundancy.
The license for virtual RouterOS is dirt cheap and has more features than you could ever dream of with any of the big network device manufacturers.
The physical devices are very well designed for their relatively modest price and likewise fully featured. Perfect for any home lab or for playing around with IEEE-conformant protocols.
Life is Strange soundtrack. That is, including all the licensed songs.
Was so good, got me into playing guitar.
You’re quite bold - I like it ;-) in all honesty, is your requirement mounting an NFS share? As indicated by @chris it really is designed for the local network.
How about using something more suited like a WebDAV share/mount?
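For example - just a sketch, assuming davfs2 is installed and using placeholder URL/paths:

```
# one-off mount of a WebDAV share via davfs2
sudo mount -t davfs https://webdav.example.org/share /mnt/webdav
# credentials can go into /etc/davfs2/secrets to avoid the interactive prompt
```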
You’re right - I missed that detail. From the graphs alone it looks as if a process ate up all the still-claimable (cached) memory, then the system stalled, possibly thrashing, until the OOM killer intervened - as indicated by large chunks of RAM being freed: allocated RAM (red) dropping and cached RAM (blue) rising again.
I don’t see a clear indication that you have too little RAM… RAM should be “used” fully at all times, and your “cached” RAM value suggests you still have quite a bunch of RAM that could be claimed by applications when they need it.
I cannot clearly make out swap usage in the graphs - that would be an interesting value for judging the overall stability of the system with regard to fluctuating RAM usage.
However, once you notice the problem again, right after you manage to log in, run “dmesg -T | grep -i oom” and see if any processes got killed due to temporarily spiking RAM consumption. If you’re lucky, that command might lend some insight even now.
Also, if you run a “top” command for a while, what does the value for “wa” in the second line look like? “wa” stands for I/O wait, and if that value is anything above 5 it might indicate that your CPU is being bottlenecked by, for instance, hard disk speed.
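Roughly along these lines - a sketch, and the exact log wording may differ between kernels/distros:

```
# any OOM kills in the kernel log? (-T gives human-readable timestamps)
dmesg -T | grep -i -E 'out of memory|oom|killed process'
# sample CPU stats 12 times at 5-second intervals - keep an eye on the "wa" column
vmstat 5 12
```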
Of course you are right, and this should be noted.
But if you happen to have Calibre already running via, for instance, your desktop installation, you may also “take advantage of your pre-existing Calibre database” in Calibre-Web ;-)
Take a look at Calibre-Web (github.com/janeczku/calibre-we…), which I’ve been using for exactly that purpose for quite a while now. As the name suggests it can also take advantage of a pre-existing Calibre eBook database.
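If you’d rather run it in a container than from source, something like this should do - a sketch only, assuming the linuxserver.io image and placeholder paths:

```
docker run -d \
  --name calibre-web \
  -p 8083:8083 \
  -v /path/to/config:/config \
  -v /path/to/calibre-library:/books \
  lscr.io/linuxserver/calibre-web:latest
```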
What model is the router? I suspect it is a router your provider equipped you with? In that case, with a 500Mb download bandwidth contract it would be really crazy of your provider to hand you a router with 100Mb ports ;-)
Either way, looking up the specifications of that router model will help here.
I would not upgrade the contract - even if you go beyond your 50Mbit UPLOAD speed you can’t be sure that no buffering, and hence no drop in streaming, will happen. Note you have a “500Mb Broadband” contract but the upload is limited to 50Mb. Asymmetric bandwidth is typical for “consumer” internet, where you mostly consume/download - contrary to “hosting” internet uplinks, which are typically symmetric and very pricey, since there you are typically hosting/uploading.
You need specialised software to make sure you can transmit large, untranscoded, high-bitrate real-time data (which video basically is) over the internet. It’s basically what YouTube does for its users.
It stores the arbitrary, untranscoded video data you upload to it (that part is your NAS - which you already have) and then serves that data to viewers on the web in a compressed, streaming fashion (that part is what streaming software would handle - which you do not have yet).
Without that second part, issues will naturally arise in your scenario.
M500 broadband package boasts average download speeds of 516Mbps and average upload speeds of 52Mbps
So, while viewing media from outside your local network, i.e. via Synology QuickConnect, you’re limited to that 52Mbit upload speed.
If you’re self-hosting, upload speed matters a lot, unfortunately. You will surely need something that buffers/transcodes your media for viewing from the internet.
There’s something to that claim. Sending uncompressed (i.e. not transcoded) video content over the internet can easily saturate your internet link.
Do you have CIFS/Samba access - in other words, access via Windows Explorer’s network view - to the files on the NAS over your local network? If so, try directly opening a video and look at the network dashboard of the NAS and/or your computer’s Task Manager (Performance -> Ethernet tab) to see what Mbit bandwidth the untranscoded stream amounts to.
Consider that the exact same Mbit bandwidth will be needed when using Synology QuickConnect to view media from outside of your local network.
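If you have shell access to a machine with FFmpeg installed, you can also read the bitrate straight off the file - a sketch, with the filename as a placeholder:

```
# prints the overall bitrate of the file in bits per second
ffprobe -v error -show_entries format=bit_rate \
  -of default=noprint_wrappers=1:nokey=1 "SomeMovie.mkv"
# divide by 1,000,000 to compare against the 52Mbit upload limit
```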
If you want to work around all that you would probably have to look into something that buffers/transcodes your media - something like Jellyfin, Plex or the like. For that you’d have to look into running Docker on the NAS, but that’ll plunge you very deep into self-hosting very fast and may be beyond your initial comfort zone.
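Just to give an idea of the scope, a Jellyfin container boils down to something like this - a sketch, with the /volume1 paths as placeholders for wherever your config and media live:

```
docker run -d \
  --name jellyfin \
  -p 8096:8096 \
  -v /volume1/docker/jellyfin/config:/config \
  -v /volume1/docker/jellyfin/cache:/cache \
  -v /volume1/video:/media:ro \
  jellyfin/jellyfin
```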
500 fibre connection means it is a 500mbit internet uplink?
Have you checked whether the ethernet cable you’re connecting the DS216j to your router with is a “Cat5e” cable? If it is a plain “Cat5” you would be limiting and thus bottlenecking your bandwidth to 100Mbit max.
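If you can SSH into the NAS (it runs Linux underneath), the negotiated link speed tells you right away whether the cable or port is the bottleneck - a sketch, assuming eth0 is the interface name:

```
# negotiated link speed in Mbit/s - a value of 100 would point at the cable/port
cat /sys/class/net/eth0/speed
```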
Plus, Jitsi Meet will give you publicly available video conferencing, which is really nice to have on its own. ;-)
It’s pretty solid for 1:1 calls, and they are currently working on Matrix’s own conferencing protocol/solution.
But until then you could set up a Jitsi Meet instance alongside Matrix to handle multi-user calls.
I’m trying to stay away and I do feel like there is a reasonable chance I might be able to.
I recently subscribed to a Lemmy “self-hosted” and “asklemmy” group and content is starting to trickle in real good.
Mostly I feel it’s a matter of consolidating Lemmy groups with the same topics into super-groups. This should help with general usability as well as making things more friendly for people moving over from Reddit.
Federation support between Lemmy and other Fediverse products will also still need some work.
“Necropolis” (Gaunt’s Ghosts #3) by Dan Abnett. A whole lot of Warhammer 40k goodness.
@cyclohexane I can only speak from my personal experience having hosted both XMPP and Matrix for friends/family before.
I ran XMPP (eJabberd) for roughly 10 years and it never really was a trivial process - neither for me as the admin nor for my friends/family with regard to participating.
Basically, back then, I had to manually extend eJabberd with a bunch of XEPs (namely push notifications, message carbons and message archive) to increase the usability and user convenience enough to even stand a chance of getting people on board and able to use the system. The client ecosystem was not quite there yet either - Conversations, for instance, had just come around to shaping up on Android; Gajim for cross-platform was pretty fine though.
Let’s not talk about E2E encryption either: GPG - not a chance, OMEMO was just coming around as well and was not yet very reliable.
Matrix on the other hand was quite the breakthrough for me as an admin with regards to user acceptance. I do believe that a big part of that comes from the concerted effort to have a unified client (Element) available on any platform - web, fat client, mobile client.
By now there’s also a ton of cross-platform chat bridges, which also greatly serves as a “selling point” towards users. And most importantly, again in my humble opinion, the required technical knowledge barrier for users is just not comparable to XMPP.
Don’t get me wrong, I’ve learned so much as an admin setting up and hosting XMPP and for a short while I even had a PoC going at work to try and advocate the protocol, but in the end Matrix feels like a worthy successor to me.
It allows me to convince “normal users” to use a federated, self-hosted and free chat platform reliably - and that’s what mostly matters to me :wink: