Frogger, played a lot of that and had to stop myself before crossing roads for a while after.
In my mind I was trying to optimize moving in the same direction as the car and sneaking in before the oncoming car got there. I'm lucky to be alive.
If PostgreSQL is also shut down and you don't start the backup before it's completely stopped, it should be OK. You might need to restore to the same version of PostgreSQL and make sure it is set up the same way. Dumping the data is safer: you get a known good state, and it can be restored into any new database. Grabbing the files as you suggest should work at least 90 percent of the time. But why risk it?
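For the dump route, a minimal sketch (database name, user, and file names are placeholders):

    # logical dump in custom format, restorable into any newer PostgreSQL
    pg_dump -Fc -U postgres mydb -f mydb.dump

    # restore into a fresh database
    createdb -U postgres mydb_restored
    pg_restore -U postgres -d mydb_restored mydb.dump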
From personal experience: if you're hosting GitLab and make it available to the internet, keep it updated, or within a year your server will be slowed to a crawl hosting someone's crypto miner.
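With the Omnibus package on Debian/Ubuntu, staying updated is basically one line to run regularly (assuming gitlab-ce from the official apt repo; Omnibus runs the migrations as part of the upgrade):

    sudo apt-get update && sudo apt-get install -y gitlab-ce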
https://github.com/plankanban/planka was more competent than it looked a few years ago, might be even better now.
Sorry, 70 degrees, not 80. The load was fine. It's a machine for testing things, but I kept using checkmk since I really liked it. Everything is on one server, both the monitoring server and all the clients. It's an old workstation - it runs around 60 degrees normally.
That said, it could very much be a config issue; I installed with the Ansible role and left almost everything at the defaults. A very easy installation, and with Ansible it's very easy to add new hosts to monitor as well. I'm up to 36 now, including some docker containers.
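For reference, after an `ansible-galaxy collection install checkmk.general`, rolling the agent out looks roughly like this; a sketch from memory, so role and variable names may differ from the collection's docs:

    # site.yml - put the agent on everything in the 'monitored' group
    - hosts: monitored
      become: true
      roles:
        - checkmk.general.agent

A new host is then just another inventory line and one playbook run.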
I switched back to 1 minute to test, and it warned for temperature within 20 minutes, going from 60 degrees to hovering around 70. Load went from 2 to 3.5 and threads from 1k to 1.2k, all on the physical side. There's also a small increase in IO that seems to be the checkmk server writing more to disk; the CPU on that host is only slightly up.
I'm guessing that the temp going over the threshold is hardware related; a better fan might fix that issue.
I don't know if the load/thread increase is reasonable, but given the number of checks done in the agent I'm perfectly OK with giving those resources to have all the data points checkmk collects available. It's helped a lot being able to go into the details to see what's going on; checkmk makes that so easy.
I opted for checkmk as well and don't want to switch. It has good defaults for Linux monitoring, and it will tell me about random things to fix after reboots, or that memory/disk is getting low, so I can fix it quickly.
When monitoring 15 virtual machines on one physical host, the default of checking every minute for all machines raised the temp over 80 degrees Celsius on the physical machine and triggered a warning. Checking every five minutes is more than I need, so I went with that change.
Yeah, pgsql and redis are probably too much to work around, and the market too small. Those who could find it useful probably already have an installation on a server that can be used.
For my usage it's perfectly fine running in Python; so far there aren't many daily users and not many bugs, most days nothing is reported. If I had more users, or with performance telemetry enabled, I might want Rust: better for the environment, and I could run it on a smaller instance. That said, I believe GlitchTip is already ahead of Sentry in resource usage. I didn't install Sentry, but I saw all the systems it needed, and that was the main reason for going with GlitchTip. I'm mostly OK with their license.
I installed it with Ansible a few months ago and it’s been solid. It’s really nice to see bug reports with so much detail.
At the same time I also connected my dev environment to it, and it's been helpful for webdev to get errors from both frontend and backend in the same interface when adding features.
For dev it's less useful to have the history saved, so I think a standalone binary without setup, one that simply accepts anything and keeps it in memory, would be useful for a small audience.
It should be easier to port forward SMTP directly to the mailcow installation for incoming mail and use NPM only for the web interface.
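A sketch of the "skip NPM for mail" idea with plain iptables on the edge box (the mailcow IP is a placeholder, and forwarding/masquerade still has to be allowed separately):

    # send the mail ports straight to the mailcow host
    iptables -t nat -A PREROUTING -p tcp --dport 25  -j DNAT --to-destination 192.168.1.10:25
    iptables -t nat -A PREROUTING -p tcp --dport 587 -j DNAT --to-destination 192.168.1.10:587
    iptables -t nat -A PREROUTING -p tcp --dport 465 -j DNAT --to-destination 192.168.1.10:465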
If netbird has enough DNS support, you might be able to set up all the mailcow-recommended records there, so you have autodiscovery from mail clients on the netbird VPN.
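From memory of the mailcow docs, those records look something like this (example.org and the addresses are placeholders; in the netbird case the A record would point at the peer's VPN IP):

    mail                IN A     100.64.0.10
    autodiscover        IN CNAME mail.example.org.
    autoconfig          IN CNAME mail.example.org.
    _autodiscover._tcp  IN SRV   0 1 443 mail.example.org.

Whether netbird's DNS can serve CNAME/SRV records is the part to check first.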
Incoming mail is pretty easy to get working anywhere, but outgoing is restricted if your IP address is in any way suspicious. Using SendGrid, AuthSMTP, or something similar is the easy way.
For the hardcore, finding a VPS with a company that blocks outgoing SMTP by default but will unblock it if you convince them you're responsible can be fun and/or frustrating. At minimum you'll have a mail relay there for outgoing email, but you can also receive incoming email via that server. The smallest possible server should be enough.
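Either way, pointing a local Postfix at the relay is only a few lines in main.cf (hostname is a placeholder; credentials go in the sasl_passwd map, hashed with postmap):

    # /etc/postfix/main.cf
    relayhost = [relay.example.com]:587
    smtp_sasl_auth_enable = yes
    smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
    smtp_sasl_security_options = noanonymous
    smtp_tls_security_level = encrypt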