• 2 Posts
  • 123 Comments
Joined 1 year ago
Cake day: June 12th, 2023


  • I have no idea about cozy.io, but just to offer another option, I’ve been running Seafile for years and it’s a pretty solid piece of software. And while it does have other stuff beyond just file storage/sharing, it’s mostly about files and nothing else. The Android client isn’t the best one around, but it gets the job done (background tasks on mine at least tend to freeze now and then); on the desktop it just works.


  • I assume you don’t intend to copy the files but to use them from a remote host? As security is a concern, I suppose we’re talking about traffic over a public network, where plain Kerberos with NFS (sec=krb5) only provides authentication, not encryption (sec=krb5p would add that). You can obviously tunnel NFS over SSH or a VPN, and you can request a Kerberos ticket with a longer lifetime and/or obtain one non-interactively from a keytab file.

    SSH/VPN obviously adds some overhead, but it also provides encryption over the public network. If this is something run on a LAN I wouldn’t worry too much about encrypting the traffic, and on my own network I wouldn’t worry too much about authentication either. Maybe separate the NFS server onto its own VLAN or firewall it heavily.
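
    If you go the SSH route, a minimal sketch of the tunnel looks something like this (host name and export path are placeholders; it assumes NFSv4, which only needs port 2049):

        # forward the server's NFS port to a local port over SSH
        ssh -f -N -L 12049:127.0.0.1:2049 user@nfs-server.example.com

        # mount the export through the tunnel
        sudo mount -t nfs4 -o port=12049,proto=tcp localhost:/export /mnt/remote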


  • I don’t think there exists a proper alternative even in the commercial sector.

    There are a handful of vendors, and they indeed monitor a ton more than just viruses. The solution we’re running at the office monitors pretty much all kinds of logs (DNS, DHCP, authentication, network traffic…) and it can lock down clients which misbehave badly enough. For example, every time I change a hosts file (for a legitimate reason) on my own laptop I get a question from the security team asking if that was intended. It also combines logs and data gathered from different systems to identify potential threats and problematic hosts, which is why our fleet feeds in data from all kinds of devices.

    I haven’t seen that many different solutions which do this, but the few I’ve worked with are a bit hit or miss on Linux. The current solution has a funny feature where it breaks dpkg if the server doesn’t have certain things installed (which aren’t dependencies of the package itself). They also eat up a pretty decent chunk of CPU cycles and RAM while running. But apparently someone has done the math and decided it’s worth the additional capacity; it’s above my pay grade, so I just install whatever I’m told to.



  • IsoKiero@sopuli.xyz to Linux@lemmy.ml · LindowsOS, 2001 · 4 months ago

    Yes, Ethernet, not Wifi, which would have been understandable.

    Back in the day there were ‘software NICs’ on the market which required separate (driver-ish) software to do anything. There were also RTL chips which needed proprietary pieces in the driver, and all that fun stuff. With WiFi it’s still a thing now and then, but everything works far better today, at least partially because the hardware is better too. Of course, even in the late 90s when Ethernet started to gain traction you could just throw something like a 3c509 or e100 into your box and call it a day, but the standards were far less mature than they are today.




  • The process is to go step by step. First connect directly to the modem you have, with a bridged connection if possible, and test with multiple bandwidth measurements (speedtest, fast.com, downloading a big file from some university FTP…), then work your way downstream through the network. At every step, test multiple scenarios where possible, preferably with multiple devices.

    When I got a 1 Gbit fiber connection a few years back I got an Ubiquiti EdgeRouter X with PoE options. On paper that should’ve been plenty for my network, but in practice, with NAT, DNAT, firewall rules and things like that, it capped at 600-700 Mbps depending on what I used it for. With small packets and VPN it dropped even further. So now that thing acts as a glorified PoE switch and the main routing is handled by a Mikrotik device, which according to the manufacturer’s tests should be able to push 7 Gbps under optimal conditions. I only have 1/1 Gbps, so there’s plenty of room, but with very specific loads (mostly small packet sizes with other stuff on top) that thing is still pushed to its limit, although it can manage full duplex 1000Base-T. In normal everyday use it runs at around 20% load, but I like that it can handle even the more challenging scenarios.
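
    For the actual throughput tests at each step, iperf3 is handy alongside the browser-based tests (a sketch; the address is a placeholder and iperf3 needs to be installed on both ends):

        # on the far end (e.g. a laptop plugged straight into the modem)
        iperf3 -s

        # on the client: TCP in both directions, plus UDP with small packets
        iperf3 -c 192.168.1.10
        iperf3 -c 192.168.1.10 -R
        iperf3 -c 192.168.1.10 -u -b 1G -l 200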


  • IsoKiero@sopuli.xyz to Linux@lemmy.ml · Problem with File transfer · 5 months ago

    I’m pretty sure you’ve already checked, but the obvious things sometimes fly under the radar and go unnoticed: is the phone in file transfer mode in the first place? Another one (which has bitten me): if you’re using a USB hub, try a direct connection and/or different ports on the host computer.
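
    A couple of quick checks from the terminal can also confirm the phone is detected at all (device names will differ, and adb is only relevant if android-tools is installed):

        lsusb                 # does the kernel see the phone on the bus?
        sudo dmesg --follow   # watch what happens when the cable is plugged in
        adb devices           # lists the phone if USB debugging is enabled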

    Personally I’ve spent far too long trying to hunt down something obscure when the fix was really simple, like some default option that changed with an update. In general I’ve forgotten to check the simple things first way too many times, and that has wasted more hours than I care to count or admit.



  • There’s already a ton of great examples here I can relate to (I’ve been using Linux since 1998 or ’99), but maybe the biggest difference today, apart from everything being SO MUCH EASIER now, is that the internet wasn’t really the thing it is today. Especially the bandwidth. It took hours and hours over the phone line to download anything; on a good day you could get 100 MB in just under 4 hours. Of course things were a lot smaller back then too, but it still took ages, and I’m pretty sure I now have more bandwidth on my home connection than most of the local universities had back in the 90s.


  • Back when CRT monitors were a thing and all this fancy plug’n’play technology wasn’t around, you had modelines in your configuration files which told the system what resolutions and refresh rates your actual hardware could support. If you put the wrong values there, your dumb analog monitor would just try to eat them as-is, with wildly different results. Most of the time it just resulted in a blank screen, but other times the monitor would literally squeal as it tried to push components well past their limits. In extreme cases, with older monitors, it could actually physically break the hardware. And everything was expensive back then.
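
    For the curious, a monitor section with a modeline looked roughly like this; the timings below should be the standard VESA 1024x768@60 ones, but I’m quoting them from memory, so treat the exact numbers as illustrative:

        Section "Monitor"
            Identifier  "CRT0"
            HorizSync   31.5-57.0   # kHz, from the monitor's manual
            VertRefresh 50-70       # Hz
            # pixel clock (MHz), then horizontal and vertical timings
            Modeline "1024x768" 65.0  1024 1048 1184 1344  768 771 777 806 -hsync -vsync
        EndSection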

    Fun times.



  • I want to prevent myself from reinstalling my system.

    Any ordinary file on disk won’t stop that, regardless of encryption, privileges, attributes or anything else your running OS could do to the drive. If you erase the partition table, your ‘safety’ file is lost too, no questions asked, because at that point the installer doesn’t care about (or even see) individual files on the medium. And that is exactly what the ‘use this drive automatically for installation’ option does on pretty much every installer I’ve seen.

    Protecting myself from myself.

    That’s what backups are for. If you want to block any random USB-stick installer from running, you could restrict the boot options in the BIOS and set a BIOS password, but that only limits whether you can ‘accidentally’ reinstall the system from external media.

    And neither of those has anything to do with read/copy protection for the files. If they contain sensitive enough data they should be encrypted (and backed up), but that’s a whole different problem from protecting the drive against an accidental wipe. Any software-based restriction on your files falls apart immediately (apart from reading the data if it’s encrypted) once you boot another system from external media or another hard drive, because whatever solution you’re using to protect them is no longer running.

    Unless you hand system management over to someone else (root password, BIOS password and settings…) who can keep you from shooting yourself in the foot, nothing will get you what you want. Maybe some cloud-based filesystem from Amazon with immutable copies could achieve it, but it’s not really practical on any level, financially very much included. And even with that (if it’s possible in the first place, I’m not sure), if you’re the one holding all the keys and passwords, the whole system is at your mercy anyway.

    So the real solution is to back up your files, verify regularly that backups work and learn not to break your things.
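
    As a minimal sketch of that (rsync to an external drive; the paths are made up):

        # take a dated copy of the home directory
        rsync -a /home/user/ "/mnt/backup/home-$(date +%F)/"

        # verify it: a checksum dry-run should itemize nothing if the copy is intact
        rsync -anci --delete /home/user/ "/mnt/backup/home-$(date +%F)/"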




  • Then do sudo apt install xfce4 and sudo apt purge cinnamon* muffin* nemo*.

    It’s been a while since I installed XFCE on anything, but if things haven’t changed, I think the metapackage doesn’t include xfce4-goodies and some other packages, so if you’re missing something it’s likely that you just need to ‘apt install xfce4-whatever’. Additionally, you can keep Cinnamon around as long as you like as a kind of backup; just change lightdm (or whatever login manager LMDE uses) to use XFCE as the default. And there are even lighter desktops than XFCE, like LXDE, which are also easy to install via apt if you want to try whether that works for you.
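
    On Debian-based systems like LMDE, one way to switch the default session is update-alternatives (a sketch; LightDM also remembers whichever session you last picked from the menu on the login screen):

        # list installed session managers and pick xfce4-session as the default
        sudo update-alternatives --config x-session-manager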


  • I understand the mindset you have, but trust me, sooner or later you’ll learn the habit of pausing to check your command before hitting enter. For some it takes a bit longer and it’ll bite you in the butt a few times (so have backups), but everyone has gone down that path and everyone has had to fix their mistakes now and then. If you want a hard (and fast) way to learn to confirm your commands, use dd a lot ;)

    One way to make it a bit less scary is to ‘mv <thing you want removed> /tmp’ and, once you’ve confirmed that nothing extra got moved, ‘cd /tmp; rm -rf <thing>’, but that still includes the ‘rm -rf’ part.
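
    A couple of small guard rails if you want them (just examples; the helper name is made up):

        # GNU rm: prompt once before removing more than three files or recursing
        alias rm='rm -I'

        # a 'soft delete' that stages things in /tmp instead of removing them
        trash() { mv -v -- "$@" /tmp/; }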


  • IsoKiero@sopuli.xyz to Linux@lemmy.ml · Linux on old School Machines? · 6 months ago

    Absolutely. Maybe leave Gnome/KDE out and use a lighter WM, but they’ll be just fine, especially if they have 8 GB or more RAM. I suppose those have at least dual-core processors, so that won’t be a (huge) bottleneck either. You can do a ton of stuff with those beyond just web browsing, like programming, text editing, spreadsheets and so on. I’d guess that available RAM is the biggest limit on what they can do, especially if you like to open a ton of tabs in your browser.
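
    If you want to check what a particular machine has before choosing a desktop, the usual suspects give a quick picture:

        free -h                     # total and available RAM
        nproc                       # CPU cores/threads
        lscpu | grep 'Model name'   # which CPU it actually is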