  • Moving/copying/reading/deleting tonnes of tiny files isn’t significantly faster on an ssd because the requirements for doing so are not limited by HDDs in the first place.

    You mean the physical actuator moving a read/write head over a spinning platter? Which limits its traversal speed over its physical media? Which severely hampers its ability to read data from random locations?

    You mean that kind of limitation? The kind of limitation that is a core part of how a hard drive works?

    That?

    I would highly recommend that you learn what a hard drive is before you start commenting about its performance characteristics. 🤦🤦🤦


    For everyone else in the thread, remember that arguing with an idiot is always a losing battle because they will drag you down to their level and win with experience.


  • This is like asking for a source for common sense statements.

    HDDs are pretty terrible at random I/O, which is what reading many small files tends to be. This is because they have a literal mechanical arm with a tiny magnet on the end that needs to move around to read sectors on a spinning platter. The physical limits on how quickly the read/write head can traverse the platter limit its random I/O capabilities.

    This makes hard drives abysmal at random I/O, and it's why defragmenting is a thing.

    This is common knowledge for anyone in IT, and easy knowledge to obtain by reading a Wikipedia page.

    SSDs are great at random I/O. They have no physical components that need to move in order to read from random locations, so they perform roughly equally well reading from any location. Meaning their random I/O capabilities are significantly better.
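
    If you want to see the gap for yourself, here's a minimal benchmark sketch (Python, with a placeholder TEST_DIR path that you'd point at the drive under test): it writes a pile of tiny files, then times reading them back in random order. Run it once on an HDD and once on an SSD, ideally with caches dropped between runs, and the difference speaks for itself.

    ```python
    import os
    import random
    import time

    # Placeholder path: point this at the drive you want to test.
    TEST_DIR = "./tiny-file-test"
    FILE_COUNT = 10_000
    FILE_SIZE = 4 * 1024  # 4 KiB "tiny" files

    os.makedirs(TEST_DIR, exist_ok=True)

    # Write the test files.
    for i in range(FILE_COUNT):
        with open(os.path.join(TEST_DIR, f"f{i:05d}.bin"), "wb") as f:
            f.write(os.urandom(FILE_SIZE))

    # Read them back in random order (the workload HDDs hate).
    names = os.listdir(TEST_DIR)
    random.shuffle(names)

    start = time.perf_counter()
    for name in names:
        with open(os.path.join(TEST_DIR, name), "rb") as f:
            f.read()
    elapsed = time.perf_counter() - start

    print(f"Read {FILE_COUNT} files in {elapsed:.2f}s "
          f"({FILE_COUNT / elapsed:.0f} files/s)")
    ```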



  • douglasg14b@lemmy.world to Selfhosted@lemmy.world · Jellyfin over the internet

    These are all holes in the Swiss cheese model.

    Just because you and I cannot immediately think of ways to exploit these vulnerabilities doesn't mean they don't exist or aren't already in use (including other endpoints or vulnerabilities not listed).


    This is one of the biggest mindset gaps in technology, and it tends to result in an internet full of exploitable services and devices, which are more often than not used as proxies for crime or illicit traffic rather than being directly exploited.

    Meaning that unless you have incredibly robust network traffic analysis, you won’t notice a thing.

    There are so many Sonarr and similar instances out there with minor vulnerabilities being exploited in the wild because of the same "Well, what can someone do with these vulnerabilities anyway?" mindset. Turns out all it takes is a common deployment misconfiguration in several seedbox providers to turn one of them into an RCE, which wouldn't have been possible if the vulnerability had been patched.

    Which is just holes in the Swiss cheese model lining up. Something as simple as allowing an admin user access to their own password when they are logged in enables an entirely separate class of attacks, excused because "if they're already logged in, they know the password". Well, not if there's another vulnerability with authentication…

    See how that works?
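
    To make that last example concrete, here's a minimal sketch of the defense-in-depth version (the User type and helpers are hypothetical, not from any particular project): even an already-authenticated session has to re-prove the current password before a sensitive action, so a stolen session or an auth bypass alone isn't enough on its own.

    ```python
    import hashlib
    import hmac
    import os

    def hash_password(password: str, salt: bytes) -> bytes:
        # Standard PBKDF2 password hashing; parameters are illustrative.
        return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

    class User:
        def __init__(self, password: str):
            self.salt = os.urandom(16)
            self.password_hash = hash_password(password, self.salt)

    def verify_password(user: User, password: str) -> bool:
        return hmac.compare_digest(user.password_hash,
                                   hash_password(password, user.salt))

    def change_password(user: User, session_authenticated: bool,
                        current_password: str, new_password: str) -> bool:
        # Layer 1: the session must be authenticated at all.
        if not session_authenticated:
            return False
        # Layer 2: re-verify the current password for this specific action,
        # so an attacker riding a stolen session still hits another wall.
        if not verify_password(user, current_password):
            return False
        user.salt = os.urandom(16)
        user.password_hash = hash_password(new_password, user.salt)
        return True
    ```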





  • And it won’t scale at all!

    Congratulations, you made more AI slop, and the problem is still unsolved 🤣

    Current AI solves 0% of difficult programming problems. 0%. It's good at producing the lowest common denominator, and protocols are sitting at the 99th percentile of difficulty here. You're not going to be developing anything remotely close to a new, scalable, secure, federated protocol with it.

    Never mind the interoperability, client libraries, etc., or the proofs and protocol documentation, which exist before the actual code.





  • You can't really host your own AWS. You can self-host various amalgamations of services that imitate some of the features of AWS, but you can't really self-host your own AWS by any stretch of the imagination.

    And if you're thinking of something like LocalStack, that's not what it's for, and it has huge gaps that make it unfit for live deployment (it is, after all, meant for test and local environments).







  • I mean, it’s more complicated than that.

    Of course data is persisted somewhere, in a transient fashion, for the purpose of computation, especially when using event-based or asynchronous architectures.

    And then promptly deleted or otherwise garbage collected in some manner (either actively or passively, usually passively). It could be in transitory memory, or it could be on high speed SSDs during any number of steps.

    It's also extremely common for data storage to happen at the caching-layer level without violating requirements that data not be retained, since those caches are transient. And let's not forget the reduced-rate "bulk" asynchronous APIs, which use idle, cheap computational power to do work in a non-guaranteed amount of time and which require some level of storage until the data can be processed.

    A court order forcing them to start storing this data is a problem. It doesn't mean they already had it stored in an archival format somewhere; it means they now have to store it somewhere for long-term retention.
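
    As a minimal sketch of that transient-persistence pattern (the spool directory and job names here are purely illustrative, not any real service's design): a worker spools each payload to disk only until the computation runs, then deletes it. Nothing about this workflow leaves behind a long-term archive.

    ```python
    import json
    import os
    import queue
    import tempfile

    jobs = queue.Queue()
    SPOOL_DIR = tempfile.mkdtemp(prefix="transient-spool-")

    def enqueue(payload: dict) -> None:
        # Persist the payload transiently so it survives until a worker picks it up.
        fd, path = tempfile.mkstemp(dir=SPOOL_DIR, suffix=".json")
        with os.fdopen(fd, "w") as f:
            json.dump(payload, f)
        jobs.put(path)

    def worker() -> None:
        while not jobs.empty():
            path = jobs.get()
            with open(path) as f:
                payload = json.load(f)
            # ...do the actual computation here...
            print("processed", payload)
            # ...then promptly delete the transient copy.
            os.remove(path)

    enqueue({"request": "transcode", "item": 42})
    worker()
    ```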




  • The sad part is that you're right.

    And the reason it's sad is that most of the individual engineers on proprietary projects care deeply about the project itself and have the same goals as they do with open source software, which is just to make something that's useful and do cool shit.

    Yep, the business itself can force them not to take care of problems, or force them to go in directions that are counter to their core motivations.