  • On debian testing (trixie):

    $ cat bin/steam-jailed.sh

    #!/bin/sh
    # Run steam from the jail home under the stock firejail steam profile,
    # forwarding any arguments (e.g. a steam:// URL) safely quoted.
    firejail --private=/home/user/steamjail --profile=/etc/firejail/steam.profile ~/steam "$@"
    

    Sometimes an update breaks something and I have to experiment with the profile settings. For that it helps to launch a bash inside the same jail and start steam from the command line there, to see its output messages:

    #!/bin/sh
    # Same jail, but drop into an interactive shell instead of launching steam;
    # blacklist ~/.inputrc so the personal readline config is not visible inside.
    firejail --private=/home/user/steamjail --blacklist="${HOME}/.inputrc" --profile=/etc/firejail/steam.profile bash
    
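    From inside that jailed shell, steam can then be started by hand so its messages stay on the terminal (same launcher path as in the first script):

    $ ~/steam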

    What happens most of the time is that a steam update depends on a newer system library that I haven’t installed yet, and I then have to do a system update - steam is shit at managing OS dependencies (i.e. it doesn’t).
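    When that happens, here is a minimal sketch for spotting the culprit from the jailed shell above - note the binary path is an assumption (the default Steam client install location) and may differ on a given setup:

    #!/bin/sh
    # Print the shared libraries the steam client binary links against,
    # keeping only those the dynamic loader cannot resolve.
    # NOTE: the path below is illustrative, not taken from the original post.
    ldd "$HOME/.local/share/Steam/ubuntu12_32/steam" | grep "not found"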





  • I was saying:

    most data center admins using linux are not so stupid as to subscribe to remote updates from a third party

    Your response is not related to that in any way. If third-party software - running with system rights - forces auto-updates, that’s called a “rootkit”, and any sane admin would refuse to install such a package.

    “Competent” here also means “if the upper management refuses to listen to my advice, I leave, because I have other options”. People who implement stupid policies - and especially technological solutions - against their own principles are a cancer to democracy. They are the ones who enable tech-illiterate morons to implement totalitarian regimes.










  • My bad for not seeing the sub’s name before commenting. My points still stand, though.

    There’s machine learning, and there’s machine learning. Either way, pattern matching and statistics have nothing to do with intelligence beyond the pattern matching logic itself. Only morons call LLMs “AI”. A simple rule like “if value > threshold then doSomething” is more AI than an LLM, because there’s actual logic there. An LLM has no such logic behind its word prediction, but thanks to statistics it is able to fool many people (including myself, depending on the context) into believing it is intelligent. That makes it dangerous, but not AI.


  • Bullshit take. OP didn’t post a screenshot about AI; it’s about LLMs, and they are absolutely doing more harm than good. The examples you are quoting are also highly misleading at best:

    • science assistance: that’s machine learning, not AI
    • helping doctors? Yes - again, machine learning. Expediting screening rates? That’s horribly dangerous and will get people killed. What it could do is scan medical data that has already been seen by a qualified doctor / radiologist / scientist and re-submit it for a second opinion when it “finds” a pattern.
    • powering robots that have moving parts: that’s where you want actual AI - logical rules from sensor to action. Putting deep learning or LLM bullshit in there is, again, fucking dangerous and will get people killed
    • helping to catch illegal fishing, etc.: again, deep learning, not AI