• 0 Posts
  • 5 Comments
Joined 2 years ago
Cake day: August 3rd, 2023

  • Fully agreed. On the service-provider side, we have “safe harbor” laws: a site isn’t liable for copyrighted user-generated content as long as it has mechanisms to take down items when notified.

    Liability-wise: The payment processors should have no fucking insight into what is being sold, only that they handle the transactions. Therefore, they should have no liability, similar to “safe harbor”.

    Reputation-wise: I can almost see a history where Visa, for example, used a statement like “we don’t handle transactions for X” as a marketing ploy… but that is way past where we are. There’s no chance of reputational damage to a payment processor for the items for which they handled a payment. Combined with the above, if I say I’m giving $20 to Tim, you give $20 to Tim and take it from me. Done. Not your problem.

    As another commenter stated, the payment processor should be a dumb pipe, and anything illegal being sold should be a liability for the seller or buyer. The idea of a moral judgement of the processor is as stupid as a water pipe to your house cutting off the flow if your shower runs too long.

    The real problem is the politicians, or lobbyists/influencers, who are sending bribes to each other to gain advantage… but Visa doesn’t have a problem handling a Venmo transaction for ‘tuition’.

    Don’t block me from buying horny games until after you’ve blocked world-superpower corruption first. But honestly, don’t even do that. Just handle moving the money when someone sends it. That’s your only job.





  • Like many things, a tool is only as smart as its wielder. There’s still a ton of critical thinking that needs to happen even when you do something as simple as bake bread. Using an AI tool to suggest ingredients can be useful from a creative perspective, but its output should not be assumed accurate at face value. Raisins and dill? Maybe ¯\_(ツ)_/¯, haven’t tried that one myself.

    I like AI for being able to add detail to things or act as a muse, but it cannot be trusted for anything important. This is why I’m ‘anti-AI’. Too many people (especially in leadership roles) see this tool as a solution for replacing expensive humans with something that ‘does the thinking’; but as we’ve seen elsewhere in this thread, AI CAN’T THINK. It only suggests items that are statistically likely to come next/nearby based on its input.

    In the Security Operations space, we have a phrase: “trust but verify”. For anything AI, I would use “doubt, then verify” instead. That all said, AI might very well give you a pointer to the right place to ask how much Motrin an infant should get. Hopefully, that’s your local pediatrician.