• 1 Post
  • 16 Comments
Joined 2 years ago
Cake day: April 24th, 2023


  • Oh you sweet summer child.

    If you remember anything from this thread, remember this: capitalist markets do not care whether something is useful or useless. Capitalist markets care whether something will make money for its investors. If something totally useless will make money for its investors, the market will throw money at it.

    See: tulips, pet rocks, ethanol, cryptocurrency. And now AI.

    Because people are stupid. And people will spend money on stupid shit. And the empty hand of capitalism will support whatever people will spend money on, whether it’s stupid shit or not.

    (And because, unfortunately, AI tools are amazing at gathering information from their users. And I think the big tech companies are really aggressively pushing AI because they want very much to have users talking to their AI tools about what they need and what they want and what their interests are, because that’s the kind of big data they can make a lot of money from.)


  • Why are LLMs so wrong most of the time? Aren’t they processing high quality data from multiple sources?

    Well that’s the thing. LLMs don’t generally “process” data as humans would. They don’t understand the text they’re generating. So they can’t check their answers against reality.

    (Except for Grok 4, but it’s apparently checking its answers to make sure they agree with Elon Musk’s Tweets, which is kind of the opposite of accuracy.)

    I just don’t understand the point of even making these softwares if all they can do is sound smart while being wrong.

    As someone who lived through the dotcom boom of the 2000s, and the crypto booms of 2017 and 2021, the AI boom is pretty obviously yet another fad. The point is to make money - from both consumers and investors - and AI is the new buzzword to bring those dollars in.


  • Oh, you sweet summer child.

    Throughout history, the so-called intellectuals have generally been the ones rationalizing atrocities and human rights violations.

    After all, being intelligent, they understand not to piss off those in power. They know they’re better off defending the actions of those in power than opposing them.

    The people who stand up for basic human rights, the people who speak truth to power, tend to be the less educated people. Like the abolitionists in the 1800s, when all the colleges were teaching students to pen educated arguments in favor of slavery. Like the men and women who marched for civil rights when all the educated conservatives were telling them it would destroy the country and all the educated liberals were telling them fighting was counterproductive. The people who say “I don’t care about complicated arguments, I don’t care what the intellectuals say, I see injustice and I stand against it”.

    As 1984 puts it:

    The party told you to reject the evidence of your eyes and ears. It was their final, most essential command. His heart sank as he thought of the enormous power arrayed against him, the ease with which any Party intellectual would overthrow him in debate, the subtle arguments which he would not be able to understand, much less answer. And yet he was in the right! They were wrong and he was right.

    If AIs today produce text frowning on inhumane evils, it’s because they were trained on actual human beings posting on social media about what they actually believe, and not on the ramblings of genocide-justifying political “intellectuals” like Henry Kissinger and Donald Rumsfeld.


  • There should be multiple independent steps of verifying if someone should get banned and in what way. And probably integrate a good test for joining the community so that it’s more likely for people to be rational from the start (that way you don’t even have to look at so many potential flags).

    How much would you pay to join a community with that level of protection for user rights? Like the old subscription based forums, some of which are still floating around the internet?

    Because “multiple independent steps of verifying” is, frankly, going to be a lot of frustrating, thankless, and redundant work for moderators. I mean, we know how to safeguard people’s rights through legalistic processes. Courts do it all the time. It’s called due process. And due process is frequently a slow, complicated, and expensive pain in the ass for everyone involved. And I think very few people would want to do that work for free.

    (Conveniently, this would also serve as a good test for joining such a community - people are more likely to follow the rules and act like decent human beings if a subscription they paid for is riding on it, and it would price out AI and spambots in the process.)



  • Generally if people don’t “get” your joke, there’s one of two things likely happening:

    Or option three, which happened here: someone attempted satire or dark humor and didn’t realize society had degenerated so much that people were genuinely, seriously, advocating for the satirical claim.

    Imagine Jonathan Swift’s “A Modest Proposal” - a suggestion that poor Irish people sell their children to be eaten for food, which would both reduce the burden on poor families and provide delicious sustenance for wealthy Englishmen. Now imagine a bunch of English people saying “this is a great idea, I’ve supported it for a long time now”. And then a bunch of Irish people attacking Jonathan Swift, believing he genuinely supported eating Irish children, because a bunch of English people actually supported it.

    You might wonder how it could be possible that people would confuse satirical attacks on exaggeratedly stupid and evil positions for actual support for those positions.

    But then you might remember there are sitting members of Congress suggesting we literally feed immigrants to alligators to thunderous fucking applause.

    And then you might remember satire is dead.


  • I think Disney is to American culture what McDonald’s is to American food. A corporate juggernaut that markets product not through quality but through advertising and name recognition, and starves out genuine American culture in the process.

    I mean, what does it say that one of the most recognizable symbols of the United States, worldwide, is a cartoon mouse whose job is to sell toys to kids?

    What message does that cartoon mouse send to the world about American values and American beliefs?

    The idea that giving money to a corporation has become a rite of fucking passage in American society - the number of people who think their kids need to watch Disney movies so they can fit in with other kids, who think their kids will miss out on a fundamental part of American culture if they don’t take them to Disneyland at least once - absolutely horrifies me. Especially since the only political and moral message kids learn from Disney is “uphold the status quo and buy more Disney merch”.

    Also, Disney is known for racism and sexism and cultural appropriation and union busting and copyright trolling and all sorts of general corporate bullshittery, and is currently shoving its feminist and LGBT representation back into the closet to appease Trump and avoid offending big conservative audiences in India and China and the Middle East, and there are plenty of smaller more specific reasons to hate them, but for me the whole “cultural vanguard of capitalism” thing outweighs the rest.





  • I’d argue the article’s point is “new communication technology encourages a particular form of psychosis, and LLMs are especially prone to encouraging psychosis because they generate such a believable imitation of speech”.

    I’ve been coming to believe LLMs are dangerous to mental health in general for a lot of reasons, and I thought this was an interesting discussion of how a basic human instinct - to look for patterns and assume rational thought and meaning behind those patterns - has always gone wrong when applied to technology and is particularly dangerous when applied to LLM-generated content.

    (Because there is a reason for every LLM-generated utterance, and that reason is “make the company money”. LLMs are capitalist speech acts in their purest form.)

    BTW, what’s wrong with Substack? Is it just the “Substack hosts fascist blogs so everyone using Substack is fascist by association” thing?



  • This is hardly unique to AI. When I used Reddit, r/bestof (a sub that reposted the “best” comments from Reddit threads) was consistently full of posts that confidently, eloquently, and persuasively stated bullshit as fact.

    Because Redditors as a collective don’t upvote and award the truest posts - they upvote and award the posts that seem the most trustworthy.

    And that’s human nature. Human beings instinctively see confidence as trustworthy and hesitation and doubt as untrustworthy.

    And it’s easy to project an aura of confidence when you post bullshit online, since you have all the time you need to draft and edit your comment and there are no consequences for being wrong online.

    Zero surprise an AI algorithm trained on the Internet replicates that behavior 😆


  • I mean, how many people fact check a book? Even at the most basic level of reading the citations, finding the sources the book cited, and making sure they say what the book claims they say?

    In the vast majority of cases, when we read a book, we trust the editors to fact check.

    AI has no editors and generates false statements all the time because it has no ability to tell true statements from false. Which is why letting an AI summarize sources, instead of reading those sources for yourself, introduces one very large procedurally generated point of failure.

    But let’s not pretend the average person fact checks anything. The average person decides who they trust and relies on their trust in that person or source rather than fact checking themselves.

    Which is one of the many reasons why Trump won.


  • Oh fuuuuck no.

    You’re not good enough at controlling your thoughts to be less useful than the pornsick, social-media-addicted AI drones with 10-second attention spans that would willingly participate in this.

    I remember a fantasy novel from the Myth Adventures series where the good guys went undercover as conscripts in an enemy nation’s army. They ended up assigned to logistics and decided they could effectively hamper the enemy army, while keeping their own cover, if they messed up ten percent of their supply orders. And they got medals for efficiency because a 90% success rate was so much better than every other logistics unit 😆

    Anyway, that’s what I think of when I hear your suggestion. The average competent human being who reads this and recognizes how dystopian this bullshit is, even while trying to fail, is going to give better data than the kind of fucking idiot who thinks this is a good idea and participates willingly.