• 0 Posts
  • 19 Comments
Joined 1 year ago
Cake day: July 9th, 2023

  • ((Why does Firefox crash on me!!!))

    ((Maybe even Firefox knows I typed too long and rambly.))

    So, where does that leave us? There’s always been unreliable knowledge from people. Joe in the next village tells tall tales about Martha from Sweden who catches fish with peeled strawberries. Scientific standardisation has helped a lot, and allowed for a sort of globalised reliable knowledge, but its cracks are showing. We trust ‘the experts’, but then find Wikipedia has trolls and WHO is influenced by Chinese diplomacy. So we trust ‘the community’ and find Amazon reviews are bought. So we trust our moderated sublemmits, and find out the content-to-user matching algorithms breed echo chambers. So we trust the government to moderate, but the American Left admit the Democrats are bad, and the Right admit the Republicans are liars. (And I’ve never even been to America!) So at last we go back to Aunt Jenny, who’s deeply afraid that black people will take over the country, and the local sysadmin whose network security is based on the book he read in the '90s.

    Maybe we need to relearn tricks from the old irl days, even if that loses us some of what we could gain from globalised knowledge and friendship. Perhaps we can find new ways to apply these to our internet communities. I don’t think I’m saying anything new here, but I guess fostering a culture of thinking about truth and trust is good: maybe I’m helping that.

    Almost as an aside (so I don’t ramble twice as long as my crashed-Firefox answer!): the best philosophical one-liner I’ve found for first-principling trust is: does this person show love? (Kindness, compassion, selflessness.) To me, and/or to others. That imparts some assumed value to their worldview and life understanding. It doesn’t make them an expert on any topic, but it makes a foundation.

    And finally,

    Do you really believe that the average person’s sapience is really that noteworthy?

    Yes. If you mean, is their comment more notable than most others in a public debate, then no. But if you’re pointing towards, are their experience, understanding and internal processes valuable, then yes, and that’s important to me. (Though I’m not great enough to hear, consider or interact with everyone!)

    The average person on the internet is being fake the same way chatGPT based bots would be!

    Do you reckon so? I think fake internet usually talks differently to chatGPT, though of course propaganda (at the national or individual level) tries to mimic whatever will be most effective. My point was largely that chatGPT mimics the experts we’ve previously learnt to trust better than most of fake internet could before, whilst being less sapient (than fake internet) and, at the same time, both more and much less trustworthy.




  • You treat bots like humans and humans like bots. It’s all about logic and good/bad faith.

    Part of the thing with chatGPT is that it’s particularly good at sounding like it knows what it’s saying, while spewing linguistically coherent nonsense.

    For many of us (most? even all, to some degree?), there’s an idea ingrained in our culture of saying what we think to be true, and refraining from saying what we don’t. That’s heavily diluted on the internet, where the tendency runs the other way: saying what we think will make people support or agree with us. We’ve grown up (some of us have!) with some feel for how to tell the difference.

    GPT (and I guess most human-like chatbots will be similar for now) is more an amoral, or a-scient, attempt to say something coherent based on the training data. It’s different again, but it sounds uncannily like what we’re used to from good-faith truth-speakers. I also think it’s like the extreme end of some cultures that prioritise saying what will make the other person happy over what is true.





  • There are also low-effort, low-value comments that agree with your worldview but are a poor contribution to the debate, especially on controversial topics.

    I’m sure there will always be lots of upvotes for things that shit on the opposition, especially when the majority thinks the opposition is morally and intellectually corrupt, but I’d rather those posts/comments be demoted (or e.g. relegated to a shitposting community) so healthy discussion can happen and the truth can be seen more fairly.

    As a side note: I found myself shifting away from some of Reddit’s majority opinions, ones I broadly agree with, because most of the posts supporting them are stupid arguments. And I’ve gained sympathy for some of the opponents, because whenever I check the source of the hate against them, it’s ill-founded. I tried not to take much opinion from Reddit anyway, but I love it when good debate frames the truth more clearly.



  • I agree it would be a dangerous precedent.

    Thing is, though, not every instance is equally valid and legitimate: that’s the reason for defederating from Threads.

    Not sure what you mean by what Gmail and Microsoft did to email? Do you mean that they assume many unknown email origins are spam? Gmail has obviously attracted a lot of users, and I myself have since moved off it to a paid email provider elsewhere, but I was under the impression it’s been quite good for email: pushing secure email and being good at anti-spam.


  • I wonder if it’s possible …and not overly undesirable… to have your instance essentially put an import tax on other instances’ votes. On the one hand, it’s a dangerous direction for a free and equal internet; but on the other, it’s a way of allowing access to dubious communities/instances, without giving them the power to overwhelm your users’ feeds. Essentially, the user gets the content of the fediverse, primarily curated by the community of their own instance.
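    To make that concrete, here’s a minimal sketch of what such an “import tax” might look like, purely as a thought experiment: a per-instance discount applied to incoming federated votes before they’re summed into a post’s score. All the instance names, weights and function names below are hypothetical assumptions for illustration, not anything from Lemmy’s actual code or API.

    ```python
    # Hypothetical "import tax" on federated votes: each remote instance's
    # votes are discounted by a weight chosen by the local instance's admins.
    # All names and numbers here are made up for illustration.

    VOTE_WEIGHTS = {
        "my-instance.example": 1.0,   # local votes count in full
        "friendly.example": 0.8,      # trusted peer, small discount
        "dubious.example": 0.2,       # still federated, but can't swamp local feeds
    }
    DEFAULT_WEIGHT = 0.5              # unknown instances get a middling weight


    def weighted_score(votes):
        """votes: iterable of (origin_instance, value) pairs, value being +1 or -1."""
        return sum(
            VOTE_WEIGHTS.get(origin, DEFAULT_WEIGHT) * value
            for origin, value in votes
        )


    if __name__ == "__main__":
        votes = [
            ("my-instance.example", +1),
            ("friendly.example", +1),
            ("dubious.example", +1),
            ("dubious.example", +1),
        ]
        # 1.0 + 0.8 + 0.2 + 0.2 = 2.2 instead of a flat 4
        print(weighted_score(votes))
    ```

    The point of the sketch is just that content from dubious instances stays visible, but its ranking influence is capped, so the user’s own instance ends up doing most of the curation.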