A friend of mine is interested in the “sovereign artist” model, which basically means that you self-publish and self-release your own work on your own website, as opposed to using a publishing house or art gallery.

It’s powerful because it gives everyone a platform to share “niche” art, but as a consumer, it can be difficult to find and “curate” high-quality, interesting works of art. Does a rating/voting system exist that is resistant to internet vote tampering?

I’m talking about how, 10 years ago, Amazon reviews were pretty helpful. But now they’ve been swarmed with paid and bot-written reviews. Same with Slickdeals and many others.

I’d want a voting system that incorporates some ideas:

  • it would prevent one person from making multiple fake accounts
  • reviews wouldn’t be suppressed or promoted by paid algorithms
  • the algorithm WOULD help connect people to items they are interested in. But maybe the workings of it would be open source, so it can be audited for bad acting.

Does a project like this exist somewhere? Rather than host it in one place, it could be powerful to federate the project and remove the temptation to manipulate algorithms.

  • azdle@news.idlestate.org
    1 year ago

    As far as I’m aware something like that isn’t really possible.

    • it would prevent one person from making multiple fake accounts

    How do you define ‘a person’, and how do you ensure that they only have one account? Short of government control of accounts, I don’t think you can really guarantee this, and even then fraud still gets past current government systems.

    Then, how do you verify that the review is coming from the person that the account is for?

    IMO, we’d all be better off going back to smaller-scale social interactions, think ‘social media towns’: you interact with a smaller number of people and over time develop trust in some of them. Then you can scale this out to more people than you can directly know with some sort of web-of-trust model. You know you trust Alice, and you know Alice trusts Bob, so you can trust Bob, but not necessarily quite as much as you trust Alice. You end up with a web of trust relationships that decays a bit with each hop away from you.

    It’s a rather thorny problem to solve, especially since for that to work optimally you’d want to know how much Alice trusts Bob, but that amounts to everyone documenting how much they trust each of their friends, which seems socially… well… difficult.
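    For the sake of argument, that decaying web of trust can be sketched in a few lines. Everything here is made up for illustration: the names, the scores in [0, 1], and the choice of multiplying scores along a path so trust decays per hop.

```python
def inferred_trust(trust, source, target, max_hops=3):
    """Best trust score from `source` to `target` over any path of up to
    `max_hops` edges. `trust` maps each person to {friend: direct_score}.
    Scores multiply along a path, so each extra hop decays the result."""
    best = {source: 1.0}       # best known score to each reachable person
    frontier = {source: 1.0}   # people whose score improved last hop
    for _ in range(max_hops):
        next_frontier = {}
        for person, score in frontier.items():
            for friend, direct in trust.get(person, {}).items():
                candidate = score * direct
                if candidate > best.get(friend, 0.0):
                    best[friend] = candidate
                    next_frontier[friend] = candidate
        frontier = next_frontier
    return best.get(target, 0.0)

# You trust Alice 0.9, Alice trusts Bob 0.8, so you trust Bob 0.9 * 0.8.
trust = {"you": {"alice": 0.9}, "alice": {"bob": 0.8}}
print(round(inferred_trust(trust, "you", "bob"), 2))  # 0.72
```

    The hard part the reply points at is still real: the direct scores (0.9, 0.8) have to come from people rating their friends, which is the socially awkward bit.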

    Though the rest is actually easy™:

    • reviews wouldn’t be suppressed or promoted by paid algorithms
    • the algorithm WOULD help connect people to items they are interested in. But maybe the workings of it would be open source, so it can be audited for bad acting.

    You do what the fediverse does, you have all the information available to everyone, then you run your own ‘algorithm’ that you wrote/audited/trust. The hard part is getting others to give away access to all ‘their’ data.
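    A minimal sketch of that “bring your own algorithm” idea, assuming the review data is all publicly fetchable; the data shapes and trust weights here are hypothetical:

```python
def rank_items(reviews, trust_in):
    """Rank items by trust-weighted average rating, using YOUR trust
    function rather than a platform-controlled ranking algorithm.
    Each review is a dict: {"item": ..., "author": ..., "rating": 1-5}."""
    totals = {}  # item -> (weighted rating sum, weight sum)
    for r in reviews:
        w = trust_in(r["author"])
        if w <= 0:
            continue  # ignore reviewers you have no trust in
        s, n = totals.get(r["item"], (0.0, 0.0))
        totals[r["item"]] = (s + w * r["rating"], n + w)
    scores = {item: s / n for item, (s, n) in totals.items()}
    return sorted(scores, key=scores.get, reverse=True)

# The spammer's review is simply ignored because you assign them no trust.
reviews = [
    {"item": "print-a", "author": "alice", "rating": 5},
    {"item": "print-a", "author": "spammer", "rating": 1},
    {"item": "print-b", "author": "bob", "rating": 4},
]
my_trust = {"alice": 0.9, "bob": 0.72}
print(rank_items(reviews, lambda who: my_trust.get(who, 0.0)))
# ['print-a', 'print-b']
```

    Swapping in a different `trust_in` function changes the ranking, which is the point: the algorithm lives on your machine, where you can audit it.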