Start by reading these two articles:
- https://blog.joinmastodon.org/2023/07/what-to-know-about-threads/
- https://ploum.net/2023-06-23-how-to-kill-decentralised-networks.html
Ok, now that you’ve done that (hopefully in the order I posted them), I can begin.
I have always been a strong supporter of Open Source Software (OSS), so much so that all of my projects (yes, all) are OSS and fully open for anyone to use. With that comes a known risk: anything open can be used for good… and for bad. I accepted that risk. But I also made sure to build things that weren’t, in themselves, inherently bad. I didn’t build anything unethical in my eyes (I understand the nuance here).
But I’ve seen what unethical devs can do.
Just take a look at the devs implementing the ModFascismBot for Reddit (that’s not its name, but that’s what it is). That is an incredibly unethical thing to build. Not because a private company is controlling what its own site does, no, that’s fine by me. Reddit can do whatever it wants. But because it’s an attempt to lie about reality, to force users to do something through manipulation rather than honesty. Even subreddits whose users voted overwhelmingly to shut down were messaged by the bot claiming that those same users (who voted for the shutdown) didn’t want it, and that the mods had to open back up or be removed from their positions. This is not ethical. This is not right. This is not what the internet is about.
Or the unethical devs at Twitter, who:
- built in actual keywords to mark Ukraine news as misinformation
- marked Substack as unsafe when they released their own Twitter competitor
- banned Mastodon links for no reason besides the fact that they are a competitor
- marked NPR, BBC, CBC, and PBS as ‘state-funded media’, a label clearly meant to evoke actual propaganda arms like Russia’s RT or China’s China Daily. Then, when there was enormous backlash, they removed the labels, but they also removed them from the genuine government propaganda accounts like RT and China Daily, and then lifted the limits on the virality of those accounts’ posts, giving them a massive spike in engagement and furthering misinformation.
It’s one thing for an organization to have a political lean… that is just a part of life, and it will never end. It’s another to actively sow disinformation in order to further your profits. That is what caused massive addiction to tobacco, the continuation of climate change, death and disfigurement from forever chemicals, ovarian cancer and mesothelioma from undisclosed exposure to asbestos, and the sale of ‘health products’ that claim to cure everything under the sun but can “interfere with clinical lab tests, such as those used to diagnose heart attacks”.
Please do not confuse this for saying that companies shouldn’t be able to sell things and make a profit. If you want to sell someone something that kills them if they misuse it, and you market it as such, go for it. That’s literally how every product in the cleaning aisle of your grocery store works. That’s how guns work, that’s how fertilizers work, that’s why we have labels. But manipulation for profit is unethical, and that’s why companies hide it. It hurts their bottom line. They know that their products will not be used if they reveal the truth. Instead of doing something good for humanity, they choose subterfuge. Profits over people. Profits over Earth, honestly. Profits over continuing the human race. Absolutely nothing matters to companies like this. And unethical developers enable this.
Facebook (ok, fine, Meta, still going to refer to them as FB though) is trying to join the Fediverse. We as a community, but honestly each of you as individuals, have a decision to make. Do they stay or do they go? Let’s put some information on the table.
Facebook…
- lies about the amount of misinformation it removes [1]
- increased censorship of ‘anti-state’ posts [1:1] [2] [3]
- lied to Congress about social networks polarizing people, while FB’s own researchers found that they do [2:1]
- attempted to attract preteens to the platform (huh, wonder where all that “you must be 13” stuff went) [4]
- rewards outrage and discord [3:1][5]
Facebook also…
- Allows for checking on friends and family in disasters [6]
- Created and maintained some of the most popular open source software on the planet (including the software that runs the interface you’re looking at right now) [7][8]
From my perspective… There’s not much good about FB. It has single-handedly caused the deaths of tens of thousands of people across the planet, if not hundreds of thousands. It continually makes people angrier and angrier. It’s a launching pad for scammers, thieves, manipulators, and dictators to push their conquests onto the world through manipulation, lies, tricks, and deceit. Its algorithms foster an echo chamber effect, exacerbating division and animosity, making civil discourse and mutual understanding all but impossible. Instead of being a platform for connection, it often serves as a catalyst for discord and misinformation. FB’s propensity for prioritizing user engagement over factual accuracy has resulted in a global maelstrom of confusion and mistrust. Innocent minds are drawn into this vortex, manipulated by fear and falsehoods, consequently promoting harmful actions and beliefs. Despite its potential to be a tool for good, it is more frequently wielded as a weapon, sharpened by unscrupulous entities exploiting its vast reach and influence. The promise of a globally connected community seems to be overshadowed by its darker realities.
As a person, I believe that we need to choose things as a community. I do not believe in the ‘BDFL’… the Benevolent Dictator For Life. Graydon Hoare, creator of Rust, recently wrote an article about how things would have been different had they stayed BDFL of Rust. From where I stand, the BDFLs we currently have on this planet really suck. Not just politically, but even in tech. I don’t think that path is good for society. It might work in specific circumstances, but it usually fails, and when it does, people get hurt. Badly.
So, with that in mind, I’ve been working on a polling feature for Lemmy. I seriously doubt I’ll be done with it soon, but hopefully FB takes a while longer to implement federation. I understand there’s a desire for me, or the other admins to just make a decision, but I really don’t like doing that. If it comes down to it, I will implement defederation to start with, but I will still be holding a vote as soon as I can get this damn feature done.
[1]: http://web.archive.org/web/20220120004921/https://www.washingtonpost.com/technology/2021/10/25/what-are-the-facebook-papers/
[2]: http://web.archive.org/web/20220119204203/https://www.washingtonpost.com/technology/2021/10/25/mark-zuckerberg-facebook-whistleblower/
[3]: https://web.archive.org/web/20181016003104/https://www.nytimes.com/2018/10/15/technology/myanmar-facebook-genocide.html
[4]: https://www.wsj.com/articles/facebook-instagram-kids-tweens-attract-11632849667?mod=article_inline
[5]: https://www.wsj.com/articles/facebook-algorithm-change-zuckerberg-11631654215?mod=article_inline
[7]: https://developers.facebook.com/blog/post/2021/10/18/peeking-behind-the-scenes-of-facebook-open-source/
[8]: The website actually uses Inferno, but from what I can tell it was forked directly from React, judging from the actual documentation and references in the repo.
For me, the main issue is that I simply don’t want to spoon-feed them data about my behavior, or give them my content to monetize on their platform as they see fit. I’m certain that if they ever implement something like subscribing to communities on Lemmy, or a Frontpage or All, they will do so with their own algorithms deciding what content you see (* see edit below) - algorithms designed to manipulate people, backed by an ML model with an unimaginable amount of data from FB and IG to train on, and 3 billion users to learn and experiment on, getting ever better at showing you the right personalized posts to keep you glued to their apps for as long as possible, no matter how unhealthy it may be or how it changes you for the worse.
While I understand that my content personally, or the whole of Lemmy, isn’t going to make a dent in the data they already have to work with, I still don’t want to have anything to do with it, and I would be pretty sad if we let them exploit the Fediverse in such a way.
EDIT: Now that I think about it, I’m actually not sure that’s how ActivityPub works - from what I assume (and please correct me if I’m wrong), it’s just a protocol that lets servers query different instances for their content, but the content is then shown on the querying instance - so the frontend and the way the content is presented are decided solely by the instance owner, just as I use https://programming.dev/c/community@lemmy.ml if I want to see content from Lemmy.ml, and nothing is stopping programming.dev from having a different interface altogether, or from showing me the posts in whatever order they see fit. In the same way, if Mastodon wanted to let its users access Lemmy posts, all it would need to do is query a Lemmy instance, using the standardized ActivityPub API, for data (what data, actually? I need to finally read up on ActivityPub) about the posts the user wants to see, and then implement a frontend for that data. And if a Mastodon user comments on something, it just sends the comment back to the Lemmy instance - using ActivityPub.
Is this correct? Or is there some kind of SSO involved in ActivityPub, so that my Fediverse interaction isn’t limited to, and directed by, my home instance only? That’s something I’m not really clear on, because my whole assumption about ActivityPub is based on random mentions here and there in comments around here.
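To make my mental model concrete, here’s a minimal sketch of the comment round-trip described above. The instance names and field values are hypothetical; the JSON shape follows the ActivityStreams vocabulary that ActivityPub uses (a "Note" object wrapped in a "Create" activity), but this is an illustration of the flow, not code from any real instance.

```python
def build_comment_activity(actor: str, in_reply_to: str, text: str) -> dict:
    """Build the Create activity a user's home server would deliver
    to the origin server's inbox when the user comments on a post."""
    note = {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Note",
        "attributedTo": actor,     # who wrote the comment
        "inReplyTo": in_reply_to,  # the post being commented on
        "content": text,
        "to": ["https://www.w3.org/ns/activitystreams#Public"],
    }
    return {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Create",          # "this actor created this Note"
        "actor": actor,
        "object": note,
    }

# e.g. a (hypothetical) Mastodon user replying to a Lemmy post:
activity = build_comment_activity(
    actor="https://mastodon.example/users/alice",
    in_reply_to="https://lemmy.ml/post/123456",
    text="Interesting point!",
)
# The home server signs this and POSTs it to the Lemmy instance's inbox;
# the Lemmy instance then renders the comment in its own frontend,
# however its owner chooses.
```

So in this model the frontend really is the receiving instance’s business; the protocol only moves the JSON around.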
Your data/content is public on Lemmy, FB would have no problem fetching it.
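To illustrate how little is involved: any client can ask a server for the machine-readable form of a public post just by sending the ActivityPub media type in the Accept header; no login or API key is needed. A stdlib-only sketch (the URL is a placeholder, and no request is actually sent here):

```python
import urllib.request

def activitypub_request(url: str) -> urllib.request.Request:
    # Request the ActivityPub JSON representation of a public object.
    return urllib.request.Request(
        url,
        headers={"Accept": "application/activity+json"},
    )

req = activitypub_request("https://lemmy.ml/post/123456")
# urllib.request.urlopen(req) would return the post as ActivityPub JSON,
# which is exactly what any federating server (FB included) could do.
```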
Your edit is correct. SSO not required.