I did it two or three times with 3-5 accounts (never all 8). I also used to ask my friends (N≈8) to upvote stuff (yes, I was pathetic), and I was never warned or banned. This was five or six years ago.
infosec amongst other things
In my opinion, the biggest (and quite possibly most dangerous) problem is someone artificially pumping up their own ideas. For all the users who sort by Active / Hot, this would be a real issue.
I’d love to see some social media research groups actually consider how to detect and potentially eliminate this on Lemmy, since Lemmy is still new and malleable (compared to other social media). For example, if they think some metric X would improve the chances of detection, it may be possible to include it in the metadata Lemmy attaches to posts / comments / activities at the source-code level.
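To make “metric X” a bit more concrete, here’s a rough, purely hypothetical sketch of the kind of heuristic I have in mind. The signals (vote burstiness, voter account age) and the formula are my own assumptions, not anything Lemmy records today:

```python
# Hypothetical sketch: flag posts whose vote pattern looks "pumped".
# The signals used here (vote burstiness + voter account age) are assumptions,
# not fields Lemmy currently exposes.
from statistics import mean

def suspicion_score(vote_timestamps, voter_account_ages_days):
    """Higher score = more suspicious. Both inputs are per-vote lists."""
    if len(vote_timestamps) < 5:
        return 0.0
    ts = sorted(vote_timestamps)
    # Burstiness: how tightly clustered the votes are in time (gaps in seconds).
    gaps = [b - a for a, b in zip(ts, ts[1:])]
    burstiness = 1.0 / (1.0 + mean(gaps))
    # Lots of very fresh accounts voting together is another weak signal.
    freshness = sum(1 for age in voter_account_ages_days if age < 7) / len(voter_account_ages_days)
    return burstiness + freshness

# Example: 20 votes within ~60 seconds, all from week-old accounts.
votes = [i * 3 for i in range(20)]
ages = [2, 3, 5, 1, 4] * 4
print(suspicion_score(votes, ages))
```

The point isn’t this specific formula; it’s that whatever signal researchers settle on, the raw data for it could be exposed in a standard place.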
I know a few professors and researchers who work on social media and associated technologies; I’ll go talk to them when they’re in their offices on Monday.
Maybe you’re right, but it just felt uncanny to see thousands of upvotes on a post with only a handful of comments. Maybe someone who’s active on the bot-detection subreddits can pitch in.
This was a problem on reddit too. Anyone could create accounts - heck, I had 8 accounts:
one main, one alt, one “professional” (linked publicly on my website), and five for my bots (which I optimistically created but never properly ran). I had all 8 accounts signed in on my third-party app, and I could easily manipulate votes on my own posts.
I feel like this is what happened when you’d see posts with hundreds or thousands of upvotes but only 20-ish comments.
There needs to be a better way to handle this, but I’m unsure whether we can truly solve it. Botnets are a problem across all social media (my undergrad thesis, many years ago, was on detecting botnets on Reddit using graph neural networks).
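For the curious, the general shape of that approach looks something like the toy sketch below. This is not my actual thesis code; it assumes PyTorch Geometric, and the features, labels, and edges are placeholders:

```python
# Toy sketch of GNN-based bot detection: accounts are nodes, and two accounts
# are linked when they vote on the same posts; a small GNN classifies each node
# as bot / not-bot. Features and labels below are random placeholders.
import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv

# Toy graph: 6 accounts, edges = "voted on the same post within a short window".
edge_index = torch.tensor([[0, 1, 1, 2, 3, 4],
                           [1, 0, 2, 1, 4, 3]], dtype=torch.long)
x = torch.randn(6, 8)                      # per-account features (age, karma, ...)
y = torch.tensor([1, 1, 1, 0, 0, 0])       # 1 = bot, 0 = human (placeholder labels)
data = Data(x=x, edge_index=edge_index, y=y)

class BotGNN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = GCNConv(8, 16)
        self.conv2 = GCNConv(16, 2)

    def forward(self, data):
        h = F.relu(self.conv1(data.x, data.edge_index))
        return self.conv2(h, data.edge_index)

model = BotGNN()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
for _ in range(50):
    optimizer.zero_grad()
    out = model(data)
    loss = F.cross_entropy(out, data.y)
    loss.backward()
    optimizer.step()
print(out.argmax(dim=1))                   # predicted bot / human per account
```

The real work is in building the graph (who votes and comments alongside whom) and in getting labelled bot accounts to train on; the model itself is the easy part.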
Fwiw, I have only one Lemmy account.
Was it hard to get this standardized back in the good ol’ days?
Do you think it would be as easy to do it now? If not, what challenges and hurdles would an RFC have to overcome?
The last thing I know of that was pretty “significant” is the GNU Terry Pratchett header (https://en.m.wikipedia.org/wiki/Terry_Pratchett#Death), and that was a community effort.
Oh man, this is something I definitely hope to never see again. I’m so tired of the unbelievable TIFUs, AITAs, and OffMyChests with thousands of upvotes on obviously fake stories.
The worst one (in recent history) was that TIFU with the student who slept with their professor’s daughter.
Part 1: https://libreddit.de/r/tifu/comments/1379pge/tifu_by_hooking_up_with_professors_daughter/
Part 2: https://libreddit.de/r/tifu/comments/137u9bk/tifupdate_by_hooking_up_with_professors_daughter/
Part 3: https://libreddit.de/r/tifu/comments/1391lmj/tifupdate_i_cuckolded_my_professor/
I hope this kind of bullshit never happens here.
The latter. I was making bots to collect data (for the previously mentioned thesis), and I’d build some sort of utility bot whenever I had an idea.
I once had an idea for a community-driven tagging bot to tag images (like hashtags). It would have been useful for graph building and general information lookup. Sadly, the idea never came to fruition.
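If anyone wants to run with it, the core would’ve been something like this (a hypothetical sketch; the command syntax and in-memory storage are made up for illustration):

```python
# Hypothetical sketch of the tagging bot's core: parse "!tag" commands from
# comments and keep a per-post tag index. Command syntax and storage are made up.
import re
from collections import defaultdict

TAG_COMMAND = re.compile(r"^!tag\s+(.+)$", re.IGNORECASE)

# post_id -> tag -> number of users who applied it
tag_index = defaultdict(lambda: defaultdict(int))

def handle_comment(post_id, body):
    """If the comment is a '!tag foo bar' command, record each tag for the post."""
    match = TAG_COMMAND.match(body.strip())
    if not match:
        return
    for tag in match.group(1).lower().split():
        tag_index[post_id][tag] += 1

# Example usage:
handle_comment("post_42", "!tag sunset photography oc")
handle_comment("post_42", "!tag sunset")
print(dict(tag_index["post_42"]))   # {'sunset': 2, 'photography': 1, 'oc': 1}
```

The interesting part would’ve been the community side (dedup, vote-weighting, spam filtering), not the parsing.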