There have been users spamming CSAM content in !lemmyshitpost@lemmy.world, and it has been federating to other instances. If your instance is subscribed to this community, you should take action to clean it up immediately. I recommend performing a hard delete via the command line on the server.
I personally deleted every image from the past 24 hours using the following command:
sudo find /srv/lemmy/example.com/volumes/pictrs/files -type f -ctime -1 -exec shred {} \;
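One caveat with shred here: on its own it only overwrites the file contents and leaves the scrambled file in place, so you probably also want -u to remove it afterwards. A hedged variant (same example path as above; adjust it to your own pictrs volume):

# Overwrite files created in the last 24 hours, add a final pass of zeros (-z),
# then deallocate and remove them (-u). The path is an example.
sudo find /srv/lemmy/example.com/volumes/pictrs/files -type f -ctime -1 -exec shred -u -z {} \;

Also note that on SSDs and copy-on-write filesystems, overwriting in place is not guaranteed to hit the original blocks, which is where the TRIM discussion further down comes in.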
Note: your local jurisdiction may impose a duty to report or other obligations. Check what applies to you, but always prioritize making sure the content is no longer being served.
Update
Apparently the Lemmy Shitpost community has been shut down for now.
If you aren't going to fully wipe your drive in horrible events like this, at the very least use shred instead of rm. rm simply removes references to the file in the filesystem, leaving the data behind on the disk until other data happens to be written there. Do not ever allow data like that to exist on your machines. The law doesn't care how it got there.
Was going to say the same. Windows and Linux both use "lazy" ways of deleting things, because there's not usually a need to actually wipe the data. Overwriting the data takes a lot more time, and on an SSD it costs valuable write cycles. Instead, the OS simply marks the space as usable again and removes its references to the file. But the data still exists on the drive; it has simply been marked as writeable again.
There are plenty of programs that will be able to read that “deleted” content, because (again) it still exists on the drive. If you just deleted it and haven’t used the drive a lot since then, it’s entirely possible that the data hasn’t been overwritten yet.
You need a form of secure delete, which doesn't just mark the space as usable. A secure delete overwrites the data with junk data, essentially white-noise 1s and 0s, so the data is actually gone instead of simply being marked as writeable.
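As a rough illustration of the difference (a sketch only, assuming a traditional filesystem, a throwaway file, and that /dev/sdX is the underlying device; don't run this against a disk you care about):

# Write a known string, sync it to disk, then delete it the lazy way.
echo "canary-string-1234567890" > demo.txt
sync
rm demo.txt
# The string can often still be found on the raw device until it gets overwritten.
sudo grep -a -c "canary-string-1234567890" /dev/sdX
# A secure delete overwrites the contents before unlinking.
shred -u -z sensitive-file.jpg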
Would rm be okay if you regularly run fstrim?
TRIM tells the SSD to mark an LBA region as invalid and subsequent reads on the region will not return any meaningful data. For a very brief time, the data could still reside on the flash internally. However, after the TRIM command is issued and garbage collection has taken place, it is highly unlikely that even a forensic scientist would be able to recover the data.
From: https://en.m.wikipedia.org/wiki/Trim_(computing)#Operation
So: probably yes.
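If you want to rely on that, it's worth making sure TRIM actually runs. A minimal sketch using the standard util-linux and systemd tooling:

# Trim all mounted filesystems that support it, verbosely.
sudo fstrim -av
# Or enable the periodic timer most distros ship, so it runs on a schedule.
sudo systemctl enable --now fstrim.timer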
I nuked my personal instance because of this :(
Dealing with pictrs is just frustrating at the moment, since there are no tools for its database format and no frontend for the API. I half-expected this outcome, but I hope it gets better in the future.
I'm in the process of writing a tool that will hopefully make deletion a bit easier, basically purging all the content that wasn't uploaded on my own instance. I can't help but feel like pict-rs is not ready for prime time yet.
There is no API endpoint to list all images known in the system. There is no direct connection between posts and images, or even images and users, even if they’re cached locally. This is way more painful than it needs to be.
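For what it's worth, the rough shape of what I'm attempting, with the heavy caveat that the internal endpoint path, port, and header name below are assumptions from memory and may differ by pict-rs version: collect the aliases you want gone (from the Lemmy database or your own bookkeeping), then ask pict-rs to purge them via its internal API.

# Assumption: pict-rs's internal API is reachable on port 8080 and protected by
# the api_key you configured. The /internal/purge path and the x-api-token
# header are from memory; check the pict-rs docs for your version first.
PICTRS_API_KEY="changeme"
while read -r alias; do
  curl -s -X POST \
    -H "x-api-token: ${PICTRS_API_KEY}" \
    "http://127.0.0.1:8080/internal/purge?alias=${alias}"
done < aliases-to-purge.txt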
Pict-rs has been the single largest pain of self-hosting a tiny Lemmy instance. I really hope things improve. I like hosting it myself but I can’t do it as a second job, having to figure out my own hacks and workarounds just to keep it running and not serving up illegal crap.
About a month after I commented that, pict-rs added the external_validation URL for pre-processing. I haven't looked into it myself, but Lemmy servers can now run images through a CSAM detector before they are accepted. Combining pictrs-safety and fedi-safety should help prevent the most immediate issues. However, fedi-safety requires a GPU for any kind of efficient processing, and I don't have anything compatible available. I could waste a lot of CPU cycles running that stuff on the CPU, but I'm not going to bother with that.
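For anyone wanting to try it, this is roughly the wiring, with the caveat that the variable name, port, and path below are from memory rather than verified: pict-rs posts each new upload to the external validation URL, pictrs-safety answers accept/reject, and fedi-safety is the GPU-backed scanner behind it.

# Assumed names: double-check the env var and the pictrs-safety endpoint/port
# against the pict-rs and pictrs-safety docs for your versions.
# pict-rs will send each new upload to this URL and reject it unless the
# service answers with a 2xx.
PICTRS__MEDIA__EXTERNAL_VALIDATION=http://pictrs-safety:14051/api/v1/scan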
Once illegal crap makes it to your server, you need to check your local laws before deleting it. Some jurisdictions require you to keep the files (but deny access) for evidence, and require you to notify the authorities. This stuff is exactly why self-hosting social media sounds nice but sucks in practice.
Thank you! I was looking into running this a week or two ago when I was doing some maintenance but I gave up and shelved the project for later due to the complexity. My Lemmy instance is running in AWS and I’m going to have to put some work into my network setup on both ends to be able to connect to a computer with a GPU at home.
I’m glad the community is working to resolve some of these issues. Hopefully some of this will get easier and more cost-effective.
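On the networking piece, one low-effort option (a sketch only; the hostname, user, and port are placeholders) is a persistent reverse SSH tunnel from the GPU box at home to the AWS instance, so the instance can reach the scanner without opening any ports at home:

# Run on the home GPU machine. Exposes the scanner (assumed to listen locally
# on port 14051) on the AWS host's loopback interface at the same port.
ssh -N -R 127.0.0.1:14051:127.0.0.1:14051 user@your-aws-instance

A WireGuard tunnel would be sturdier, but the idea is the same: whatever validation URL you configure on the AWS side just needs to point at the near end of the tunnel.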
Agreed, pict-rs is not ready for this. Not having an easy way to map URL to file name is a huge issue. I still don’t understand why non-block storage doesn’t just use the UUID it generates for the URL as a filename. There is zero reason to not have a one-to-one mapping.
Yeah, I just spent the last hour writing some Python to grab all the mappings via the pict-rs API. It didn't help that the env var for the pict-rs API token was named incorrectly (I should probably make a PR to the Lemmy ansible repo). EDIT: Never mind, it seems there is one already! https://github.com/LemmyNet/lemmy-ansible/pull/153
What kind of depraved piece of shit does this?
What’s a CSAM attack? Sounds so serious, but I’ve never heard of it.
CSAM is child sexual abuse material; the attack here is spamming pornographic depictions of minors into communities so that it federates everywhere.
I’m not surprised. It was quite common for shitheads on reddit to make an account, post a few comments on /r/againsthatesubreddits, then post CP on other subreddits to spin the narrative that AHS was trying to shut down hate subs.