Even if the answer is just a recommendation of a better community to ask in, I’m curious how Lemmy combats criminal activity and content such as human trafficking, smuggling, terrorism, etc.

Is it just a matter of each node banning users when it identifies a crime, and/or problematic nodes being defederated if they tolerate it?

And if defederation is the remedy, does that mean each node has to individually choose to defederate from the one allowing criminal activity?

  • conciselyverbose@kbin.social

    The protocol and software themselves don’t. They’re open source, and anyone can run them.

    Instance admins can block servers that allow anything illegal (or anything else they believe is inappropriate).
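
    For illustration, here’s a minimal sketch of what that instance-level blocking (defederation) amounts to. The names are hypothetical, not Lemmy’s actual internals — real admins manage the blocklist through the admin settings — but the effect is the same: federated activity from a blocked server is dropped before it ever reaches local users.

```python
from urllib.parse import urlparse

# Hypothetical per-instance blocklist; illustrative names, not Lemmy internals.
BLOCKED_INSTANCES = {"problem-instance.example"}

def accept_remote_activity(actor_url: str) -> bool:
    """Drop federated activity whose actor lives on a blocked (defederated) server."""
    origin = urlparse(actor_url).hostname
    return origin not in BLOCKED_INSTANCES

# An activity arriving from a defederated server is simply ignored:
print(accept_remote_activity("https://problem-instance.example/u/spammer"))  # False
print(accept_remote_activity("https://lemmy.ca/u/Otter"))                    # True
```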

    • Otter@lemmy.ca

      To add on:

      If a problem user is on instance A, it’s mainly up to the admins of instance A to deal with them. Until that happens, other instances can ban the problem user from their own instance; ideally, instance A steps in and deals with them quickly.

      If instance A isn’t dealing with the problem user, if there’s a wave of them on instance A, or if there’s an entire problem community, other instances will likely defederate (temporarily or permanently).
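
      A rough sketch of that escalation, again with invented names just to make the division of responsibility concrete: a remote instance bans an individual problem user locally, and only reaches for full defederation when the user’s home instance isn’t acting and the problem keeps spreading. The threshold here is made up; in practice it’s each admin team’s judgment call.

```python
# Hypothetical sketch of the escalation described above: ban individual remote
# users first, defederate the whole instance only if its admins don't act.
local_user_bans: set[str] = set()   # e.g. "spammer@instance-a.example"
defederated: set[str] = set()       # servers this instance no longer federates with

def handle_problem_user(user: str, home_instance: str,
                        home_admins_acting: bool, problem_count: int) -> None:
    if home_admins_acting:
        return                          # instance A's own admins are handling it
    local_user_bans.add(user)           # keep this one user out in the meantime
    if problem_count > 10:              # illustrative threshold, not a real rule
        defederated.add(home_instance)  # a wave of problems -> defederate entirely

handle_problem_user("spammer@instance-a.example", "instance-a.example",
                    home_admins_acting=False, problem_count=12)
print(local_user_bans, defederated)
```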