Threads has begun testing federation over ActivityPub, and they have already blocked two important servers I administer. The first is the Mostr Bridge, which connects ActivityPub and Nostr so people can communicate across protocols. The second is Spinster.xyz, which is arguably the largest independent feminist community online.
This was always going to happen. They will block aggressively, because they can’t have their precious advertising money mixed with CSAM, Nazis, and other illegal content. And the fedi is full of that.
[citation needed]
This is a good start: https://cyber.fsi.stanford.edu/io/news/addressing-child-exploitation-federated-social-media
The fedi is, in fact, full of illegal content, but good admins share the blocklists so that nobody has to see it.
Tells you a lot about the Pleroma admins that insist on remaining completely unmoderated.
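For what it’s worth, “sharing the blocklists” is mostly mechanical. Here’s a rough sketch of what applying a shared list could look like, assuming Mastodon 4.x’s admin domain-blocks endpoint (POST /api/v1/admin/domain_blocks) and a token with the admin:write:domain_blocks scope; the instance URL, token, and the simple domain,severity CSV layout are placeholders, not any particular community list’s format:

```python
# Hypothetical sketch: apply a shared blocklist to your own instance via
# Mastodon's admin API. Assumes Mastodon 4.x and an access token created
# with the admin:write:domain_blocks scope.
import csv
import sys

import requests

INSTANCE = "https://example.social"  # your instance (placeholder)
TOKEN = "..."                        # admin access token (placeholder)

def apply_blocklist(path: str) -> None:
    """Read a simple 'domain,severity' CSV and block each entry."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            resp = requests.post(
                f"{INSTANCE}/api/v1/admin/domain_blocks",
                headers={"Authorization": f"Bearer {TOKEN}"},
                json={
                    "domain": row["domain"],
                    # 'silence' hides posts; 'suspend' cuts federation entirely
                    "severity": row.get("severity", "suspend"),
                },
                timeout=30,
            )
            # a 422 usually means the domain is already blocked
            print(row["domain"], resp.status_code)

if __name__ == "__main__":
    apply_blocklist(sys.argv[1])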
Umm Meta is basically only Nazis and Pedos at this point? I’d say there is far less of it on the Fediverse than on Facebook.
I’m not even sure that’s true of Xhitter, and Faceb**k at least has a mod team
There’s far less because of server blocks. There are tons of gross servers that are just walled off from everyone else. Mastodon.social blocks a couple hundred servers.
Every now and then someone will write an article like, ‘I love free speech so I thought I could run a Mastodon server without blocking anyone… boy was I ever wrong.’ There’s some truly vile shit out there.
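You can check the number yourself, by the way: Mastodon 4.x exposes GET /api/v1/instance/domain_blocks whenever the admin chooses to publish the list, and as far as I know mastodon.social does. A quick sketch:

```python
# Sketch: count a server's published domain blocks. Mastodon 4.x exposes
# GET /api/v1/instance/domain_blocks when the admin makes the list public.
import requests

def fetch_blocks(instance: str) -> list[dict]:
    resp = requests.get(
        f"https://{instance}/api/v1/instance/domain_blocks", timeout=30
    )
    resp.raise_for_status()  # errors out if the server keeps its list private
    return resp.json()

blocks = fetch_blocks("mastodon.social")
print(f"{len(blocks)} domains blocked")
for b in blocks[:5]:
    # each entry has a domain, a severity ('silence' or 'suspend'), and a comment
    print(b["domain"], b["severity"], b.get("comment") or "")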
A couple hundred servers is nothing compared to a couple hundred thousand Facebook groups.
FB removed 73.3 million pieces of CSAM in the first nine months of '22 alone, and that’s only the stuff they bother to catch.
73 million? Holy shit people are disgusting man
It’s also a matter of scale. FB has 3 billion users and it’s all centralized, so they are able to police it. Their Trust and Safety team is large (which has its own problems, because they outsource that work - but that’s another story). The fedi is somewhere around 11M users (according to fedidb.org).
The federated model doesn’t really “remove” anything, it just segregates the network into “moderated, good instances” and “others”.
I don’t think most fedi admins are actually following the law by reporting CSAM to the police (because that kind of thing requires a lot of resources); they just remove it from their servers and defederate. Bottom line is that the protocols and tools built to combat CSAM don’t work too well in the context of federated networks - we need new tools and new reporting protocols.
Reading the Stanford Internet Observatory report on fedi CSAM gives a pretty good picture of the current situation, it is fairly fresh:
https://cyber.fsi.stanford.edu/io/news/addressing-child-exploitation-federated-social-media
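To make the tooling point concrete: the standard approach on big platforms is hash-matching uploads against vetted hash lists, usually with perceptual hashes (PhotoDNA, PDQ) that survive re-encoding. Here’s a deliberately simplified sketch of the shape of it, using a plain cryptographic hash that only catches byte-identical copies; all names here are illustrative, not any real system’s API:

```python
# Illustrative sketch only: the basic shape of hash-list scanning that
# centralized platforms run at upload time. Real deployments use
# perceptual hashes (PhotoDNA, PDQ) matched against restricted industry
# hash lists; a plain cryptographic hash, as here, misses any re-encoded
# copy.
import hashlib

def sha256_file(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def is_known_bad(path: str, known_hashes: set[str]) -> bool:
    """Flag an upload if its exact hash appears on the shared list."""
    return sha256_file(path) in known_hashes

# known_hashes would have to come from a vetted source (e.g. NCMEC hash
# sharing), which thousands of individual volunteer-run servers generally
# cannot get - that access problem is exactly why the centralized tooling
# doesn't map onto federation.
```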
Have you gone on Instagram ever?
It’s normies and wine moms.
Nope.
But a quick google search shows:
https://www.theguardian.com/world/2021/mar/22/neo-nazi-groups-use-instagram-to-recruit-young-people-warns-hope-not-hate
Meta is Meta, I expect them to act like Meta and they do not disappoint.
Yes, of course it has Neo-Nazis, because it has hundreds of millions of people and essentially every niche community has representation there. That doesn’t mean it’s remotely accurate to say that Instagram is “only Nazis and pedos”.
The most followed user is Kim Kardashian, if I remember right, and she’s targeting the most normie women possible. Nazis and pedos aren’t exactly good for advertising.
This isn’t to say that Instagram doesn’t have moderation issues, but that doesn’t contradict the fact that Instagram is not solely composed of those kinds of users.
Oh get off it, as if you’ve never seen hyperbole before in your life.