A really interesting look at the recent spam wave.

  • Admiral Patrick@dubvee.org · 9 months ago

    Takeaways

    Many Fediverse instances have open sign-ups without proper limits, enabling this to happen in the first place. It’s important to note that this attack doesn’t require any novel exploit, just the existence of unmonitored, unprotected instances with open registration. From what we’ve seen, these are usually smaller instances.

    Open registrations should NEVER be enabled on instances without proper protections and monitoring.

    If you must have open registrations on your instance, use proper anti-spam and anti-bot mechanisms. We also recommend blocking sign-ups from Tor exit IP addresses and from temporary email domains.
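
    A minimal sketch of what those two sign-up checks could look like, assuming you maintain your own plain-text lists of Tor exit IPs and disposable email domains (the file names and values below are placeholders, not part of Lemmy or any other project):

    ```python
    # Hypothetical sign-up gate: reject registrations coming from Tor exit IPs
    # or using throwaway email domains. Both list files are assumed to be
    # maintained separately, one entry per line; a missing file disables that check.
    def load_list(path: str) -> set[str]:
        try:
            with open(path) as fh:
                return {line.strip().lower() for line in fh
                        if line.strip() and not line.startswith("#")}
        except FileNotFoundError:
            return set()

    TOR_EXIT_IPS = load_list("tor_exit_ips.txt")              # placeholder file name
    DISPOSABLE_DOMAINS = load_list("disposable_domains.txt")  # placeholder file name

    def signup_allowed(client_ip: str, email: str) -> tuple[bool, str]:
        """Return (allowed, reason) for a registration attempt."""
        if client_ip in TOR_EXIT_IPS:
            return False, "sign-up via Tor exit node"
        domain = email.rsplit("@", 1)[-1].lower()
        if domain in DISPOSABLE_DOMAINS:
            return False, "disposable email domain"
        return True, "ok"

    if __name__ == "__main__":
        print(signup_allowed("203.0.113.7", "newuser@example.com"))
    ```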

    • CarbonIceDragon@pawb.social · 9 months ago

      Hypothetically, what stops a spam group from creating their own instance (or several) to register accounts on? It’d get defederated quickly once the attack got going, sure, but that takes time, and in the meantime the spam gets in.

      • Draconic NEO@lemmy.world · 9 months ago

        Not really anything, I think; it just takes more effort, and they’d need to change the domain every time they get blocked. I have seen a few instances hosted solely for spam and bad-faith activity, though they were Mastodon, Pleroma, and Kbin servers, not Lemmy.

      • NicoCharrua@lemmy.ca · 9 months ago

        It’s probably more expensive and inconvenient.

        Also, it might only take one report for an active mod team to ban a server. How long can that take? An hour? Less? If the bots are on servers that real people use, they have to be banned one by one, so the spam can last a lot longer and reach more people.

  • Otter@lemmy.ca · 9 months ago

    The takeaways are great

    I haven’t read it all yet, but I noticed a bit about pressing charges.

    With decentralized social media, there currently isn’t the risk of some big social media company coming after you when you cause damages. It doesn’t have to stay that way, though.

    What might coordinated legal action look like for the fediverse? The spammers caused a LOT of harm to a lot of people, even if we’re only counting server costs and the time volunteers spent cleaning up the mess.

    • mosiacmango@lemm.ee · 9 months ago

      A first step is RBL integration: a shared blocklist of spam instances that subscribing instances would use to blackhole spam users/traffic/instances. RBLs are used ubiquitously in email spam filtering, so there is precedent for this approach working in a federated system. We need to stand up an RBL, and then modify Lemmy’s federation system to act automatically on the community blocklist.

      It does mean that poorly administered instances will get blackholed, breaking their federation, but that’s the cost of a healthy network.
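
      To make that concrete, the lookup side could reuse the DNSBL convention email already relies on: a name is “listed” if an A record exists for <name>.<zone>. A minimal sketch, where rbl.fediverse.example is a made-up zone (no such shared Fediverse RBL exists yet, which is the point):

      ```python
      # Sketch of a DNSBL/URIBL-style lookup for instance domains, mirroring how
      # email RBLs work. The zone name is hypothetical.
      import socket

      RBL_ZONE = "rbl.fediverse.example"  # hypothetical shared blocklist zone

      def instance_is_listed(instance_domain: str) -> bool:
          """A DNSBL lists a name by answering an A record for <name>.<zone>."""
          try:
              socket.gethostbyname(f"{instance_domain}.{RBL_ZONE}")
              return True                 # any answer (commonly 127.0.0.x) means "listed"
          except socket.gaierror:
              return False                # NXDOMAIN / no answer: not listed

      def should_blackhole(actor: str) -> bool:
          """Drop federated activity whose actor belongs to a listed instance."""
          domain = actor.split("@")[-1].lower()
          return instance_is_listed(domain)

      if __name__ == "__main__":
          print(should_blackhole("spammer@badinstance.example"))
      ```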

      • Unforeseen@sh.itjust.works · 9 months ago

        This is how email servers have worked for decades: there is no silver bullet, and this comes closest. If you run your email server poorly, say by allowing it to be an open relay (the equivalent of allowing open registrations), you get blacklisted everywhere, aka defederated. Same thing if you get compromised and someone starts spamming from your server.

      • Skull giver@popplesburger.hilciferous.nl · 9 months ago

        I think integrating with SpamAssassin shouldn’t be that hard. The threshold may need to be set quite high, and things like votes/boosts may need to be exempted in some way, but the Fediverse can take a lot of existing tooling from email.
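
        As a rough sketch of that integration, a post could be wrapped as a bare email-style message and scored through SpamAssassin’s spamc client; this assumes spamd is already running locally, and the From/Subject headers are synthetic:

        ```python
        # Sketch: score post text with SpamAssassin via `spamc -c`, which prints
        # "score/threshold" on stdout. Requires a local spamd to be running.
        import subprocess

        def spam_score(author: str, content: str) -> tuple[float, float]:
            """Return (score, threshold) reported by spamc for a synthetic message."""
            message = f"From: {author}\nSubject: fediverse post\n\n{content}\n"
            result = subprocess.run(["spamc", "-c"], input=message.encode(),
                                    capture_output=True, check=False)
            score, threshold = result.stdout.decode().strip().split("/")
            return float(score), float(threshold)

        if __name__ == "__main__":
            s, t = spam_score("spammer@badinstance.example", "FREE crypto!!! Click here")
            print(f"{s}/{t}", "spam" if s >= t else "ham")
        ```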

        Luckily, we don’t need to do anything to get the equivalent of DKIM or reverse PTR, because comparable mechanisms are built right into ActivityPub. We can’t add SPF, because ActivityPub allows arbitrary servers to boost content so that replies and such get mirrored correctly. A DMARC-like “we’ve blocked 40% of your traffic” notification may also be useful, so Fediverse admins can monitor whether their servers are misbehaving.

        What we might need is some kind of ActivityPub proxy that parses and analyses incoming/outgoing traffic, passes it through whatever spam filter you like, and forwards the messages that pass to the actual server software. That way, we can write one tool that works for Lemmy, Mastodon, Misskey, and all the other kinds of servers.
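
        A very rough sketch of that proxy shape, ignoring HTTP signature handling and using a trivial keyword check where a real filter (for example, the spamc call above) would plug in; UPSTREAM_INBOX and the markers are placeholders:

        ```python
        # Sketch of a filtering inbox proxy: accept ActivityPub POSTs, drop ones a
        # spam filter flags, and relay the rest to the real server's inbox.
        import json
        import urllib.error
        import urllib.request
        from http.server import BaseHTTPRequestHandler, HTTPServer

        UPSTREAM_INBOX = "http://127.0.0.1:8080/inbox"   # placeholder for the real server
        SPAM_MARKERS = ("free crypto", "click here")     # stand-in for a real spam filter

        def looks_like_spam(activity) -> bool:
            obj = activity.get("object") if isinstance(activity, dict) else None
            content = obj.get("content", "") if isinstance(obj, dict) else ""
            return any(marker in str(content).lower() for marker in SPAM_MARKERS)

        class InboxProxy(BaseHTTPRequestHandler):
            def do_POST(self):
                body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
                try:
                    activity = json.loads(body)
                except json.JSONDecodeError:
                    self.send_response(400)
                    self.end_headers()
                    return
                if looks_like_spam(activity):
                    # Accept-and-drop so the sender can't easily probe the filter.
                    self.send_response(202)
                    self.end_headers()
                    return
                req = urllib.request.Request(
                    UPSTREAM_INBOX, data=body, method="POST",
                    headers={"Content-Type": self.headers.get(
                        "Content-Type", "application/activity+json")})
                try:
                    with urllib.request.urlopen(req) as resp:
                        self.send_response(resp.status)
                except urllib.error.URLError:
                    self.send_response(502)
                self.end_headers()

        if __name__ == "__main__":
            HTTPServer(("127.0.0.1", 8081), InboxProxy).serve_forever()
        ```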

  • amio@kbin.social · 9 months ago

    amex2189 disappears(?) after 72 hours, possibly because of:

    • Parents being notified of this event, and confiscating his devices.

    Kek.

  • nutsack@lemmy.world · 9 months ago

    How did it take this long to get spam on the fediverse? It’s basically an open canvas for wet crap.

    • The Nexus of Privacy@lemmy.blahaj.zone (OP) · 9 months ago

      There have been other waves; it’s just that once they get shut down, everybody loses interest and moves on. The PR for one of the changes Mastodon just made was implemented in May 2023 after the Doge spam wave. And here’s a June 2019 post talking about exactly the same kind of attack: “The problem we are experiencing is the spammer signing up on random open instances and sending spam remotely.”