I’ve come to realize that every social media platform (Facebook, Telegram, Twitter, etc.) has issues with bot spam and fake follower accounts. These platforms typically combat the problem with measures such as ban waves and behavior detection.

What strategies/tools did Lemmy employ to address bots, and what additional measures could further improve these efforts?

  • JohnDClay@sh.itjust.works · 6 months ago

We’re not mainstream enough to have many bots yet. I think some instances have had to deal with bot spam, but I haven’t seen any in the community I moderate.

    • HubertManne@kbin.social · 6 months ago

      I don’t know if it’s Lemmy or other parts of the federation, but I see plenty of drug spam and similar content. I guess it could be posted manually, but my guess is it’s bots.

      • Cheradenine@sh.itjust.works · 6 months ago

        That’s a kbin thing. I have never seen ‘buy cheap Viagra, Oxycontin, etc.’ on Lemmy. It probably exists, but whenever I block and report a user they’re from kbin.

  • Otter@lemmy.ca · 6 months ago

    Currently, it’s mostly manual removal, which isn’t sustainable if the platform grows. Various instances are experimenting with their own moderation tools outside of Lemmy, and I don’t think Lemmy itself has any features to combat this. Moderation improvements are something that’s been talked about for Sublinks.

    What additional measures could further improve these efforts?

    Having an ‘automod’, similar to but more advanced than Reddit’s, would help a lot as a first step. No one likes excessive use of automod, but not having one at all would be much worse. An improved automod system, with guides and tips on how to use it effectively, would go a long way toward making moderation easier.
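    To make the idea concrete, here is a minimal sketch of what an automod-style rule engine could look like. This is purely illustrative: the `Rule` structure, field names, and example patterns are all hypothetical, not part of any actual Lemmy or Reddit API.

```python
import re
from dataclasses import dataclass

@dataclass
class Rule:
    """A hypothetical automod rule: a regex plus the action to take on match."""
    name: str
    pattern: str   # regex matched against title + body
    action: str    # e.g. "remove", "report", "flag"

def apply_rules(title: str, body: str, rules: list[Rule]) -> list[str]:
    """Return the actions triggered by every matching rule."""
    text = f"{title}\n{body}"
    return [r.action for r in rules
            if re.search(r.pattern, text, re.IGNORECASE)]

# Example rules, loosely modelled on the spam described elsewhere in this thread.
rules = [
    Rule("pharma-spam", r"\b(cheap (viagra|oxycontin)|buy pills)\b", "remove"),
    Rule("shortened-link", r"https?://bit\.ly/", "report"),
]

apply_rules("Great deal", "Buy pills here: https://bit.ly/x", rules)
# triggers both rules: ["remove", "report"]
```

    A real automod would of course need more than regexes (account age, posting rate, per-community configuration), but even this level of rule matching covers a lot of the obvious copypasta spam.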

    • hoshikarakitaridia@lemmy.world · 6 months ago

      I think the right strategy is providing all the tools, and then the instances themselves have to stay attractive. That’s not on the developers, that’s on the instances themselves.

  • Emily (she/her)@lemmy.blahaj.zone · 6 months ago

    As a moderator of a couple of communities, I see some basic/copypasta misbehaviour caught by automated bots that I largely had to bootstrap or heavily modify myself. Nearly everything else has to be manually reviewed, which obviously isn’t particularly sustainable in the long term.
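    A common approach for bots like these is fingerprint-based duplicate detection: normalize each comment, hash it, and flag text once the same fingerprint repeats. The sketch below is a generic illustration of that technique, not the actual bots described above; the `threshold` value and function names are assumptions.

```python
import hashlib
import re

def fingerprint(text: str) -> str:
    """Normalize case and whitespace so trivial edits don't evade detection."""
    normalized = re.sub(r"\s+", " ", text.strip().lower())
    return hashlib.sha256(normalized.encode()).hexdigest()

seen: dict[str, int] = {}

def is_copypasta(text: str, threshold: int = 3) -> bool:
    """Flag a comment once its fingerprint has appeared `threshold` times."""
    fp = fingerprint(text)
    seen[fp] = seen.get(fp, 0) + 1
    return seen[fp] >= threshold
```

    Exact-hash matching is easy to defeat with small mutations, which is why real deployments tend to layer fuzzier methods (shingling, similarity hashing) on top, and why publishing the exact thresholds would help spammers tune around them.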

    Improving the situation is a complex issue, since these kinds of tools often require a level of secrecy incompatible with FOSS principles to work effectively. If you publicly publish your model/algorithm for detecting spam, spammers will simply craft their content to avoid it by testing against it. This problem extends to accessing third party tools, such as specialised tools Microsoft and Google provide for identifying and reporting CSAM content to authorities. They are generally unwilling to provision their service to small actors, IMO in an attempt to stop producers themselves testing and manipulating their content to subvert the tool.