Social media platforms like Twitter and Reddit are increasingly infested with bots and fake accounts, leading to significant manipulation of public discourse. These bots don’t just annoy users—they skew visibility through vote manipulation. Fake accounts and automated scripts systematically downvote posts opposing certain viewpoints, distorting the content that surfaces and amplifying specific agendas.

Before coming to Lemmy, I was systematically downvoted by bots on Reddit for completely normal comments that were relatively neutral and not controversial at all. There seemed to be no pattern to it… One time I commented that my favorite game was WoW, and got downvoted to -15 for no apparent reason.

For example, a bot on Twitter backed by API calls to GPT-4o ran out of funding and started posting its prompts and system information publicly.

https://www.dailydot.com/debug/chatgpt-bot-x-russian-campaign-meme/


Bots like these probably number in the tens or hundreds of thousands. Reddit once ran a huge ban wave against bots, and some major top-level subreddits went quiet for days because of it. Unbelievable…

How do we even fix this issue or prevent it from affecting Lemmy??

  • conciselyverbose@sh.itjust.works · 12 days ago

    The problem with almost any solution is that it just pushes bots to custom instances that don’t apply the restrictions, which in turn pushes big instances to become more insular and resist small instances, undermining most of the purpose of federation.

  • pop@lemmy.ml · 13 days ago

    The internet is not a place for public discourse; it never was. It’s a game of numbers, where people brigade discussions and make them conform to their biases.

    Post something bad about the US, backed by facts and statistics, in a US-centric Reddit sub, YouTube video, or article, and watch how it devolves into brigading, name-calling, and racism. Do the same on lemmy.ml to call out China or Russia. Go to YouTube videos with anything critical of India.

    For any country with a massive population on the internet, you’re going to get bombarded with lies, deflection, whataboutism, and strawmen. Add in a few bots and you shape the narrative.

    There’s also burying bad press by simply downvoting and never interacting.

    Both are easy on the internet when you’ve got a brainwashed, gullible mass to steer the narrative.

    • MentalEdge@sopuli.xyz · 13 days ago (edited)

      Just because you can’t change minds by walking into the centers of people’s bubbles and trying to shout logic at them, doesn’t mean the genuine exchange of ideas at the intersecting outer edges of different groups isn’t real or important.

      Entrenched opinions are nearly impossible to alter in discussion; you can’t force people to change their minds, or to see reality for what it is, if they refuse. They have to be willing to actually listen first.

      And people can and do grow disillusioned, at which point they will move away from their bubbles of their own accord, and go looking for real discourse.

      At that point it’s important for reasonable discussion that stands up to scrutiny to exist for them to find.

      And it does.

      • AnarchistArtificer@slrpnk.net · 13 days ago

        I agree. Whenever I get into an argument online, it’s usually with the understanding that it exists for the benefit of the people who may spectate the argument — I’m rarely aiming to change the mind of the person I’m conversing with. Especially when it’s not even a discussion, but a more straightforward calling someone out for something, that’s for the benefit of other people in the comments, because some sentiments cannot go unchanged.

        • MentalEdge@sopuli.xyz · 13 days ago

          Did you mean unchallenged? Either way I agree, when I encounter people who believe things that are provably untrue, their views should be changed.

          It’s not always possible, but even then, challenging those ideas and putting the counterarguments right next to the insanity inoculates other readers, or at least reduces the chance that they’ll take what the deranged have to say seriously.

    • DandomRude@lemmy.world · 13 days ago

      Well, unfortunately, the internet, and social media in particular, is still the main source of information for more and more people, if not the only one. For many, it is also the only place where public discourse takes place, even if you can hardly call it that. I guess we are probably screwed.

  • Lvxferre@mander.xyz · 13 days ago

    As others said, you can’t prevent them completely, only partially. You do it in four steps:

    1. Make it unattractive for bots.
    2. Prevent them from joining.
    3. Prevent them from posting/commenting.
    4. Detect them and kick them out.

    The sad part is that, if you go too hard on bot eradication, it’ll eventually inconvenience real people too. (Cue CAPTCHA: that shit is great against bots, but it’s cancer if you’re a human.) Or it’ll be laborious/expensive and won’t scale well. (Cue the “why do you want to join our instance?” question.)

    • beefbot@lemmy.blahaj.zone · 12 days ago

      Actual human content will never be undesirable to bots that must vacuum up content to produce profit; it’ll always be attractive for them to come here. The rest sound like legit strategies, though.

      • Lvxferre@mander.xyz · 12 days ago

        You’re right that it won’t be completely undesirable for bots, ever. However, you can make it less desirable, to the point that the botters say “meh, who cares? That other site is better to bot”.

        I’ll give you an example. Suppose the following two social platforms:

        • Orange Alien: large userbase, overexcited about consumption, people get banned for mocking brands, and the typical user is tech-illiterate enough to confuse your bot with a human.
        • White Rat: small userbase, full of communists; even the non-communists tend to outright mock consumption, and the typical user is extremely tech-savvy, so they spot and report your bot all the time.

        If you’re a botter advertising some junk, you’ll probably want to bot on both platforms, but that is not always viable - coding the framework for the bots takes time, you don’t have infinite bandwidth and processing power, etc. So you’re likely to prioritise Orange Alien, and only bot White Rat if you can spare the effort and resources.

        The main issue with point #1 is that there’s only so much room to make the environment unattractive to bots before you make it unattractive to humans too. Like, you don’t want to shrink your userbase on purpose, right? You can still do things like encouraging people to hold a more critical view, teaching them how to detect bots, and asking them to report bots (which also helps with #4), but that only goes so far.

        [Sorry for the wall of text.]

        • beefbot@lemmy.blahaj.zone · 12 days ago

          This is the sort of thoughtful reasoning that I’m glad to see here, so a wall of text was warranted! Thanks for taking the time to add to the discussion 👍🙏

      • Passerby6497@lemmy.world · 12 days ago (edited)

        Bots can view content without being able to post, which is what people are aiming to cut down. I don’t super care if bots are vacuuming up my shitposts (even my shit posts), but I don’t particularly want to be in a community that’s overrun with bots posting.

        • gandalf_der_12te@lemmy.blahaj.zone · 12 days ago

          Yeah, after all, we post on the internet for it to be visible to everyone, and that includes bots. If we didn’t want bots to find our content, then other humans couldn’t find it either; that’s my stance on this.

  • Ensign_Crab@lemmy.world · 13 days ago

    How do we even fix this issue or prevent it from affecting Lemmy??

    Simple. Just scream that everyone whose opinion you dislike is a bot.

    • beefbot@lemmy.blahaj.zone · 12 days ago

      I admit I’ve been guilty of this in the past, so sarcasm aside, I can’t recommend it as a strategy for detecting actual bots… even though, if you’re parroting the opinion that those who have power and control bots wish you to believe, expressing that opinion makes your post functionally equivalent to a bot’s. I KNOW, SUE ME 🤷‍♂️

      • Ensign_Crab@lemmy.world · 12 days ago (edited)

        I cannot recommend this as a strategy for detecting actual bots

        That’s because it isn’t one. It’s a means by which people attempt to impose orthodoxy.

  • hark@lemmy.world · 11 days ago

    Is this a problem here? One thing we should also avoid is letting paranoia divide the community. It’s very easy to take something like this and then assume everyone you disagree with must be some kind of bot, which itself is damaging.

  • 1984@lemmy.today · 13 days ago (edited)

    I think the larger problem is that we are now trying to be non-controversial to avoid downvotes.

    Who thinks it’s a good idea to self-censor on social media? Because that’s what you are doing, because of the downvote system.

    I will never agree that downvotes are a net positive. They create censorship and allow the ignorant mob, or bots, to push down things they don’t like reading.

    Bots make it worse of course, since they can just downvote whatever they are programmed to downvote, and upvote things that they want to be visible. Basically it’s like having an army of minions to manipulate entire platforms.

    All because of downvotes and upvotes. Of course there should be a way to express that you agree or disagree but should that affect visibility directly? I don’t think so.

    • areyouevenreal@lemm.ee · 12 days ago

      At this point you might as well complain about the mods and admins on Lemmy, as tons of them are out of whack. I have had comments removed for stating facts everyone should know, just because they didn’t agree with the Lemmy hivemind - for example, saying anything positive about AI, or about how it was used before the likes of ChatGPT came around.

    • imaqtpie@lemmy.myserv.one · 12 days ago

      A few things.

      • Admins can and do ban accounts that downvote rampantly

      • Obvious bot brigading is obvious. It became harder to tell on Reddit when they started fuzzing the vote numbers, but it could frequently still be figured out. It’s easier on Lemmy: someone just has to report an unusual voting pattern to the admins, who can then check whether the voting accounts look like bots.

      • I was once told that Lemmy’s ranking weights upvotes less and recency more than Reddit’s. I’m not sure if this is true, but I have noticed that recent comments tend to rise above the top-upvoted ones when viewing a thread by Hot (see the sketch after this list).

      • Without any way for bad content to be filtered out, you just end up with an endless stream of undifferentiated noise. The voting system actually protects the platform from the encroachment of bots and the ignorant mob, because it helps filter them out from the users who have something of value that they want to contribute.
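
      On the ranking point above: I don’t know Lemmy’s exact ranking code, but “hot” algorithms in the Reddit/Hacker News family generally score a comment as the log of its votes divided by a power of its age, which matches the behavior I described. A toy version (constants illustrative, not Lemmy’s actual ones):

      ```python
      from datetime import datetime, timezone
      from math import log10

      def hot_rank(score: int, published: datetime, gravity: float = 1.8) -> float:
          """Votes count logarithmically while age drags the rank down
          polynomially, so a fresh comment can overtake an older,
          higher-voted one. `published` must be timezone-aware."""
          hours = (datetime.now(timezone.utc) - published).total_seconds() / 3600
          return log10(max(score + 3, 1)) / (hours + 2) ** gravity
      ```

      With gravity above 1, recency dominates raw score fairly quickly, which would explain newer comments rising above the top-voted ones under Hot.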

      • doctortran@lemm.ee · 12 days ago (edited)

        For example, imagine a post where three users comment:

        One posts a heated stream of idiocy, falsehoods, and outright nastiness, thinly veiled bigotry and other garbage. Paragraphs of it, all poorly written.

        Another is some basic comment not saying anything of any real consequence. Completely mundane to the point no one has upvoted it, but it is perfectly harmless.

        The final one is a comment with some meat on it and something to add to the conversation, but it unfortunately arrived too late to the thread. No one saw it, so no one upvoted it.

        Without downvotes, all three of these comments are treated exactly the same.

        I get that downvotes can suck sometimes, but they’re a valuable aspect of this system, and removing them does not make the place better.

        I’d argue what people need to do if these things are genuinely bothering them is turn off the scores entirely and learn to live without them. It’s better for your mental health.

    • KillingTimeItself@lemmy.dbzer0.com · 12 days ago (edited)

      i dont self censor, it’s about a 50 50, as to be expected per random stats. Or at least that’s what it feels like, it’s probably better than that lmao.

      It’s just numbers, it’s not going to kill you lol.

    • gap_betweenus@lemmy.world · 12 days ago

      That’s just what comes with the internet becoming mainstream: mainstream cultural standards get applied to online conversations. It’s the difference between an opera house and a punk club, or something.

  • UndercoverUlrikHD@programming.dev · 13 days ago

    A chain/tree of trust. If a particular parent node has trusted a lot of users who prove to be malicious bots, you break the chain of trust by removing the parent node. Orphaned real users would then need to find a new account willing to trust them, while the bots are left out hanging.

    Not sure how well it would work on federated platforms though.
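
    To make the idea concrete, here’s a minimal sketch of such a trust tree (the data structure and method names are invented for illustration; nothing like this exists in Lemmy today):

    ```python
    class TrustNode:
        """A user plus the chain of vouches beneath them."""
        def __init__(self, user: str, parent: "TrustNode | None" = None):
            self.user = user
            self.parent = parent
            self.children: list["TrustNode"] = []
            if parent is not None:
                parent.children.append(self)

        def vouch(self, newcomer: str) -> "TrustNode":
            """An existing member admits a new user into the tree."""
            return TrustNode(newcomer, parent=self)

        def revoke(self) -> list[str]:
            """Remove this node; everyone it vouched for, directly or
            transitively, loses their chain of trust."""
            orphaned, stack = [], list(self.children)
            while stack:
                node = stack.pop()
                orphaned.append(node.user)
                stack.extend(node.children)
            if self.parent is not None:
                self.parent.children.remove(self)
            return orphaned

    # root = TrustNode("admin")
    # careless = root.vouch("careless_user")
    # careless.vouch("bot1"); careless.vouch("bot2")
    # careless.revoke()  # -> ["bot1", "bot2"] (order may vary); real users
    # caught in the purge must find someone else to vouch for them.
    ```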

      • UndercoverUlrikHD@programming.dev · 12 days ago

        You could always ask someone to vouch for you. There could also be open communities and closed communities, so you would build up trust in an open community before someone trusts you enough to let you interact with the closed ones. Open communities could be those less interesting/harder for bots to spam, and closed communities the high-risk ones, such as news and politics.

        Would this greatly reduce the user friendliness of the site? Yes. But it would be an option if bots turn into a serious problem.

        I haven’t really thought through the details, and I’m not sure how well it would work for a decentralised network. Would each instance run its own trust tree, or would trusted instances share a single trust database? 🤷‍♂️

  • Metz@lemmy.world · 13 days ago

    Long before cryptocurrencies existed, proof-of-work was already being used to hinder bots. For every post, vote, etc., a cryptographic task has to be solved by the device used for it. Imperceptibly fast for the normal user, but for a bot trying to perform hundreds or thousands of actions in a row, a really annoying speed bump.

    See e.g. https://wikipedia.org/wiki/Hashcash

    This, combined with more classic blockades such as CAPTCHAs (especially image recognition, which is still expensive at scale despite the advances in AI), should at least represent a first major obstacle.
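
    For anyone unfamiliar, the whole trick fits in a few lines. A toy Hashcash-style stamp (simplified; real schemes also bind the stamp to a timestamp and recipient so it can’t be reused):

    ```python
    import hashlib
    from itertools import count

    def mint(message: str, difficulty: int = 16) -> int:
        """Brute-force a nonce so that sha256(message:nonce) falls below
        a target with `difficulty` leading zero bits. A fraction of a
        second for one post, but crippling at bot scale."""
        target = 1 << (256 - difficulty)
        for nonce in count():
            digest = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
            if int.from_bytes(digest, "big") < target:
                return nonce

    def check(message: str, nonce: int, difficulty: int = 16) -> bool:
        """Verifying costs a single hash, so the server can reject
        unstamped messages almost for free."""
        digest = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
        return int.from_bytes(digest, "big") < (1 << (256 - difficulty))
    ```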

    • tatterdemalion@programming.dev · 13 days ago

      Why resort to an expensive decentralized mechanism when we already have a client-server model? We can just implement rate-limiting on the server.
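
      Something like a per-user token bucket, which most servers already do in some form. A sketch (the numbers are invented):

      ```python
      import time
      from collections import defaultdict

      class TokenBucket:
          """Each action spends a token; tokens refill at a steady
          rate up to a burst ceiling."""
          def __init__(self, per_minute: float, burst: int):
              self.rate = per_minute / 60.0
              self.capacity = burst
              self.tokens = float(burst)
              self.last = time.monotonic()

          def allow(self) -> bool:
              now = time.monotonic()
              self.tokens = min(self.capacity,
                                self.tokens + (now - self.last) * self.rate)
              self.last = now
              if self.tokens >= 1.0:
                  self.tokens -= 1.0
                  return True
              return False

      # One bucket per account: ~6 posts/minute sustained, bursts of 10.
      buckets = defaultdict(lambda: TokenBucket(per_minute=6, burst=10))

      def handle_post(user_id: str) -> str:
          return "accepted" if buckets[user_id].allow() else "429 Too Many Requests"
      ```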

      • Metz@lemmy.world · 13 days ago

        Can’t this simply be circumvented by the attackers operating several Lemmy servers of their own? That way they can pump as many messages into the network as they want. But with PoW, the network would only accept messages that work was done for.
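
        In other words, each receiving instance would verify the stamp itself before accepting a federated message, instead of trusting the sending server. Roughly like this (reusing the toy check() from my comment above; the field names are invented):

        ```python
        def accept_remote_activity(activity: dict, difficulty: int = 16) -> bool:
            """No valid proof-of-work attached, no entry into the
            local database - however friendly the sending server."""
            nonce = activity.get("pow_nonce")
            return nonce is not None and check(activity["body"], nonce, difficulty)
        ```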

    • gandalf_der_12te@lemmy.blahaj.zone · 12 days ago

      The issue I have with this is that users now basically need to “pay” (with compute time) to speak their mind. That would be similar to having to pay to vote in political elections: it favors the rich. A poor user might not be able to afford an additional $20 electricity bill a month, but a large agency (state-sponsored, or pushing corporate agendas) might have $1,000,000.

      • Metz@lemmy.world · 12 days ago (edited)

        We’re talking about fractions of a cent per post here. Of course, this all needs to be worked out in detail, and variables and scaling need to be added and calculated. For someone who posts only 2-3 times a day, the cost and delay are practically immeasurably low; but if you start pushing 100 posts out per minute, the difficulty of the PoW calculation goes up.

        A delay of a fraction of a second to do the PoW for a single post is not a problem. But for a spam bot that is suddenly limited to making 1 post per minute instead of 100, it makes a huge difference, and it could drive up the price even for someone with deep pockets.
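
        One made-up way to do the scaling: tie the difficulty to the account’s recent output, so the cost curve only bites at bot-like volumes.

        ```python
        def difficulty_for(posts_last_hour: int, base: int = 16) -> int:
            """Each extra leading zero bit doubles the expected work,
            so hammering out posts gets exponentially expensive while
            a 2-3-posts-a-day user always pays the cheap base rate."""
            excess = max(0, posts_last_hour - 3)
            return base + excess.bit_length()
        ```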

        But I’m not an expert in this field. I only know that spambots and similar are a problem that is almost as old as the Internet and that there have been an almost incalculable number of attempts to solve it to date, all of which have more or less failed. But maybe we can find a combination that could work for our specific case.

        Of course, there are still a lot of things to clarify. How do we stop someone from constantly creating new accounts, for example?

        would we have to start with a “harder difficulty” for new users to counteract this?

        do we need some kind of reputation system?

        How do we set them accurately enough not to drive away new users but still fulfill their purpose?

        But as said, not an expert. Just brainstorming here.

  • Asudox@programming.dev · 13 days ago (edited)

    You can’t get rid of bots, nor spammers. The only option is a more aggressive automated punishment system, which will inevitably also punish good users along with the bad ones.

  • AmidFuror@fedia.io · 13 days ago

    One argument in favor of bots on social media is their ability to automate routine tasks and provide instant responses. For example, bots can handle customer service inquiries, offer real-time updates, and manage repetitive interactions, which can enhance user experience and free up human moderators for more complex tasks. Additionally, they can help in disseminating important information quickly and efficiently, especially in emergency situations or for public awareness campaigns.

      • AmidFuror@fedia.io · 12 days ago

        A ChatGPT reply is generally clear, concise, and informative. It aims to address your question or topic directly and provide relevant information. The responses are crafted to be engaging and helpful, tailored to the context of the conversation while maintaining a neutral and professional tone.

  • Media Sensationalism@lemmy.world · 11 days ago (edited)

    Signup safeguards will never be enough because the people who create these accounts have demonstrated that they are more than willing to do that dirty work themselves.

    Let’s look at the anatomy of the average Reddit bot account:

    1. Rapid points acquisition. These are usually new accounts, but they don’t have to be. The posts and comments are often made manually by the seller if the account is being sold at a significant premium.

    2. A sudden shift in contribution style, usually preceded by a gap in activity. The account has now been fully matured to the desired number of points and is pending sale, or is set aside to be “aged”. If the seller hasn’t loaded on any points, the account is much cheaper, but the activity gap still exists.

    • When the end buyer receives the account, they probably won’t be posting anything related to what the seller was originally involved in as they set about their own mission, unless they’re extremely invested in the account. It becomes much easier to stay active in old forums if the account is now AI-controlled, but the account suddenly ceases making image contributions and mostly sticks to comments instead. Either way, the new account owner is probably accumulating far fewer points than the account did before.
    • A buyer may attempt to hide this obvious shift in contribution style by deleting all the activity from before the account came into their possession, but now they have months of inactivity leading up to the beginning of the account’s contributions and thousands of points unaccounted for.
    3. Limited forum diversity. Fortunately, platforms like this one have a major advantage over platforms like Facebook and Twitter, because propaganda bots there can post on their own pages and gain exposure with hashtags without having to interact with other users or separate forums. On Lemmy, programming an effective bot means it has to interact with separate forums to achieve meaningful outreach, and those forums probably have to be manually programmed in. When a bot has one sole objective with a specific topic in mind, it makes great and telling use of a very narrow swath of forums. This makes platforms like Reddit and Lemmy less preferred for automated propaganda bot activity, and more preferred for OnlyFans sellers, undercover small-business advertisers, and scammers who do most of the legwork of posting and commenting themselves.

    My solution? Implement a weighted visual timeline of a user’s points and posts to make it easier for admins to single out accounts that have already been found to be acting suspiciously. There are other types of malicious accounts that can be troublesome, such as self-run engagement farms making consistent front-page contributions with their own political (or whatever) lean, but the type first described is a major player in Reddit’s current shitshow and is much easier to identify.
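
    As a sketch of what that timeline could surface automatically (the thresholds are invented; a real heuristic would need tuning): bucket an account’s contributions by month and look for a long dead gap.

    ```python
    from collections import Counter
    from datetime import datetime

    def monthly_activity(timestamps: list[datetime]) -> Counter:
        """Bucket an account's posts and comments by calendar month."""
        return Counter((t.year, t.month) for t in timestamps)

    def longest_gap_months(activity: Counter) -> int:
        """Longest run of dead months between first and last activity -
        the 'matured, then sold' signature described above."""
        months = sorted(y * 12 + m for (y, m) in activity)
        return max((b - a - 1 for a, b in zip(months, months[1:])), default=0)

    # Flag for human review, don't auto-ban:
    # suspicious = total_points > 10_000 and longest_gap_months(acct) >= 6
    ```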

    Most important is moderator and admin willingness to act. Many subreddit moderators on Reddit already know their subreddit has a bot problem but choose to do nothing because it drives traffic. Others are just burnt out and rarely even lift a finger to answer modmail, doing the bare minimum to keep their subreddit from being banned.