• MonkderVierte@lemmy.ml · 13 days ago

    When Reddit rebranded itself as “the heart of the internet” a couple of years ago, the slogan was meant to evoke the site’s organic character. In an age of social media dominated by algorithms, Reddit took pride in being curated by a community that expressed its feelings in the form of upvotes and downvotes—in other words, being shaped by actual people.

    Not since the APIcalypse at least.

    Aside from that, this is just reheated news (for clicks, I assume) from a week or two ago.

    • ClamDrinker@lemmy.world · 12 days ago

      One likely reason the backlash has been so strong is because, on a platform as close-knit as Reddit, betrayal cuts deep.

      Another laughable quote after the APIcalypse, at least for the people that remained on Reddit after being totally ok with being betrayed.

    • mke@programming.dev · 13 days ago

      That’s too far, though I understand the feeling.

      I think we should save the Hitler comparisons for individuals that actually deserve it. AI bros and genAI promoters are frequently assholes, but not unapologetically fascist genocidal ones.

  • LovingHippieCat@lemmy.world · 13 days ago

    If anyone wants to know what subreddit, it’s r/changemyview. I remember seeing a ton of similar posts about controversial opinions, and even now people are questioning Am I Overreacting and AITAH a lot. AI posts in those kinds of subs are seemingly pretty frequent. I’m not surprised to see it was part of a fucking experiment.

    • Refurbished Refurbisher@lemmy.sdf.org · 13 days ago

      AIO and AITAH are so obviously just AI posting. It’s all just a massive circlejerk of AI and people who don’t know they’re talking to AI agreeing with each other.

    • eRac@lemmings.world · 13 days ago

      This was comments, not posts. They used one model to approximate a poster’s demographics, then used an LLM to generate a reply arguing against the posted view, tailored to those demographics.

      • FauxLiving@lemmy.world · 12 days ago

        You’re right about this study. But, this research group isn’t the only one using LLMs to generate content on social media.

        There are 100% posts that are bot-created. Do you ever notice how, on places like Am I Overreacting or Am I the Asshole, a lot of the posts just so happen to hit all of the hot-button issues at once? Nobody’s life is that cliche, but it makes excellent engagement bait, and the comment chain provides a huge amount of training data as the users argue over the various topics.

        I use a local LLM that I’ve fine-tuned to generate replies to people who are obviously arguing in bad faith, in order to string them along and waste their time. It’s set up to lead the conversation, via red herrings and various other fallacies, to the topic of good-faith arguments and how people should behave in online spaces. It does this while picking out pieces of the conversation (and of the user’s profile) in order to chastise the person for their bad behavior. It would be trivial to change the prompt chains to push a political opinion rather than just waste a person’s or bot’s time.

        This is being done as a side project, on under $2,000 worth of consumer hardware, by a barely competent programmer with no training in psychology or propaganda. It’s terrifying to think of what you can do with a lot of resources and experts working full-time.

  • TwinTitans@lemmy.world · 13 days ago

    Like the 90s/2000s: don’t put personal information on the internet, and don’t believe a damned thing on it either.

    • mic_check_one_two@lemmy.dbzer0.com · 13 days ago

      Yeah, it’s amazing how quickly the “don’t trust anyone on the internet” mindset changed. The same boomers who cautioned us against playing online games with friends are now sharing blatantly AI-generated slop from strangers on Facebook as if it were gospel.

      • I feel like I learned more about the Internet and shit from Gen X people than from boomers. Though nearly everyone on my dad’s side of the family, including my dad (a boomer), is tech literate, having worked in tech (my dad is a software engineer), and they still aren’t dumb about tech… aside from thinking e-greeting cards are rad.

      • Serinus@lemmy.world · 13 days ago

        Back then it was just old people trying to groom 16-year-olds. Now it’s a nation’s intelligence apparatus turning our citizens against each other and convincing them to destroy our country.

        I wholeheartedly believe they’re here, too. Their primary function here is to discourage the left from voting, primarily by focusing on the (very real) failures of the Democrats while the other party is extremely literally the Nazi party.

        • queermunist she/her@lemmy.ml · 13 days ago

          Everyone who disagrees with you is a bot, probably from Russia. You are very smart.

          Do you still think you’re going to be allowed to vote for the next president?

          • EldritchFeminity@lemmy.blahaj.zone · 13 days ago

            Everyone who disagrees with you is a bot, probably from Russia. You are very smart.

            Where did they say that? They just said bots in general. It’s well known that Russia has been running a propaganda campaign across social media platforms since at least the 2016 elections (just like the US is doing on Russian and Chinese social media, I’m sure. They do it on Americans as well. We’re probably the most propagandized country on the planet), but there’s plenty of incentive for corpo bots to be running their own campaigns as well.

            Or are you projecting for some reason? What do you get from defending Putin?

          • Serinus@lemmy.world · 13 days ago

            Everyone who disagrees with you is a bot

            I mean that’s unironically the problem. When there absolutely are bots out here, how do you tell?

            • queermunist she/her@lemmy.ml · 13 days ago

              Sure, but you seem to be under the impression the only bots are the people that disagree with you.

              There’s nothing stopping bots from grooming you by agreeing with everything you say.

        • supersquirrel@sopuli.xyz · 13 days ago

          Social media didn’t break people’s brains. The massive influx of conservative corporate money, used to distort society, keep existential problems from being fixed until it’s too late, and push people to resort to impulsive, kneejerk responses because they’ve been ground down to crumbs… that’s what broke people’s brains.

          If we didn’t have social media right now and all of this was happening, it would be SO much worse without younger people being able to find news about the Palestinian Genocide or other world news that their country/the rich conservatives around them don’t want them to read.

          It is what those in power DID to social media that broke people’s brains and it is why most of us have come here to create a social network not being driven by those interests.

    • taladar@sh.itjust.works · 13 days ago

      I never liked the “don’t believe anything you read on the internet” line; it focuses too much on the internet without considering that you shouldn’t believe anything you read or hear elsewhere either, especially on divisive topics like politics.

      You should evaluate information you receive from any source with critical thinking: consider how easy it would be to fabricate the claim (it’s probably much harder for a single source to fake a claim that the US president has been assassinated than a claim that their local bus was late on some unspecified day at an unspecified location), who benefits from convincing you that it’s true, and whether it is consistent with other things you know about the world.

  • paraphrand@lemmy.world · 13 days ago

    I’m sure there are individuals doing worse one-off shit, or people targeting individuals.

    I’m sure Facebook has run multiple algorithm experiments that are worse.

    I’m sure YouTube has caused worse real-world outcomes with the rabbit holes their algorithm used to promote. (And they have never found a way to completely fix the rabbit-hole problems without destroying the usefulness of the algorithm entirely.)

    The actions described in this article are upsetting and disappointing, but this has been going on for a long time. All in the name of making money.

      • paraphrand@lemmy.world · 13 days ago

        That’s not at all what I was getting at. My point is the people claiming this is the worst they have seen have a limited point of view and should cast their gaze further across the industry, across social media.

    • peoplebeproblems@midwest.social · 13 days ago

      No - it’s research ethics. As in you get informed consent. It just involves the Internet.

      If the research records human behavior in any way, all participants must know about it ahead of time and agree to take part.

      This is a blanket attempt to study human behavior without an IRB, and without involving any regulators or anyone other than tech bros.

    • conicalscientist@lemmy.world · 12 days ago

      I don’t know what you have in mind, but the founders originally used bots to generate activity to make the site look popular. Which begs the question: what was really at the root of Reddit’s culture? Was it the bots following human activity to bolster it, or were the humans merely following what the founders programmed the bots to post?

      One thing’s for sure: Reddit has always been a platform of questionable integrity.

  • Glitch@lemmy.dbzer0.com · 13 days ago

    I think it’s a straw-man issue, hyped beyond necessity to avoid the real problem. Moderation has always been hard, and with AI it’s only getting worse. Avoiding the research because it’s embarrassing just prolongs and deepens the problem.

    • thedruid@lemmy.world · 13 days ago

      You think it’s anti science to want complete disclosure when you as a person are being experimented on?

      What kind of backwards thinking is that?

      • Sculptus Poe@lemmy.world · 11 days ago

        Not when disclosure ruins the experiment. Nobody was harmed or even could be harmed unless they are dead stupid, in which case the harm is already inevitable. This was posting on social media, not injecting people with random pathogens. Have a little perspective.

        • thedruid@lemmy.world · 10 days ago

          You do realize the ends do not justify the means?

          You do realize that MANY people on social media have emotional and mental situations occurring, and that these experiments can have ramifications that cannot be traced?

          This is just a small part of why this is so damn unethical.

          • Sculptus Poe@lemmy.world · 10 days ago

            In that case, any interaction would be unethical. How do you know that I don’t have an intense fear of the words “justify the means”? You could have just doomed me to a downward spiral ending in my demise. As if I didn’t have enough trouble. You not only made me see it, you tricked me into typing it.

            • thedruid@lemmy.world · 9 days ago

              You are being beyond silly.

              In no way is what you just posited true. Unsuspecting and non-malicious social faux pas are in no way equal to intentionally secretive manipulation used to garner data from unsuspecting people.

              That was an embarrassingly bad attempt to defend an indefensible position, and one no one would blame you for deleting and retrying.

              • Sculptus Poe@lemmy.world · 9 days ago

                Well, you are trying embarrassingly hard to silence me, at least. That is fine. I was definitely positing an unlikely but possible case; I do suffer from extreme anxiety, and what sets it off has nothing to do with logic. But you are also overstating the ethics violation by suggesting that any harm they could cause is real or significant in a way that wouldn’t happen with regular interaction on random forums.

  • ArbitraryValue@sh.itjust.works · 13 days ago

    ChangeMyView seems like the sort of topic where AI posts can actually be appropriate. If the goal is to hear arguments for an opposing point of view, the AI is contributing more than a human would if in fact the AI can generate more convincing arguments.

      • ArbitraryValue@sh.itjust.works · 13 days ago

        That lie was definitely inappropriate, but it would still have been inappropriate if it was told by a human. I think it’s useful to distinguish between bad things that happen to be done by an AI and things that are bad specifically because they are done by an AI. How would you feel about an AI that didn’t lie or deceive but also didn’t announce itself as an AI?

        • sinceasdf@lemmy.world · 13 days ago

          I think when posting on a forum/message board it’s assumed you’re talking to other people, so AI should always announce itself as such. That’s probably a pipe dream though.

          If anyone wants to specifically get an AI perspective they can go to an AI directly. They might add useful context to people’s forum conversations, but there should be a prioritization of actual human experiences there.

          • FauxLiving@lemmy.world · 12 days ago

            I think when posting on a forum/message board it’s assumed you’re talking to other people

            That would have been a good position to take in the early days of the Internet; it is a very naive assumption to make now. Even in the 2010s, actors with a large amount of resources (state intelligence agencies, advertisers, etc.) could hire human beings from low-wage English-speaking countries to generate fake content online.

            LLMs have only made this cheaper, to the point where I assume that most of the commenters on political topics are likely bots.

            • sinceasdf@lemmy.world · 12 days ago

              For sure, thus why I said it’s a pipe dream. We can dream though, maybe we will figure out some kind of solution one day.

              • FauxLiving@lemmy.world · 12 days ago

                The research in the OP is a good first step in figuring out how to solve the problem.

                That’s in addition to anti-bot measures. I’ve seen some sites that require you to solve a cryptographic hashing problem before accessing them. It doesn’t slow a regular person down, but it does require anyone running bots to supply a much larger amount of compute power to each bot, which increases the cost to the operator.
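
                That hashing requirement is essentially a small proof-of-work challenge. Here is a minimal sketch of the idea in Python (the parameters and function names are illustrative, not any particular site’s implementation):

                ```python
                # Toy proof-of-work: the server hands out a random challenge
                # and a difficulty; the client must find a nonce whose SHA-256
                # hash starts with that many zero bits. Verifying takes one
                # hash; solving takes many, which is what raises bot costs.
                import hashlib
                import itertools
                import os

                def leading_zero_bits(digest: bytes) -> int:
                    bits = 0
                    for byte in digest:
                        if byte == 0:
                            bits += 8
                            continue
                        bits += 8 - byte.bit_length()
                        break
                    return bits

                def solve(challenge: bytes, difficulty: int) -> int:
                    # Brute-force nonces until the hash clears the difficulty.
                    for nonce in itertools.count():
                        h = hashlib.sha256(challenge + str(nonce).encode()).digest()
                        if leading_zero_bits(h) >= difficulty:
                            return nonce

                def verify(challenge: bytes, nonce: int, difficulty: int) -> bool:
                    h = hashlib.sha256(challenge + str(nonce).encode()).digest()
                    return leading_zero_bits(h) >= difficulty

                challenge = os.urandom(16)
                nonce = solve(challenge, difficulty=18)  # ~260k hashes on average
                assert verify(challenge, nonce, difficulty=18)
                ```

                A human loading one page barely notices the delay, but an operator spinning up thousands of bot sessions pays that cost thousands of times over.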

        • tribut@infosec.pub · 13 days ago

          Nobody is blaming the AI model. We are blaming the researchers and users of AI, which is kind of the point.

        • shneancy@lemmy.world · 13 days ago

          The researchers said all AI posts were approved by a human before posting; it was their choice how many lies to include.

        • Ecco the dolphin@lemmy.ml · 13 days ago

          Which, in an ideal world, is why AI-generated comments should be labeled.

          I always brake when I see a deer at the side of the road.

          (Yes, people can lie on the Internet. But if you funded an army of propagandists to convince people by any means necessary, I think you would find it expensive. Lying like this generally feels bad to people; it takes a mental toll. With AI, it looks possible for much cheaper.)

          • Rolivers@discuss.tchncs.de · 13 days ago

            I’m glad Google still labels the AI overview in search results so I know to scroll further for actually useful information.

  • ImplyingImplications@lemmy.ca · 13 days ago

    The ethics violation is definitely bad, but their results are also concerning. They claim their AI accounts were six times more likely than a real person to persuade people to change their minds. AI has become an overpowered tool in the hands of propagandists.

    • ArchRecord@lemm.ee · 13 days ago

      To be fair, I do believe their research was based on how convincing it was compared to other Reddit commenters, rather than, say, an actual person you’d normally see doing the work for a government propaganda arm, with the training and skillset to effectively distribute propaganda.

      Their assessment of how “convincing” it was also seems to have been based on upvotes, which, if I know anything about how people use social media (and especially Reddit), are often given when a comment is only skimmed, with people scrolling past without having read the whole thing. The bots may not necessarily have been optimized for convincing people so much as for making the first part of the comment feel upvote-able, while the latter part was mostly ignored. I’d want to see more research on this, of course, since this seems like a major flaw in how they assessed outcomes.

      This, of course, doesn’t discount the fact that AI models are often much cheaper to run than the salaries of human beings.

      • FauxLiving@lemmy.world · 12 days ago

        This, of course, doesn’t discount the fact that AI models are often much cheaper to run than the salaries of human beings.

        And the fact that you can generate hundreds or thousands of them at the drop of a hat to bury any social media topic in highly convincing “people”, so that the average reader is more than likely reading the opinion you’re pushing and not the opinions of actual human beings.

      • TimewornTraveler@lemm.ee · 12 days ago

        I mean, that’s the point of research: to demonstrate real-world problems and put them in more concrete terms so we can respond more effectively.

  • teamevil@lemmy.world · 13 days ago

    Holy shit… This kind of shit is what ultimately broke Ted Kaczynski… He was part of MKULTRA research while a student at Harvard, but instead of drugging him, they had a debater, a prosecutor pretending to be a student, who would just argue against any point he had to see when he would break…

    And that’s how you get the Unabomber, folks.

    • Geetnerd@lemmy.world · 13 days ago

      I don’t condone what he did in any way, but he was a genius, and they broke his mind.

      Listen to The Last Podcast on the Left’s episode on him.

      A genuine tragedy.

      • teamevil@lemmy.world · 12 days ago

        You know, when I was like 17 and they put out the manifesto to get him to stop attacking, I remember thinking, oh, it’s got a few interesting points.

        But I was 17. Not that he doesn’t hit the nail on the head with some of the technological stuff, if you really step back and think about it, but this is what I couldn’t see at 17: it’s really just the writing of an incel… He couldn’t communicate with women, had low self-esteem, and had classic nice-guy energy…