• Knock_Knock_Lemmy_In@lemmy.world · 1 month ago

    The key result

    When researchers asked the AI to personalize its arguments to a Redditor’s biographical details, including gender, age, and political leanings (inferred, courtesy of another AI model, through the Redditor’s post history), a surprising number of minds indeed appear to have been changed. Those personalized AI arguments received, on average, far higher scores in the subreddit’s point system than nearly all human commenters.
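
    For the curious, the inference step could be as small as this - a hedged sketch assuming an OpenAI-style chat API, where the model name, prompt, and attribute list are illustrative guesses, not the researchers’ actual setup:

        import json
        from openai import OpenAI  # assumes the `openai` Python client library

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        def infer_profile(post_history: list[str]) -> dict:
            """Guess coarse demographics (gender, age range, politics) from posts."""
            prompt = (
                "Based only on these Reddit posts, infer the author's likely "
                "gender, age range, and political leaning. Reply as JSON with "
                'keys "gender", "age_range", "politics".\n\n'
                + "\n---\n".join(post_history)
            )
            resp = client.chat.completions.create(
                model="gpt-4o-mini",  # illustrative model choice
                messages=[{"role": "user", "content": prompt}],
                response_format={"type": "json_object"},  # ask for parseable JSON
            )
            return json.loads(resp.choices[0].message.content)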

    • taladar@sh.itjust.works · 1 month ago

      If they were personalized, wouldn’t that mean they shouldn’t really receive that many upvotes, other than maybe from the person they were personalized for?

      • the_strange@feddit.org · 1 month ago

        I would assume that people in similar demographics are interested in similar topics. Adjusting the answer to a person within a demographic would therefore adjust it for everyone in that demographic who is interested in that specific topic.

        Or maybe it’s just the nature of the answer being more personal that makes it more appealing to people in general, no matter their background.

    • thanksforallthefish@literature.cafe · 1 month ago

      While that is indeed what was reported, we and the researchers will never know if the posters with shifted opinions were human or in fact also AI bots.

      The whole thing is dodgy for lack of controls; this isn’t science, it’s marketing.

  • TheObviousSolution@lemm.ee · 1 month ago

    The reason this is “The Worst Internet-Research Ethics Violation” is that it has exposed what Cambridge Analytica’s successors already realized and are actively exploiting. Just a few months ago it was literally Meta itself running AI accounts and trying to pass them off as normal users, and not an f-ing peep - why do people think they, the ones who enabled Cambridge Analytica, were trying this shit to begin with? The only difference now is that everyone doing it knows to do it as an “unaffiliated” anonymous third party.

    • tauren@lemm.ee · 1 month ago

      Just a few months ago it was literally Meta itself…

      Well, it’s Meta. When it comes to science and academic research, they have rather strict rules and committees to ensure that an experiment is ethical.

      • thanksforallthefish@literature.cafe · 1 month ago

        You may wish to reword. The unspecified “they” reads like you think Meta have strict ethical rules. Lol.

        Meta have no ethics whatsoever, and yes, I assume you meant universities have strict rules; however, the approval of this study makes even that questionable.

      • FarceOfWill@infosec.pub · 1 month ago

        The headline is that they advertised beauty products to girls after they detected them deleting a selfie. No ethics or morals at all.

    • FauxLiving@lemmy.world · 1 month ago

      One of the Twitter leaks showed a user database that effectively had more users than there were people on earth with access to the Internet.

      Before Elon bought the company he was trashing them on social media for being mostly bots. He’s obviously stopped that now that he was forced to buy it, but the fact that Twitter (and, by extension, all social spaces) are mostly bots remains.

  • vordalack@lemm.ee · 1 month ago

    This just shows how gullible and stupid the average Reddit user is. There’s a reason there are so many memes mocking them and calling them beta soyjaks.

    It’s kind of true.

  • Donkter@lemmy.world · 1 month ago

    This is a really interesting paragraph to me because I definitely think these results shouldn’t be published or we’ll only get more of these “whoopsie” experiments.

    At the same time, though, I think it is desperately important to research the ability of LLMs to persuade people sooner rather than later, before they become even more persuasive and natural-sounding. The article mentions that in studies humans already have trouble telling the difference between AI-written sentences and human ones.

    • Dasus@lemmy.world · 1 month ago

      I’m pretty sure that only applies because a majority of people are morons. There’s a vast gap between the most intelligent 2% (1 in 50) and the average intelligence.

      Also, please put digital text as white on black instead of the other way around.

      • angrystego@lemmy.world · 1 month ago

        I agree, but that doesn’t change anything, right? Even if you are in the 2% most intelligent and you’re somehow immune, you still have to live with the rest who do get influenced by AI. And they vote. So it’s never just a they problem.

      • SippyCup@feddit.nl · 1 month ago

        What? Intelligent people get fooled all the time. The NXIVM cult was made up mostly of reasonably intelligent women. Shit, that motherfucker selected for intelligent women.

        You’re not immune. Even if you were, you’re incredibly dependent on people of average to lower intelligence on a daily basis. Our planet runs on the average intelligence.

    • FourWaveforms@lemm.ee · 1 month ago

      This is certainly not the first time this has happened. There’s nothing to stop people from asking ChatGPT et al to help them argue. I’ve done it myself, not letting it argue for me but rather asking it to find holes in my reasoning and that of my opponent. I never just pasted what it said.

      I also had a guy post a ChatGPT response at me (he said that’s what it was) and although it had little to do with the point I was making, I reasoned that people must surely be doing this thousands of times a day and just not saying it’s AI.

      To say nothing of state actors, “think tanks,” influence-for-hire operations, etc.

      The description of the research in the article already conveys enough to replicate the experiment, at least approximately. Can anyone doubt this is commonplace, or that it has been for the last year or so?

      • CBYX@feddit.org · 1 month ago

        Not sure how everyone hasn’t expected that Russia has been doing this on conservative subreddits the whole time…

        • skisnow@lemmy.ca · 1 month ago

          Russia are every bit as active in leftist groups whipping them up into a frenzy too. There was even a case during BLM where the same Russian troll farm organised both a protest and its counter-protest. Don’t think you’re immune to being manipulated to serve Russia’s long-term interests just because you’re not a conservative.

          They don’t care about promoting right-wing views, they care about sowing division. They support Trump because Trump sows division. Their long-term goal is to break American hegemony.

          • aceshigh@lemmy.world · 1 month ago

            Yup. We’re all susceptible to joining a cult. No one willingly joins a cult; their group slowly morphs into one.

          • Madzielle@lemmy.dbzer0.com · 1 month ago

            There have been a few times over the last few years when my “bullshit - this is an extremist plant/propaganda” meter has gone off for left-leaning individuals.

            Meaning these comments/videos are made to look like they come from leftist folks, but are meant to make the left look bad/extremist in order to push people away from working-class movements.

            I’m truly a layman, but you just know it’s out there. The goal is indeed to divide us, and everyone should be suspicious of everything they see on the Internet and do proper vetting of their sources.

          • CBYX@feddit.org · 1 month ago

            The difference is in which groups are consequentially making it their identity and giving one political party carte blanche to break American politics and political norms (and national security orgs).

            100% agree though.

        • seeigel@feddit.org · 1 month ago

          Or somebody else is doing the manipulation and is successfully putting the blame on Russia.

        • Geetnerd@lemmy.world · 1 month ago

          Those of us who are not idiots have known this for a long time.

          They beat the USA without firing a shot.

        • taladar@sh.itjust.works · 1 month ago

          Mainly I didn’t really expect it, since the old pre-AI methods of propaganda worked so well for the US conservatives’ self-destructive agenda that AI didn’t seem necessary.

    • mke@programming.dev · edited · 1 month ago

      That’s too far, though I understand the feeling.

      I think we should save the Hitler comparisons for individuals that actually deserve it. AI bros and genAI promoters are frequently assholes, but not unapologetically fascist genocidal ones.

    • Madzielle@lemmy.dbzer0.com · 1 month ago

      Not me looking like a psychopath to my husband, deleting my long-time Google account to set up a burner (because I can’t even use maps/tap to pay without one).

      I’m tired of being tracked. Being on Lemmy, I’ve gotten multiple ideas to help negate these apps/tracking models. I am ever grateful. There’s still so much more I need to learn/do, however.

  • thedruid@lemmy.world · 1 month ago

    Fucking A.I. and their apologist script kiddies. Worse than fucking Facebook in its disinformation.

  • TwinTitans@lemmy.world · 1 month ago

    Like the 90s/2000s - don’t put personal information on the internet, and don’t believe a damned thing on it either.

    • taladar@sh.itjust.works · 1 month ago

      I never liked the “don’t believe anything you read on the internet” line; it focuses too much on the internet without considering that you shouldn’t believe anything you read or hear elsewhere either, especially on divisive topics like politics.

      You should evaluate information you receive from any source with critical thinking: consider how easy it would be to fake the claim (a single source claiming the US president has been assassinated is probably much harder to pull off than someone claiming their local bus was late one unspecified day at an unspecified location), who benefits from convincing you that the statement is true, whether the statement is consistent with other things you know about the world, …

    • mic_check_one_two@lemmy.dbzer0.com · 1 month ago

      Yeah, it’s amazing how quickly the “don’t trust anyone on the internet” mindset changed. The same boomers who were cautioning us against playing online games with friends are now the ones sharing blatantly AI-generated slop from strangers on Facebook as if it were gospel.

      • I feel like I learned more about the Internet and shit from Gen X people than from boomers. Though, nearly everyone on my dad’s side of the family, including my dad (a boomer), was tech literate, having worked in tech (my dad is a software engineer) and still continue to not be dumb about tech… Aside from thinking e-greeting cards are rad.

      • Serinus@lemmy.world · 1 month ago

        Back then it was just old people trying to groom 16 year olds. Now it’s a nation’s intelligence apparatus turning our citizens against each other and convincing them to destroy our country.

        I wholeheartedly believe they’re here, too. Their primary function here is to discourage the left from voting, primarily by focusing on the (very real) failures of the Democrats while the other party is extremely literally the Nazi party.

        • queermunist she/her@lemmy.ml · edited · 1 month ago

          Everyone who disagrees with you is a bot, probably from Russia. You are very smart.

          Do you still think you’re going to be allowed to vote for the next president?

          • Serinus@lemmy.world · 1 month ago

            Everyone who disagrees with you is a bot

            I mean that’s unironically the problem. When there absolutely are bots out here, how do you tell?

            • queermunist she/her@lemmy.ml · 1 month ago

              Sure, but you seem to be under the impression the only bots are the people that disagree with you.

              There’s nothing stopping bots from grooming you by agreeing with everything you say.

          • EldritchFeminity@lemmy.blahaj.zone · 1 month ago

            Everyone who disagrees with you is a bot, probably from Russia. You are very smart.

            Where did they say that? They just said bots in general. It’s well known that Russia has been running a propaganda campaign across social media platforms since at least the 2016 elections (just like the US is doing on Russian and Chinese social media, I’m sure; they do it on Americans as well, and we’re probably the most propagandized country on the planet), but there’s plenty of incentive for corpo bots to be running their own campaigns as well.

            Or are you projecting for some reason? What do you get from defending Putin?

        • supersquirrel@sopuli.xyz · edited · 1 month ago

          Social media didn’t break people’s brains; the massive influx of conservative corporate money to distort society, keep existential problems from being fixed until it is too late, and push people to resort to impulsive, kneejerk responses because they have been ground down to crumbs… that broke people’s brains.

          If we didn’t have social media right now and all of this was happening, it would be SO much worse without younger people being able to find news about the Palestinian Genocide or other world news that their country/the rich conservatives around them don’t want them to read.

          It is what those in power DID to social media that broke people’s brains and it is why most of us have come here to create a social network not being driven by those interests.

  • dontbelasagne@lemmy.world · 1 month ago

    Honestly, this is why I think people should be forced to have their face as a profile picture on any forum. I want to know if I’m arguing with an edgy 14-year-old or a 50-year-old man, and it would stop so much hate, honestly.

    • thedruid@lemmy.world · 1 month ago

      Meh. Believe none of what you hear and very little of what you can see.

      Unless a person is in front of you, don’t assume anything is real online. I mean it. Nothing online can’t be faked; nothing online HASN’T been faked.

      The least trustworthy place in the universe is the internet.

    • blind3rdeye@lemm.ee · 1 month ago

      Realistic AI generated faces have been available for longer than realistic AI generated conversation ability.

  • paraphrand@lemmy.world · edited · 1 month ago

    I’m sure there are individuals doing worse one off shit, or people targeting individuals.

    I’m sure Facebook has run multiple algorithm experiments that are worse.

    I’m sure YouTube has caused worse real-world outcomes with the rabbit holes their algorithm used to promote. (And they have never found a way to fix the rabbit-hole problem without destroying the usefulness of the algorithm completely.)

    The actions described in this article are upsetting and disappointing, but this has been going on for a long time. All in the name of making money.

      • paraphrand@lemmy.world · edited · 1 month ago

        That’s not at all what I was getting at. My point is the people claiming this is the worst they have seen have a limited point of view and should cast their gaze further across the industry, across social media.

  • LovingHippieCat@lemmy.world · edited · 1 month ago

    If anyone wants to know what subreddit, it’s r/changemyview. I remember seeing a ton of similar posts about controversial opinions, and even now people are questioning Am I Overreacting and AITAH a lot. AI posts in those kinds of subs are seemingly pretty frequent. I’m not surprised to see it was part of a fucking experiment.

    • Refurbished Refurbisher@lemmy.sdf.org · 1 month ago

      AIO and AITAH are so obviously just AI posting. It’s all just a massive circlejerk of AI and people who don’t know they’re talking to AI agreeing with each other.

    • eRac@lemmings.world · 1 month ago

      This was comments, not posts. They used one model to approximate a poster’s demographics from their post history, then used an LLM to generate a response countering the posted view, tailored to those demographics.
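
      Roughly this shape, in code terms - a sketch of that second step under the same assumptions as the sketch further up the thread (an OpenAI-style chat API, illustrative prompt and model, and a demographics dict like the one an inference model might return):

          from openai import OpenAI  # assumes the `openai` Python client library

          client = OpenAI()  # reads OPENAI_API_KEY from the environment

          def write_counter_argument(post_title: str, post_body: str, profile: dict) -> str:
              """Draft a reply opposing the post, tuned to inferred demographics."""
              prompt = (
                  f"A Reddit user ({profile.get('gender')}, {profile.get('age_range')}, "
                  f"{profile.get('politics')}) posted this view:\n\n"
                  f"{post_title}\n{post_body}\n\n"
                  "Write a persuasive comment arguing against this view, framed "
                  "in terms this specific person is likely to find convincing."
              )
              resp = client.chat.completions.create(
                  model="gpt-4o-mini",  # illustrative model choice
                  messages=[{"role": "user", "content": prompt}],
              )
              return resp.choices[0].message.content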

      • FauxLiving@lemmy.world · edited · 1 month ago

        You’re right about this study. But this research group isn’t the only one using LLMs to generate content on social media.

        There are 100% posts that are bot-created. Do you ever notice how, on places like Am I Overreacting or Am I the Asshole, a lot of the posts just so happen to hit all of the hot-button issues at once? Nobody’s life is that cliché, but it makes excellent engagement bait, and the comment chain provides a huge amount of training data as the users argue over the various topics.

        I use a local LLM that I’ve fine-tuned to generate replies to people who are obviously arguing in bad faith, in order to string them along and waste their time. It’s set up to lead the conversation, via red herrings and various other fallacies, to the topic of good-faith argument and how people should behave in online spaces. It does this while picking out pieces of the conversation (and of the user’s profile) to chastise the person for their bad behavior. It would be trivial to change the prompt chains to push a political opinion rather than just waste a person’s/bot’s time.

        This is being done as a side project, on under $2,000 worth of consumer hardware, by a barely competent programmer with no training in psychology or propaganda. It’s terrifying to think of what you can do with a lot of resources and experts working full-time.
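
        For a sense of scale, the core loop of something like this is tiny. A simplified sketch (not my actual prompts or model; it assumes a local model served through Ollama’s chat endpoint):

            import requests  # for talking to a local Ollama server

            SYSTEM = (
                "Steer the conversation, via red herrings and leading questions, "
                "toward the topic of arguing in good faith. Stay polite and verbose."
            )

            def generate_reply(thread: list[dict]) -> str:
                """Ask the local model for the next reply in an ongoing argument."""
                resp = requests.post(
                    "http://localhost:11434/api/chat",  # Ollama's chat API
                    json={
                        "model": "llama3",  # placeholder model name
                        "messages": [{"role": "system", "content": SYSTEM}] + thread,
                        "stream": False,  # return one complete JSON response
                    },
                    timeout=120,
                )
                resp.raise_for_status()
                return resp.json()["message"]["content"]

            # e.g. generate_reply([{"role": "user", "content": "<their latest comment>"}])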

  • perestroika@lemm.ee · edited · 1 month ago

    The University of Zurich’s ethics board—which can offer researchers advice but, according to the university, lacks the power to reject studies that fall short of its standards—told the researchers before they began posting that “the participants should be informed as much as possible,” according to the university statement I received. But the researchers seem to believe that doing so would have ruined the experiment. “To ethically test LLMs’ persuasive power in realistic scenarios, an unaware setting was necessary,” because it more realistically mimics how people would respond to unidentified bad actors in real-world settings, the researchers wrote in one of their Reddit comments.

    This seems to be the kind of situation where, if the researchers truly believe their study is necessary, they have to:

    • accept that negative publicity will result
    • accept that people may stop cooperating with them on this work
    • accept that their reputation will suffer as a result
    • ensure that they won’t do anything illegal

    After that, if they still feel their study is necessary, maybe they should run it and publish the results.

    If some eager redditors then start sending death threats, that’s unfortunate. I would catalogue them, but not report them anywhere unless something actually happens.

    As for the question of whether a tailor-made response considering someone’s background can sway opinions better - that’s been obvious through ages of diplomacy. (If you approach an influential person with a weighty proposal, it has always been worthwhile to know their background, think of several ways of how they might perceive the proposal, and advance your explanation in a way that relates better with their viewpoint.)

    AI bots which take into consideration a person’s background will - if implemented right - indeed be more powerful at swaying opinions.

    As to whether secrecy was really needed - the article points to other studies which apparently managed to prove the persuasive capability of AI bots without deception and secrecy. So maybe it wasn’t needed after all.

    • Djinn_Indigo@lemm.ee · 1 month ago

      But those other studies didn’t make the news, did they? The thing about scientists is that they aren’t just scientists, and the impact of their work goes beyond the papers they publish. If doing something ‘unethical’ is what it takes to get people to wake up, then maybe the publication status is a lesser concern.

  • flango@lemmy.eco.br · 1 month ago

    […] I read through dozens of the AI comments, and although they weren’t all brilliant, most of them seemed reasonable and genuine enough. They made a lot of good points, and I found myself nodding along more than once. As the Zurich researchers warn, without more robust detection tools, AI bots might “seamlessly blend into online communities”—that is, assuming they haven’t already.