From the article:

This chatbot experiment reveals that, contrary to popular belief, many conspiracy thinkers aren’t ‘too far gone’ to reconsider their convictions and change their minds.

  • davidgro@lemmy.world
    2 months ago

    Anything can be used to make people believe them. That’s not new or a challenge.

    I’m genuinely surprised that removing such beliefs is feasible at all though.

    • Angry_Autist (he/him)@lemmy.world
      1 month ago
      1. The person needs to have a connection to the conspiracy theorist that is stronger than the identity valence gained by adopting these conspiracies

      2. The person needs to speak emotionally and sincerely, using direct experience (cookie cutter rarely works here)

      3. The person needs to genuinely desire for the improvement of the other’s life

      That is the only way I have ever witnessed it personally work, and it still took weeks.

    • SpaceNoodle@lemmy.world
      2 months ago

      If they’re gullible enough to be suckered into it, they can similarly be suckered out of it - but clearly the effect would not be permanent.

      • Zexks@lemmy.world
        2 months ago

        That doesn’t fit with the “if you didn’t reason your way into a belief, you can’t reason your way out” line. Considering religious fervor, I’m more inclined to believe that line than yours.

        • Azzu@lemm.ee
          2 months ago

          No one said that the AI used “reason” to talk people out of a conspiracy theory. In fact, I would assume that’s incredibly unlikely, since AI in general is not reasonable.