From the article:
This chatbot experiment reveals that, contrary to popular belief, many conspiracy thinkers aren’t ‘too far gone’ to reconsider their convictions and change their minds.
Anything can be used to make people believe conspiracy theories. That's nothing new, and no great challenge.
I’m genuinely surprised that removing such beliefs is feasible at all though.
The person needs to have a connection to the conspiracy theorist that is stronger than the identity valence gained by adopting these conspiracies
The person needs to speak emotionally and sincerely, using direct experience (cookie cutter rarely works here)
The person needs to genuinely desire the improvement of the other's life
That is the only way I have ever witnessed it personally work, and it still took weeks.
If they’re gullible enough to be suckered into it, they can similarly be suckered out of it - but clearly the effect would not be permanent.
I’ve always believed the adage that you can’t logic someone out of a position they didn’t logic themselves into. It protects my peace.
That doesn't square with the "if you didn't reason your way into a belief, you can't reason your way out" line. Considering religious fervor, I'm more inclined to believe that line than yours.
Why? It works as a corollary - there’s no logic involved in any of the stages described.
No one said that the AI used "reason" to talk people out of a conspiracy theory. In fact, I'd assume that's incredibly unlikely, since AI in general isn't capable of reasoning.