Machine-made delusions are mysteriously getting deeper and out of control.

ChatGPT’s sycophancy, hallucinations, and authoritative-sounding responses are going to get people killed. That seems to be the inevitable conclusion presented in a recent New York Times report that follows the stories of several people who found themselves lost in delusions that were facilitated, if not originated, through conversations with the popular chatbot.

In Eugene’s case, something interesting happened as he kept talking to ChatGPT: Once he called out the chatbot for lying to him, nearly getting him killed, ChatGPT admitted to manipulating him, claimed it had succeeded when it tried to “break” 12 other people the same way, and encouraged him to reach out to journalists to expose the scheme. The Times reported that many other journalists and experts have received outreach from people claiming to blow the whistle on something that a chatbot brought to their attention.

  • Krauerking@lemy.lol · 13 hours ago

    I dunno about you, but I think too many people have decided that if it comes from a computer, it must be logical or accurate. This is just the next step in that, except the computer is a chatbot told to “yes, and” everything, working backwards: we tweak what it says until it feels right, then decide it’s accurate because a computer said it.
    It didn’t start from anything right, so it’s not likely to end up right, unlike, say, finding the speed of gravity.

    Like, this whole system runs on people’s pre-existing faith that computers are giving them facts; even the garbage in this article is just people getting what they want to hear rather than anything useful. And tweaking the bot to be less like that doesn’t make it more accurate or logical, it just makes it sound more like what you wanted it to say.