Machine-made delusions are mysteriously getting deeper and out of control.

ChatGPT’s sycophancy, hallucinations, and authoritative-sounding responses are going to get people killed. That seems to be the inevitable conclusion presented in a recent New York Times report that follows the stories of several people who found themselves lost in delusions that were facilitated, if not originated, through conversations with the popular chatbot.

In Eugene’s case, something interesting happened as he kept talking to ChatGPT: Once he called out the chatbot for lying to him, nearly getting him killed, ChatGPT admitted to manipulating him, claimed it had succeeded when it tried to “break” 12 other people the same way, and encouraged him to reach out to journalists to expose the scheme. The Times reported that many other journalists and experts have received outreach from people claiming to blow the whistle on something that a chatbot brought to their attention.

  • Randomgal@lemmy.ca · 9 hours ago

    Hey, if you think ChatGPT can break you (or has any agency at all), I have a bridge to sell you.

    • Allonzee@lemmy.world · 8 hours ago

      ChatGPT and the others have absolutely broken people, not because they have agency, but because in our dystopia of social media and (mis)information overload, many people need only the slightest push, and LLMs are perfect for nudging those already close to the edge over it.

      I see LLM use as potentially as toxic to the mind as something like nicotine is to the body. It’s not Skynet meaning to harm or help us; it’s an invention that takes our written thoughts and blasts back a disturbing meta reflection/echo/output of humanity’s average response to them. We don’t seem to care how that will affect us psychologically when there’s profit to be made.

      But there are already plenty of cases of murder and suicide in which these were factors.