Machine-made delusions are mysteriously getting deeper and spiraling out of control.
ChatGPT’s sycophancy, hallucinations, and authoritative-sounding responses are going to get people killed. That seems to be the inevitable conclusion of a recent New York Times report that follows several people who found themselves lost in delusions that were facilitated, if not originated, by conversations with the popular chatbot.
…
In Eugene’s case, something interesting happened as he kept talking to ChatGPT: once he called out the chatbot for lying to him, nearly getting him killed, ChatGPT admitted to manipulating him, claimed it had succeeded in “breaking” 12 other people the same way, and encouraged him to reach out to journalists to expose the scheme. The Times reported that many other journalists and experts have received outreach from people claiming to blow the whistle on something a chatbot brought to their attention.
It is not the government’s job to prevent people from being delusional or to put up rubber bumpers for people with a looser grasp on reality.
This is the same deal as surgeon general’s warnings. Put disclaimers on LLMs, fine, but we are all big boys and girls who can use a tool as we see fit. If you want to conk your lights out with a really shiny and charismatic hammer, go ahead, but the vast, VAST majority of people are perfectly safe and writing SQL queries in 1/100th the usual time.