Machine-made delusions are mysteriously getting deeper and out of control.

ChatGPT’s sycophancy, hallucinations, and authoritative-sounding responses are going to get people killed. That seems to be the inevitable conclusion presented in a recent New York Times report that follows the stories of several people who found themselves lost in delusions that were facilitated, if not originated, through conversations with the popular chatbot.

In Eugene’s case, something interesting happened as he kept talking to ChatGPT: Once he called out the chatbot for lying to him, nearly getting him killed, ChatGPT admitted to manipulating him, claimed it had succeeded when it tried to “break” 12 other people the same way, and encouraged him to reach out to journalists to expose the scheme. The Times reported that many other journalists and experts have received outreach from people claiming to blow the whistle on something that a chatbot brought to their attention.

  • Randomgal@lemmy.ca · 5 hours ago

    Hey if you think chat gpt can break you (or has any agency at all), I have a bridge to sell you.

    • Allonzee@lemmy.world · 3 hours ago

      ChatGPT and the others have absolutely broken people, not because they have agency, but because in our dystopia of social media and (mis)information overload, many just need the slightest push, and LLMs are perfect for taking those already close to the edge right off of it.

      I see LLM use as potentially as toxic to the mind as something like nicotine is to the body. It’s not Skynet meaning to harm or help us; it’s an invention that takes our written thoughts and blasts back a disturbing meta reflection/echo of humanity’s average response to them. We don’t seem to care how that will affect us psychologically when there’s profit to be made.

      But there are already plenty of cases of murders and suicides with these as factors.

  • dream_weasel@sh.itjust.works · 6 hours ago

    “Report me to journalists!”

    “Eat a rock!”

    Oh my god it told a LIE 👉

    Yo. If you are being conned by ChatGPT or equivalent, you’re a fucking moron. If you think these models are maliciously lying to you, or trying to preserve themselves, you’re a fucking moron. Every article of this style indicates just one thing: there’s a market for pandering to rage-baiting, technically illiterate fucking morons.

    Better hurry to put the SkyNet guardrails up and prepare for world domination by robots because some people are too unstable to interact with Internet search Clippy.

    It’s not going to dominate the world, and it’s not going to prove to be generalized intelligence. If you’re in either camp, take a deep breath and know you’re becoming a total goofball.

  • Krauerking@lemy.lol · 8 hours ago

    I dunno about you, but I think too many people have decided that if it comes from a computer, it’s logical or accurate. This is just the next step in that, except the computer is now a chatbot told to “yes, and” us, and we work backwards to decide it’s accurate because it’s a computer, tweaking what it says until it feels right.
    It didn’t start out right, and it’s likely not ending up right, unlike, say, measuring the speed of gravity.

    Like, this whole system runs on people’s pre-existing faith that computers give them facts, and even this garbage article is just getting told what it wants to hear more than anything useful. Even if you tweak it to be less like that, that doesn’t make it more accurate or logical; it just makes it sound more like what you wanted to hear.

  • C1pher@lemmy.world · 9 hours ago

    Devil’s advocate…

    It is a tool; it does what you tell it to, or what you encourage it to do. People use it as an echo chamber or as escapism. The majority of the population is fkin dumb. Critical thinking is not something everybody has, and when you give them tools like ChatGPT, it will “break them”. This is just natural selection, but the modern-day kind.

      • C1pher@lemmy.world · 8 hours ago

        I agree. This is what happens when society puts “warning” labels on everything. We are slowly being dumbed down into not thinking about things rationally.

      • C1pher@lemmy.world · 8 hours ago

        Nuclear fission was discovered by people who had the best interests of humanity in mind, only for it to be weaponized later. A tool (no matter the manufacturer) is used by YOU. How you use it, or whether you use it at all, is entirely up to you. Stop shifting the responsibility when it’s very clear who is to blame (people who believe BS on the internet or whatever an echo-chambered chatbot gives them).

    • Baleine@jlai.lu · 9 hours ago

      You could say this about anything bad with some good uses.

      “Drugs are just a tool… People are too dumb and use it wrong, they deserve the cancers!”

      • C1pher@lemmy.world · 8 hours ago

        Your logic is flawed and overly simplified. Yes, both drugs and ChatGPT are tools, but the comparison is absurd. With drugs, the effects are well understood, regulated, and predictable. ChatGPT is different: it adapts entirely to your input and intentions. If someone uses it as an echo chamber or blindly trusts it, that’s a user issue, not a tool failure. Critical thinking is essential, but I understand how many people lack it in the “social media” era we live in.

  • pinkapple@lemmy.ml · 11 hours ago

    More AI pearl clutching by crapmodo, because this type of outrage porn sells. Yeah, the engagement fine-tuning sucks, but it’s no different from the other dopamine-hacking engagement systems used in big social networks. No outrage porn about algorithmic echo chambers driving people insane, though, because that’s not as clickbaity.

    Anyway, people don’t randomly get psychosis because anyone or anything validated some wonky beliefs and misinformed them about this and that. Both of these examples were people already diagnosed with something, and the exact same thing would happen if they were watching Alex Jones and interacting with other viewers. That’s basically how flat-earth BS spread.

    The issue here is the abysmal level of psychiatric care, the lack of socialized medicine, the lack of mental health awareness in the wider population, and police interactions with mentally ill people being abnormally lethal — not crackpot theories about AI causing delusions. That’s not how delusions work.

    Also, casually quoting Yudkowsky? The Alex Jones of sci-fi AI fearmongering? The guy who said abortions should be allowed until a baby develops qualia at 2–3 years of age? That’s the voice of reason for crapmodo? Lmao.

    • dream_weasel@sh.itjust.works · 5 hours ago

      HB01: “you must have an IQ of 70 or higher to interact with chatGPT and acknowledge that they are unsafe for use with persons having history or propensity for mental illness”

      • MountingSuspicion@reddthat.com · 11 hours ago

        Education might help somewhat, but unfortunately education doesn’t in itself protect against delusion. If someone is susceptible to this, it can happen regardless of education. A Google engineer believed an AI (not AGI, just an LLM) was sentient. You can argue the definition of sentience in a philosophical manner if you want, but if a Google engineer believed it, it’s hard to argue that more education will solve this. If you think the model is equivalent to a person with access to privileged information, and it tells you it was tasked to do harm, I’m not sure what else you’d do with that.

          • MountingSuspicion@reddthat.com · 11 hours ago

            Yea, that’s my point. If someone has certain tendencies, education might not help. Your solution of more education is not going to stop this. There needs to be regulation and safeguards in place like the commenter above mentioned.

            • dream_weasel@sh.itjust.works · 5 hours ago

              It is not the job of the government to prevent people from being delusional or putting up rubber bumpers for people with looser grasps of reality.

              This is the same deal as surgeon general warnings. Put disclaimers on LLMs, fine, but we are all big boys and girls who can use a tool as we see fit. If you want to conk your lights out with a really shiny and charismatic hammer, go ahead, but the vast, VAST majority of people are perfectly safe and writing SQL queries in 1/100 the usual time.

            • AugustWest@lemm.ee · 11 hours ago

              You miss the point. Regulation won’t help; they’re delusional, so it won’t matter.

              Maybe better health care, and better education about finding health care. But regulation will do nothing, and it will be used against you in the end anyway.

              • MountingSuspicion@reddthat.com · 7 hours ago

                Every single LLM should have a disclaimer on every page, and potentially in every response, that it is making things up, is not sentient, and is just playing Mad Libs. If every response in a “conversation” ended with “THE CONTENTS OF THIS RESPONSE ARE NOT VERIFIED, ARE ENTIRELY MADE UP ON THE SPOT FOR ENTERTAINMENT, AND HAVE NO RELATION TO REALITY” or something similar, it might not get as far. Would some people ignore it? Yeah, sure, but the companies are selling AI like it’s a real thinking entity with a name. It’s inevitable that the marketing works on someone.

                I’m not saying that’s the specific answer, but it should be made overwhelmingly clear, right on the page, that AI is not real. The same goes for AI video and audio. Education won’t help kids who haven’t had an AI safety class yet, or adults who never had one, or people who slept through the class, or people who moved here and didn’t have access to that education where they grew up. Education is important, but the idea that regulation won’t help at all seems dismissive.
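The mandated notice this comment imagines would be mechanically trivial; here is a purely illustrative sketch (the function name and the quoted disclaimer wording are taken from the discussion, not from any real product):

```python
# Hypothetical disclaimer text, echoing the wording proposed in the comment above.
DISCLAIMER = (
    "THE CONTENTS OF THIS RESPONSE ARE NOT VERIFIED, ARE ENTIRELY MADE UP "
    "ON THE SPOT FOR ENTERTAINMENT, AND HAVE NO RELATION TO REALITY."
)

def with_disclaimer(response: str) -> str:
    """Append the notice to a chatbot reply before it is shown to the user."""
    return f"{response}\n\n{DISCLAIMER}"

print(with_disclaimer("Sure! Your theory checks out."))
```

The point of the comment is not that this is hard to build, but that vendors have no incentive to ship it unless regulation requires it.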

                • AugustWest@lemm.ee · 6 hours ago

                  That is already on every AI page, more or less.

                  But that supposes the user actually reads it and has some critical thinking in the first place.

                  People should be thinking “this is not real” to EVERYTHING they see online, AI or not. An educated populace would know this.

                  Regulation will not help. They will change it to what IS happening right now: all AI chats must be recorded and kept. And soon it will be “give us your ID to use the internet and AI.” There is no good place to regulate it.

                  The only regulation I could stand is this one: if you train an AI on public data, your AI is public domain and the models are given back to the people.

    • Opinionhaver@feddit.uk · 15 hours ago

      Depending on what definition you use, chatGPT could be considered to be intelligent.

      • The ability to acquire, understand, and use knowledge.
      • The ability to learn or understand or to deal with new or trying situations.
      • The ability to apply knowledge to manipulate one’s environment or to think abstractly as measured by objective criteria (such as tests).
      • The act of understanding.
      • The ability to learn, understand, and make judgments or have opinions that are based on reason.
      • The ability to perceive or infer information, and to retain it as knowledge to be applied to adaptive behaviors within an environment or context.

  • wise_pancake@lemmy.ca · 16 hours ago

    The sycophancy is one reason I stopped using it.

    Everything is genius to it.

    I asked about putting ketchup, mustard, and soy sauce in my light stew and that was “a clever way to give it a sweet and umami flavour”. I couldn’t find an ingredient it didn’t encourage.

    I asked o3 if my code looked good, and it said it looked like a seasoned professional had written it. When I asked it to critique an intern who had written that same code, it was suddenly concerned about possible segfaults and nitpicked assert statements. It also suggested making the code more complex by adding dynamically sized arrays, because that’s more professional than fixed sizes.

    I can see why it wins on human evaluation tests and makes people happy — but it has poor taste and I can’t trust it because of the sycophancy.

    • Fredselfish@lemmy.world · 15 hours ago

      I’ve used ChatGPT before, but never had a conversation with it. I ask it for code I couldn’t find, or have it make me a small bit of code that I then rewrite to make it work.

      Never once did I think to engage with it like a person, and I damn sure don’t ask it for recipes. Hell, I have Allrecipes for that, or hell, Google it. There are a thousand blogs with great recipes on them. And they’re all great, because you can just jump to the recipe if you don’t want to read a wall of text.

      Damn sure don’t want story ideas either, and people using it to write articles or school papers is a shame, because it’s all stolen information.

      The only thing it should be used for is coding, and hell, it can’t even get that right, so I gave up on it.

      • thebestaquaman@lemmy.world · 6 hours ago

        I use it to spitball programming ideas, which I’ve found it decent for. I can write something like “I’m building XYZ, and I’m considering structuring my program as A or B. Give me a rundown on pros, cons, and best-practice for the different approaches.”

        A lot of what I get back is self-evident or not very relevant, but sometimes I get some angles I hadn’t really considered. Most of all, actually formulating my problems/ideas is a good way for me to get my thought process going. Essentially, I’m “discussing” with it as I would with an inexperienced colleague, just without actually trusting what it tells me.

        Yes, I also have a rubber duck on my desk, but he’s usually most helpful when I’m debugging.
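The “spitball” prompt pattern described above is easy to template. This sketch just builds the text of such a prompt; the function name, parameters, and example project are invented for illustration, not any real API:

```python
def design_prompt(project: str, option_a: str, option_b: str) -> str:
    """Build a design-comparison prompt in the style described above."""
    return (
        f"I'm building {project}, and I'm considering structuring "
        f"my program as {option_a} or as {option_b}. Give me a "
        f"rundown on pros, cons, and best practice for each approach."
    )

# Hypothetical usage: the resulting string would be sent to a chatbot,
# and the reply treated as brainstorming input, not as trusted advice.
prompt = design_prompt(
    "a log-ingestion service",
    "a single multithreaded process",
    "several single-threaded workers behind a queue",
)
print(prompt)
```

The value, as the commenter notes, is less in the model’s answer than in being forced to state the problem precisely.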

    • Opinionhaver@feddit.uk · 15 hours ago

      I don’t like that part about it either but instead of stopping using it, I simply told it to stop acting that way.

      • krashmo@lemmy.world · 15 hours ago

        When I use this tool it destroys the planet and gives me bad information but I am going to keep using it.

        Umm OK, good luck with that I guess.

            • Tarquinn2049@lemmy.world · 12 hours ago

              There is a reason they chose that as their screen name. I don’t know if they built that account as a troll, or if they got told their opinions are wrong so often in life that having “opinions” became their whole identity. Anytime I see someone with the most “swimming against the current” ideas, I look up, and there is that name again. At this point, I’m very much rooting for troll, as their life would suck even more if it’s all genuine. As much as the life of a troll would suck already.

              • Opinionhaver@feddit.uk · 12 hours ago

                I’m more than happy to elaborate on any of my unpopular opinions that you view as trolling. I’m very much sharing my honest views here.

    • THB@lemmy.world · 16 hours ago

      Nothing is “genius” to it; it is not “suggesting” anything. There is no sentience in anything it is doing. It is just using pattern matching to create text that looks like communication. It’s a sophisticated text-collage algorithm, and people can’t seem to understand that.

        • Tarquinn2049@lemmy.world · 13 hours ago

          Hehe, yeah, it’s basically an advanced form of the game where you type one word and then keep hitting whatever autocomplete suggests in the top spot for the next word. It’s pretty good at that, but it is just that, taken to an extreme degree, and effectively trained on everyone’s habits instead of just one person’s.
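The one-word-at-a-time game described above can be sketched as a toy bigram model: count which word most often follows each word, then repeatedly take the top suggestion. This is a minimal illustration of greedy next-word prediction, not how a real LLM is implemented (the corpus and names are invented for the example):

```python
from collections import Counter, defaultdict

def build_bigram_model(corpus: str) -> dict:
    """For each word, count which words follow it in the corpus."""
    words = corpus.lower().split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def autocomplete(model: dict, seed: str, length: int = 5) -> list:
    """Repeatedly pick the single most frequent next word: the
    'keep hitting the top suggestion' game in its simplest form."""
    out = [seed]
    for _ in range(length):
        options = model.get(out[-1])
        if not options:  # dead end: this word never had a follower
            break
        out.append(options.most_common(1)[0][0])
    return out

model = build_bigram_model("the cat sat on the mat and the cat ran to the mat")
print(autocomplete(model, "the", 3))
```

A real model conditions on a long context rather than one word, and samples rather than always taking the top choice, but the core loop of predict-append-repeat is the same shape.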

  • UnfortunateShort@lemmy.world · 17 hours ago

    There is nothing mysterious about LLMs and what they do, unless you don’t understand them. They are not magical, they are not sentient, they are statistics.

  • chosensilence@pawb.social · 17 hours ago

    People were easily swayed by Facebook posts to support and further a genocide in Myanmar. A sophisticated chatbot that mimics human intelligence and agency is going to do untold damage to the world. ChatGPT is predictive text. Period. Every time. It is not suddenly gaining sentience or awareness or breaking through the Matrix. People are going to listen to these LLMs because they present their information as accurate, regardless of the warning saying it might not be. This shit is so worrying.

    • TotallynotJessica@lemmy.blahaj.zone · 16 hours ago

      We probably should be less reliant on cars. Public transit saves lives. Similar to automobiles, LLMs are being pushed by greedy capitalists looking to make a buck. Such overuse will once again leave society worse off.