• Deconceptualist@lemm.ee

    As others are saying, it’s 100% not possible, because LLMs are (as Google optimistically describes them) “creative writing aids” or, more accurately, predictive word engines. They run on mathematical probability models. They have zero concept of what the words actually mean, what humans are, or even what they themselves are. There’s no “intelligence” present except for filters that have been hand-coded in (which of course is human intelligence, not AI).

    “Hallucinations” is a total misnomer, because the text generation isn’t tied to reality in the first place; it’s just math asking “which word is most likely next?”

    https://arstechnica.com/science/2023/07/a-jargon-free-explanation-of-how-ai-large-language-models-work/
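
    To make “predictive word engine” concrete, here’s a toy sketch. The probability table is made up and tiny; a real LLM scores tens of thousands of possible next tokens with learned weights, but it’s the same kind of mechanism:

        # Toy "predictive word engine": a lookup table of made-up
        # next-word probabilities. Nothing here knows what a dog is.
        import random

        next_word = {
            "the":   [("dog", 0.4), ("cat", 0.4), ("ocean", 0.2)],
            "dog":   [("barked", 0.6), ("slept", 0.4)],
            "cat":   [("meowed", 0.7), ("slept", 0.3)],
            "ocean": [("waves", 1.0)],
        }

        def generate(word, steps=3):
            out = [word]
            for _ in range(steps):
                options = next_word.get(out[-1])
                if not options:
                    break
                words, probs = zip(*options)
                out.append(random.choices(words, weights=probs)[0])
            return " ".join(out)

        print(generate("the"))  # e.g. "the dog slept": fluent, but meaning-free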

      • Deconceptualist@lemm.ee

        OK, maybe there’s a possibility someday with that approach. But that doesn’t reflect my understanding of, or (limited) experience with, the major LLMs (ChatGPT, Gemini) out in the wild today. Right now they confidently advise ingesting poison because it’s grammatically sound and they found it on some BS Facebook post.

        If ML engineers can design an internal concept of what constitutes valid information (a hard problem for humans, let alone machines), maybe there’s hope.

      • Natanael@slrpnk.net

        The problem is that they hold many different internal concepts with conflicting information, and they have no mechanism for determining truthfulness or accuracy, or for pruning bad information. They just sample it all, more or less at random, when answering.
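
        In toy form (numbers made up), that looks like this; both “facts” sit in the same output distribution, and nothing marks either one as true:

            import random

            # Conflicting claims absorbed from training data, both retained.
            completions = [("1969", 0.55), ("1971", 0.45)]
            words, weights = zip(*completions)

            for _ in range(3):
                # Every query samples afresh, so answers can contradict each other.
                print("The first moon landing was in", random.choices(words, weights=weights)[0])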

    • Tobberone@lemm.ee

      An LLM once explained to me that it didn’t know: it simulated an answer. I found that descriptive.

    • _number8_@lemmy.world

      All we know about ourselves is what’s in our memories. The way normal writing or talking works is just picking whichever words sound best, in order.

      • Deconceptualist@lemm.ee

        That’s not the whole story. “The dog swam across the ocean.” is a grammatically valid sentence with correct word order. But you probably wouldn’t write it because you have a concept of what a dog actually is and know its physiological limitations make the sentence ridiculous.

        The LLMs don’t have that kind of smarts. They just blindly mirror what we do. Since humans generally don’t put those specific words together, the LLMs avoid it too, based solely on probability. If lots of people started making bold claims about ocean-faring canids (e.g. as a joke), the LLMs would absolutely jump on board, with no critical thinking of their own.
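
        A toy version of that, with a made-up “training corpus” standing in for the internet. The “knowledge” is nothing but co-occurrence counts, so changing the text changes the knowledge:

            from collections import Counter

            corpus = "the dog chased the ball . the dog slept . the fish swam ."
            # The joke takes off online and floods the training data:
            joke_corpus = corpus + " the dog swam across the ocean ." * 50

            def words_after(text, word):
                tokens = text.split()
                return Counter(b for a, b in zip(tokens, tokens[1:]) if a == word)

            print(words_after(corpus, "dog"))       # "swam" never follows "dog"
            print(words_after(joke_corpus, "dog"))  # now "swam" dominates; no critical thinking applied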

    • neo@lemy.lol

      I was wondering: are people working on networks that train to build a modular model of the world, in order to understand it and predict events in it?

      I imagine that that is basically what our brains do.

      • Natanael@slrpnk.net

        Not really anything properly universal, but a lot of task-specific models exist, with integrations into logic engines and similar tools. Performance varies a lot.

        You might want to take a look at Wolfram Alpha’s plugin for ChatGPT for something that’s public.
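
        The routing idea, very roughly. The function names here are hypothetical stand-ins, not the actual plugin API:

            import re

            def solver(expression):
                # Stand-in for a logic/math engine like Wolfram Alpha.
                return eval(expression, {"__builtins__": {}})  # toy only; never eval untrusted input

            def llm(prompt):
                # Stand-in for the language model itself.
                return "Plausible-sounding prose about: " + prompt

            def answer(question):
                # Hand anything that looks like arithmetic to the engine;
                # let the LLM handle the rest.
                if re.fullmatch(r"[\d\s+\-*/().]+", question):
                    return solver(question)
                return llm(question)

            print(answer("12*(3+4)"))             # 84, computed, not predicted
            print(answer("why is the sky blue"))  # back to word prediction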

      • eestileib@sh.itjust.works

        Many attempts, some well-funded.

        They have been successful in very limited domains. For example, the F-35 integrated sensor suite.

    • Captain Aggravated@sh.itjust.works

      Remember the game people used to play that went something like “type ‘my girlfriend is’ and then let your phone keyboard’s autosuggestions take it from there”? LLMs are that.
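
      Literally that game, in code. The tiny suggestion table is made up and stands in for the keyboard’s model; an LLM is the same trick with a vastly bigger table and more context:

          # Keyboard-style autocomplete: always take the top suggestion.
          top_suggestion = {
              "my": "girlfriend",
              "girlfriend": "is",
              "is": "not",
              "not": "talking",
              "talking": "to",
              "to": "me",
          }

          phrase = ["my"]
          while phrase[-1] in top_suggestion:
              phrase.append(top_suggestion[phrase[-1]])
          print(" ".join(phrase))  # "my girlfriend is not talking to me"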

  • 🇰 🌀 🇱 🇦 🇳 🇦 🇰 🇮 🏆@yiffit.net

    Here’s how you stop AI from hallucinating:

    Turn it off.

    Because everything they output is a hallucination. Just because sometimes those hallucinations are true to life doesn’t mean jack shit. Even a broken clock is right twice a day.

    “Only feed it accurate information.”

    Even that doesn’t work, because it just mixes and matches every element of its input to generate new, novel output, which will inevitably be wrong sooner or later.

    • john_lemmy@slrpnk.net

      Yeah, just pull the plug. The amount of time we waste talking about this shit for these assholes to play another round of monopoly is unbelievable

  • kaffiene@lemmy.world

    I’m 100% sure he can’t. Or at least, not from LLMs specifically. I’m not an expert, so feel free to ignore my opinion, but from what I’ve read, “hallucinations” are a feature of the way LLMs work.

  • NutWrench@lemmy.world

    If you want good AI, you need to spend money and send your AI to college: have real humans interact with it, correct its logic, and make sure it understands sarcasm and logical fallacies.

    Or, you can go the cheap route: train it on 10 years of Reddit sh*tposts and hope for the best.

  • Buffalox@lemmy.world

    It’s kind of funny how AI has the exact same problems some humans have.
    I always thought AI wouldn’t have those kinds of problems, because it would be carefully fed accurate information.
    Instead they are taught from things like Facebook and the thing formerly known as Twitter.
    What an idiotic timeline we are in. LOL

    • technocrit@lemmy.dbzer0.com

      It’s not the exact same problems humans have. It’s completely different. Marketers and hucksters just use anthropomorphic terminology to hype their dysfunctional programs.

    • foggy@lemmy.world

      What weirds me out is that the things it has issues with when generating images/video are basically a list of things lucid dreamers check on to see if they’re awake or dreaming.

      1. Hands. Are your hands… hands? Do they make sense?

      2. Written language. Does it look like normal written language?

      (3. Turn the lights off / 4. Pinch your nose and breathe through it: these two not so much.)

      5. How did I get here? Where was I before this? Does the transition make sense?

      6. Mirrors. Are they accurate?

      7. Displays on digital devices. Do they look normal?

      8. Clocks. Digital and analog… do they look like they’re telling time? Even if they do, look away and check again.

      (9. Physics: try to do something physically impossible, like poking your finger through your palm / 10. Do you recognize people, and do they recognize you? Two more that aren’t relevant here.)

      But still… It’s kinda remarkable.

      Also, Nvidia launched their Earth-2 simulator recently. So, simulation theory confirmed, I guess.

      • catloaf@lemm.ee

        Also, check your cell phone. Despite how ubiquitous they are in our daily lives, I don’t think I’ve seen a single cell phone in my dreams. Or any other phone, for that matter.

        And now that I think about it, I’ve definitely had a dream of being in my living room where there’s a TV, but I don’t remember the TV actually being in the dream.

        Weird.

    • MentalEdge@sopuli.xyz

      There’s also the fact that they can’t tell reality apart from fiction in general, because they don’t understand anything in the first place.

      LLMs have no way of differentiating fantasy RPG elements from IRL things. So they can lose the plot on what is being discussed, suddenly and for seemingly no reason.

      LLMs don’t just “learn” facts from their training data. They learn how to pretend to be thinking; they can mimic, but not really comprehend. If there are facts in the training data, they can regurgitate them, but they don’t actually know which facts apply to which subjects, or when not to make some up.

      • Buffalox@lemmy.world

        They learn how to pretend

        True, and they are so darn good at it that it can be somewhat confusing at times.
        But the current AIs are not the ones we read about in sci-fi.

    • scarabic@lemmy.world

      Right? In all science fiction, artificial intelligence starts out better than us, and the only question is whether it can capture some idiosyncratic element of “being human.” Instead, AI has started out dumber than us, and we’re all standing around saying “uh what is this good for?”

    • FaceDeer@fedia.io

      The problem with AI hallucinations is not that the AI was fed inaccurate information; it’s that it’s coming up with information it wasn’t fed in the first place.

      As you say, this is a problem humans have too. But I’m not terribly surprised these AIs have it, because they’re built in mimicry of how aspects of the human mind work. And in some cases it’s desirable behaviour, for example when you’re using an AI as a creative assistant: you want it to come up with new stuff in those situations.

      It’s just something you need to keep in mind when coming up with applications.

        • FaceDeer@fedia.io

          Exactly, which is why I’ve objected in the past to calling Google AI Overviews’ mistakes “hallucinations.” The AI itself is performing correctly: it’s giving an accurate overview of the search results it’s told to summarize. It’s just being fed incorrect information.
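
          In pipeline terms it’s roughly this (function names hypothetical):

              def search(query):
                  # Stand-in for the search backend: returns whatever ranks highest,
                  # which can be a decade-old joke post.
                  return "Add 1/8 cup of non-toxic glue to pizza sauce for extra tackiness."

              def summarize(text):
                  # Stand-in for the overview model: faithfully condenses its input.
                  return "Tip: " + text

              # The summarizer performed correctly; the input was the problem.
              print(summarize(search("cheese sliding off pizza")))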

    • NeoNachtwaechter@lemmy.world

      Instead they are taught from things like Facebook and the thing formerly known as Twitter.

      Imagine if our schools taught that, to inform yourself about all the important things, you should read as many toilet walls as newspapers…

      • dan1101@lemm.ee

        Yeah, it’s the old garbage-in, garbage-out problem: the AI algorithms don’t really understand what they are outputting.

        I think at this point voice-recognition and text-generation AI would be most useful as something like a phone assistant. You could tell it complex things like “Mute my phone for the next 2 hours” or “Notify me if I receive an email from John Smith.” Those sorts of things could easily be done by AI algorithms that (a) understand your voice and (b) are programmed to know all the features of the OS. With a known dataset like a phone OS, there hopefully shouldn’t be hallucination problems; the AI could just act as an OS concierge.
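
        A sketch of that concierge idea. The OS hooks here are hypothetical names; the point is the closed command set, which leaves nothing to hallucinate:

            import re

            def mute(minutes):       # hypothetical OS hook
                print(f"Muting phone for {minutes} minutes")

            def watch_sender(name):  # hypothetical OS hook
                print(f"Will notify on email from {name}")

            # A fixed menu of intents mapped onto real OS features.
            INTENTS = [
                (re.compile(r"mute my phone for the next (\d+) hours?"),
                 lambda m: mute(int(m.group(1)) * 60)),
                (re.compile(r"notify me if i receive an email from (.+)"),
                 lambda m: watch_sender(m.group(1))),
            ]

            def handle(utterance):
                for pattern, action in INTENTS:
                    match = pattern.fullmatch(utterance.lower())
                    if match:
                        return action(match)
                print("Sorry, I can't do that.")  # refuse instead of inventing

            handle("Mute my phone for the next 2 hours")
            handle("Notify me if I receive an email from John Smith")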

        • Rhaedas@fedia.io

          The narrow-purpose models seem to be the most successful, which supports the idea that a general AI isn’t going to happen from LLMs alone. It’s interesting that hallucinations are seen as a problem, yet they’re probably part of why LLMs can be creative (much like humans). We shouldn’t want to stop them entirely, just control when they happen and be aware of when the AI is off the tracks. A group of different models working together and checking each other might work (and has probably already been tried; it’s hard to keep up).

        • jaybone@lemmy.world

          Seems Siri and Alexa could already do things like that without needing LLMs trained on Facebook shit.

• AutoTL;DR@lemmings.world

    This is the best summary I could come up with:


    Even Apple CEO Tim Cook isn’t sure the company can fully stop AI hallucinations.

    In an interview with The Washington Post, Cook said he would “never claim” that its new Apple Intelligence system won’t generate false or misleading information with 100 percent confidence.

    These features will let you generate email responses, create custom emoji, summarize text, and more.

    Recent examples of how AI can get things wrong include last month’s incident with Google’s Gemini-powered AI overviews telling us to use glue to put cheese on pizza or a recent ChatGPT bug that caused it to spit out nonsensical answers.

    The voice assistant will turn to ChatGPT when it receives a question better suited for the chatbot, but it will ask for your permission before doing so.

    In the demo of the feature shown during WWDC, you can see a disclaimer at the bottom of the answer that reads, “Check important info for mistakes.”


    The original article contains 334 words, the summary contains 153 words. Saved 54%. I’m a bot and I’m open source!

  • Brickardo@feddit.nl

    That’s what comes of not really understanding what you’re doing. Most of the AI models I work with are state of the art simply because they happen to work.

    In my case, when I solve a PDE using finite-difference schemes, there are precise mathematical conditions that tell you whether the method is going to be stable or not. When I do the same using AI, I can’t tell whether my method will work unless I run it. Moreover, I’ve had it sometimes fail and sometimes succeed.

    It’s just the way it is for now.
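
    For example, for the explicit finite-difference scheme on the 1D heat equation u_t = α·u_xx, stability is guaranteed exactly when α·Δt/Δx² ≤ 1/2, so you can check it before computing anything:

        # Von Neumann stability check for the explicit 1D heat-equation scheme.
        alpha, dx, dt = 1.0, 0.01, 4e-5

        r = alpha * dt / dx**2
        print(f"r = {r:.2f}:", "stable" if r <= 0.5 else "unstable")
        # r = 0.40: stable. No such a-priori guarantee exists for the NN-based solver.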

    • DudeDudenson@lemmings.world

      I mean, companies worldwide just jumped on the AI bandwagon like a lot of people did with the NFT one, mostly because AI actually has solid use cases and can make a big difference in broad situations.

      But since people are just slapping AI on everything, it’s gonna end up being another fad to raise stock prices, like firing people was last year.

      Let’s just hope that when all of the hype blows over and the general public thinks of AI as the marketing buzzword that never quite worked right, we’ll keep AI in the things it’s actually useful for.

      • Brickardo@feddit.nl

        AI interest has come and gone before. Some decades ago, people would slap the AI label on expert systems. If we go further back, solving problems in the blocks world was called AI. This wave will eventually fade away, just like all the previous ones did.

  • Blackmist@feddit.uk

    Seeing these systems just make shit up when they’re not sure of the answer is probably the closest they’ll ever come to human behaviour.

    We’ve invented the virtual politician.

  • AdrianTheFrog@lemmy.world

    They can’t. AI has hallucinations, and Google has shown that AI can’t even rely on external sources to fix that.

    • FiniteBanjo@lemmy.today

      At least LLMs always will. The only real fix we’ve seen is running the output through additional specialized LLMs to try to massage out the errors, but that just increases cost and scale for marginally better results.
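
      The pattern, very roughly; llm() here is a hypothetical stand-in for whatever model API is being chained, and the checker is itself a model that can hallucinate:

          def llm(prompt):
              # Hypothetical stand-in for a model API call; each call adds cost and latency.
              return "draft answer"

          def answer(question):
              draft = llm(question)
              # A second "specialized" pass to massage out errors: the cost doubles,
              # with no guarantee the checker is right either.
              verdict = llm(f"List any factual errors in: {draft}")
              if "no errors" in verdict.lower():
                  return draft
              return llm(f"Rewrite this to fix the errors: {draft}")

          print(answer("Is glue a pizza topping?"))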

  • nieceandtows@programming.dev

    If Apple could stop AI hallucination, any other AI company could stop it too, and they would have done so already instead of letting AI look like a joke. AI hallucinations are the sort of phenomenon that nobody has control over. Why would Tim Cook have unique control over it?

    • cmbabul@lemmy.world

      Unless Apple becomes the first to figure out how; then they’d suddenly have a huge leg up on the rest. Which is kinda how Apple has made its bread for most of its successes in my lifetime.

      • nieceandtows@programming.dev

        Eh, I don’t think Apple’s gonna be the pioneer in AI. If anybody figures it out, it’ll be OpenAI first. Happy to be proven wrong, tho.

        • cmbabul@lemmy.world

          Oh, I’m not suggesting they will or are able to; I’m coming at it from a strategic standpoint.

      • 555@lemmy.world

        Yeah. When Apple says it’s coming into a market, they mean they have already perfected it.

        • Zorsith@lemmy.blahaj.zone

          (Or let other companies polish up a feature/concept for a few years, slap a coat of Space Gray on it, and release it as a revolutionary “new” Apple feature.)

  • Imgonnatrythis@sh.itjust.works

    I only trust moguls and political figures who are 100% sure of everything. I really like the confidence, and it makes me feel like they deserve big paychecks and special rights, because they must be so smart to have no room for doubt like the rest of us spineless imps. This guy is displaying weakness and should be shamed!

    I bet Tim Apple is going to fire his ass.

  • CosmoNova@lemmy.world

    I’m not exaggerating when I say there are only like a dozen true experts on generative AI on the planet, and even they’re not completely sure what’s going on in that black box. And as far as I’m aware, Tim Cook isn’t one of them. How would he know?

    • technocrit@lemmy.dbzer0.com

      These programs average massive amounts of data into one massive function. There’s no way a human could ever understand what’s going on inside that kind of function. Humans can’t hold millions of weights in their head and comprehend what they mean. If humans could do that, there would be no point in doing this kind of statistics with computers.
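
      For a sense of scale (toy arithmetic; real architectures differ):

          # Weight count of even a small fully-connected stack, ignoring biases.
          layer_sizes = [512, 2048, 2048, 512]
          weights = sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))
          print(f"{weights:,}")  # 6,291,456 weights; large LLMs hold billions more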