Meta “programmed it to simply not answer questions,” but it did anyway.

  • catloaf@lemm.ee · 3 months ago

    Kaplan noted that AI chatbots “are not always reliable when it comes to breaking news or returning information in real time,” because “the responses generated by large language models that power these chatbots are based on the data on which they were trained, which can at times understandably create some issues when AI is asked about rapidly developing real-time topics that occur after they were trained.”

    If you’re expecting a glorified autocomplete to know about things it doesn’t have in its training data, you’re an idiot.

    • Catoblepas@lemmy.blahaj.zone · 3 months ago

      There are definitely idiots, but these idiots don’t get their ideas of how the world works out of thin air. These AI chatbot companies push hard, in their advertising, the cartoon reality that this is a smart robot that knows things, and to learn otherwise you have to either listen to smart people or read a lot of text.

    • brucethemoose@lemmy.world · 3 months ago

      Some services will use glorified RAG to put more current info in the context (roughly the kind of thing sketched below).

      But yeah, if it’s just the raw model, I’m not sure what they were expecting.
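      A minimal sketch of what that can look like, assuming a toy keyword retriever and a placeholder generate() call; a real service would use a search index or embeddings plus an actual model API:

```python
# Minimal sketch of the "glorified RAG" idea: retrieve some recent text,
# then put it into the prompt so the model isn't limited to whatever was
# in its training data. The retriever is a toy keyword match and
# generate() is a placeholder for a real LLM API call.

recent_docs = [
    "Team A won the championship final last night.",
    "Framework B released version 2.0 this week.",
]

def retrieve(query: str, docs: list[str]) -> list[str]:
    """Return docs sharing at least one keyword with the query (toy retriever)."""
    words = set(query.lower().split())
    return [d for d in docs if words & set(d.lower().split())]

def build_prompt(query: str) -> str:
    # Prepend the retrieved, current documents to the question.
    context = "\n".join(retrieve(query, recent_docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

def generate(prompt: str) -> str:
    # Placeholder: a real service would send this prompt to its model here.
    return "[model response conditioned on]\n" + prompt

print(generate(build_prompt("Who won the championship final?")))
```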

    • Empricorn@feddit.nl · 3 months ago

      Yeah, the average person is the idiot here, over something they never asked for and see no value in. Companies threw billions of dollars at this emerging technology. Many products, like Google Search, now have hallucinating, error-prone AI forced into the main experience, with no way to opt out or use the (working) legacy version…

      • doodledup@lemmy.world · 3 months ago (edited)

        Nobody is forcing you to use it.

        I’m using it and I see great value in it. And if there are people who see value in a product, then it’s worth the investment.

        • jeeva@lemmy.world · 3 months ago

          Yes, people are being forced to use it if they want to, for instance, search with Google or Bing.

          As the parent comment suggested, there’s currently no way to opt out.

          I’m glad you see value in it; I think injecting LLM output into search results, which I want to contain accurate results (and nothing more), is a useless waste of power.

          • doodledup@lemmy.world · 3 months ago

            Injecting that into search results is a bad thing; I’m with you on that. Try DuckDuckGo. They use Bing but don’t insert all of that AI crap. The results are much more vanilla. It’s actually easier to find stuff because it’s not as cluttered.

        • trollbearpig@lemmy.world · 3 months ago

          I always ask the people defending AI, or rather LLMs, what the great value is that they all mention in their comments. So far the “best” answer I’ve gotten was one dude using LLMs to extract info from decades-old reports that no one has checked in 20 years hahaha. So glad we are allowing LLMs to destroy the environment and plagiarize all creative work for that lol.

          So, what is the great value you see, man?

          • doodledup@lemmy.world · 3 months ago

            It was never made for information retrieval. It’s made for high-level reasoning and language understanding. That is where it shines. You completely misunderstand what this is all about. You’re trying to use a car to paint a wall.

            • trollbearpig@lemmy.world · 3 months ago

              Me? I’m not using LLMs at all hahaha. I’m asking you, who says they have great value, to provide examples of their uses. I just provided pretty much the only one I have heard, which some random dude told me in a different thread. Everyone else, like you, just keeps it abstract and just bullshits and bullshits hahaha.

              • doodledup@lemmy.world · 3 months ago

                Great use is subjective. But I use them to better understand university lectures. I can have a real discussion, ask questions, ask for examples, and so on. I’ve had countless situations where web searches wouldn’t have helped me, because the resources can’t do the reasoning needed to explain intuitions. I’m also using it for coding. It’s awesome for boilerplate code. I also sometimes ask it to improve my existing code, so I can learn new coding practices and tricks from that (a rough sketch of that kind of prompt is below).

                None of these applications require the LLM to be correct 100% of the time. It’s still great value for me. And when I suspect that it’s wrong about something, or that it’s hallucinating or bad at explaining something, I’ll just do some web searches for validation.

                You might not find it useful because you’re using it wrong, or simply because you have no application for the value it can provide. But that doesn’t mean it’s all bad. OP certainly doesn’t know how to use it. I would never even think about asking it about historical events.
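                For the coding part, here is a rough sketch of that kind of “improve my code” prompt, assuming the OpenAI Python client as the backend; the model name and the example snippet are purely illustrative:

```python
# Rough sketch of the "improve my code" workflow described above,
# using the OpenAI Python client; model and snippet are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

snippet = '''
def get_evens(nums):
    evens = []
    for n in nums:
        if n % 2 == 0:
            evens.append(n)
    return evens
'''

prompt = (
    "Review this Python function and suggest cleaner, more idiomatic "
    "alternatives, explaining each change:\n" + snippet
)

response = client.chat.completions.create(
    model="gpt-4o",  # any capable chat model works here
    messages=[{"role": "user", "content": prompt}],
)

# Treat the output as a suggestion to verify, not ground truth.
print(response.choices[0].message.content)
```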

                • trollbearpig@lemmy.world · 3 months ago

                  So literally you use it for information retrieval hahahaha. I did use Copilot, codium, and the JetBrains one for a bit, but I had to disable each one; the amount of wrong code simply doesn’t justify the little boilerplate it generates.

                  • doodledup@lemmy.world · 3 months ago (edited)

                    That’s not information retrieval. There is a difference between asking it about historical events and asking it to come up with its own stuff based on reasoning. I know that it can be wrong about factual questions, and I embrace that. OP and many others don’t understand that and think it’s a problem when the AI gives a wrong answer to a specific question. You’re simply using it wrong.

                    It’s been a while since ChatGPT4 has spit out non-working bullshit code for me. And if it does, I immediately notice, and it’s still a time-saver because there is at least something I can take from every response, even a wrong one. I’m using it as intended, and I see value in it. So keep convincing yourself it’s terrible, but stop being annoying about it when others disagree.