• kaffiene@lemmy.world
    5 months ago

    No that’s not it at all. People know that they don’t know some things. LLMs do not.

    • sugar_in_your_tea@sh.itjust.works
      5 months ago

      Exactly, the LLM isn’t “thinking,” it’s just matching inputs to outputs with some randomness thrown in. If your data is high quality, a lot of the time the answers will be appropriate given the inputs. If your data is poor, it’ll output surprising things more often.

      It’s a really cool technology in how much we get for how little effort we put in, but it’s not “thinking” in any sense of the word. If you want it to “think,” you’ll need to put in a lot more effort.
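      As a rough illustration of "matching inputs to outputs with some randomness thrown in": text generators typically turn a score for each candidate next token into probabilities (softmax) and then sample, with a temperature knob controlling how much randomness gets thrown in. This is a toy sketch of that sampling step, not an actual LLM; the scores here are made up.

      ```python
      import math
      import random

      def sample_next_token(scores, temperature=1.0, rng=None):
          """Pick a token index from raw scores via temperature-scaled softmax.

          Higher temperature -> more randomness (more "surprising" outputs);
          lower temperature -> closer to always picking the best match.
          """
          rng = rng or random.Random()
          scaled = [s / temperature for s in scores]
          m = max(scaled)  # subtract the max for numerical stability
          exps = [math.exp(s - m) for s in scaled]
          total = sum(exps)
          probs = [e / total for e in exps]
          # Weighted random choice: this is the "randomness thrown in"
          r = rng.random()
          cum = 0.0
          for i, p in enumerate(probs):
              cum += p
              if r < cum:
                  return i
          return len(probs) - 1

      # Toy scores for three candidate next tokens (hypothetical numbers)
      scores = [2.0, 1.0, 0.1]
      counts = [0, 0, 0]
      rng = random.Random(0)
      for _ in range(1000):
          counts[sample_next_token(scores, temperature=1.0, rng=rng)] += 1
      print(counts)  # highest-scoring token wins most often, but not always
      ```

      Cranking the temperature up flattens the probabilities (more surprising picks); turning it down makes the top match dominate, which is the "high quality data gives appropriate answers most of the time" behavior described above.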

      • Richard@lemmy.world
        5 months ago

        Your brain is also “just” matching inputs to outputs, using complex statistics, a huge number of interconnections, and clever mixed analog-digital ionic circuitry.