I have many conversations with people about Large Language Models like ChatGPT and Copilot. The idea that “it makes convincing sentences, but it doesn’t know what it’s talking about” is a difficult concept to convey or wrap your head around, precisely because the sentences are so convincing.

Any good examples on how to explain this in simple terms?

  • k110111@feddit.de
    6 months ago

    It’s like saying an OS is just a bunch of if-then-else statements. While that’s true in a sense, in practice it’s far, far more complicated.
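    In the same spirit, the “convincing sentences without understanding” point can be illustrated with a toy next-word predictor. This is only a sketch with a made-up mini-corpus: real LLMs use a neural network over billions of documents, not a lookup table, but the basic idea of “pick a plausible next word” is the same.

    ```python
    import random

    # A hypothetical tiny training corpus (illustration only).
    corpus = (
        "the cat sat on the mat . the dog sat on the rug . "
        "the cat chased the dog . the dog chased the cat ."
    ).split()

    # Build a table: word -> list of words that followed it in the corpus.
    follows = {}
    for word, nxt in zip(corpus, corpus[1:]):
        follows.setdefault(word, []).append(nxt)

    def generate(start="the", n=12, seed=0):
        """Emit n more words, each chosen only from words that
        followed the previous one in the corpus. The output is
        locally fluent, but nothing here 'knows' what a cat is."""
        random.seed(seed)
        out = [start]
        for _ in range(n):
            out.append(random.choice(follows[out[-1]]))
        return " ".join(out)

    print(generate())
    ```

    Every pair of adjacent words in the output occurred somewhere in the corpus, so it reads plausibly, yet the program has no model of meaning at all. Scale that idea up enormously and you get text that is convincing for much longer stretches.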