• drislands@lemmy.world
    8 months ago

    …because they frequently do? Glaring errors are like, the main thing LLMs produce besides hype.

    • KairuByte@lemmy.dbzer0.com
      8 months ago

      They make glaring errors in logic and confidently state things that are not true. But their whole “deal” is writing proper sentences based on predictive models. They don’t make mistakes like the excerpt you highlighted.

      • drislands@lemmy.world
        8 months ago

        Y’know what, that’s a fair point. Though I’m not the original commenter from the top, heh.

      • Garbanzo@lemmy.world
        8 months ago

        I’m imagining that the first output didn’t cover everything they wanted, so they tweaked it, pasted the results together, and fucked it up.