• kromem@lemmy.world
    6 months ago

    I always love watching you comment something that’s literally true regarding LLMs but against the groupthink and get downvoted to hell.

    Clearly people aren’t aware that the pretraining pass necessarily regresses toward the mean of the training data, and that steering the model toward excellent outputs requires biasing it with either prompt context or a fine-tuning pass.
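    The idea can be sketched with a toy analogy (purely illustrative, not how real pretraining works — the corpus strings and reward scores below are made up for the example):

```python
from collections import Counter

# Toy "corpus": mostly mediocre text, a little excellent text.
corpus = ["meh"] * 80 + ["okay"] * 15 + ["excellent"] * 5

def mle_dist(examples):
    # Maximum-likelihood frequencies, loosely analogous to a pretraining
    # pass: the model just reproduces the statistics of its corpus.
    counts = Counter(examples)
    total = sum(counts.values())
    return {tok: n / total for tok, n in counts.items()}

base = mle_dist(corpus)
# Unconditioned, the most likely output is the corpus average.
print(max(base, key=base.get))  # meh

# A fine-tuning / reward signal (hypothetical scores) reweights the
# distribution toward the behaviour we actually want.
reward = {"meh": 0.0, "okay": 0.2, "excellent": 1.0}
biased = {tok: p * reward[tok] for tok, p in base.items()}
print(max(biased, key=biased.get))  # excellent
```

    Prompt context does something loosely similar at inference time: it conditions the model on a region of its distribution, rather than sampling from the unconditioned average.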

    There’s a bit of irony in humans shitting on ChatGPT for spouting nonsense when so many people online happily spout BS about things they think they know but don’t.

    Of course a language model trained on the Internet ends up being confidently incorrect. It’s just a mirror of human tendencies.

    • FaceDeer@fedia.io
      6 months ago

      Yeah, these AIs are literally trying to give us whatever they “think” we expect them to respond with.

      Which does make me a little worried, given how frequently our fictional AIs end up in “kill all humans!” mode. :)