• davel@lemmy.ml
    17 hours ago

    It was stupid to have that in there in the first place, given that an absence of politics/political bias is impossible.

    For one thing: the politics of the LLM’s training data will be reflected in its output. And the vast majority of the available English-language corpus will reflect Western, imperial-core liberal politics.

    • pivot_root@lemmy.world
      13 hours ago

      And the vast majority of the English-language corpus available will reflect Western, imperial core liberal politics.

      Oh, I’m sure that isn’t going to be a problem for their goals. They can always overrepresent training data from 2016-2020 and 2024-2028 to add some balance to the model’s political compass. /s

  • Soulifix@kbin.melroy.org
    17 hours ago

    There goes what remains of their integrity.

    It just proves further and further that AI by itself is not a bad thing. It’s the intent that man pours into it that makes it a bad thing.

    Whatever man touches turns to rigor mortis.