• Buffalox@lemmy.world
    1 day ago

    Outputting false information

    I understand what you mean, but technically that is lying, and I somewhat disagree, because I think it’s easier for people to stay aware of AI “lying” than of it “outputting false information”.

    • reksas@sopuli.xyz
      1 day ago

      Well, I guess it’s just a little thing and doesn’t ultimately matter. But little things add up.

    • vortic@lemmy.world
      1 day ago

      I think the disagreement here is semantic, centering on the meaning of the word “lie”. A “lie” commonly carries an element of intent. An LLM can’t be said to have intent: it isn’t conscious and, therefore, cannot have intent. The developers may have intent and may have adjusted the LLM to output false information on certain topics, but the LLM itself isn’t making any decision and has no intent.

      • Buffalox@lemmy.world
        1 day ago

        IMO parroting others’ lies without critical thinking is also lying.

        For instance, if you print lies in an article, the article is lying. And not only the article: if the article runs in a paper, the paper is also lying.
        Even if the AI is merely a medium, the medium is lying, no matter who originated the lie.

        We can debate afterwards how serious it is and who made up the lie, but a lie remains a lie no matter what or who repeats it.