• zout@fedia.io · 5 months ago

    LLMs may not have any intent, but companies do. In this case, Google decides to present the AI answer on top of the regular search results, knowing that AI can make stuff up. Maybe the AI isn’t lying, but Google definitely is. Even the “everything is experimental, learn more” line rings hollow: if they really wanted you to learn more, they’d present the information directly instead of making you click again for it.

    • Snot Flickerman@lemmy.blahaj.zone · 5 months ago

      In other words, I agree with your assessment here. The petty, abject attempts by all these companies to produce the world’s first real “Jarvis” are all couched in “they didn’t stop to think if they should.”

      • zout@fedia.io · 5 months ago

        My actual opinion is that they don’t want to think if they should, because they already know the answer. The pressure to go public with a shitty model outweighs the responsibility to the people relying on the search results.

        • Snot Flickerman@lemmy.blahaj.zone · 5 months ago

          It is difficult to get a man to understand something when his salary depends on his not understanding it.

          - Upton Sinclair

          Sadly, same as it ever was. You are correct: they already know the answer, so they don’t want to consider the question.

          • dohpaz42@lemmy.world · 5 months ago

            There’s also the argument that “if we don’t do it, somebody else will,” which I kind of understand, even though I disagree with it.