• set_secret@lemmy.world · 6 months ago

    OK, very long and detailed response. I was responding to the initial comments that explicitly said that if you give AI a made-up thing, it will definitely hallucinate, which I demonstrated to be false multiple times. I'm not suggesting it doesn't still hallucinate a lot of the time, but those comments made it out to be 100% broken, and it clearly works very effectively for many queries, despite its limited applications. I'm just suggesting we don't throw the baby out with the bathwater.

    • tonarinokanasan@lemmy.sdf.org · 6 months ago (edited)

      I think the trouble is: what baby are we throwing out with the bathwater in this case? We can't prevent LLMs from hallucinating (though we can mitigate it somewhat with carefully constructed prompts). So use cases where we're okay with that are fair game, but any use case (or, in this case, law?) that requires the LLM to never hallucinate isn't attainable. And to get back to the earlier point, this particular problem has nothing to do with capitalism.
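
      As one concrete illustration of what "carefully constructed prompts" can mean, here is a minimal sketch of the common grounding-plus-refusal pattern; the function name, prompt wording, and example question are assumptions for illustration, not something from this thread, and the technique reduces rather than eliminates hallucination:

```python
# A sketch of a "grounding plus explicit refusal" prompt pattern, one common
# way to reduce (not eliminate) hallucination. The function name and prompt
# wording are illustrative assumptions, not a specific library's API.

def build_grounded_prompt(question: str, context: str) -> list[dict]:
    """Return chat-style messages that discourage unsupported answers."""
    system = (
        "Answer ONLY from the context provided by the user. "
        "If the context does not contain the answer, reply exactly: "
        "'I don't know.' Do not guess."
    )
    user = f"Context:\n{context}\n\nQuestion: {question}"
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]


if __name__ == "__main__":
    # Hypothetical usage: these messages would be passed to whatever
    # chat-completion client is in use.
    messages = build_grounded_prompt(
        question="What year was the (made-up) Foobar Act passed?",
        context="(retrieved documents would be pasted here)",
    )
    for m in messages:
        print(f"{m['role'].upper()}: {m['content']}\n")
```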