we love google (and LLMs)

  • dysprosium@lemmy.dbzer0.com
    5 months ago

What are you whining about? Hallucination is inherent to LLMs as they exist today, and nothing they output should be trusted with certainty. But using them still has more benefits than hiding them from everyone. Treat it as an unfinished project and ignore the results if you like. Seriously, it’s physically possible to just ignore the generative results.

    “Absolutely sued” my ass

    • xmunk@sh.itjust.works
      5 months ago

I absolutely agree. I consider LLM results “neat” but never trusted: if I think I should bake spaghetti squash at 350°F I might ask an LLM, and I’ll only go looking for real advice if its suggested temperature differs from mine.

But some people have wholly bought into the “it’s a magic knowledge box” bullshit. You’ll see opinions here on lemmy that generative AI can make novel creations that indicate true creativity… and you’ll see C-level folks, champing at the bit to downsize call centers, claiming that LLMs can replace customer service wholesale. Companies need to be careful about deceiving these users, and those who feed into the mysticism really need to be stopped.