• jj4211@lemmy.world
    11 days ago

    I kid you not, early on (mid 2023) some guy mentioned using ChatGPT for his work and not even checking the output (he was in some sort of non-techie field that was still in the wheelhouse of text generation). I pointed out that LLMs can include some glaring mistakes, and he said he’d fixed that by always including in his prompt: “Do not hallucinate content and verify all data is actually correct.”

    • Passerby6497@lemmy.world
      11 days ago

      Ah, well then, if he tells the bot not to hallucinate and to validate its output, there’s no reason not to trust it. After all, you told the bot not to, and we all know that self-regulation works without issue all of the time.

      • jj4211@lemmy.world
        11 days ago

        It gave me flashbacks to when the Replit guy complained that the LLM deleted his data despite being told, in all caps, multiple times not to.

        People really really don’t understand how these things work…