Pro@programming.dev to Technology@lemmy.world · English · 13 days ago
AI Chatbots Remain Overconfident — Even When They’re Wrong: Large Language Models appear to be unaware of their own mistakes, prompting concerns about common uses for AI chatbots. (www.cmu.edu)
Passerby6497@lemmy.world · English · 12 days ago
Ah, well then, if he tells the bot to not hallucinate and to validate output, there’s no reason not to trust the output. After all, you told the bot not to, and we all know that self-regulation works without issue all of the time.
jj4211@lemmy.world · English · 12 days ago
It gave me flashbacks to when the Replit guy complained that the LLM deleted his data despite being told, in all caps and multiple times, not to. People really, really don’t understand how these things work…