Yeah, I never get these strange AI results.
Except, the other day I wanted to convert some units and the AI result was having a fucking stroke for some reason. The numbers did not make sense at all. Never seen it do that before, but alas, I did not take a screenshot.
deleted by creator
What do humans do? Does the human brain have different sections for language processing and arithmetic?
LLMs don’t verify that their output is true. Math is something where verifying the truth is easy. Ask an LLM how many Rs are in “strawberry” and it’s plain to see whether the answer is correct. Ask an LLM for a summary of Colombian history and it’s not as apparent. Ask an LLM for a poem about a tomato and there really isn’t a wrong answer.
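That’s the whole point about verifiability: for these kinds of questions the ground truth is trivial to compute outside the model. A rough Python sketch of what “easy to verify” means here (the word and the conversion are just example values I picked, not anything from the model):

```python
# Ground truth for the "how many Rs in strawberry" question:
# a plain string count, no model involved.
word = "strawberry"
print(word.count("r"))  # prints 3

# Same idea for unit conversion: the correct answer is plain arithmetic.
km = 42.0
miles = km / 1.609344  # 1 mile is defined as exactly 1.609344 km
print(f"{km} km = {miles:.3f} miles")  # ~26.098 miles
```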
Meanwhile, GNU Units can do that, reliably and consistently, on a freaking 486. 😂