I have many conversations with people about Large Language Models like ChatGPT and Copilot. The idea that “it makes convincing sentences, but it doesn’t know what it’s talking about” is a difficult concept to convey or wrap your head around, precisely because the sentences are so convincing.
Any good examples on how to explain this in simple terms?
As a nerd playing with various AI models at home with no formal training, is there any wisdom you think is worth sharing?
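One concrete way I've seen this explained is with a toy bigram model: pick each next word purely from counts of which words followed it in some training text. This is a deliberately crude sketch, not how real LLMs work (they use neural networks over vast corpora), but the core move is the same: predict a plausible next token from statistics, with no model of meaning at all.

```python
import random
from collections import defaultdict

# Toy "training data": the model will only ever see word sequences,
# never meanings.
corpus = (
    "the model predicts the next word the model has no idea "
    "what the words mean the sentences still look fluent"
).split()

# For each word, record every word that followed it in the corpus.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length, seed=0):
    """Generate text by repeatedly sampling a statistically
    plausible next word. No understanding is involved anywhere."""
    rng = random.Random(seed)
    word, out = start, [start]
    for _ in range(length - 1):
        options = follows.get(word)
        if not options:
            break
        word = rng.choice(options)
        out.append(word)
    return " ".join(out)

print(generate("the", 10))
```

The output is locally fluent because every two-word window really did occur in the training text, yet the program stores nothing about what any word refers to. Scaling that idea up by many orders of magnitude is a rough intuition for why LLM output can sound convincing while the system has no grounded knowledge of the subject.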
The only winning move is not to play.
But my therapist said she needs more VRAM.