You know how Google’s new AI Overviews feature is prone to spitting out wildly incorrect answers to search queries? In one instance, AI Overviews told a user to use glue on pizza to make sure the cheese won’t slide off (pssst… please don’t do this).
Well, according to an interview at The Verge with Google CEO Sundar Pichai, published earlier this week just before criticism of the outputs really took off, these “hallucinations” are an “inherent feature” of AI large language models (LLMs), the technology that drives AI Overviews, and this feature “is still an unsolved problem.”
I think we should stop calling things AI unless they actually have their own intelligence independent of human knowledge and training.
Isn’t there already the term AGI for that?
Yes, and the researchers I know doing stuff with AI find the idea of AGI laughable.
But we aren’t intelligent without human training, either…
Never been tested due to ethical constraints
There have been very unethical experiments
Sure, but this one hasn’t been done, and if you walk up to a researcher and ask “y no lock bby in white box” they will tell you to leave, and might even call the cops if you seem particularly determined.
Kind of has been, not in a scientific manner, but there’s the whole phenomenon of “feral human”.
There are examples of children raised by other animals.