I think it does accurately model the part of the brain that forms predictions from observations—including predictions about what people are going to say next, which lets us focus on the surprising/informative parts IRL. But with an LLM, the sampling loop just keeps feeding the model its own output back, as if it were an external agent whose next move it's trying to predict.
It’s like a child with an imaginary friend, if you keep asking “What did she say after that?”
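For what it's worth, that feedback loop is just autoregressive decoding. Here's a minimal sketch of the idea, where `predict_next` is a hypothetical stand-in for a real model (a real LLM would score a whole vocabulary conditioned on the context):

```python
import random

def predict_next(context: list[str]) -> str:
    # Hypothetical toy predictor standing in for an LLM; it ignores
    # the context and samples from a tiny fixed vocabulary.
    return random.choice(["the", "friend", "said", "hello", "."])

def generate(prompt: list[str], n_tokens: int) -> list[str]:
    context = list(prompt)
    for _ in range(n_tokens):
        token = predict_next(context)  # predict, as if observing someone else
        context.append(token)          # ...then feed the prediction back in
    return context

print(" ".join(generate(["what", "did", "she", "say"], 8)))
```

The point of the sketch is that nothing outside the loop ever answers back: every "observation" after the prompt is the model's own prior guess.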