Human brains also process audio, video, self-learning, feelings, and much more that is definitely not statistical text. There are even people without an “inner monologue” who function just fine.
Some research does use LLMs in combination with other AI to get better results overall, but a pure LLM isn’t going to work.
Yep, of course. We do more things. But language is a big thing in human intelligence and consciousness.
I don’t know, and I would assume nobody really knows. But I have a feeling that people without an internal monologue do have one and are just not aware of it. Or maybe they talk so much that all their monologue is external.
Interesting that you focus on language, because that’s exactly what LLMs cannot understand. There’s no LLM that actually has a concept of the meaning of words. Here’s an excellent essay illustrating my point.
The fundamental problem is that deep learning ignores a core finding of cognitive science: sophisticated use of language relies upon world models and abstract representations. Systems like LLMs, which train on text-only data and use statistical learning to predict words, cannot understand language for two key reasons: first, even with vast scale, their training and data do not have the required information; and second, LLMs lack the world-modeling and symbolic reasoning systems that underpin the most important aspects of human language.
The data that LLMs rely upon has a fundamental problem: it is entirely linguistic. All LLMs receive are streams of symbols detached from their referents, and all they can do is find predictive patterns in those streams. But critically, understanding language requires having a grasp of the situation in the external world, representing other agents with their emotions and motivations, and connecting all of these factors to syntactic structures and semantic terms. Since LLMs rely solely on text data that is not grounded in any external or extra-linguistic representation, the models are stuck within the system of language, and thus cannot understand it. This is the symbol grounding problem: with access to just a formal symbol system, one cannot figure out what these symbols are connected to outside the system (Harnad, 1990). Syntax alone is not enough to infer semantics. Training on just the form of language can allow LLMs to leverage artifacts in the data, but “cannot in principle lead to the learning of meaning” (Bender & Koller, 2020). Without any extralinguistic grounding, LLMs will inevitably misuse words, fail to pick up communicative intents, and misunderstand language.
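The symbol grounding point can be made concrete with a toy sketch (the corpus, window size, and similarity measure here are all made up for illustration): from text alone, two words that occur in similar contexts look almost interchangeable, yet nothing in the program ever connects either word to an actual colour.

```python
# Purely distributional "meaning": from text alone we can learn that two words
# occur in similar contexts, but nothing here ever links either word to a
# referent in the world. Toy corpus and window size are invented.
from collections import Counter

corpus = ("the sky is blue today . the sea is blue here . "
          "the grass is green today . the leaf is green here .").split()

def context_counts(word, window=2):
    """Count the words appearing within `window` positions of `word`."""
    counts = Counter()
    for i, w in enumerate(corpus):
        if w == word:
            for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
                if j != i:
                    counts[corpus[j]] += 1
    return counts

def overlap(a, b):
    """Crude similarity: shared context counts over total context counts."""
    ca, cb = context_counts(a), context_counts(b)
    return sum((ca & cb).values()) / max(1, sum((ca | cb).values()))

# "blue" and "green" look near-identical from inside the text...
print(overlap("blue", "green"))   # high
print(overlap("blue", "today"))   # lower
```

From inside the symbol system, “blue” and “green” are simply tokens with similar neighbours; whether they refer to colours, sounds, or nothing at all is invisible to the model.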
But these “concepts” of things are built through the relation and iteration of those concepts within our brain.
A baby isn’t born knowing that a table is a table. But they see a table, their parents say the word “table”, and they end up imprinting that what they have to say when they see that thing is the word “table”, which they can then relate to other things they know. I’ve watched some kids grow and learn to talk lately, and it’s pretty evident how repetition precedes understanding. Many kids will just repeat words their parents said in a certain situation whenever they happen to be in the same situation. It’s pretty obvious with small kids, but it’s a behavior you can also see a lot in adults: just repeating something they heard once they see that those particular words fit the context.
Also, it’s interesting that language can actually influence the way concepts are constructed in the brain. For instance, the ancient Greeks saw blue and green as the same colour, because they only had one word for both.
I’m not sure if you’re disagreeing with the essay or not? But in any case, what you’re describing is in the same vein: simply repeating a word without knowing what it actually means in context is exactly what LLMs do. They can get pretty good at getting it right most of the time, but without actually being able to learn the concept and context of “table” they will never be able to use it correctly 100% of the time, or, even more importantly for AGI, apply reason and critical thinking. Much like a child repeating a word without much clue what it actually means.
Just for fun, this is what Gemini has to say:
Here’s a breakdown of why this “parrot-like” behavior hinders true AI:
Lack of Conceptual Grounding: LLMs excel at statistical associations. They learn to predict the next word in a sequence based on massive amounts of text data. However, this doesn’t translate to understanding the underlying meaning or implications of those words.
Limited Generalization: A child learning “table” can apply that knowledge to various scenarios – a dining table, a coffee table, a work table. LLMs struggle to generalize, often getting tripped up by subtle shifts in context or nuanced language.
Inability for Reasoning and Critical Thinking: True intelligence involves not just recognizing patterns but also applying logic, identifying cause and effect, and drawing inferences. LLMs, while impressive in their own right, fall short in these areas.
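The first point, predicting the next word from statistical associations, can be sketched in a few lines (the corpus is invented, and real LLMs use neural networks over subword tokens rather than bigram counts, but the “pattern without meaning” idea is the same):

```python
# A toy "predict the next word from statistics" model: count which word follows
# which in a tiny made-up corpus and always emit the most frequent successor.
# It produces plausible continuations with no notion of what the words mean.
from collections import Counter, defaultdict

corpus = ("the cat sat on the mat . the cat sat on the rug . "
          "the cat ate the fish .").split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the statistically most likely next word: pure pattern-matching."""
    return follows[word].most_common(1)[0][0]

print(predict("cat"))  # "sat" — frequent in the data, not understood
```

Nothing in `follows` encodes that a cat is an animal or that sitting is an action; the table only records which symbol tended to come next.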
I mostly agree with it. What I’m saying is that the understanding of words comes from the self-dialogue made of those same words. How many times does a baby have to repeat the word “mom” before they understand what a mother is? I think that without that prior repetition the more complex understanding is impossible; that human understanding of concepts, especially the more complex concepts that make us human, comes from our being able to have a dialogue with ourselves and with other humans.
But this dialogue starts out as a parrot. Non-intelligent animals with brains very similar to ours are parrots. Small children are parrots (as are even some adults). But it seems that after being a parrot for some time comes the ability to become a human. That parrot is needed, and it also keeps itself in our consciousness: if you don’t put a lot of effort into your thoughts and words, you’ll see that the parrot is there, that you just express the most appropriate answer for the situation given what you know.
The “understanding” of concepts seems like just a big, complex interconnection of neural-network-like outputs of different things (words, images, smells, sounds…). But language keeps feeling like the most important of those things for intelligent consciousness.
I have yet to read another article that another user posted, which explained why the jump from parrot to human is impossible in current AI architectures, but at a glance it seems valid. Still, that does not invalidate the idea of parrots being the genesis of humans; it just means a different architecture is needed, and not in the statistical-answer department: the article I was linked was more about the size and topology of the “brain”.
A baby doesn’t learn concepts by repeating words over and over, and certainly knows what a mother is before it has any label or language to articulate the concept. The label gets associated with the concept later, and not purely by parroting; indeed, excessive parroting normally indicates speech-development issues.
“language is a big thing in the human intelligence and consciousness.”
But an LLM isn’t actually language. It’s numbers that represent tokens that build words. It doesn’t have the concept of a table, just the numerical weightings of other tokens related to “tab” and “le”.
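That token point can be illustrated with a minimal greedy tokenizer over a made-up vocabulary (the ids and splits here are invented; real tokenizers such as BPE or WordPiece learn their vocabularies from data): to the model, “table” is just the integer ids of its pieces.

```python
# A minimal greedy subword tokenizer with an invented vocabulary: the model
# never sees the word "table", only integer ids for pieces of it.
vocab = {"tab": 0, "le": 1, "a": 2, "t": 3, "b": 4, "l": 5, "e": 6}

def tokenize(text):
    """Greedy longest-match split of `text` into known subword ids."""
    ids, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):   # try the longest piece first
            piece = text[i:j]
            if piece in vocab:
                ids.append(vocab[piece])
                i = j
                break
        else:
            raise ValueError(f"no token for {text[i]!r}")
    return ids

print(tokenize("table"))  # [0, 1] — "tab" + "le", just numbers to the model
```

Everything downstream of this step operates on those integers; the string “table” never appears inside the network at all.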
I don’t know how to tell you this. But your brain does not have words imprinted in it…
Funnily enough, how concepts form is something that is being studied through language. For instance, the ancient Greeks did not distinguish between green and blue, as both colours had the same word.
You said: “language is a big thing in human intelligence and consciousness.”
You also said: “your brain does not have words imprinted in it.”
You need to pick an argument and stick to it.
What do you not understand?
Words are not imprinted; they are a series of electrical impulses that we learn over time. This is in reference to the complaint that an LLM does not have words, only tokens that represent values within the network.
And those impulses, and how we generate them while we think, are of great importance to our consciousness.
So, are words important to the human brain or not? You are not consistent.