The LLM shills have made “AI” refer exclusively to LLMs. Honestly the best ad campaign ever.
Yes, and it’s still wrong. Most people here are falling into the same hole. I’m not here to promote, support, or side with LLMs or gen-AI; I want to correct what is wrong. You can hate something, but please be objective and rational.
Language is descriptive, not prescriptive.
If people use the term “AI” to refer to LLMs, then it’s correct by definition.
Not really, since “AI” is a pre-existing and MUCH more general term which has been intentionally commandeered by bad actors to mean a particular type of AI.
AI remains a broader field of study.
I completely agree. Using AI to refer specifically to LLMs does reflect the influence of marketing from companies that may not fully represent the broader field of artificial intelligence. Ironically, those who oppose LLM usage might end up sounding like the very bad actors they criticize if they also use the same misleading terms.
This hype cycle is insane, and the gross psychology of the hype obscures the real usefulness of LLMs.
As a non-native English speaker, DeepL is useful for my local community (and for me). It comes down to how the tech is implemented. I’m still keeping an open mind, but yeah, the extensive resource usage is bad for the earth; I wish there were more optimization.
I don’t get to decide if the marketing terms used by the companies I hate end up becoming the common terms.
If I stubbornly refuse to use the common terms and instead only use the technical terms, then I’m only limiting the reach of my message.
OpenAI’s marketing has successfully made LLMs one of the definitions of the term AI, and made “AI” the most common term used to refer to the tech in public spaces.
It doesn’t matter what you want; I’m just describing how language works.
If everyone says a word means a thing, then it means that thing. Words can have multiple meanings.
AI remains a broader, active field of study that tons of people are invested in, and they use “AI” to refer to that broader field in which they’re professionally invested.
No, you’re not. And you’re not as smart as you think you are.
It’s not literally everybody, and you know it, and you also know that LLMs are not the entire actual category of AI.
That is beyond pedantry.
That is how language works. Word definitions are literally just informal consensus agreement. Dictionaries are just descriptions of observed usage. Not literally everyone needs to agree on it.
This isn’t some kind of independent conclusion I came to on my own; I used to think like you appear to, but then I watched some explanations from authors and from professional linguists, and they changed my mind about language prescriptivism.
If you say “AI” in most contexts, more people will know what you mean than if you say “LLM”. If your goal is communication, then by that measure “AI” is “more correct” (but again, correctness isn’t even applicable here).
People still know what LLMs are, and they know that it’s a subset of AI. If the internet is swamped with bots actively trying to set linguistic habits for marketing reasons, you’re not required to perpetuate and validate that.
Shills and goons are trying to make “AI” refer to LLMs specifically. It’s an ad campaign. You’re not getting paid to perpetuate this stupidity.
It’s partially correct, but “AI” doesn’t always mean LLM. Etymology is important here. Don’t normalize illiteracy.
This is how etymology works.
Do you think all the words we use today meant exactly the same thing 300 years ago?
No, people used them “incorrectly”, that usage gained popularity, and that popularity made the new usage correct.
What you call illiteracy is literally how etymology works.
Just to clarify, do you personally agree that LLMs are a subset of AI, with AI being the broader category that includes other technologies beyond LLMs?
I come from a technical background and have worked in AI to help people and small businesses, whether with farming, business decisions, or other needs. I can’t agree with the view that AI is inherently bad; it’s a valuable tool for many. What’s causing confusion is that ‘AI’ is often used to mean LLMs, which is inaccurate from a technical perspective. My goal is simply to encourage precise language to avoid misunderstandings.

People often misuse words in ways that stray far from their original etymology. For example, in Indonesia we use the word ‘literally’ as it’s meant: in a literal sense, not figuratively, as it’s often misused in English nowadays. The word ‘literally’ in Indonesian would be translated as ‘secara harfiah’, and when used, it means exactly as stated. Just like ‘literally’, words should stay connected to their roots, whether Latin, Greek, or otherwise, as their original meanings give them their true value and purpose.
Depending on context, jargon and terminology change.
In this context, I’d agree that LLMs are one technology under the umbrella term “AI”. But in common English discourse, “LLM” and “AI” are often used interchangeably. That’s not wrong, because correctness is defined by the actual usage of the language’s native speakers.
I also come from a tech background. I’m a developer with 15 years of experience, I work for a large company, and my job is currently integrating LLMs and more traditional ML models into our products, because our shareholders think we need to.
Specificity is useful in technical contexts, but in these public contexts, almost everyone knows what we’re talking about, so the way we’re using language is fine.
You know it’s bad when someone with my username thinks you’re being too pedantic lol. Don’t be a language prescriptivist.
If people use [slur] to refer to [demographic], that does not make it correct by definition.
Linguistically correct, and morally correct, are not the same thing.
So are you saying that a slur (for Black people, for example) is linguistically “correct by definition”? And that it actually describes members of the demographic?