You need to understand that Lemmy has a lot of users who actually understand neural networks and the nuanced mechanics of machine learning FAR better than the average layperson.
As someone on Lemmy I have to disagree. A lot of people claim they do and pretend they do, but they generally don’t. They’re like AI, tbh: confidently incorrect a lot of the time.
People frequently act like Lemmy users are different to Reddit users, but that really isn’t the case. People act the same here as they did/do there.
That’s a huge, arrogant and quite insulting statement. Your making assumptions based on stereotypes.
I’m pushing back on someone who’s themselves being dismissive and arrogant.
No. You’re mad at someone who isn’t buying that AIs are anything but a cool parlor trick that isn’t ready for prime time.
Because that’s all I’m saying. They are wrong more often than right. They do not complete the tasks given to them, and they really are garbage.
Now this is all regarding the publicly available AIs. Whatever new secret voodoo one thinks a company or the military has, I can’t speak to.
*you’re
You’re just as bad.
Let’s focus on a spell check issue.
That’s why we have Trump.
And A LOT of people who don’t and blindly hate AI because of posts like this.
It’s just semantics in this case. Catloaf’s argument is entirely centered around the definition of the word “lie,” and while I agree with that, most people will understand the intent behind the usage in the context it is being used in. AI does not tell the truth. AI is not necessarily accurate. AI “lies.”
The language we use is quite important here because if we as a society value truth as a goal, the general public needs to be made aware that these systems are truth-agnostic and that any truthfulness is merely a byproduct of stringing related tokens together. There is a word in the philosophical literature for assertions that don’t have any regard for the truth: bullshit. If this more precise language were widespread in regard to AI, we might prevent future pollution of the truth as these systems become more widespread.
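To make the "stringing related tokens together" point concrete, here is a deliberately tiny toy sketch (a bigram sampler, nothing like a real LLM in scale): it picks each next word purely by how often that word followed the previous one in its training text, with no notion of whether the resulting sentence is true. The corpus, words, and function names below are all invented for illustration.

```python
import random
from collections import defaultdict

# A toy corpus. The model has no notion of truth: it learns the false
# statement exactly the same way it learns the true ones.
corpus = [
    "the sky is blue",
    "the grass is green",
    "the moon is cheese",  # false, but statistically just another sentence
]

# Build bigram counts: for each word, how often each next word followed it.
bigrams = defaultdict(lambda: defaultdict(int))
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        bigrams[a][b] += 1

def next_token(word, rng):
    """Sample the next word in proportion to how often it followed `word`."""
    followers = bigrams[word]
    choices, weights = zip(*followers.items())
    return rng.choices(choices, weights=weights)[0]

rng = random.Random(0)
# Generate a four-word completion starting from "the". The sampler only
# maximises plausibility under the counts; any truthfulness of the output
# (e.g. "the sky is blue" vs "the sky is cheese") is pure coincidence.
out = ["the"]
for _ in range(3):
    out.append(next_token(out[-1], rng))
print(" ".join(out))
```

Every sentence this produces is perfectly "plausible" under the training statistics, which is exactly why "bullshit" (indifference to truth) describes the failure mode better than "lying" does.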
AI doesn’t lie, it just gets things wrong but presents them as correct with confidence - like most people.
AI returns incorrect results.
In this case semantics matter because using terms like hallucinations, lies, honesty, and all the other anthropomorphic bullshit is designed to make people think neural networks are far more advanced than they actually are.
No. It’s to make people who don’t understand LLMs cautious about placing their trust in them. To communicate that clearly, language that is understandable to people who don’t understand LLMs needs to be used.
I can’t believe this is the supposed high level of discourse on Lemmy.
Lemmy users and AI have a lot of things in common, like being confidently incorrect and making things up to further their point. AI at least agrees and apologises when you point out that it’s wrong, it doesn’t double down and cry to the mods to get you banned.
I know. It would be a much better world if AI apologists could just admit when they are wrong.
But nah, they’re better than others.
It’s not “anthropomorphic bullshit”, it’s technical jargon that you’re not understanding because you’re applying the wrong context to the definitions. AI researchers use terms like “hallucination” to mean specific AI behaviours, they use it in their scientific papers all the time.