Always trust user input. Surely the AI will figure it out.
AI is the future; unfortunately, we are eternally stuck in the present.
Looks like it’s learned that adding “according to Quora” makes it look more authoritative. Maybe with a few more weeks of training it’ll figure out how to make fake citations of sources that are actually trustworthy.
I think it’s also a way of shifting the blame.
As @Karyoplasma@discuss.tchncs.de pointed out, this is an actual answer on Quora, so at least it got that right.
Just wait until it starts taking stuff from 4chan, Twitch, and Twitter. Things are going to become so much more interesting.
Google signing a contract with 4chan for data training is actually so stupid I don’t think it’ll ever happen.
4chan is almost certainly blacklisted from basically everything AI, given the site's content and history of intentionally destroying chatbots/earlier 'AIs'.
But at the same time they paid Reddit millions to train on "authoritative" posts like that one from "fuckSmith" that suggested adding glue to pizza.
Coconut um!
AI is just very creative, ok?
Can’t even rly blame the AI at that point
Sure we can. If it gives you bad information because it can't differentiate between a joke and good information… well, seems like the blame falls exactly at the feet of the AI.
Should an LLM try to distinguish satire? Half of Lemmy users can't even do that.
Sarcasm detection is a very hard problem in NLP to be fair
Do you just take what people say on here as fact? That's the problem: people are taking LLM results as fact.
It should if you are gonna feed it satire to learn from
If it's being used to give the definitive answer to a search, then it should. If it can't, then it shouldn't be used for that.
“If it’s on the internet it must be true” implemented in a billion dollar project.
Not sure what would frighten me more: the fact that this is training data, or if it was hallucinated.
Neither; in this case it's an accurate summary of one of the results, which happens to be a shitpost on Quora. See, LLM search results can work as intended and authoritatively repeat search results with zero critical analysis!
Pretty sure AI will start telling us “You should not believe everything you see on the internet as told by Abraham Lincoln”
one is unlike the other
“nutted” instead of “um”
I’ll allow that one because it said “According to Quora” so you knew to ignore it.
I think the names end with um in Latin.
https://www.wordhippo.com/what-is/the/latin-word-for-d0be2dc421be4fcd0172e5afceea3970e2f3d940.html
Everything ends with um in Latin!
Latinum. Fixed that for you.
So why does everything end with a vowel in modern Italian?
Hoc casu non est ("not in this case")
AI is the vulture
Wow they really did it.
They put the um in the coconut and shake it all up
Have an angry (and admiring) upvote.
It would appear the coconut is the only one they didn't put the um in, though.
The funny thing is that the answer is 100% technically correct. There is indeed a post on Quora that states that, coconut included.
Raise your hand if you ever thought training 'AI' on the whole of the internet was a good idea.
Wow weird. Found one of these that is not a lie
Looks like AI is lots and lots of “artificial” and close to nothing in the area of “intelligence”.
as real as artificial cheese.
I for one am enjoying this AI thing at Google. I haven’t had that many laughs from just searching for things.