If you legitimately got this search result - please fucking reach out to your local suicide hotline and make them aware. Google needs to be absolutely sued into the fucking ground for mistakes like these: Google trying to make a teensy bit more money in a way that will absolutely push at least a few people over the line into committing suicide.
We must hold companies responsible for bullshit their AI produces.
I pulled the image from a meme channel, so I don’t know if it’s real or not, but at the same time, the response below does look legit.
So you can put raw chicken meat inside your armpit and it’s done? Sounds legit.
If you have a fever.
Leaving my chicken for 10 minutes near a window on a warm summer day and then digging in
It’s like sushi… kinda
…does the chicken’s power level need to be over 9000 in order to be safe to eat?
Turns out AI is about as bad at verifying sources as Lemmy users.
I have read elsewhere that it was faked.
What are you whining about? Hallucination is an inherent part of LLMs as of today; nothing they output should be trusted with certainty. But putting them to use will bring more benefit than hiding them from everyone. Take it as an unfinished project and ignore the results if you like. Seriously, it’s physically possible to just ignore the generative results.
“Absolutely sued” my ass
I absolutely agree, and I consider LLM results to be “neat” but never trusted - if I think I should bake spaghetti squash at 350, I might ask an LLM and only go looking for real advice if our suggested temperatures differ.
But some people have wholly bought into the “it’s a magic knowledge box” bullshit - you’ll see opinions here on Lemmy that generative AI can make novel creations that indicate true creativity, and you’ll see opinions from C-level folks, chomping at the bit to downsize call centers, that LLMs can replace customer service wholesale. Companies need to be careful about deceiving these users, and those that feed into the mysticism really need to be stopped.
Be depressed
Want to commit suicide
Google it
Gets this result
Remembers comment
Sues
Gets thousands of dollars
Depression cured (maybe)
well at least you’d be suicidal with money!
Lots of dead famous rich people show that money does not cure depression.
Not for everyone, but it would help a lot of people who have depression that was caused primarily by financial stress, working in a job/career that they aren’t passionate about, etc…
Money doesn’t buy happiness but it can help someone who is struggling to meet their basic needs not get stuck in a depressive state. Plus, it can be used in exchange for goods and services that show efficacy against depression.
What kind of goods and services?
Everyone’s brains are different. For some, SSRIs might work. For others, SNRIs. While there are claims of cocaine and prostitutes being helpful for some, that’s not really scientifically proven, and there are significant health and imprisonment risks. There is, however, strong evidence for certain psychedelics.
TL;DR - Drugs might be helpful for some.
This seems to not be real (yet) though.
Is this not real? I’ve done some due-diligence Googling and it’s been inconclusive - I’d really like to know, as there are starry-eyed salespeople who keep pushing hard for integrating customer-facing AI, and I’ve been looking for a concrete example of it fucking up badly enough to leave us really liable. This and the “add glue to cheese” one are both excellent examples whose veracity I haven’t been able to confirm.
I gotchu on the cheese
My comment - it relied on another user’s modified prompt to get around Google’s incredibly hasty fix.
I’m not sure how you’d tell unless there is some reputable source that claims they saw this search result themselves, or you found it yourself. Making a fake is as easy as inspect element -> edit -> screenshot.
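To illustrate just how low that bar is, here is a minimal sketch of the “inspect element” approach, run from the browser devtools console. The selector below is hypothetical - real class names on a results page vary and change constantly - so treat this as an assumption-laden example of the technique, not a recipe tied to any specific site.

```typescript
// Open devtools on any search results page and paste this into the console.
// '.ai-overview-text' is a made-up selector; substitute whatever element you actually inspect.
const overview = document.querySelector('.ai-overview-text');
if (overview !== null) {
  // Swap the displayed text for anything you want the fake screenshot to show.
  overview.textContent = 'Whatever claim you want to attribute to the AI overview.';
}
// Screenshot the page afterwards; the image carries no trace that the DOM was edited.
```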
The stupid decision to add unsanitized AI output to search results is real; these very specific memetic searches that lead back to a single Reddit comment seem not to be.
This is from the account that spread the image originally: https://x.com/ai_for_success/status/1793987884032385097
Alternate Bluesky link with screencaps (must be logged in): https://bsky.app/profile/joshuajfriedman.com/post/3ktarh3vgde2b
Just so others do not need to click, etc.: they found out it was faked and apologized for spreading fake news.
Thank you, internet sleuth!
Should Reddit or Quora be liable if Google used a link instead? AI doesn’t need to work 100% of the time. It just needs to be better than what we are using.
What you’re focused on is actually the Section 230 safe harbor provision.
If Reddit says, “We have a platform and some dumbass said to snort granulated sugar” it’s different from Google saying, “You should snort granulated sugar.”
That’s… not relevant to my point at all.
Make it Apple employees in-store and Microsoft forums. If humans give bad advice 10% of the time and AI (or any technological replacement) makes mistakes 1% of the time, you can’t point to that 1% as a gotcha.
You’re shifting the goalposts though - prior to AI, being an expert reference on the internet was expensive and dangerous, since you could potentially be held liable; as a result, a lot of topic areas simply lacked expert reference sources. By using Gemini, Google has declared itself an expert reference on every topic. It isn’t, and this will end badly for them.