Uses a tool the wrong way, despite it being public knowledge that this is bad for mental health
Was predisposed to mental health problems
Died, partly because they talked to a chatbot
“It’s the chatbot creator’s fault”, despite the chatbot never having been designed to cause those problems, and despite efforts being made to fix them
…
Yea nah, it’s just anti-AI people doing their thing again and not being objective.
Pick a better fight, such as going after pharmaceutical companies pushing extremely addictive substances for profit, despite knowing the immense risk they pose to consumers, while financing misleading ads to make them look safe.
If Sam Altman belongs in prison, it would either be:
Because he’s destroying the planet (ecologically)
Because he stole lots of content to train his models
There’s a reason dangerous tools are required to have guards and safety features. It’s not enough that something is known to be dangerous; knowing that doesn’t stop accidents.
Some things are - on purpose - made easy to misuse and - by design - accessible to people who are likely to misuse them. All this money, this supposedly cutting-edge technology, and reporting users to the police, but they aren’t able to tell when a child is at risk and report that as well?
Smells like bullshit to me. More like they don’t care. I’m not sure children should even be allowed to use chatbots in the first place, or only versions specifically trained for interactions with children. But of course - banning children from accessing YouTube and Wikipedia is a much more pressing concern.
If you misuse some things at this point, then it’s not the thing’s fault