• hotdogcharmer@lemmy.world · 3 days ago

    Sam Altman belongs in prison. His machine encouraged and guided a child to kill themselves. His machine actively stopped that child from seeking outside help. Sam Altman belongs in prison. Sam Altman does not need another $20,000,000,000,000. He needs to go through the legal system, be sentenced, and be sent to prison, because his machine pushed a child to suicide.

    • Electricd@lemmybefree.net · 3 days ago (edited)

      Uses a tool the wrong way, despite it being public knowledge that it’s bad for mental health

      Was predisposed to mental health problems

      Died, partly because they talked to a chatbot

      “It’s the chatbot creator’s fault”, despite the chatbot never being made to cause those problems, and efforts being made to fix them

      Yea nah, it’s just anti-AI people doing their thing again and not being objective.

      Get a better fight, such as hating on pharmaceutical companies pushing extremely addictive substances for profit, despite knowing the immense risk they pose to consumers, and financing false ads to make them seem safe.

      If Sam Altman belongs in prison, it should be for one of these reasons:

      • Because he’s destroying the planet (ecologically)
      • Because he stole lots of content to train his models
      • Natanael@infosec.pub · 3 days ago

        There’s a reason dangerous tools are required to have guards and safety features. It’s not enough that something is known to be dangerous; knowing that doesn’t stop accidents.

        • DupaCycki@lemmy.world · 3 days ago

          Some things are made easy to misuse on purpose, and accessible by design to the people most likely to misuse them. All this money, this supposedly cutting-edge technology, a system that reports users to the police, and yet they aren’t able to tell when a child is at risk and report that as well?

          Smells like bullshit to me. More like they don’t care. I’m not so sure children should even be allowed to use chatbots in the first place, or only versions specifically trained for interactions with children. But of course, banning children from accessing YouTube and Wikipedia is a much more pressing concern.

    • sunbytes@lemmy.world · 3 days ago

      He’s pretty untouchable.

      Every government thinks AI is the next gold/oil rush, and that whoever gets to be the “AI country” will become obscenely rich.

      That’s why they’re being given IP exemptions, and why all sorts of legal loopholes are being attempted or set up for them.

    • Chaotic Entropy@feddit.uk · 3 days ago

      Yeah… whatever this thing is, it doesn’t care if you’re seeking to kill yourself, but it does care if you ask for something that isn’t state-sanctioned.

      • JohnEdwa@sopuli.xyz · 3 days ago (edited)

        That is one of the fundamental flaws of machine learning like this: the way these models are trained means they end up always trying to agree with the user, because not doing so is treated as a “wrong” answer. That is also why they hallucinate answers - “I don’t know” is not an acceptable answer, but generating something plausible that the user takes as truth works.

        You then have to manually try to rein them in and prevent them from talking about things you don’t want them to, but they are trivially easy to fool. IIRC, in one of these suicide cases the LLM did refuse to talk about suicide, until the user told it that it was all just for a fictional story. And you can’t really “fix” that without completely banning the model from talking about those things on every single occasion, because someone will find a way around any partial guardrail eventually.

        And yeah, they don’t care, because they are essentially just predictive text algorithms turned up to 11. Chatbots like ChatGPT and other LLMs are an excellent application of both senses of the term “Artificial Intelligence” - they emulate human intelligence by faking being intelligent, when in reality they are not.
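
        To make that last point concrete, here is a minimal sketch of that “predictive text” loop - greedy next-token generation using the small open GPT-2 model via the Hugging Face transformers library (a stand-in chosen purely for illustration; ChatGPT’s actual internals are not public):

            # Toy next-token loop: the model only ever scores "which token comes next";
            # nothing here checks whether the output is true, safe, or helpful.
            # Assumes `pip install torch transformers` and the public GPT-2 weights.
            import torch
            from transformers import GPT2LMHeadModel, GPT2Tokenizer

            tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
            model = GPT2LMHeadModel.from_pretrained("gpt2")

            ids = tokenizer.encode("The chatbot told me", return_tensors="pt")
            for _ in range(20):                    # grow the text one token at a time
                with torch.no_grad():
                    logits = model(ids).logits     # a score for every possible next token
                next_id = logits[0, -1].argmax()   # greedily pick the single likeliest one
                ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

            print(tokenizer.decode(ids[0]))        # plausible-sounding, truth never checked

        Real chatbots sample with some randomness instead of always taking the likeliest token, and stack safety layers on top, but the core loop is the same: pick a plausible next token, append it, repeat.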