• ribboo@lemm.ee · 1 year ago

    It’s rather interesting that the board, which has a fairly strong scientific presence and not much of a commercial one, is getting such hate.

    People are quick to jump on for-profit companies that do everything in their power to earn a buck. Well, here you have a company that fired its CEO for leaning too far in the direction of earning money.

    Yet everyone is up in arms over it. We can’t have our cake and eat it too, folks.

    • TurtleJoe@lemmy.world · 1 year ago

      It’s my opinion that every single person in the upper levels of this organization is a maniac. They are all a bunch of so-called “rationalist” tech-right AnCaps who justify their immense incomes through the lens of Effective Altruism, the same ideology Sam Bankman-Fried used to justify stealing billions from his customers.

      Anybody with the urge to pick a “side” here ought to take a step back and reconsider: they are all bad people.

  • Even_Adder@lemmy.dbzer0.com · 1 year ago

    You’re not going to develop AI for the benefit of humanity at Microsoft. If they go there, we’ll know "Open"AI’s mission was all a lie.

    • Gork@lemm.ee · 1 year ago

      Yeah, Microsoft is definitely not going to be benevolent. But I saw this as a foregone conclusion, since AI is so disruptive that heavy commercialization is inevitable.

      We likely won’t have free access like we do now; it will be enshittified like everything else, and we’ll need to pay yet another subscription just to access it.

  • Pohl@lemmy.world · 1 year ago

    If you ever needed a lesson in the difference between power and authority, this is a good one.

    The leaders of this coup read the rules and saw that they could use the board to remove Altman; they had the authority to make the move and “win” the game.

    It seems that they, like many fools, mistook authority for power. The “rules” said they could do it! Alas, they did not have the power to execute the coup. All the rules in the world cannot make the organization follow you.

    Power comes from people who grant it to you. Authority comes from paper. Authority is the guideline for the use of power; without power, it is pointless.

  • Marxism-Fennekinism@lemmy.ml · 1 year ago

    https://time.com/6247678/openai-chatgpt-kenya-workers/

    To get those labels, OpenAI sent tens of thousands of snippets of text to an outsourcing firm in Kenya, beginning in November 2021. Much of that text appeared to have been pulled from the darkest recesses of the internet. Some of it described situations in graphic detail like child sexual abuse, bestiality, murder, suicide, torture, self harm, and incest.

    OpenAI’s outsourcing partner in Kenya was Sama, a San Francisco-based firm that employs workers in Kenya, Uganda and India to label data for Silicon Valley clients like Google, Meta and Microsoft. Sama markets itself as an “ethical AI” company and claims to have helped lift more than 50,000 people out of poverty.

    The data labelers employed by Sama on behalf of OpenAI were paid a take-home wage of between around $1.32 and $2 per hour depending on seniority and performance. For this story, TIME reviewed hundreds of pages of internal Sama and OpenAI documents, including workers’ payslips, and interviewed four Sama employees who worked on the project. All the employees spoke on condition of anonymity out of concern for their livelihoods.

    […]

    Documents reviewed by TIME show that OpenAI signed three contracts worth about $200,000 in total with Sama in late 2021 to label textual descriptions of sexual abuse, hate speech, and violence. Around three dozen workers were split into three teams, one focusing on each subject. Three employees told TIME they were expected to read and label between 150 and 250 passages of text per nine-hour shift. Those snippets could range from around 100 words to well over 1,000. All of the four employees interviewed by TIME described being mentally scarred by the work. Although they were entitled to attend sessions with “wellness” counselors, all four said these sessions were unhelpful and rare due to high demands to be more productive at work. Two said they were only given the option to attend group sessions, and one said their requests to see counselors on a one-to-one basis instead were repeatedly denied by Sama management.

    […]

    One Sama worker tasked with reading and labeling text for OpenAI told TIME he suffered from recurring visions after reading a graphic description of a man having sex with a dog in the presence of a young child. “That was torture,” he said. “You will read a number of statements like that all through the week. By the time it gets to Friday, you are disturbed from thinking through that picture.” The work’s traumatic nature eventually led Sama to cancel all its work for OpenAI in February 2022, eight months earlier than planned.

    […]

    That month, Sama began pilot work for a separate project for OpenAI: collecting sexual and violent images—some of them illegal under U.S. law—to deliver to OpenAI. The work of labeling images appears to be unrelated to ChatGPT.

    Gonna leave this here.

      • Marxism-Fennekinism@lemmy.ml · 1 year ago

        The last quote danced around it, but if the implication is that they were seeking out and collecting CSAM, which is a sex crime to access, possess, and distribute, then why the fuck are the boards of both companies not in prison and on the sex offender list?!

        I mean, I know why, but

        • SacrificedBeans@lemmy.world · 1 year ago

          I’m sure there’s some loophole there, maybe between different countries’ laws. And if there isn’t: hey, we’ll make one!

        • smooth_tea@lemmy.world · 1 year ago

          I really find this a bit alarmist and exaggerated. Consider the motive and the alternative. Do you really think companies like that have any other option than to deal with those things?

        • Clbull@lemmy.world · 1 year ago

          Isn’t CSAM classed as images and videos that depict child sexual abuse? Last time I checked, written descriptions alone didn’t count, unless they were being forced to look at AI-generated images of such acts?

          • Strawberry@lemmy.blahaj.zone · 1 year ago

            That month, Sama began pilot work for a separate project for OpenAI: collecting sexual and violent images—some of them illegal under U.S. law—to deliver to OpenAI. The work of labeling images appears to be unrelated to ChatGPT.

            This is the quote in question. They’re talking about images.

    • Clbull@lemmy.world · 1 year ago

      So they paid Kenyan workers $2 an hour to sift through some of the darkest shit on the internet.

      Ugh.

    • GenesisJones@lemmy.world · 1 year ago

      This reminds me of an NPR podcast from 5 or 6 years ago about the people who get paid by Facebook to moderate the worst of the worst. A former employee gave an interview about the manual review of images that were CP and rape-related, iirc. Terrible stuff.