My personal views as someone working in big tech who has recently been soured by seeing the upper echelons of big corporations: AI is not being used as a tool to help anyone; it's just another example of capitalist greed. The thing that's really concerning, though, is that OpenAI began as some sort of non-profit claiming it was operating to help make the world a better place, for the public good, the benefit of everyone…

Then they suddenly started helping Microsoft exclusively, which has been criticized many times for being a monopoly, fairly so. Having a corporate sponsor is one thing. But being exclusively in Microsoft's pocket, helping them improve a paid AI model and a series of AI services that are not freely available to the public, while taking huge amounts of funding from public sources to operate their non-profit, seems like a scam. Unbelievable that they are allowed to do this.

But the part that's really troubling is how they view general intelligence and the future of technology, especially AI. Sam Altman really wants to push for artificial general intelligence; that's the goal. He wants it to be mobile so it's accessible basically anywhere. But the thing he never mentions is how it benefits society at large. By society, I mean everyone: not just big companies, but health care, wellness, the well-being of the public, advancing medical technology, making food more accessible, making housing more accessible, improving the state of the world we live in… None of these are concerns for OpenAI.

OpenAI is basically just a scheme, a literal pyramid scheme. Users have realized that there's a way to get their work done more easily, so they keep funding and paying for OpenAI services, which fuels the organization. But they are not interested in doing anything beneficial or good for humanity at all. That's pretty troubling to know.

  • ItsComplicated@sh.itjust.works · 2 months ago

    Then they suddenly started helping Microsoft exclusively, which has been criticized many times for being a monopoly,

    IIRC, Microsoft has 49% of it. OpenAI is raising another round, where it's speculated Apple will contribute along with Microsoft again.

    Wasn't Altman in charge of Worldcoin, where he wanted everyone to scan their eyes for money to encourage a universal ID/wage?

  • kusivittula@sopuli.xyz · 2 months ago

    making the world a better place would be awesome, but making buttloads of money is awesomer. - sam altman, probably.

  • masterspace@lemmy.ca · 2 months ago

    Then they suddenly started helping Microsoft exclusively, which has been criticized many times for being a monopoly, fairly so. Having a corporate sponsor is one thing. But being exclusively in Microsoft's pocket, helping them improve a paid AI model and a series of AI services that are not freely available to the public, while taking huge amounts of funding from public sources to operate their non-profit, seems like a scam. Unbelievable that they are allowed to do this.

    I don't understand what your complaint here is. Microsoft invested a fuck ton of money into OpenAI, and OpenAI uses Azure servers to run its models. There's nothing really nefarious there on OpenAI's part. You can argue that Microsoft might be acting anti-competitively, in that they're basically paying a company to use their cloud services instead of competitors', but Microsoft is still a pretty distant #2 to AWS in the cloud, and OpenAI is not the one with obligations not to abuse a dominant market position.

    If anything, OpenAI has gotten scammed in this relationship: Microsoft engineers have gotten to work closely with them to get their models running efficiently, and now suddenly Microsoft has published very capable models in its own Phi line.

    But the part that's really troubling is how they view general intelligence and the future of technology, especially AI. Sam Altman really wants to push for artificial general intelligence; that's the goal. He wants it to be mobile so it's accessible basically anywhere. But the thing he never mentions is how it benefits society at large. By society, I mean everyone: not just big companies, but health care, wellness, the well-being of the public, advancing medical technology, making food more accessible, making housing more accessible, improving the state of the world we live in… None of these are concerns for OpenAI.

    I mean, the obvious answer is the exact same as for every other form of automation, from steam engines to computers: the more tasks you can automate, the more of people's time you free up to work on other stuff.

    That, and AGIs have the potential to actually surpass us in some areas and make discoveries and insights that would otherwise take us decades.

    OpenAI is basically just a scheme, a literal pyramid scheme. Users have realized that there's a way to get their work done more easily, so they keep funding and paying for OpenAI services, which fuels the organization. But they are not interested in doing anything beneficial or good for humanity at all. That's pretty troubling to know.

    You’re literally just describing every single software company, and quite frankly, most businesses. It’s not a pyramid scheme, it’s capitalism, which is also a pyramid scheme, but not an abnormal one.

    Don’t believe that any for-profit company is altruistic, OpenAI included.

  • tyler@programming.dev · 2 months ago

    None of that started to happen until Altman (effectively) overthrew the board, and now the board members who cared about that stuff are gone. Turns out the board was right about him.

  • N0body@lemmy.dbzer0.com · 2 months ago

    That phenomenon isn’t exclusive to AI. A societal structure where a minuscule number of people amass all wealth and see themselves as gods is an inherently evil system.

    Everything they do and everything they create is a means to increase their wealth and power at the expense of everyone else. At the expense of the planet itself. Regulators, courts, the government, and every other institution become captured vehicles that exist to increase their wealth and power. The people are made poorer and poorer while being distracted by bread and circuses.

    The only war is class war.

  • simple@lemm.ee · 2 months ago

    It’s pretty well known at this point, they lied about their initial goals and nothing they’ve done in ages has been “open”. It’s all been closed research that they won’t share because they don’t want anyone else to catch up.

    Also they had a lot of friction with their AI safety department because caring about safety would slow down business.

    • Buttflapper@lemmy.worldOP · 2 months ago

      I honestly don't see how they get away with this and don't get sued into oblivion, or why the justice system is letting it happen. When you start a non-profit and claim you're doing it for the public good, I thought there would be some sort of legal requirement that you actually do that. Meta, which is a for-profit company, has chosen to release its AI models to the public for anyone to use: you can go get a copy of Llama 3.1 right now and use it all on your own, and people are tweaking it and making all these crazy open-source changes to it. It's crazy.
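      As a rough illustration of what "go get a copy and use it" looks like in practice, here is a minimal sketch of pulling Llama 3.1 and running it locally with the Hugging Face transformers library. The model ID shown is the gated Hub checkpoint (you have to accept Meta's license on the Hub first), and device_map="auto" assumes the accelerate package is installed; treat the details as illustrative rather than a definitive recipe.

      ```python
      # Minimal sketch: load an open-weight Llama 3.1 checkpoint locally.
      # Assumes `transformers` and `accelerate` are installed and that the
      # gated "meta-llama/Meta-Llama-3.1-8B-Instruct" repo has been unlocked
      # by accepting Meta's license on the Hugging Face Hub.
      from transformers import AutoModelForCausalLM, AutoTokenizer

      model_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"  # assumed Hub ID

      tokenizer = AutoTokenizer.from_pretrained(model_id)
      model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

      prompt = "In one sentence, what does 'open weights' mean?"
      inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
      outputs = model.generate(**inputs, max_new_tokens=64)
      print(tokenizer.decode(outputs[0], skip_special_tokens=True))
      ```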

  • paw@feddit.org · 2 months ago

    My understanding of the whole “being beneficial for humanity” is that:

    1. It's kind of a meme that you need to have as a Silicon Valley start-up, like Google's now-dropped "don't be evil".
    2. If the founders, the investors, and the shareholders get rich or richer, then that is already beneficial to humanity, in a net-positive way similar to trickle-down economics. At least that is what I think their line of thinking is.

    Having said that, I think LLMs and machine learning can be used for useful things, but I also think, as stated, that the message of "being beneficial for humanity" is hollow in a broader sense.

    • huginn@feddit.it · 2 months ago

      Google never dropped that phrase btw - it’s still in the employee manual.

      They just act like they dropped it

    • bionicjoey@lemmy.ca · 2 months ago

      Silicon Valley (the HBO show) was joking about this a decade ago. “Making the world a better place through highly scalable caching and consensus algorithms”, “I don’t want to live in a world where someone else makes the world a better place better than we do”, etc.

      • datavoid@lemmy.ml · 2 months ago

        I watched this show with my brother, who then went on to create a massive tech company. We fell out of touch, and when we finally reconnected he told me how much money he was worth within 5 minutes (presumably 8 figures by now), followed by how his only goal was to make the world a better place…

        I hate corporations 🙂

  • LainTrain@lemmy.dbzer0.com · 2 months ago

    OpenAI, the corpo, is the very antithesis of "open".

    Actual open-weight FOSS AI models a la Stable Diffusion or LocalLLaMa/Mistral are definitely a good thing for everyone though, for sure.

    Corposhit is always evil.