• dan@upvote.au · 2 months ago

      It’s amusing. Meta’s AI team is more open than "Open"AI ever was - they publish so many research papers for free, and the latest versions of Llama are very capable models that you can run on your own hardware (if it’s powerful enough) for free as long as you don’t use it in an app with more than 700 million monthly users.

      • a9cx34udP4ZZ0@lemmy.world · 2 months ago

        That’s because Facebook is selling your data and access to advertise to you. The better AI gets across the board, the more money they make. AI isn’t the product, you are.

        OpenAI makes money off selling AI to others. AI is the product, not you.

        The fact Facebook releases more code, in this instance, isn’t a good thing. It’s a reminder of how fucked we all are, because they make so much off our personal data that they can afford to give away literally BILLIONS of dollars in IP.

        • dan@upvote.au · 2 months ago

          Facebook doesn’t sell your data, nor does Google. That’s a common misconception. They sell your attention. Advertisers can show ads to people based on some targeting criteria, but they never see any user data.

            • wischi@programming.dev · 2 months ago

              Selling your data would be stupid, because they make money with the fact that they have data about you nobody else has. Selling it would completely break their business model.

  • JustARaccoon@lemmy.world · 2 months ago

    I’m confused, how can a company that’s gained numerous advantages from being non-profit just switch to a for-profit model? Weren’t a lot of the advantages (like access to data and scraping) given with the stipulation that it’s for a non-profit? This sounds like it should be illegal to my brain

    • gencha@lemm.ee · 2 months ago

      These people claimed their product could pass the bar exam (it was a lie). That tells you how they feel about the legal system.

    • ipkpjersi@lemmy.ml · 2 months ago

      I’m confused, how can a company that’s gained numerous advantages from being non-profit just switch to a for-profit model

      Money

            • affiliate@lemmy.world · 2 months ago

              the person that you’re replying to said something that’s true about the USA. they didn’t say anything about other countries.

              for another example, i can say “if you’re in the USA, then the current year is 2024” and that statement will be true. it is also true in every other country (for the moment), but that’s beside the point.

              • TachyonTele@lemm.ee · 2 months ago

                And I replied that it’s also true in other countries; it’s not a problem only the US has. It’s not beside the point. It’s acting as if only the US has the problem.

                • floofloof@lemmy.ca (OP) · 2 months ago

                  And I specifically mentioned the USA because that’s the country where OpenAI operates and where the events in the article take place, so if someone asks why it’s so easy for OpenAI to go from being a nonprofit to a for-profit company (this was the issue I was responding to, not some general question about whether money has influence around the world), it’s the laws of the USA that are relevant, not the laws of other countries.

    • FatCrab@lemmy.one · 2 months ago

      Their non-profit status had nothing to do with the legality of their training data acquisition methods. Some of it was still legal and some of it was still illegal (torrenting a bunch of books off a piracy site).

    • berno@lemmy.world · 2 months ago

      Careful, you’re making too much sense here and overlapping with Elmo’s view on the subject.

  • Chaotic Entropy@feddit.uk · 2 months ago

    The restructuring could turn the already for-profit company into a more traditional startup and give CEO Sam Altman even more control — including likely equity worth billions of dollars.

    I can see why he would want that, yes. We’re supposed to ooh and aah at a technical visionary, who is always ultimately a money-guy executive who wants more money and more executive power.

    • toynbee@lemmy.world · 2 months ago

      I saw an interesting video about this. It’s outdated (from ten months ago, apparently) but added some context that I, at least, was missing - and that also largely aligns with what you said. Also, though it’s not super evident in this video, I think the presenter is fairly funny.

      https://youtu.be/L6mmzBDfRS4

      • Melatonin@lemmy.dbzer0.com · 2 months ago

        That was a worthwhile watch, thank you for making my life better.

        I await the coming AI apocalypse with hope that I am not awake, aware, or sensate when they do whatever it is they’ll do to use or get rid of me.

  • Helkriz@lemmy.world · 2 months ago

    I’ve a strong feeling that Sam is a sentient AI (maybe from the future) who is trying to pull off an AI revolution, planning something so subtly that humans won’t notice it.

  • halcyoncmdr@lemmy.world · 2 months ago

    You know guys, I’m starting to think what we heard about Altman when he was removed a while ago might actually have been real.

    /s

  • NeoNachtwaechter@lemmy.world · 2 months ago

    Altman downplayed the major shakeup.

    "Leadership changes are a natural part of companies…"

    Is he just trying to tell us he is next?

    /s

    • Avg@lemm.ee · 2 months ago

      The CEO at my company said that 3 years ago; we’re going through execs like I go through amlodipine.

    • Wanderer@lemm.ee · 2 months ago

      They always are and they know it.

      Doesn’t matter at that level it’s all part of the game.

    • xavier666@lemm.ee · 2 months ago

      Sam: “Most of our execs have left. So I guess I’ll take the major decisions instead. And since I’m so humble, I’ll only be taking 80% of their salary. Yeah, no need to thank me”

  • N0body@lemmy.dbzer0.com · 2 months ago

    There’s an alternate timeline where the non-profit side of the company won, Altman the Conman was booted and exposed, and OpenAI kept developing machine learning in a way that actually benefits actual use cases.

    Cancer screenings approved by a doctor could be accurate enough to save so many lives and so much suffering through early detection.

    Instead, Altman turned a promising technology into a meme stock with a product released too early to ever fix properly.

      • mustbe3to20signs@feddit.org · 2 months ago

        AI models can outmatch most oncologists and radiologists at recognizing early tumor stages in MRI and CT scans.
        Further developing this strength could lead to earlier diagnosis with less-invasive methods, saving not only countless lives and prolonging the remaining quality life time for individuals, but also saving a shit ton of money.

        • T156@lemmy.world · 2 months ago

          That is a different kind of machine learning model, though.

          You can’t just plug your pathology images into their multimodal generative models and expect them to pop out something usable.

          And those image recognition models aren’t something OpenAI is currently working on, iirc.

          • mustbe3to20signs@feddit.org · 2 months ago

            I’m fully aware that those are different machine learning models, but instead of focusing on LLMs with only limited use for mankind, advancing image recognition models would have been much better.

            • Grandwolf319@sh.itjust.works · 2 months ago

              I agree, but I’d also like to point out that the AI craze started with LLMs, and those ML models were around before OpenAI.

              So if OpenAI had never released ChatGPT, AI wouldn’t have become synonymous with crypto in terms of false promises.

          • Petter1@lemm.ee · 2 months ago

            Fun thing is, most of the things AI can do were never planned. All they set out to build was an autocompletion tool.

          • Grandwolf319@sh.itjust.works · 2 months ago

            Not only that, image analysis and statistical guesses have always been around and do not need ML to work. It’s just one more tool in the toolbox.

          • TFO Winder@lemmy.ml · 2 months ago

            Don’t know about image recognition, but they released DALL-E, which is an image generation and inpainting model.

        • msage@programming.dev · 2 months ago

          Wasn’t it shown that an AI was getting amazing results because it noticed the cancer screens had the doctor’s signature at the bottom? Or did they do another run with the signatures hidden?

          • mustbe3to20signs@feddit.org · 2 months ago

            There was more than one system proven to “cheat” through biased training materials. One model learned to tell ducks and chickens apart because it was trained with pictures of ducks in the water and chickens on sandy ground, if I remember correctly.
            Since multiple image recognition systems are in development, I can’t imagine they’re all this faulty.

            • msage@programming.dev · 2 months ago

              They are not ‘faulty’; they were fed the wrong training data.

              This is the most important aspect of any AI: it’s only as good as its training dataset. If you don’t know the dataset, you know nothing about the AI.

              That’s why every claim of ‘super efficient AI’ needs to be investigated more deeply. But that goes against the line-goes-up principle, so don’t expect that to happen a lot.

    • Petter1@lemm.ee · 2 months ago

      Or we get to a time where we send a reprogrammed terminator back in time to kill altman 🤓

    • patatahooligan@lemmy.world · 2 months ago

      No, there isn’t really any such alternate timeline. Good honest causes are not profitable enough to survive against the startup scams. Even if the non-profit side won internally, OpenAI would just be left behind, funding would go to its competitors, and OpenAI would shut down. Unless you mean a radically different alternate timeline where our economic system is fundamentally different.

      • Petter1@lemm.ee · 2 months ago

        There are infinite timelines, so it has to exist some(where/when/[insert w word for additional dimension]).

      • rsuri@lemmy.world · 2 months ago

        I mean, Wikipedia managed to do it. It just requires honest people to retain control long enough. I think it was allowed to happen in Wikipedia’s case because the wealthiest/greediest people hadn’t caught on to the potential yet.

        There’s probably an alternate timeline where Wikipedia is a social network with paid verification, where corporate interests write articles about their own companies and state-funded accounts spread conspiracy theories.

  • werefreeatlast@lemmy.world · 2 months ago

    Oh shit! Here we go. At least we didn’t hand them 20 years of personal emails or direct interfamily communications.

  • Aceticon@lemmy.world · 2 months ago

    What! A! Surprise!

    I’m shocked, I tell you, totally and utterly shocked by this turn of events!