“The real benchmark is: the world growing at 10 percent,” he added. “Suddenly productivity goes up and the economy is growing at a faster rate. When that happens, we’ll be fine as an industry.”

Needless to say, we haven’t seen anything like that yet. OpenAI’s top AI agent — the tech that people like OpenAI CEO Sam Altman say is poised to upend the economy — still moves at a snail’s pace and requires constant supervision.

  • CompostMaterial@lemmy.world · 5 days ago

    That is not at all what he said. He said that creating some arbitrary benchmark for the level or quality of the AI (e.g., is it as smart as a 5th grader, or as intelligent as an adult?) is meaningless. The real measure is whether there is value created and put out into the real world. He also says the real benchmark would be global growth going up by 10%. He doesn’t provide data correlating that growth with the use of AI, and I doubt such data exists yet. Let’s not twist what he said into “Microsoft CEO says AI provides no value” when that is not what he said.

    • Melvin_Ferd@lemmy.world · 5 days ago

      AI is the immigrants of the left.

      Of course he didn’t say this. But the media want you to think he did.

      “They’re taking your jobs”

    • Shadywack@lemmy.today · 5 days ago

      I think that’s pretty clear to people who get past the clickbait. Oddly enough, though, if you read through what he actually said, the takeaway is basically a tacit admission: he’s trying to level-set expectations for AI without directly admitting that the strategy of massively investing in LLMs is going bust and delivering no measurable value, so he can deflect with “BUT HEY CHECK OUT QUANTUM”.

  • bearboiblake@pawb.social · 5 days ago (edited)

    microsoft rn:

    ✋ AI

    👉 quantum

    can’t wait to have to explain the difference between asymmetric-key and symmetric-key cryptography to my friends!
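
    If I do end up giving that explainer, it would probably start as something like this sketch (assuming Python’s cryptography package — the short version being that quantum computers break asymmetric schemes like RSA outright via Shor’s algorithm, while symmetric schemes mostly just need bigger keys):

    ```python
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    # Symmetric: one shared key does both encryption and decryption.
    key = Fernet.generate_key()
    f = Fernet(key)
    assert f.decrypt(f.encrypt(b"hi")) == b"hi"

    # Asymmetric: anyone encrypts with the public key, only the private
    # key decrypts. This is the part a quantum computer breaks outright.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    ciphertext = private_key.public_key().encrypt(b"hi", oaep)
    assert private_key.decrypt(ciphertext, oaep) == b"hi"
    ```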

  • straightjorkin@lemmy.world · 5 days ago

    Makes sense that the company that just announced their qubit advancement would be disparaging the only “advanced” thing other companies have shown in the last 5 years.

  • WalnutLum@lemmy.ml · 6 days ago

    He probably saw that SoftBank and Masayoshi Son were heavily investing in it and figured it was dead.

  • werefreeatlast@lemmy.world · 6 days ago

    That’s because they want to use AI in a server scenario where clients log in. Translated to American English, and spoken with honesty, that means they are spying on you. Anything you do on your computer is subject to automatic spying. Like, you could be totally under the radar, but as soon as you say the magic words together, bam! …I’d love a sling thong for my wife… bam! Here’s 20 ads, just click to purchase, since they already stole your wife’s boob size and body measurements and preferred lingerie styles. And if you’re on McMaster… Hmm, I need a 1/2 pipe and a cap… better get two caps in case you cross-thread one… ding dong! FBI! We know you’re in there! Come out with your hands up!

    • epicstove@lemmy.ca · 6 days ago

      The only thing stopping me from switching to Linux is some college software (which I won’t need when I’m done) and one game (which no longer gets updates and is thus on the path to a slow, sad demise).

      So I’m on the verge of going Penguin.

        • WagyuSneakers@lemm.ee · 5 days ago

          You’re really forcing it at that point. Wine can’t run most of what I need to use for work. I’m excited for the day I can ditch Windows, but it’s not any time soon unfortunately. I’ll have to live with WSL.

      • Jeena@piefed.jeena.net (OP) · 6 days ago

        Yeah, use Windows in a VM and your game probably just works too. I was surprised that all the games I have on Steam now just work on Linux.

        Years ago, when I switched from OSX to Linux, I just stopped gaming because of that. But I started testing my old games, and suddenly there were no problems with them anymore.

  • AA5B@lemmy.world · 6 days ago

    For a lot of years, computers added no measurable productivity improvements. They sure revolutionized the way things work in all segments of society for something that doesn’t increase productivity.

    AI is an inflating bubble: excessive spending, unclear use cases. But it won’t take long for the pop to clear out the failures, making the successful use cases clearer and letting the winning approaches emerge. This is basically the definition of capitalism.

    • capybara@lemm.ee · 6 days ago

      What time span are you referring to when you say “for a lot of years”?

      • AA5B@lemmy.world · 6 days ago

        Vague memories of many articles over much of my adult life decrying the costs of whatever the current computer trend was as being higher than the benefits.

        And I believe it; it’s technically true. There seems to be a pattern of bubbles where everyone jumps on the new hot thing and spends way too much money on it. It’s counterproductive, right up until the bubble pops, leaving behind the transformative successes.

        I believe it was also a long-term thing with electronic forms and printers: as long as you were just adding steps to existing business processes, you didn’t see productivity gains. It took many years for businesses to reinvent the way they worked to really see the productivity gains.

        • Snowstorm@lemmy.ca · 6 days ago

          If you want a reference, there is a Rational Reminder Podcast episode (a nerdy, factual personal finance podcast from a Canadian team) about this concept. It was illustrated with trains and phone infrastructure 100 years ago: new technology looks nice -> people invest stupid amounts in a variety of projects -> some crash brings stock valuations back to a reasonable level, and at that point the technology is adopted and its infrastructure has been subsidized by those who lost money on the stock market’s hot thing. Then a new hot thing emerges. The Internet got its cycle in 2000; maybe AI is the next one. Usually, every few decades, the top 10 in the S&P 500 changes.

    • Womble@lemmy.world · 5 days ago

      Eh, the entirety of training GPT-4, plus the whole world using it for a year, turns out to be about 1% of the gasoline burnt by the USA every single day. It’s barely a rounding error when it comes to energy usage.

  • funkless_eck@sh.itjust.works · 6 days ago

    I’ve been working on an internal project for my job - a quarterly report on the most bleeding-edge use cases of AI - and the stuff being achieved is genuinely impressive.

    So why is the AI at the top end amazing yet everything we use is a piece of literal shit?

    The answer is the chatbot. If you have the technical nous to program machine-learning tools, they can accomplish truly stunning things at speeds not seen before.

    If you don’t know how to do, for example, a Fourier transform, you lack the skills to use the tools effectively. That’s no one’s fault - not everyone needs that knowledge - but it does explain the gap between promise and delivery. It can only help you do what you already know how to do, faster.
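
    To make that concrete, here’s a minimal sketch of the kind of thing I mean (my own toy example, assuming numpy), trivial if you already know what a Fourier transform tells you and opaque if you don’t:

    ```python
    import numpy as np

    t = np.linspace(0, 1, 500, endpoint=False)        # 1 second sampled at 500 Hz
    signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)

    spectrum = np.fft.rfft(signal)                    # time domain -> frequency domain
    freqs = np.fft.rfftfreq(len(t), d=t[1] - t[0])
    print(freqs[np.argsort(np.abs(spectrum))[-2:]])   # [40. 5.] -- the two tones
    ```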

    Same for coding: if you understand what your code does, it’s a helpful tool for unsticking part of a problem, but it can’t write the whole thing from scratch.

    • mr_jaaay@lemmy.ml · 5 days ago

      Exactly - I find AI tools very useful and they save me quite a bit of time, but they’re still tools. Better at some things than others, but the bottom line is that they’re dependent on the person using them. Plus the more limited the problem scope, the better they can be.

      • wordcraeft@lemm.ee · 5 days ago

        Yes, but the problem is that a lot of these AI tools are very easy to use, but the people using them are often ill-equipped to judge the quality of the result. So you have people who are given a task to do, and they choose an AI tool to do it and then call it done, but the result is bad and they can’t tell.

        • mr_jaaay@lemmy.ml · 5 days ago

          True, though this applies to most tools, no? For instance, I’m forced to sit through horrible presentations because someone was given a task, created a PowerPoint (badly), and gave a presentation (badly). I don’t know if this is inherently a problem with AI…

    • michaelnik@lemmy.world · 6 days ago

      LLMs could be useful for translation between programming languages. I recently asked one for server code given client code in a different language, and the LLM-generated code was spot on!

      • mke@programming.dev · 5 days ago (edited)

        I remain skeptical of using solely LLMs for this, but it might be relevant: DARPA is looking into their usage for C to Rust translation. See the TRACTOR program.

    • raspberriesareyummy@lemmy.world · 6 days ago

      So why is the AI at the top end amazing yet everything we use is a piece of literal shit?

      Just that you call an LLM “AI” shows how unqualified you are to comment on the “successes”.

      • funkless_eck@sh.itjust.works · 5 days ago

        What are you talking about? I read the papers published in mathematical and scientific journals and summarize the results in a newsletter. As long as you know undergrad-level statistics, calculus, and algebra, anyone can read them; you don’t need a qualification, you could just Google each term you’re unfamiliar with.

        While I understand your objection to the nomenclature, in this particular context all major AI-production houses including those only using them as internal tools to achieve other outcomes (e.g. NVIDIA) count LLMs as part of their AI collateral.

        • raspberriesareyummy@lemmy.world · 5 days ago

          The mechanism of machine learning based on training data, as used by LLMs, is at its core statistics without contextual understanding; the output is therefore only statistically predictable, not reliable. Labeling this as “AI” is misleading at best, and directly undermines democracy and freedom in practice, because the impressively intelligent-looking output leads naive people to believe the software knows what it is talking about.

          People who condone the use of the term “AI” for this kind of statistical approach are naive at best, snake-oil vendors, or outright enemies of humanity.

      • Lifter@discuss.tchncs.de · 6 days ago

        Not this again… LLM is a subset of ML which is a subset of AI.

        AI is very very broad and all of ML fits into it.

        • jacksilver@lemmy.world · 6 days ago

          This is the issue with current public discourse, though. AI has become shorthand for the current GenAI hype cycle, meaning that for many people, “AI” has effectively become a subset of ML.

        • SoftestSapphic@lemmy.world · 6 days ago

          A Large Language Model is not a Machine Learning program.

          An LLM is a program that translates human speech into sentiment instead of trying to achieve literal translations. It’s a layer that sits on top of other tech to make it easier for a program to talk with a person. It is not intelligent, and an LLM does not learn.

          You really don’t know what you are talking about. A perfect example of how obfuscating tech to make it sound cool invites any random person to have an opinion on “AI”

          When people say AI is not real or intelligent they are speaking from a computer scientist perspective instead of trying to make sense of something they don’t understand from scratch.

          • jacksilver@lemmy.world · 6 days ago

            LLMs are deep learning models that were developed off of multi-head attention/transformer layers. They are absolutely Machine Learning as they use a blend of supervised and unsupervised training (plus some reinforcement learning with some recent developments like DeepSeek).

          • AbsentBird@lemm.ee · 6 days ago

            LLMs are a type of machine learning. Input is broken into tokens, which are then fed through a type of neural network called a transformer model.

            The models are trained with a process known as deep learning, which involves the probabilistic analysis of unstructured data, which eventually enables the model to recognize distinctions between pieces of content.

            That’s like textbook machine learning. What you said about interpreting sentiment isn’t wrong, but it does so with machine learning algorithms.
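
            As a toy illustration of just the first step (a hypothetical word-level tokenizer - real models use subword schemes like BPE):

            ```python
            # Turn text into the integer IDs a transformer actually consumes.
            text = "the cat sat on the mat"
            vocab = {word: idx for idx, word in enumerate(sorted(set(text.split())))}
            token_ids = [vocab[word] for word in text.split()]
            print(token_ids)  # [4, 0, 3, 2, 4, 1] -- these IDs feed the model
            ```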

          • KingRandomGuy@lemmy.world · 6 days ago

            I’m a researcher in ML and LLMs absolutely fall under ML. Learning in the term “Machine Learning” just means fitting the parameters of a model, hence just an optimization problem. In the case of an LLM this means fitting parameters of the transformer.

            A model doesn’t have to be intelligent to fall under the umbrella of ML. Linear least squares is considered ML; in fact, it’s probably the first thing you’ll do if you take an ML course at a university. Decision trees, nearest neighbor classifiers, and linear models all are machine learning models, despite the fact that nobody would consider them to be intelligent.
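
            For instance, a minimal least-squares “learner” (a sketch, assuming numpy) - the “learning” here is nothing more than fitting parameters to data:

            ```python
            import numpy as np

            rng = np.random.default_rng(0)
            X = rng.normal(size=(100, 3))                      # 100 samples, 3 features
            true_w = np.array([2.0, -1.0, 0.5])
            y = X @ true_w + rng.normal(scale=0.1, size=100)   # noisy targets

            w, *_ = np.linalg.lstsq(X, y, rcond=None)          # "training" = optimization
            print(w)  # close to [2.0, -1.0, 0.5], learned from data alone
            ```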

    • earphone843@sh.itjust.works · 6 days ago (edited)

      For coding it’s also useful for doing the menial grunt work that’s easy but just takes time.

      You’re not going to replace a senior dev with it, of course, but it’s a great tool.

      My previous employer was using AI for intelligent document processing, and the results were absolutely amazing. They did sink a few million dollars into getting the LLM fine tuned properly, though.

  • finitebanjo@lemmy.world · 6 days ago

    YES

    YES

    FUCKING YES! THIS IS A WIN!

    Hopefully they curtail their investments and stop wasting so much fucking power.

    • Echo Dot@feddit.uk · 6 days ago

      I think the best way I’ve heard it put is “if we absolutely have to burn down a forest, I want warp drive out of it. Not a crappy python app”

  • surph_ninja@lemmy.world · 6 days ago

    That’s standard for emerging technologies. They tend to be loss leaders for quite a long period in the early years.

    It’s really weird that so many people gravitate to anything even remotely critical of AI, regardless of context or even accuracy. I don’t really understand the aggressive need for so many people to see it fail.

    • andros_rex@lemmy.world · 6 days ago (edited)

      Because there have already been multiple AI bubbles (e.g., ELIZA - I had a lot of conversations with FREUD running on an Apple IIe). It’s also been falsely presented as basically “AGI.”

      AI models trained to help doctors recognize cancer cells - great, awesome.

      AI models used as the default research tool for every subject - very, very, very bad. It’s also so forced - and because it’s forced, I routinely see that it has generated absolute, misleading horseshit in response to my research queries. But your average Joe will take that on faith, and your high schooler will grow up thinking Columbus discovered Colombia or something.

    • Furbag@lemmy.world · 6 days ago

      I just can’t see AI tools like ChatGPT ever being profitable. It’s a neat little thing that has flaws but generally works well, but I’m just putzing around in the free version. There’s no dollar amount that could be ascribed to the service that it provides that I would be willing to pay, and I think OpenAI has their sights set way too high with the talk of $200/month subscriptions for their top of the line product.

    • Blakdragon@lemmy.ca · 6 days ago

      For me personally, it’s because it’s been so aggressively shoved in my face in every context. I never asked for it, and I can’t escape it. It actively gets in my way at work (github copilot) and has already re-enabled itself at least once. I’d be much happier to just let it exist if it would do the same for me.

  • sighofannoyance@lemmy.world · 6 days ago

    And crashing the markets in the process… At the same time, they came out with a bunch of mumbo jumbo and sci-fi babble about having a million-qubit quantum chip… 😂

    • seejur@lemmy.world · 6 days ago

      Tech is basically trying to push up stocks with one hype idea after another. Social media bubble about to burst? AI! AI about to burst? Quantum! I’m sure that when people start realizing quantum computing is another smokescreen, a new moronic idea will start gaining steam from all those LinkedIn “luminaries”.

      • FauxLiving@lemmy.world · 6 days ago

        Quantum computation is a lot like fusion.

        We know how it works and we know that it would be highly beneficial to society but, getting it to work with reliability and at scale is hard and expensive.

        Sure, things get overhyped because capitalism, but that doesn’t make the technology worthless… It just shows how our economic system rewards lies and misleading people for money.

        • seejur@lemmy.world · 6 days ago

          It can also solve only a limited set of problems. People are under the impression that they can suddenly game at 10K with full path-traced ray tracing if they have a quantum CPU, while in reality, for 99.9% of problems, it’s only as fast as a normal CPU.

          • FauxLiving@lemmy.world · 6 days ago

            That doesn’t make it worthless.

            People are often wrong about technology, that’s independent of the technology’s usefulness. Quantum computation is incredibly useful for the applications that require it, things that are completely impossible to calculate with classical computers can be done using quantum algorithms.

            This is true even if there are people on social media who think that it’s a new graphics card.

            • seejur@lemmy.world · 6 days ago

              Absolutely, but its applications are not as widespread as someone not into science might think. The only way it might actually impact the average Joe is cryptography, I think.

  • Kokesh@lemmy.world · 6 days ago

    It is fun to generate some stupid images a few times, but you can’t trust that “AI” crap with anything serious.

    • Encrypt-Keeper@lemmy.world · 6 days ago

      I was just talking about this with someone the other day. While it’s truly remarkable what AI can do, its margin for error is just too big for most if not all of the use cases companies want to use it for.

      For example, I use the Hoarder app which is a site bookmarking program, and when I save any given site, it feeds the text into a local Ollama model which summarizes it, conjures up some tags, and applies the tags to it. This is useful for me, and if it generates a few extra tags that aren’t useful, it doesn’t really disrupt my workflow at all. So this is a net benefit for me, but this use case will not be earning these corps any amount of profit.
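
      Roughly, the pipeline looks like this sketch (against Ollama’s local REST API; the model name and prompt wording are my own choices, not what Hoarder actually does):

      ```python
      import requests

      def summarize_and_tag(page_text: str) -> str:
          # Ask a locally running Ollama model for a summary plus tags.
          resp = requests.post(
              "http://localhost:11434/api/generate",
              json={
                  "model": "llama3",  # whatever local model you've pulled
                  "prompt": "Summarize this page in two sentences, then "
                            "suggest five short tags:\n\n" + page_text,
                  "stream": False,
              },
              timeout=120,
          )
          resp.raise_for_status()
          return resp.json()["response"]

      print(summarize_and_tag("Lemmy is a link aggregator for the fediverse..."))
      ```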

      On the other end, you have Google’s Gemini, which now gives you an AI-generated answer to your queries. The point of this is to aggregate data from several sources within the search results and return it to you, saving you the time of having to look through several search results yourself. And like 90% of the time, it actually does a great job. The problem lies in the goal, which is to save you from having to check individual sources, and in its reliability rate. If I google 100 things and Gemini correctly answers 99 of them but completely hallucinates the 100th, then all 100 times I have to check its sources and verify that what it said was correct. Which means I’m now back to just… you know… looking through the search results one by one, like I would have anyway without the AI.
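
      The back-of-the-envelope version (toy numbers, mine):

      ```python
      # Even a 1% hallucination rate forces you to verify every answer,
      # because you can't tell in advance which one is the bad one.
      queries = 100
      check_cost = 1.0     # effort to verify one AI answer against its sources
      search_cost = 1.0    # effort to just read the search results yourself

      time_with_ai = queries * check_cost      # must check all 100 answers
      time_without_ai = queries * search_cost  # the baseline you hoped to beat
      print(time_with_ai >= time_without_ai)   # True: the savings evaporate
      ```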

      So while AI is far from useless, it can’t now - and never will be able to - be relied on for anything important, and that’s where the money to be made is.

      • dubyakay@lemmy.ca · 6 days ago

        Even your manual search results may lead you to incorrect sources, selection bias for what you want to see, heck, even AI-generated slop, so the AI-generated results are just another layer on top. Link-aggregating search engines are slowly becoming useless at this rate.

        • Encrypt-Keeper@lemmy.world · 6 days ago

          While that’s true, the thing that stuck out to me is not even that the AI was misled by finding AI slop, or by somebody falsely asserting something. I googled something with a particular yes-or-no answer: “Does X technology use Y protocol?” The AI came back with “Yes it does, and here’s how it uses it.” Upon visiting the reference page for that answer, I found documentation for that technology that explained very clearly that X technology does NOT use Y protocol, and then went into detail on why it doesn’t. So even when everything lines up and the answer is clear and unambiguous, the AI can give you an entirely fabricated answer.

          • merc@sh.itjust.works · 3 days ago

            What’s really awful is that it seems like they’ve trained these LLMs to be “helpful”, which means to say “yes” as much as possible. But, that’s the case even when the true answer is “no”.

            I was searching for something recently. Most people with similar searches were trying to do X; I was trying to do Y, which differed in subtle but important ways. There are tons of resources out there showing how to do X, but none showing how to do Y. The “AI” answer gave me directions for doing Y by showing me the procedure for doing X, with certain parts changed so that they matched Y instead. It doesn’t work like that.

            Like, imagine a recipe that not just uses sugar but that relies on key properties of sugar to work, something like caramel. Search for “how do I make caramel with stevia instead of sugar” and the AI gives you the recipe for making caramel with sugar, just with “stevia” replacing every mention of “sugar” in the original recipe. Absolutely useless, right? The correct answer would be “You can’t do that, the properties are just too different.” But, an LLM knows nothing, so it is happy just to substitute words in a recipe and be “helpful”.