• k0e3@lemmy.ca · 1 month ago

    I hate when they make these “holographic screen” images and the screen isn’t mirrored. If the guy that’s working on it is looking at it normally, then it should be mirrored for the camera.

    • Doorknob@lemmy.world · 1 month ago

      It’s actually a really good representation of how execs are viewing AI. It’s a bunch of meaningless graphs and pictures of robots with the word ‘AI’ sprinkled all over the place, the whole thing is backwards for the worker, and it’s imaginary.

  • Kissaki@feddit.org · 1 month ago

    I’m confused by the article suddenly changing to seemingly other semi-related topics and pieces.

  • Kissaki@feddit.org · 1 month ago

    for example, “have seen revenues jump from zero to $20 million in a year,” he said. “It’s because they pick one pain point, execute well, and partner smartly with companies who use their tools,” he added.

    Sounds like they were able to sell their AI services. That doesn’t really measure AI success, only product market success.

    Celebrating a revenue jump from zero, presumably because they did not exist before, is… quite surprising. It’s not like they became more efficient thanks to AI.

  • Rooty@lemmy.world · 1 month ago

    Makes a planet burning bullshit machine.

    Does not have any monetization plans.

    “I’m telling you guys, it’s only a matter of time before investor money starts rolling in”

    • Blackmist@feddit.uk · 1 month ago

      Oh, the investor money already rolled in.

      It’s just that investors also want it to roll back out again. That’s what they’re going to struggle with.

  • ikt@aussie.zone · 1 month ago

    Interesting article

    The data also reveals a misalignment in resource allocation. More than half of generative AI budgets are devoted to sales and marketing tools, yet MIT found the biggest ROI in back-office automation—eliminating business process outsourcing, cutting external agency costs, and streamlining operations.

    That surprises me. With marketing and sales being the main users of AI, I thought back-office automation was for sure going to be number 1 by far.

    Workforce disruption is already underway, especially in customer support and administrative roles. Rather than mass layoffs, companies are increasingly not backfilling positions as they become vacant. Most changes are concentrated in jobs previously outsourced due to their perceived low value.

    So the number 1 user is sales/marketing but it’s back office admin jobs that are most impacted?

    I guess it can be hard to monitor how AI impacts a business as well. I use it multiple times a day, but it’s not like I can put down “I searched and found information more quickly” as a way that the company made more money.

    • RememberTheApollo_@lemmy.world · 1 month ago

      Think of how many ads you see and hear. From pharmaceuticals to entertainment industry ads, the amount of money poured into sales and marketing is absurd. If there’s any one massively under-accounted-for scourge on modern society and finance, it’s ad agencies consuming huge amounts of budget and bandwidth and just being constantly in your face. Pharma alone spends ~$20Bn on ads.

    • manxu@piefed.social · 1 month ago

      I read that slightly differently: the jobs “disrupted” away are customer support, generally outsourced due to their perceived low value = phone support. Basically, phone customer support is being terminated in favor of chat bots.

      • ikt@aussie.zone · 1 month ago

        There would have to be a trade-off for sure. If the quality of your customer support goes down, people will simply move to a provider that has better support, unless the prices are so cheap they stay because they don’t value good support as much as cheap prices. So you’d hope their AI bots help reduce the amount of queries people have, and then they get passed through to an actual human if there’s a real issue.

        I’m currently on a $15 AUD a month plan for 18GB of data that rolls over every month, with a company called ‘Amaysim’ in Australia. Their customer service has actual humans, but they’re actually about as useful as a chatbot (not an AI chatbot, the old-school kind that can’t do shit). But I don’t leave, because tbf I’ve only interacted with them like 2 times in 15 years, and I value the hundreds of dollars I’ve saved over their crappy customer support.

        With that said, I’ve purchased some premium, top-of-the-line products lately revolving around renewables, and their customer service wasn’t much better.

    • chobeat@lemmy.ml (OP) · 1 month ago

      > That surprises me, marketing and sales being the main user of AI, I thought the back-office automation for sure was going to be by far number 1

      Generative AI is a bullshit generator. Bullshit in your marketing=good. Bullshit in your backend=bad.

      > So the number 1 user is sales/marketing but it’s back office admin jobs that are most impacted?

      GenAI is primarily adopted to justify mass layoffs and only secondarily to create business value. It’s the mass layoffs that drive AI adoption, not the other way around.

      • ikt@aussie.zone · 1 month ago

        > Generative AI is a bullshit generator

        Bro did you post this article thinking it was bad but only read the headline?

    • Eagle0110@lemmy.world · 1 month ago

      This is really interesting but it doesn’t surprise me.

      AI and implementations of AI tend to be inherently good at optimizing the efficiency of a system, and they can be exceptionally good at it. Look at Nvidia DLSS real-time AI upscaling for video games, for example: it’s fundamentally a conventional TAA pipeline (a normally computationally expensive anti-aliasing technique) with just one step of the pipeline super-boosted in efficiency by AI, and it’s so efficient that it can be used to make the image even clearer than the original, in real time. And many of the actually practical machine learning systems that have demonstrated good results in scientific and engineering applications are fundamentally conventional algorithms whose efficiency is boosted so that the computation takes merely hours instead of many decades, which is what made them practical. It’s not surprising the same approach can be used for business systems and give you actually good results.

      But fortunately, the majority of the snake oil marketing for AI products and services seems to focus on claims of generating new content with AI, which is exactly what the marketing people would want LMAO

  • graycube@lemmy.world · 1 month ago

    We also don’t know the true cost of these tools since most AI service providers are still operating at a loss.

    • Thorry84@feddit.nl · 1 month ago

      Not simply operating at a loss: they’re absolutely dumping their prices, giving away their products for almost nothing to gain market share. They are burning money at an impressive rate, just for some imaginary payoff in the future.

      • mmmac@lemmy.zip · 1 month ago

        This is true with most VC backed tech companies, not just AI

        • Thorry84@feddit.nl · 1 month ago

          What’s your point?

          Sure, that’s the point of venture capital: throw some money at the wall and see what sticks. You’d expect most of them to fail, but the one good one makes up for it.

          However, in this case it isn’t people throwing some money at startups. It’s large companies like Microsoft throwing trillions into this new tech. And it’s not just the one company looking for a little niche to fill; all of them are all in, flooding the market with random shit.

          Uber and Spotify are maybe not the best examples to use, although they are examples of people throwing away money in hopes of some sort of payoff (even though they both made a small profit recently, nowhere near digging themselves out of the hole). They are, however, problematic in the way they operate. Uber’s whole deal is exploiting workers, turning employees into contractors just to exploit them, and skirting regulations around taxis for the most part. They have been found to be illegal in a lot of civilised countries and had to change the way they do business there, limit their services, or not operate in those countries at all. Spotify is music, and the music industry is a whole thing I won’t get into.

          The current AI bubble isn’t comparable to venture capital investing in some startups. It’s more comparable to the dotcom bubble, where the industry is perceived to move in a certain direction: either companies invest heavily and get with the times, or they die. And smart investors put their money into anything with the new tech, since that’s where the money is going to be made. Back then the new tech was the internet; now the new tech is AI. We found out the hard way that it was total BS. The internet wasn’t the infinite money glitch people thought it was, and we all paid the price.

          However, the scale of that bubble was small compared to this new AI bubble. And the internet was absolutely a transformative technology, changing the way we work and live forever. It’s too early to say if this LLM-based “AI” technology will do the same, but I doubt it. The amount of BS thrown around these days is too high. As someone with a somewhat good grasp of how LLMs actually work on a fundamental level: the promises made aren’t backed up by facts, and the amount of money being put into this isn’t anywhere near even an optimistic future payoff.

          If you want a simple, oversimplified comparison: this AI boom is more like people throwing money at Theranos than anything else.

          • ikt@aussie.zone · 1 month ago

            Really you heard this:

            AI Took My Job 徐铭轩MaoMao

            https://suno.com/song/14572e0f-a446-4625-90ff-3676a790a886

            And went wow, a song that fuckin slaps made by a computer and weren’t impressed?

            You saw this:

            Age of Beyond

            https://www.youtube.com/watch?v=vp7xoPeWzEw

            And went a fuckin sci-fi movie trailer made in 2 months, what a piece of shit this AI is eh?

            To me I’m fuckin stunned but that’s just me, on top of this we’re only in year like 5 of AI going mainstream, where will it be in 10 years? 20 years?

            • Thorry84@feddit.nl · 1 month ago

              Well maybe one person is a little bit more impressed by some pretty pictures than another person. I really don’t see what that has to do with a company like Microsoft putting their money into this? They don’t make songs or movie trailers.

              > To me I’m stunned but that’s just me, on top of this we’re only in year like 5 of AI going mainstream, where will it be in 10 years? 20 years?

              This is a common trap a lot of people fall into. See what improvements have been made the last couple of years, who knows where it will end up right? Unfortunately, reality doesn’t work like that. Improvements made in the past don’t guarantee improvements will continue in the future. There are ceilings that can be run into and are hard to break. There can even be hard limits that are impossible to break. There might be good reasons to not further develop promising technologies from the past into the future. There is no such thing as infinite growth.

              Edit:

              Just checked out that song, man that song is shit…

              “My job vanished without lift.” What does that even mean? That’s not even English.

              And that’s just one of the dozens of issues I’ve seen in 30 secs. You are kidding yourself if you think this is the future, that’s one shit future bro.

              • ikt@aussie.zone · 1 month ago

                > Well maybe one person is a little bit more impressed by some pretty pictures than another person.

                Fair enough. Quick question for you: make me an image similar to the screenshot I took of the movie (a tree in the middle of a futuristic city) without using AI.

                I’m not asking you to make a movie or audio or anything like this, just make something that resembles a futuristic city, maybe you can do it in gimp? Let me know how you go :)

                • Thorry84@feddit.nl · 1 month ago

                  All right, we are done here. I’ve tried to engage with you in a fair and honest way. Giving you the benefit of the doubt and trying to respond to the points you are trying to make.

                  But it appears you are just a troll or an idiot, either way I’m done.

            • absentbird@lemmy.world · 1 month ago

              The gains in AI have been almost entirely in compute power and training, and those gains have run into powerful diminishing returns. At the core it’s all still running the same Markov chains as the machine learning experiments from the dawn of computing; the math is over a hundred years old and basically unchanged.

              For us to see another leap in progress we’ll need to pioneer new calculations and formulate different types of thought, then find a way to integrate that with large transformer networks.
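
For what it’s worth, the kind of Markov-chain text generation the comment alludes to fits in a few lines; a toy sketch for illustration (the corpus and function names are made up), not a claim about how transformer models are implemented:

```python
import random
from collections import defaultdict

def train(tokens, order=1):
    """Count which token follows each context of `order` tokens."""
    model = defaultdict(list)
    for i in range(len(tokens) - order):
        context = tuple(tokens[i:i + order])
        model[context].append(tokens[i + order])
    return model

def generate(model, start, n=10, seed=0):
    """Walk the chain: repeatedly sample a successor of the current context."""
    rng = random.Random(seed)
    out = list(start)
    for _ in range(n):
        successors = model.get(tuple(out[-len(start):]))
        if not successors:
            break
        out.append(rng.choice(successors))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran".split()
model = train(corpus)
print(generate(model, ("the",), n=5))
```

The math is indeed old; the debate in the thread is about whether scaling this family of ideas keeps paying off, not whether it works at all.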

              • ikt@aussie.zone · 1 month ago

                > At the core it’s all still running the same Markov chains as the machine learning experiments from the dawn of computing

                Sure, but tanks today at their core still look like tanks from WW2; when things work well, they work well. When did Mixture of Experts, for example, start to apply to deep learning? Can you think of anything else outside of compute and training that helps AI? What about building a search engine around the ability to fetch and summarise sources (Perplexity)?

                > For us to see another leap in progress we’ll need to pioneer new calculations and formulate different types of thought, then find a way to integrate that with large transformer networks.

                To be fair, AI is already incredible: AI-generated music/video/images are already getting billions of views, coding agents are already generating millions of lines of code every day, and AI is already being utilised heavily in healthcare, learning, translation, military… this was posted earlier today:

                Ukrainian sniper pulls off record 4-km shot that killed two Russians. Yes, it took AI

                Rifle Used: 14.5 mm Snipex Alligator, an anti-materiel rifle originally meant to destroy equipment, not personnel. Its official effective range is 2,000 m—only half the distance achieved in this shot.

                • Guidance Tools: The sniper used AI-assisted targeting and drone surveillance to calibrate the record-breaking strike.

                https://euromaidanpress.com/2025/08/17/ukrainian-sniper-4km-ai-shot/

                Now whether we get to AGI is a whole other thing, that I agree would need a major leap

                • absentbird@lemmy.world · 1 month ago

                  Mixture of experts has been in use since 1991, and it’s essentially just a way to split up the same process as a dense model.
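
The gating idea behind mixture of experts (going back to the early 1990s work mentioned above) can be sketched in a few lines of numpy; a toy illustration with made-up dimensions, not a production MoE layer:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Two tiny "experts" (plain linear maps) and a gating network.
experts = [rng.standard_normal((4, 3)) for _ in range(2)]
gate_w = rng.standard_normal((4, 2))

def moe_forward(x):
    """Weight each expert's output by the gate's softmax score for x."""
    weights = softmax(x @ gate_w)                  # shape (2,): one score per expert
    outputs = np.stack([x @ w for w in experts])   # shape (2, 3): each expert's output
    return np.tensordot(weights, outputs, axes=1)  # shape (3,): weighted combination

y = moe_forward(rng.standard_normal(4))
print(y.shape)  # (3,)
```

Modern MoE layers sparsify this (route each token to only the top-k experts), but the dense weighted combination above is the original formulation.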

                  Tanks are an odd comparison, because not only have they changed radically since WW2, to the point that many crew positions have been entirely automated, but also because the role of tanks in modern combat has been radically altered since then (e.g. by the proliferation of drone warfare). They just look sort of similar because of basic geometry.

                  Consider the current crop of LLMs as the armor that was deployed in WW1, we can see the promise and potential, but it has not yet been fully realized. If you tried to match a WW1 tank against a WW2 tank it would be no contest, and modern armor could destroy both of them with pinpoint accuracy while moving full speed over rough terrain outside of radar range (e.g. what happened in the invasion of Iraq).

                  It will take many generational leaps across many diverse technologies to get from where we are now to realizing the full potential of large language models, and we can’t get there through simple linear progression any more than tanks could just keep adding thicker armor and bigger guns, it requires new technologies.

          • ikt@aussie.zone · 1 month ago

            I always find it odd with how much confidence you guys talk about AI without ever using it

            • ag10n@lemmy.world · 1 month ago

              lol, I have llama.cpp and ollama setup on a separate pc just so I can understand and discuss from experience.

      • Buffalox@lemmy.world · 1 month ago

        The same was true for YouTube in the beginning, they operated at a loss, and when people were hooked on the service, they monetized it.

        • frongt@lemmy.zip · 1 month ago

          YouTube wasn’t created to make money, it was created to watch the wardrobe malfunction.

      • altphoto@lemmy.today · 1 month ago

        A future where we don’t have jobs so the rich can make more money by selling us stuff? But I won’t have money to pay for stuff! Hmmm!

        • Zron@lemmy.world · 1 month ago

          All MBAs and CEOs are like puppies chasing their own tails.

          They want the growth because number go up good. They’ll do anything for number go up. And when number go up, they get the good and then they need to focus on next number go up.

          They have no long term plan other than number go up. For the next few quarters, they can slap AI on anything and number go up. What happens if AI takes all the non manual labor jobs? Or if it turns out AI is useless and they wasted billions on snake oil? They don’t know, cause they were thinking about number go up right now, not number go up later.

          Our economy is a farce.

        • SoftestSapphic@lemmy.world · 1 month ago

          The real reason is they want enough money pumped into AI so someone can automate fascism.

          That’s seriously the plan

          Fucking clown world

      • corsicanguppy@lemmy.ca · 1 month ago

        And, in doing so, they’ve set the market price at that value for the service they advertise, which is more than they deliver already.

        When Ai enters the Valley of Discontent, the price it can set for what it actually offers will be even less than it is now.

      • scarabic@lemmy.world · 1 month ago

        It’s hard to imagine that gaining market share is even meaningful right now. There’s such a profusion of stuff out there. How much does it actually mean if someone is using your product today, I wonder?

  • Danitos@reddthat.com · 1 month ago

    This is happening at my company. They gave us 6 months to build an AI tool to replace a non-AI tool that has been very well built and tested for 6 years, and works perfectly well. The AI tool has some amazing features, but it could never replace the good old tool.

    The idiot in charge of the project has such a bad vision for the tool, yet likes to overhype and oversell it so much.

    • UnderpantsWeevil@lemmy.world · 1 month ago

      > The idiot in charge of the project has such a bad vision on the tool, yet likes to overhype it and oversell it so much.

      AI in a nutshell.

      A shame, because the underlying technology - with time and patience and less of an eye towards short term profits - could be very useful in sifting large amounts of disorganized information. But what we got was so far removed from what anyone asked for.

      • Derpgon@programming.dev · 1 month ago

        Management was planning to implement Google Vertex (an AI search platform), but since we already have all our data in Elasticsearch and it supports vectors, I said why not try to implement it myself. With an integrated GPU and a very small model, I was able to create a working POC, and it is gonna be - not exaggerating - 50 times cheaper.
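
A minimal sketch of what the request bodies for that kind of setup could look like, assuming Elasticsearch 8.x approximate kNN search; the index name, field names, and the 384-dim embedding size (a MiniLM-class sentence-embedding model) are placeholder assumptions, not details from the comment:

```python
import json

EMBED_DIMS = 384  # assumed small embedding model output size

# Mapping for an index with a dense_vector field (PUT /docs)
mapping = {
    "mappings": {
        "properties": {
            "text": {"type": "text"},
            "embedding": {
                "type": "dense_vector",
                "dims": EMBED_DIMS,
                "index": True,
                "similarity": "cosine",
            },
        }
    }
}

def knn_query(query_vector, k=10, num_candidates=100):
    """Body for POST /docs/_search using Elasticsearch 8.x approximate kNN."""
    return {
        "knn": {
            "field": "embedding",
            "query_vector": query_vector,
            "k": k,
            "num_candidates": num_candidates,
        },
        "_source": ["text"],
    }

body = knn_query([0.0] * EMBED_DIMS, k=5)
print(body["knn"]["k"])  # 5
```

The embedding itself would come from whatever small local model the POC used; only the Elasticsearch side is sketched here.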

        • k0e3@lemmy.ca · 1 month ago

          Don’t tell management. Start a new company then sell them what you made.

      • regedit@lemmy.zip · 1 month ago

        Capitalism strikes again! All the good generative AI could do (and sometimes does), but some capitalist made an email sound less like a soulless corporate turd, and it was to the moon with whatever state the tech was at! Rich people have no creativity, imagination, or understanding of the tech. They’re always looking for ways to remove labor costs and make those phat stacks! We could have used generative AI to handle a lot of the shitty, mundane stuff at a faster rate, but no, they chose to replace the artists’ creations so they didn’t have to pay for the cost of labor.

        • UnderpantsWeevil@lemmy.world · 1 month ago

          As an enhancement to an existing suite of diagnostic tools, certainly.

          Not as a stand-in for an oncology department, though.

          • willington@lemmy.dbzer0.com · 1 month ago

            As an assist to an actual oncologist, only.

            I can see AI as a tool in some contexts, doing some specific tasks better than an unassisted person.

            But as a replacement for people, AI is a dud. I would rather be alone than have an AI gf. And yes, I am taking trauma and personal and cultural baggage into account. LLMs are also a product of our culture for the most part, so they will have our baggage anyway. But even though in principle they could be trained not to have certain kinds of baggage, I would still rather deal with a person, save for the simplest and lowest-stakes interactions.

            If we want better people, we need to enfranchise them and remove most paywalls from the world. Right now the world, instead of being inviting, is bristling with physical, cultural, and virtual fences, saying to us, “you don’t belong and aren’t welcome in 99.99% of the space, and the other 0.01% will cost you.” Housing for now is only a privilege. In a world like that, it’s a miracle people are as decent as they are. If we want better people, we have to deliberately, on purpose, choose broad-based human flourishing as a policy objective, and be ruthless to any enemies of that objective. No amnesty for the billionaires and wannabe billionaires. Instead, they are trying to shove AI/LLMs and virtual worlds down our throats as replacements for an actually decent and inviting world.

      • Boomer Humor Doomergod@lemmy.world · 1 month ago

        As someone who’s had to analyze probably a billion lines of log files in my career, having an AI do at least some sifting would be pretty great.

        Or one that watches a Grafana dashboard and figures out some obscure, long term pattern I can’t see that’s causing an annoying problem.
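
The “sifting” half can start without any model at all, e.g. by collapsing log lines into templates and surfacing the rare ones; a stdlib-only sketch, where the normalization rules are made-up placeholders that would need tuning for real logs:

```python
import re
from collections import Counter

def template(line):
    """Collapse volatile tokens (hex first, then decimal runs) so similar lines group."""
    line = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", line)
    line = re.sub(r"\d+", "<NUM>", line)
    return line

def rare_templates(lines, threshold=1):
    """Templates seen at most `threshold` times: likely the interesting ones."""
    counts = Counter(template(l) for l in lines)
    return [t for t, c in counts.items() if c <= threshold]

logs = [
    "GET /api/user/42 200 12ms",
    "GET /api/user/7 200 9ms",
    "GET /api/user/13 200 11ms",
    "OOM killed worker pid 3141",
]
print(rare_templates(logs))  # ['OOM killed worker pid <NUM>']
```

An LLM could then be pointed only at the rare templates instead of the whole log stream, which is the cheap kind of sifting the comment wishes for.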

  • FlashMobOfOne@lemmy.world · 1 month ago

    I hope you’re all keeping some money set aside for when the LLM bubble pops. It could end up being the best time to invest at a discount since March 2020.

    • bier@feddit.nl · 1 month ago

      When Trump got elected I sold some of my stocks. My investments are also my retirement funds, so I don’t need them for a while. I’m waiting until the next crash starts, or something else that’s pretty bad (war, rogue AI, whatever). If the market crashes I can immediately step in.

      • FlashMobOfOne@lemmy.world · 1 month ago

        Oil stocks were down 90% in March 2020. That’s what I went with. You can profit off a lot of things if you’re willing to hold for a few years.

        • bier@feddit.nl · 1 month ago

          I once had stock in Shell, mainly because it’s a Dutch company (I’m Dutch) and it’s a pretty stable investment. But it felt very wrong to invest in oil, so after a few months I sold it (thankfully with a little profit).

          Personally, I just don’t want to invest in anything I think fucks up the world. It’s also why I don’t want to invest in Meta and some other tech companies.

          • ubergeek@lemmy.today · 1 month ago

            > Personally I just don’t want to invest in anything I think fucks up the world.

            For better or worse, then you won’t want to invest in, well, anything. Capitalism is always fucking up the world.

            That said, most don’t have a choice about it, if they ever want to retire.

            The best I can get you is investing in Seed Commons, or something similar: https://seedcommons.org/invest

  • phutatorius@lemmy.zip · 1 month ago

    It depends on the objectives. They were successful at selling useless crap to fools.

    Frankly, I don’t believe that even 5% were successful by any objective criteria.

  • Buffalox@lemmy.world · 1 month ago

    As a non native English speaker, I found the “pilot” thing a bit confusing. But:

    pilot = pilot program

    And then it made sense.

    Anyways I think it’s not so much the 95% that fail that matter, it’s the 5% that succeed.

    • UnderpantsWeevil@lemmy.world · 1 month ago

      > Anyways I think it’s not so much the 95% that fail that matter, it’s the 5% that succeed.

      Succeeding in what is also a critical point.

      Succeeding in dramatically improving the library sciences? In rapidly sequencing and modeling data in chemistry or biology? In language translation? Cool. Cool. Cool.

      Succeeding in slop-ifying mass media? In convincingly fabricating data to dupe industry professionals and regulatory officials? In jamming up my phone with automated sales calls? Less cool.

      • Buffalox@lemmy.world · 1 month ago

        Correct, not all things that matter are positive.

        But it’s the 5% we need to focus on.