• Saledovil@sh.itjust.works
    3 hours ago

    It’s safe to assume that any metric they don’t disclose is quite damning to them. Plus, these guys don’t really care about the environmental impact, or what we tree-hugging environmentalists think. I’m assuming the only group they’re scared of upsetting right now is investors. The thing is, even if you don’t care about the environment, the problem with LLMs is how poorly they scale.

    An important concept when evaluating how something scales are the marginal values, chiefly marginal utility and marginal expenses. Marginal utility is how much utility you get from one more unit of whatever; marginal expense is how much it costs to produce one more unit. What an LLM produces is the probability that a token T follows a prefix Q, i.e. P(T|Q) (read: probability of T, given Q). This is calculated for all known tokens, and then, based on these probabilities, one token is chosen at random. That token is appended to the prefix, and the process repeats until the LLM produces a sequence which indicates that it’s done talking.
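
    A minimal sketch of that loop, assuming a hypothetical model(prefix) function that returns P(T|Q) for every known token (not any real API, purely an illustration):

    ```python
    import random

    def generate(model, prefix, end_token="<eos>", max_len=200):
        tokens = list(prefix)
        for _ in range(max_len):
            probs = model(tokens)                       # hypothetical: {token: P(token | prefix)}
            candidates, weights = zip(*probs.items())
            t = random.choices(candidates, weights)[0]  # pick one token at random, weighted by P(T|Q)
            tokens.append(t)                            # append it to the prefix
            if t == end_token:                          # stop once the model signals it's done talking
                break
        return tokens
    ```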

    If we now imagine the best possible LLM, the calculated value P_calc(T|Q) would equal the actual value P(T|Q). It’s worth noting that this already displays a limitation of LLMs: even with this ideal LLM, we’re just a few bad dice rolls away from saying something dumb, which then pollutes the context. The larger we make the LLM, the closer its result gets to the actual value. A potential way to measure this precision would be to take the difference between P_calc(T|Q) and P(T|Q) and count the leading zeroes, essentially counting the number of digits we got right. The thing is, each additional digit only provides a tenth of the utility of the digit before it, while the cost of each additional digit grows exponentially.
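
    To make the “counting leading zeroes” idea concrete, with made-up numbers purely for illustration:

    ```python
    import math

    p_true = 0.4316    # the actual probability P(T|Q)
    p_calc = 0.4319    # what the model computes, P_calc(T|Q)

    error = abs(p_calc - p_true)       # 0.0003 -> three leading zeroes
    digits_right = -math.log10(error)  # roughly 3.5 digits of precision
    ```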

    So, exponentially decaying marginal utility meets exponentially growing marginal expenses. Which is really bad for companies that try to market LLMs.

    • Jeremyward@lemmy.world
      3 hours ago

      Well, I mean, they also kinda suck. I feel like I spend more time debugging AI code than I get working code out of it.

      • SkunkWorkz@lemmy.world
        27 minutes ago

        I only use it if I’m stuck. Even if the AI code is wrong, it often pushes me in the right direction to find the correct solution to my problem. Like pair programming, but a bit shitty.

        The best way to use these LLMs for coding is to never use the generated code directly, and to atomize your problem into smaller questions that you ask the LLM.

      • squaresinger@lemmy.world
        1 hour ago

        That’s actually true. I read some research on that and your feeling is correct.

        Can’t be bothered to google it right now.

    • Tollana1234567@lemmy.today
      4 hours ago

      Those are his lying/making-things-up hand gestures. It’s the same thing Trump does with his hands when he’s lying or exaggerating; he does the weird accordion hands.

  • threeduck@aussie.zone
    7 hours ago

    All the people here chastising LLMs for resource wastage, I swear to god if you aren’t vegan…

    • Bunbury@feddit.nl
      2 hours ago

      Whataboutism isn’t useful. Nobody is living the perfect life. Every improvement we can make towards a more sustainable way of living is good. Everyone needs to start somewhere and even if they never move to make more changes at least they made the one.

    • Saledovil@sh.itjust.works
      3 hours ago

      Animal agriculture has significantly better utility and scaling than LLMs. So, it’s not hypocritical to be opposed to the latter but not the former.

    • lowleekun@ani.social
      3 hours ago

      Dude, wtf?! You can’t just go around pointing out people’s hypocrisy. Companies killing the planet is big bad.

      People joining in? Dude just let us live!! It is only animals…

      big /s

    • UnderpantsWeevil@lemmy.world
      5 hours ago

      I mean, they’re both bad.

      But also, “Throw that burger in the trash I’m not eating it” and “Uninstall that plugin, I’m not querying it” have about the same impact on your gross carbon emissions.

      These are supply-side problems in industries that receive enormous state subsidies. Hell, the single biggest improvement to our agriculture policy was when China stopped importing US pork products. So, uh… once again, thank you China for saving the planet.

      • 3abas@lemmy.world
        5 hours ago

        It’s not, you’re just personally insulted. The livestock industry is responsible for about 15% of human caused greenhouse gas emissions. That’s not negligible.

        • k0e3@lemmy.ca
          3 hours ago

          So, I can’t complain about any part of the remaining 85% if I’m not vegan? That’s so fucking stupid. Do you not complain about microplastics because you’re guilty of using devices with plastic in them to type your message?

      • stratoscaster@lemmy.world
        6 hours ago

        What is it with vegans and comparing literally everything to veganism? I was in another thread and it was compared to genocide, rape, and climate change all in the same thread. Insanity

  • fuzzywombat@lemmy.world
    7 hours ago

    Sam Altman has gone into PR and hype overdrive lately. He is practically everywhere, trying to distract the media from seeing the truth about LLMs. GPT-5 has basically proved that we’ve hit a wall and that the belief that LLMs will just scale linearly with the amount of training data is false. He knows the AI bubble is bursting and he is scared.

    • Saledovil@sh.itjust.works
      3 hours ago

      He’s also already admitted that they’re out of training data. If you’ve wondered why a lot more websites will run some sort of verification when you connect, it’s because there’s a desperate scramble to get more training data.

    • Tollana1234567@lemmy.today
      4 hours ago

      MS already disclosed that their AI doesn’t make money at all; in fact, it’s costing too much. Of course he’s freaking out.

    • Saledovil@sh.itjust.works
      3 hours ago

      Current genAI? Never. There’s at least one breakthrough needed to build something capable of actual thinking.

    • xthexder@l.sw0.com
      13 hours ago

      Most certainly it won’t happen until after AI has developed a self-preservation bias. It’s too bad the solution is turning off the AI.

  • Optional@lemmy.world
    16 hours ago

    Photographer1: Sam, could you give us a goofier face?

    *click* *click*

    Photographer2: Goofier!!

    *click* *click* *click* *click*

    • cenzorrll@piefed.ca
      14 hours ago

      He looks like someone in a cult. Wide open eyes, thousand yard stare, not mentally in the same universe as the rest of the world.

  • scintilla@crust.piefed.social
    17 hours ago

    So, more energy use for what even the people that are into AI are calling a worse model. Is someone going to get fired for this?

  • cecilkorik@lemmy.ca
    18 hours ago

    So like, is this whole AI bubble being funded directly by the fossil fuel industry or something? Because AI training and the instantaneous global adoption of these models are using energy like it’s going out of style. Which fossil fuels actually are (going out of style, and being used to power these data centers). Could there be a link? Gotta find a way to burn all the rest of the oil and gas we can get out of the ground before laws make it illegal. Makes sense, in their traditional who-gives-a-fuck-about-the-climate-and-environment sort of way, doesn’t it?

    • BillyTheKid@lemmy.ca
      17 hours ago

      I mean, AI is using like 1-2% of human energy and that’s fucking wild.

      My takeaway is we need more clean energy generation. Good thing we’ve got countries like China leading the way in nuclear and renewables!!

      • ayyy@sh.itjust.works
        15 hours ago

        Yes, China is producing a lot of solar panels (a good thing!) but the percentage of renewables is actually going down. They are adding coal faster than solar.

      • Womble@piefed.world
        15 hours ago

        Do you have a source for that? Because given that a ChatGPT query takes a similar amount of energy to running a hair dryer for a few seconds, I find it hard to believe.

        • Rimu@piefed.social
          15 hours ago

          a similar amount of energy to running a hair dryer

          We see a lot of those kinds of comparisons. Thing is, you run a hair dryer once per day at most. Or it’s compared to a google search, often. Again, most people will do a handful of searches each day. A ChatGPT conversation can be hundreds of messages back and forth. A Claude Code session can go for hours and involve millions of tokens. An individual AI inference might be pretty tame but the quantity of them is another level.

          If it was so efficient, they wouldn’t be building Manhattan-sized datacenters.

          • Womble@piefed.world
            14 hours ago

            OK, but running a hairdryer for 5 minutes is well up into the hundreds of queries, which is more than the vast majority of people will use in a week. The post I replied to was talking about it being 1-2% of energy usage, so that includes transport, heating and heavy industry. It just doesn’t pass the smell test to me that something where a week’s worth of usage is exceeded by a person drying their hair once is comparable with such vast users of energy.
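
            For what it’s worth, the back-of-envelope arithmetic behind that, using assumed figures (a ~1800 W hair dryer and a commonly cited ballpark of ~0.3 Wh per query; both rough estimates, not measurements):

            ```python
            dryer_watts = 1800               # assumed hair dryer power draw
            dryer_wh = dryer_watts * 5 / 60  # 5 minutes of drying = 150 Wh

            wh_per_query = 0.3               # assumed per-query energy estimate
            print(dryer_wh / wh_per_query)   # = 500 queries, i.e. "hundreds"
            ```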

      • cecilkorik@lemmy.ca
        16 hours ago

        All I know is that I’m getting real tired of this Matrix / Idiocracy Mash-up Movie we’re living in.

  • dinckel@lemmy.world
    18 hours ago

    Duh. Every company like this “suddenly” starts withholding public progress reports once their progress fucking goes downhill. Stop giving these parasites handouts.