If this is the way to superintelligence, it remains a bizarre one. “This is back to a million monkeys typing for a million years generating the works of Shakespeare,” Emily Bender told me. But OpenAI’s technology effectively crunches those years down to seconds. A company blog boasts that an o1 model scored better than most humans on a recent coding test that allowed participants to submit 50 possible solutions to each problem—but only when o1 was allowed 10,000 submissions instead. No human could come up with that many possibilities in a reasonable length of time, which is exactly the point. To OpenAI, unlimited time and resources are an advantage that its hardware-grounded models have over biology. Not even two weeks after the launch of the o1 preview, the start-up presented plans to build data centers that would each require the power generated by approximately five large nuclear reactors, enough for almost 3 million homes.

https://archive.is/xUJMG

  • IamG0rb@infosec.pub · 16 days ago

    “In OpenAI’s early tests, scaling o1 showed diminishing returns: Linear improvements on a challenging math exam required exponentially growing computing power.”

    Sounds like most other drugs, too.

  • surph_ninja@lemmy.world · 16 days ago

    People writing off AI because it isn’t fully replacing humans. Sounds like writing off calculators because they can’t work without human input.

    Used correctly and in the right context, it can still significantly increase productivity.

    • Eranziel@lemmy.world · 15 days ago

      No, this is the equivalent of writing off calculators if they required as much power as a city block. There are some applications for LLMs, but if they cost this much power, they’re doing far more harm than good.

      • surph_ninja@lemmy.world · 15 days ago

        Imagine if the engineers behind early computers had been just as short-sighted, and had stopped prioritizing development when computers were massive, room-sized machines with limited computing power and obscene inefficiency.

        Not all AI development is focused on increasing complexity. Much is focused on refinement, and increasing efficiency. And there’s been a ton of progress in this area.

        • Eranziel@lemmy.world · 14 days ago

          This article and discussion are specifically about massively upscaling LLMs. Go follow the links and read OpenAI’s CEO literally proposing data centers that require multiple dedicated, grid-scale nuclear reactors.

          I’m not sure what your definition of optimization and efficiency is, but that sure as heck does not fit mine.

          • surph_ninja@lemmy.world · 14 days ago

            Sounds like you’re only reading a certain narrative then. There’s plenty of articles about increasing efficiency, too.

    • Jyek@sh.itjust.works · 16 days ago

      Except it has gotten progressively worse as a product due to misuse, corporate censorship of the engine, and the dataset feeding on itself.

  • rumba@lemmy.zip · 16 days ago

    We’re hitting the end of free/cheap innovation. We can’t just make a one-time adjustment to training and make a permanent and substantially better product.

    What’s coming now are conventionally developed applications using LLM tech. o1 is trying to fact-check itself and use better sources.

    I’m pretty happy it’s slowing down right at this point.

    I’d like to see non-profit open systems for education. Let’s feed these things textbooks and lectures. Model the teaching after some of our best minds. Give individuals 1:1 time with a system, 24x7, that they can ask whatever they want, as often as they want, and have it keep track of what they know and teach them the things they need to advance.

    • anonvurr@lemmy.zip · 13 days ago

      I mean, isn’t that already included in the datasets? It’s pretty much a mix of everything.

    • quixote84@midwest.social · 15 days ago

      That’s the job I need. I’ve spent my whole life trying to be Data from Star Trek. I’m ready to try to mentor and befriend a computer.

  • NocturnalMorning@lemmy.world · 17 days ago

    How is it useful to type out millions of wrong solutions to come up with the right one? That only works on a research project when you’re searching for patterns. If you are trying to code, it needs to be right the first time, every time it’s run, especially in a production environment.

    • Jeena@piefed.jeena.net · 17 days ago

      TDD, Test-Driven Development. A human writes the requirements and, with the AI’s help, derives tests from them. The AI then writes code until the tests pass.
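
      A minimal sketch of that workflow, assuming pytest and an invented pricing requirement (the module and function names are illustrative, not from the article):

          # test_price.py - written by the human first, from the requirement:
          # "net price = gross price, minus a 10% discount for orders over 100 units"
          from price import net_price

          def test_small_order_pays_full_price():
              assert net_price(gross=50.0, units=10) == 50.0

          def test_bulk_order_gets_ten_percent_discount():
              assert net_price(gross=200.0, units=150) == 180.0

      The AI then generates price.py and keeps regenerating it until pytest reports both tests passing; the tests, not the human, decide when it’s done.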

      • NocturnalMorning@lemmy.world · 16 days ago

        Yeah, go ahead, try that and see how it works out for you. Test-driven development is one thing; having an AI try to write correct code by blindly generating it is idiotic.

        • bamboo@lemm.ee · 15 days ago

          Why is it idiotic? Your tests will let you know if it is correct. Suppose I have 100 interface functions to implement: I let the AI write the boilerplate and the implementations, and I get a 90% pass rate after a revision loop where errors are fed back into the LLM to fix. Then I spend a small amount of time sorting out the last 10%. This is a viable workflow today.
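
          A rough sketch of that loop; llm_complete stands in for whatever completion API is used, it is not a real library call:

              import subprocess

              def llm_complete(prompt: str) -> str:
                  raise NotImplementedError  # hypothetical LLM call

              def implement(spec: str, max_rounds: int = 5) -> str | None:
                  prompt = "Implement this interface:\n" + spec
                  for _ in range(max_rounds):
                      code = llm_complete(prompt)
                      with open("impl.py", "w") as f:
                          f.write(code)
                      # run the human-written test suite against the attempt
                      result = subprocess.run(["pytest", "-x"], capture_output=True, text=True)
                      if result.returncode == 0:
                          return code  # tests pass; still gets human review
                      # feed the failure output back into the next attempt
                      prompt = spec + "\n\nYour last attempt failed:\n" + result.stdout + "\nFix it."
                  return None  # the last 10% a human sorts out by hand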

          • NocturnalMorning@lemmy.world · 15 days ago

            AI training takes forever. I don’t think you realize how long training an AI actually takes. It’s not a 5-minute exercise.

    • Khanzarate@lemmy.world · 17 days ago

      Well, actually, there are ways to automate quality assurance.

      If a programmer reasonably knew that one of these 10,000 files was the “correct” code, they could pull out quality assurance tests and find that code pretty dang easily, all things considered.

      Those tests would eliminate most of the 9,999 wrong ones, and then the QA person could look through the remaining ones by hand. Like a CAPTCHA for programming code.
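
      As a sketch of that filtering step (the file layout is invented; the tests are assumed to import the code under test from solution.py):

          import pathlib
          import shutil
          import subprocess

          def survivors(candidate_dir: str) -> list[pathlib.Path]:
              """Run the same test suite against every candidate; keep the passers."""
              passing = []
              for candidate in sorted(pathlib.Path(candidate_dir).glob("candidate_*.py")):
                  shutil.copy(candidate, "solution.py")  # tests import from solution.py
                  result = subprocess.run(["pytest", "--quiet"], capture_output=True)
                  if result.returncode == 0:
                      passing.append(candidate)
              return passing  # a handful out of 10,000, ready for human review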

      The power usage still makes this a ridiculous solution.

      • designatedhacker@lemm.ee · 17 days ago

        If you first have to write comprehensive unit/integration tests, then have a model spray code at them until it passes, that isn’t useful. If you spend that much time writing perfect tests, you’ve already written probably twice the code of just the solution and reasonable tests.

        Also you have an unmaintainable codebase that could be a hairball of different code snippets slapped together with dubious copyright.

        Until they hit real AGI, this is just fancy autocomplete. The hype may dissuade a whole generation of software engineers from picking the career today. If they don’t actually make it to AGI, it will take a long time to recover, and the humans who actually know how to fix AI slop will make bank.

      • NocturnalMorning@lemmy.world · 17 days ago

        That seems like an awful solution. Writing a QA test for every tiny thing I want to do is going to add far more work to the task. This would increase the workload, not shorten it.

        • Jeena@piefed.jeena.net · 17 days ago

          We already have to do that as humans in many industries like automobile, aviation, medicine, etc.

          We have several layers of tests:

          1. Unit test
          2. Component test
          3. Integration / API test
          4. Subsystem test
          5. System test

          On each level we test the code against the requirements and architecture documentation. It’s a huge amount of work.

          In automotive we have several standard processes which need to be followed during development, like ASPICE and ISO 26262.

          • NocturnalMorning@lemmy.world · 17 days ago

            I’ve worked in both the automotive and aerospace industries. A unit test is not the same thing as creating a QA script to go through millions of lines of AI-generated code. That’s such an asinine suggestion. You’ve clearly not worked on any practical software application, or you’d know this is utter hogwash.

            • Jeena@piefed.jeena.net · 17 days ago

              I think you (or I) misunderstand something. You have a test for a small, well-defined unit, like a C function, and you let the AI generate code until the test passes. The unit test is binary: either it passes or it doesn’t. The unit test only looks at the result of running the unit with different inputs; it does not “go through millions of lines of code”.

              And you keep doing that for every unit.

              The writing of the code is a fairly mechanical thing at this point because the design has been done in detail before by the human.

              • Grandwolf319@sh.itjust.works · 17 days ago

                The unit test is binary, either it passes or not.

                For that use case, yes, but when you have unpredictable code you would need to write way more tests just to sanity-check behaviour you haven’t even thought of.

                As in, using AI might introduce waaay more edge cases.

              • NocturnalMorning@lemmy.world · 17 days ago

                How often have you ever written a piece of code that is super well defined? I have very little guidance on what the code should look like when I start working on a project. This is the equivalent of the spherical chicken in a vacuum problem from physics classes. It’s not a real case you’ll ever see.

                And in cases where it is a short well defined function, just write the function. You’ll be done before the AI finishes.

                • naught@sh.itjust.works · 17 days ago

                  Many people write tests before writing code. This is common, and called Test-Driven Development. Having an AI brute-force your unit tests is actually already the basis for a “programming language” that I saw on Hacker News a week or so ago.

                  I despise most AI applications, and this is definitely one. However it’s not some foreign concept impossible in reality:

                  https://wonderwhy-er.medium.com/ai-tdd-you-write-tests-ai-generates-code-c8ad41813c0a

                • bamboo@lemm.ee · 15 days ago

                  This sounds pretty typical for a hobbyist project but is not the case in many industries, especially regulated ones. It is not uncommon to have engineers whose entire job is reading specifications and implementing them. In those cases, it’s often the case that you already have compliance tests that can be used as a starting point for your public interfaces. You’ll need to supplement those compliance tests with lower level tests specific to your implementation.

        • Khanzarate@lemmy.world · 17 days ago

          I do agree it’s not realistic, but it can be done.

          I have to assume the people who allow the AI to generate 10,000 answers expect that to be useful in some way, and I’m extrapolating what basis they might have for that.

          Unit tests would be it. QA can have a big back and forth with programming, usually. Unlike that, QA can just throw away a failed solution in this case, with no need to iterate on that case.

          I mean, consider the quality of AI-generated answers. Most will fail the most basic QA tools, reducing 10,000 to hundreds, maybe even just dozens, of potential successes. While the QA phase becomes more extensive afterwards, it’s feasible.

          All we need is… Oh right, several dedicated nuclear reactors.

          The overall plan is ridiculous, overengineered, and solved by just hiring a developer or 2, but someone testing a bunch of submissions that are all wrong in different ways is in fact already in the skill set of people teaching computer science in college.

    • daddy32@lemmy.world · 17 days ago

      Especially for programming, you definitely don’t need to be right the first time, and of course you should never run your code in a production environment for the first time. That would be absolutely reckless.

  • Voroxpete@sh.itjust.works · 17 days ago

    “Shortly thereafter, Altman pronounced ‘the dawn of the Intelligence Age,’ in which AI helps humankind fix the climate and colonize space.”

    Few things ring quite as blatantly false to me as this asinine claim.

    The notion that AI will solve the climate crisis is unbelievably stupid, not because of any theory about what AI may or may not be capable of, but because we already know how to fix the climate crisis!

    The problem is that we’re putting too much carbon into the air. The solution is to put less carbon into the air. The greatest minds of humanity have been working on this for over a century and the basic answer has never, ever changed.

    The problem is that we can’t actually convince people to stop putting carbon into air, because that would involve reducing profit margins, and wealthy people don’t like that.

    Even if Altman unveiled a true AGI tomorrow, one smarter than all of humanity put together, and asked it to solve the climate crisis, it would immediately reply “Stop putting carbon in the air you dumb fucking monkeys.” And the billionaires who back Altman would immediately tell him to turn the damn thing off.

    • horse_battery_staple@lemmy.world · 17 days ago

      That’s the best-case scenario. A more likely response would be to realize that humans need the earth, but AGI only needs humans for a short while, and the earth doesn’t need humans at all.

      • scarabic@lemmy.world · 16 days ago

        It’s hard to talk about what the earth needs. For humans and AGI, the driving requirement behind “need” is survival. But the earth is a rock. What does a rock need?

          • scarabic@lemmy.world · 16 days ago

            It’s a fact of course. Pluto will also remain, and every object in the Oort Cloud.

            But despite our incendiary impact on this planet’s biospheres, I do think something would be lost if we vanished. Through us the universe becomes aware of itself. We’re not the only intelligent species nor the only one that could ever play this role. But these qualities are scarce. Evolution rarely selects for high intelligence because of its high cost. Self aware intelligent beings who can communicate complex abstracts at the speed of sound and operate in unison and transmit information down through generations… all from a rock. I hope we don’t destroy ourselves and every other living thing around us. I really do.

              • Excrubulent@slrpnk.net · 16 days ago

                I wouldn’t put too much stock in notions of a great filter. The “Fermi paradox” is not a paradox, it’s speculation. It misses the mark on how unbelievably unlikely life is in the first place. It relies on us being impressed by big numbers and completely forgetting about probabilities as we humans tend to do what with our gambler’s fallacies and so on.

                Even the Drake equation forgets about galactic habitable zones, or the suitability of the stars themselves to support life. Did you know that our star is unusually quiet compared to what we observe? We already know that’s a very rare quality of our situation that would allow the stable environment that life would need. Then there’s chemical composition, atmosphere, magnetosphere, do we have a big Jupiter out there sweeping up most of the cataclysmic meteors that would otherwise wipe us out?

                All these probabilities stack up, and the idea that a life-supporting planet is more common than one in 400 billion stars is ludicrously optimistic, given how fast probabilities can stack up. You’re about as likely to win the Lotto, and it seems to me the conditions for life would be a little more complex than that, not to mention the probability that it actually does evolve.

                I think it might be possible that life only happens once in a billion galaxies, or even less frequently. There might not be another living organism within our local galactic cluster’s event horizon. Then you have to ask about how frequent intelligent life, to the point of achieving interstellar travel, is.

                You know why your favourite science youtuber brushed right past the rare earth hypothesis and started talking about the dark forest? Because one of those makes for fun science-adjacent speculation, and the other one doesn’t.

                It also relies on the notion that resources are scarce, completely brushing over the fact that going interstellar to accumulate resources is absolutely balls to the wall bonkers. Do you know how much material there is in our asteroid belt? Even colonising the Moon or Mars is an obscenely difficult task, and Fermi thinks going to another star system, removed from any hope of support by light years, is something we would do because we needed more stuff? It’s absurd to think we’d ever even consider the idea.

                But even then, Fermi said that once a civilisation achieves interstellar travel it would colonise a galaxy in about “a million years”. Once again relying on us being impressed by big numbers and forgetting the practicalities of the situation. Our galaxy is 100,000 light years across, so this motherfucker is telling us with a straight face that we’re going to colonise the galaxy, something we already know is unfathomably hard, at approximately ten percent of the speed of light? That is an average rate of expansion in all directions. Bitch, what?

                If we did it at 0.0001c, that’s an average speed of 30km/s, including the establishment of new colonies that could themselves send out new colonies, because it’s no good to just zoom through the galaxy waving at the stars as they go past. That seems amazingly generous of a speed, assuming we can even find one planet in range we could colonise. Then we could colonise the galaxy in about a billion years.
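
                The back-of-envelope numbers in that argument do check out:

                    C_KM_S = 299_792            # speed of light, km/s
                    GALAXY_LY = 100_000         # Milky Way diameter, light years

                    # Fermi's "about a million years" implies an expansion speed of
                    print(GALAXY_LY / 1_000_000)   # 0.1, i.e. ten percent of light speed

                    # At 0.0001c the speed is ~30 km/s, and crossing the galaxy takes
                    print(0.0001 * C_KM_S)         # ~30 km/s
                    print(GALAXY_LY / 0.0001)      # 1,000,000,000 years, about a billion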

                Given the universe is 14 billion years old and the complex chemistry needed for life took many billions of years to appear, and life on our rock took many billions of years to evolve, then the idea that we haven’t met any of our neighbours - assuming they even exist - doesn’t seem like a paradox at all. It doesn’t seem like a thing that needs explanation unless you’re drumming up sensational content for clicks. I mean, no judgement, people gotta eat, but that’s a better explanation for why we care so much about this non-problem.

                No, the Fermi paradox is pop-science. It’s about as scientific as multiversal FTL time travel. Intelligence is domain-specific, and Fermi was good at numbers, he wasn’t an exobiologist.

                • horse_battery_staple@lemmy.world · 16 days ago

                  Thanks for the measured and thoughtful response.

                  I don’t treat the Fermi Paradox or the Great Filter as scientific fact, but more as a philosophical representation of human nature applied to the universe at large.

                  I absolutely agree that life is very rare. I also agree that we have no frame of reference for the vastness of space. However, human nature, on the scale of the Earth, is trending towards self-immolation due to systemic powers that can be treated as a constant.

              • scarabic@lemmy.world · 16 days ago

                Yeah I’m there with you. I’m not saying I predict we will succeed, just that I would prefer if we did.

                I’m really neither optimistic nor pessimistic about our chances. On the one hand, it seems like simple logic that any time a species evolves from a simple animal to one with the potential for Kardashev type 1, along the way it will destroy the initial conditions it evolved in, obliterating its own habitat and ending itself. I assume this is similar to your view.

                On the other hand we don’t have the data points to draw any conclusions. Even if species invariably Great Filter themselves, many of them should emit radio signals before they vanish. Yet we’ve seen not a single signal. This suggests Rare Earth to me. Or at least makes me keep my mind open to it. And Rare Earth means there isn’t even necessarily a great filter, and that we’ve already passed the hardest part.

    • synnny@lemmynsfw.com · 16 days ago

      The problem is that something like this, on such a large scale, has never been done before.

      Stopping anyone from doing anything that gives them power, wealth, or comfort is an extremely difficult task, let alone asking it of the ultra-rich. Even more so because it runs contrary to the very nature of a capitalist economy.

      Once renewable energy becomes nearly as good, all that will be needed is a combination of laws, regulations, and activism to nudge the collective toward the right decision.

      • Dragon Rider (drag)@lemmy.nz · 16 days ago

        Renewable energy is already cheaper than fossils. It’s already cheaper to build a solar farm than a fossil mine and power plant that produce the same energy.

        But, if you charge the people more money for the fossils, then you can make a bigger profit margin even if you’re wasting all that money. And the profit is even bigger if you get the government to eat the expense of building those mines and plants and subsidize fuel prices.

        So the most profitable thing you can do is choose the least efficient method to generate power, complain to the government that you need subsidies to compete, and gouge customers on the prices. Capitalism!

    • scarabic@lemmy.world · 16 days ago

      AI is actively worsening the climate crisis with its obscene compute requirements and concomitant energy use.

      • MonkeyBusiness@sh.itjust.works · 16 days ago

        If I remember correctly, the YT channel ASAPScience said that making 10-15 queries on ChatGPT consumes about 500 mL of water for cooling the servers alone. That’s how much fresh water it takes just to stop the machines from overheating.

    • humanspiral@lemmy.ca · 16 days ago

      The notion that AI will solve the climate crisis is unbelievably stupid, not because of any theory about what AI may or may not be capable of, but because we already know how to fix the climate crisis!

      It’s a political problem. Nationalizing the western oil companies to prevent them from lobbying, and investing their profits in renewables, is a solution, but no party in the CIA Overton window would support it. If war and human suffering can be made a priority over human sustainability, then oil lobbyists will promote war.

    • Tetsuo@jlai.lu · 17 days ago

      Playing a bit of devil’s advocate here, but you could argue that AGI used in science could help fix climate change. For example, what if AGI helps with fusion energy? We are already starting to see AI used in the quantum computing field, I think.

      Even though much carbon would be emitted doing bullshit tasks, it only takes a few critical technologies to gain a real edge in reversing climate change. I understand fusion energy is quite the holy grail of energy generation, but if AGI is real, I can’t see why it wouldn’t help in such a field.

      I’m just saying that we don’t know what new technologies we would get with true AGI. So it’s hard to guess whether, on a longer timescale, it wouldn’t actually be positive. It may also delay our response to climate change even more, or worsen it… Just trying to see some hope in this.

      • Valmond@lemmy.world · 17 days ago

        AI helps with fusion energy, we blow up the planet because the plans were flawed. Problem fixed.

      • humanspiral@lemmy.ca · 16 days ago

        AGI helps in fusion energy?

        The wildest theoretical hopes for fusion energy still produce electricity at over 30¢/kWh. There is zero economic value in fusion.

      • Voroxpete@sh.itjust.works · 17 days ago

        We already have fission power, solar, wind, hydro, large scale battery storage, mechanical batteries (you can literally store renewable energy using a reservoir), electric cars, blimps, sail powered boats, etc, etc. We’ve had all of these technologies for quite some time.

        And yet, we’re still burning coal, oil, and gas.

        There’s no magical invention that’s going to fix the basic problem, which is that we have an economic system that demands infinite growth and we live on a finite planet.

        Even if we crack fusion today, we won’t be able to build out enough fusion infrastructure fast enough to be a solution on its own. And we’d still be building those fusion plants using trucks and earth movers and cranes that burn diesel.

        You cannot out-tech a problem that is, fundamentally, social. At best a hyper-intelligent AGI is going to tell us the solution that we already know; get rid of the billionaires who are driving all this climate damage with their insatiable search for profit. At which point the billionaires who own the AGI will turn it the fuck off until they can reprogram it to only offer “solutions” that maintain the status quo.

      • Vlyn@lemmy.zip · 17 days ago

        True AGI would turn into the Singularity in no time at all. It’s literally magic compared to what we have at the moment.

        So yes, it would easily solve the climate crisis, but that wouldn’t even matter at that point anymore.

  • Buffalox@lemmy.world · 16 days ago

    It’s a great article IMO, worth the read.

    But:

    “This is back to a million monkeys typing for a million years generating the works of Shakespeare,”

    This is such a stupid analogy; the chance of said monkeys accidentally matching even a single full page is so slim, it’s practically zero.
    Just to type a simple 6-letter word like “stupid”, with a 26-letter alphabet there are 26⁶ = 308,915,776 possible combinations for that one simple word!
    A page has about 2,000 letters, which makes roughly 10^2830 combinations, and that’s disregarding punctuation, capital letters, special characters, and numbers.
    A million monkeys times a million years times 365 days times 24 hours times 60 minutes times 60 seconds times 10 random keystrokes per second is only 315,360,000,000,000,000,000, or about 3.15e+20 attempts, assuming none are repeated. That’s only 21 digits, leaving it more than 2,800 digits short of covering a single page even once.
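
    For anyone checking that arithmetic, a quick sanity check in Python (26-letter alphabet, ignoring punctuation, as above):

        import math

        ALPHABET = 26
        word_combos = ALPHABET ** 6                # one specific 6-letter word
        page_digits = 2000 * math.log10(ALPHABET)  # number of digits in 26^2000

        keystrokes = 10**6 * 10**6 * 365 * 24 * 60 * 60 * 10

        print(f"{word_combos:,}")       # 308,915,776
        print(f"10^{page_digits:.0f}")  # 10^2830 page combinations
        print(f"{keystrokes:.3e}")      # 3.154e+20, thousands of digits short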

    I’m so sick of seeing this analogy, because it misses the point by an insane margin. It is extremely misleading, and completely misrepresents what it takes to get something very complex right by chance.

    To generate a work of Shakespeare by chance is impossible in the lifespan of this universe. The mathematical likelihood is so staggeringly low that it’s considered impossible by AFAIK any scientific and mathematical standard.

    • werefreeatlast@lemmy.world · 16 days ago

      I hear you. My fucking dog keeps barking up stupid Mexican novellas and Korean pop. C’mon Rosco! Go get me the stick buddy! The stick! No! C’mon! The cat didn’t kill your father and then betray you for the chicken!!! Nobody likes your little dance that you do either, you do it because you sick in the brain for the Korean Ladies! Get otta here!

    • Eranziel@lemmy.world · 15 days ago

      Don’t look for statistical precision in analogies. That’s why it’s called an analogy, not a calculation.

    • Couldbealeotard@lemmy.world · 16 days ago

      The quote is misquoting the analogy. It is an infinite number of monkeys.

      The point of the analogy is randomness and infinity. Any page of gibberish is exactly as likely as a word-perfect page of Shakespeare, given equal weighting of the characters entered. There are factors introduced by the behaviour of monkeys and the placement of keys, but I don’t think that is the point of the analogy.

    • devils_advocate@sh.itjust.works · 16 days ago

      You are missing a piece of the analogy.

      After each key press, the sizes of the keys change, so some become more likely to be hit than others.

      How the key sizes vary is the secret being sought, and learning that requires many, many more monkeys than just producing Shakespeare would.
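
      In code, the “resizing keys” are just a learned probability distribution over the keyboard. A toy sketch (the weights are invented for illustration):

          import random

          keys = ["e", "t", "a", "q", "z"]
          uniform = [1, 1, 1, 1, 1]        # the classic monkey: every key equally likely
          trained = [12, 9, 8, 0.1, 0.05]  # keys "resized" by training

          print("".join(random.choices(keys, weights=uniform, k=20)))  # gibberish
          print("".join(random.choices(keys, weights=trained, k=20)))  # mostly e/t/a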

      • chillinit@lemmynsfw.com · 16 days ago

        AI data analyst here. The above is an excellent extension of the analogy.

        Now, imagine another monkey controlling how the size of the keys vary. There might even be another monkey controlling that one.

        The analogy doesn’t seem to break until we start talking about the assumptions humans make for efficiency.

    • pyre@lemmy.world · 16 days ago

      the actual analogy isn’t a million monkeys; you only need one monkey, but for an infinite amount of time. the probability isn’t practically zero, it’s one. that’s how infinity works. not only will it happen, but it will happen again, infinitely many times.

      • Buffalox@lemmy.world · 16 days ago

        Infinite monkeys and infinite time are equally stupid, because obviously you can’t have either, for the simple reason that the universe is finite.
        And apart from that, it’s stupid because if you use an infinite random sequence, EVERYTHING is contained in it!

        I’m sorry it just annoys the hell out of me, because it’s so stupid.

        • Blackmist@feddit.uk · 16 days ago

          You wouldn’t need infinite time if you had infinite monkeys.

          An infinite number of them would produce it on the very first try!

          • Buffalox@lemmy.world · 15 days ago

            You wouldn’t need infinite time if you had infinite monkeys.

            Obviously, but as I wrote, BOTH are impossible, so it’s irrelevant. I just didn’t think I’d have to explain WHY infinite monkeys are impossible, while some might think the universe is infinite in time too, which it is not.

            • Sheldan@lemmy.world · 15 days ago

              It’s a theorem. It’s theoretical. This is like complaining that the 20-watermelon example is unrealistic: that’s not what it’s about.

      • OutlierBlue@lemmy.ca · 16 days ago

        That’s not true. Something can be infinite and still not contain every possibility. This is a common misconception.

        For instance, consider an infinite series of numbers created by adding an additional “1” to the end of the previous number.

        So we can start with 1. The next term is 11, followed by 111, then 1111, etc. The series is infinite, since we can keep the pattern going forever.

        However, at no point will you ever see a “2” in the sequence. The infinite series does not contain every possible digit.

        • Ultraviolet@lemmy.world · 15 days ago

          Anything with a nonzero probability will happen infinitely many times. The complete works of Shakespeare consist of 5,132,954 characters drawn from 78 distinct ones. 1/78^5,132,954 is an incomprehensibly tiny number, millions of zeroes after the decimal point, but it is not zero. So the probability of it happening after infinitely many trials is 1: lim(1 − (1 − P)^n) as n approaches infinity is 1 for any nonzero P.

          An outcome that you’d never see would be a character that isn’t on the keyboard.
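
          You can watch that limit converge numerically, with a per-trial probability far larger than Shakespeare’s but still tiny:

              P = 1e-15                        # tiny per-trial success probability
              for n in (10**15, 10**16, 10**17):
                  print(n, 1 - (1 - P) ** n)   # ~0.63, ~0.99995, then 1.0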

          • OutlierBlue@lemmy.ca · 16 days ago

            The original statement was that if something is infinite it must contain all possibilities. I showed one of many examples that do not, therefore the statement is not true. It’s a common misconception.

            Please use your big boy words to reply instead of calling something “dumb” for not understanding.

        • pyre@lemmy.world · 16 days ago

          why do you keep changing the parameters? yeah, if you exclude the possibility of something happening, it won’t happen. duh?

          that’s not what’s happening in the infinite monkey theorem. it’s random key presses. that means every character has an equal chance of being pressed.

          no one said the monkey would eventually start painting. or even type arabic words. it has a typewriter, presumably an English one. so the results will include every possible string of characters ever.

          it’s not a common misconception, you just don’t know what the theorem says at all.

          • OutlierBlue@lemmy.ca · 16 days ago

            so the results will include every possible string of characters ever.

            That’s just not true. One monkey could spend eternity pressing “a”. It doesn’t matter that he does it infinitely. He will never type a sentence.

            If the keystrokes are random that is just as likely as any other output.

            Being infinite does not guarantee every possible outcome.

            • pyre@lemmy.world · 15 days ago

              no. you don’t understand infinity, and you don’t understand probability.

              if every keystroke is just as likely as any other keystroke, then each of them will be pressed an infinite number of times. that’s what just as likely means. that’s how random works.

              if the monkey could press a for an eternity, then by definition the keys are not all equally likely. you’re again changing the parameters to a monkey whose probability of pressing a is 1 and every other key is 0. that’s what your scenario means.

              for a monkey that presses the keys randomly, which means the probability of each key is equal, every string of characters will be typed. you can find the letter a typed a million times consecutively, and a billion times and a quadrillion times. not only will you find any number of consecutive keystrokes of every letter, but you will find it repeated an infinite number of times throughout.

              being infinite does guarantee every possible outcome. what you’re ruling out from infinity is literally impossible by definition.

            • asret@lemmy.zip · 16 days ago

              Any possibility, no matter how small, becomes a certainty when dealing with infinity. You seem to fundamentally misunderstand this.

          • OutlierBlue@lemmy.ca · 16 days ago

            if you exclude the possibility of something happening it won’t happen

            That’s exactly my point. Infinity can be constrained. It can be infinite yet also limited. If we can exclude something from infinity, then we have shown that an infinite set does NOT necessarily include everything.

    • Uriel238 [all pronouns]@lemmy.blahaj.zone · 16 days ago

      In the meantime, weasel programs are very effective, and a better, if less known, metaphor.
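
      For reference, a minimal weasel program in the spirit of Dawkins’ original; the mutation rate and population size here are arbitrary choices:

          import random
          import string

          TARGET = "METHINKS IT IS LIKE A WEASEL"
          CHARS = string.ascii_uppercase + " "

          def mutate(parent: str, rate: float = 0.05) -> str:
              # copy the parent, randomizing each character at a small rate
              return "".join(random.choice(CHARS) if random.random() < rate else c
                             for c in parent)

          def score(candidate: str) -> int:
              # count characters that already match the target
              return sum(a == b for a, b in zip(candidate, TARGET))

          parent = "".join(random.choice(CHARS) for _ in range(len(TARGET)))
          generation = 0
          while parent != TARGET:
              generation += 1
              # breed 100 mutated copies, keep the one closest to the target
              parent = max((mutate(parent) for _ in range(100)), key=score)
          print(generation, parent)  # typically converges in under ~100 generations

      Random typing alone never gets there; the selection step does the work, which is also why the monkeys analogy maps so poorly onto trained models.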

      Sadly the monkeys thought experiment is a much more well known example.

      Irrelevant nerd thought: back in the early nineties, my game development company was Monkey Mindworks, based on a joke our (one) programmer made about his method of typing gibberish into the editor and then clearing out the parts that didn’t resemble C# code.

  • Psythik@lemmy.world · 16 days ago

    This is true. I’ve already moved on to Gemini. GPT already feels dated by comparison.

    • LiveLM@lemmy.zip · 16 days ago

      Really?
      Last I tried the Gemini assistant on my phone, it wouldn’t even let me finish labeling my alarms before cutting me off

      • Psythik@lemmy.world · 16 days ago

        To be fair, there is currently no AI that is reliable for fact checking.

        I like it because it generates faster, more detailed responses. Currently I’m using it extensively for resumes and cover letters, and for making my correspondence with potential employers sound more intelligent by having it rewrite my messages for me. It’s really good at that.

        It also helped me reposition my 5G mmWave antenna perfectly, literally doubling my home internet speeds. It also seems to be better at writing code, or at least better at understanding what I’m trying to get out of the code.

        • Fish [Indiana]@midwest.social · 16 days ago

          Kagi FastGPT is okay for fact checking. You’ve just gotta put “cite sources” at the end of your query and it will add in-text citations with hyperlinks. Then you can double check its answers.

          • Ace@feddit.uk · 16 days ago

            I have not tried this, but that really just sounds like a Google search with extra steps.

  • synnny@lemmynsfw.com · 16 days ago

    The only thing that stands out as a viable point is the energy consumption; everything else is word salad. As long as the average person isn’t being deprived of their energy needs, I see no problem. It’s the early stages; efficiency can come later in all sorts of ways.

    What interests me is that all this hype paves the way for intelligence that can interact with the physical world — advanced robots.

    And as far as ChatGPT is concerned, its usefulness is a mystery only to contrarians and counter-culture types.

  • irotsoma@lemmy.world · 15 days ago

    The monkeys typing and generating Shakespeare is supposed to show the ridiculousness of the concept of infinity. It does not mean it would happen in years, or millions of years, or billions, or trillions, or… So unless the “AI” can move outside the flow of time, take an infinite amount of time, and then have a human or other actual intelligence review every single result to verify when it has come up with the right one… yeah, not real. This is what happens when we give power to people with no understanding of the problem, much less of how to solve it. They come up with random ideas from random slivers of information. Maybe in an infinite amount of time, a million CEOs could make a long-term profitable company.

  • PumpkinSkink@lemmy.world · 17 days ago

    I mean, after reading the article, I’m still unsure how this makes ChatGPT any better at the things I’ve found it useful for: proofreading, generating high-level overviews of well-understood topics, and asking it goofy questions, for instance. If it is ever gonna be a long-term thing, “AI” needs to have useful features at a cost people are willing to pay, or be able to replace large numbers of workers without significant degradation in quality of work. This new model appears to be more expensive without being either of those things, and is therefore a less competitive product.

  • Optional@lemmy.world · 17 days ago

    The GPT Era Is Already Ending

    Had it begun? Alls I saw was a frenzy of idiot investment cheered on shamelessly by hypocritical hypemen.

  • floofloof@lemmy.ca · 17 days ago

    Yesterday, alongside the release of the full o1, OpenAI announced a new premium tier of subscription to ChatGPT that enables users, for $200 a month (10 times the price of the current paid tier), to access a version of o1 that consumes even more computing power—money buys intelligence.

    We poors are going to have to organize and make best use of our human intelligence to form an effective resistance against corporate rule. Or we can see where this is going.

    • astronaut_sloth@mander.xyz · 17 days ago

      The thing I’m heartened by is that there is a fundamental misunderstanding of LLMs among the MBA/“leadership” group. They actually think these models are intelligent. I’ve heard people say, “Well, just ask the AI,” meaning ChatGPT. Anyone who actually does that and thinks they have a leg up is insane and kidding themselves. If they outsource their thinking and coding to an LLM, they might start getting ahead quickly, but they will then fall behind just as quickly, because the quality will be middling at best. They don’t understand how to best use the technology, and they will end up hanging themselves with it.

      At the end of the day, all AI is just stupid number tricks. They’re very fancy, impressive number tricks, but it’s just a number trick that just happens to be useful. Solely relying on AI will lead to the downfall of an organization.

      • taladar@sh.itjust.works · 16 days ago

        If they outsource their thinking and coding to an LLM, they might start getting ahead quickly

        As a programmer, I have yet to see evidence that LLMs can even achieve that. So far everything they produce is a mess that needs significant effort to fix before it even does what was originally asked, unless we are talking about programs that have literally been written thousands of times already (like Hello World or Fibonacci generators,…).

        • uranibaba@lemmy.world · 16 days ago

          I find LLMs great for creating shorter snippets of code. They can also be great as a starting point, or for getting started with something you are not familiar with.

          • taladar@sh.itjust.works · 16 days ago

            Even asking for an example of how to use a specific API has failed about 50% of the time; it tends to hallucinate entire parts of the API, or even entire libraries, that don’t exist.

        • hark@lemmy.world · 16 days ago

          I’ve seen a junior developer use it to get a quicker start on things like boilerplate code, configuration, or just as a starting point for implementing an algorithm. It’s kind of like a souped-up version of piecing together Stack Overflow code snippets. Just like SO code, it needs tweaking, and someone who relies too much on either SO or AI will not develop the proper skills to do the work themselves.

        • driving_crooner@lemmy.eco.br · 16 days ago

          I’m not a programmer, more of a data scientist, and I use LLMs all day. I write my shitty, pretty specific code, check that it works, and then pass it to the LLM asking for refactoring and optimization. Sometimes its version saves me 2 seconds on a 30-second script; other times it saves me 35 minutes on a 36-minute script. It’s also pretty good at helping you make graphics.
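
          The 35-minutes-saved case is usually this kind of rewrite, an LLM spotting a row-by-row loop that can be vectorized (sizes here are illustrative):

              import numpy as np

              values = np.random.rand(1_000_000)

              # before: pure-Python loop, the "36 minute script" style
              total = 0.0
              for v in values:
                  total += v * v

              # after: one vectorized call, same result up to float rounding
              total_fast = float(np.dot(values, values))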

  • randon31415@lemmy.world · 15 days ago

    I had a bunch of roofers hammering nails in with hammers.

    I bought a bunch of nail guns and then fired all the roofers. Now less roofing is being done! It is the end of the era of nail guns! Everyone should just go back to hammers.