LOOK MAA I AM ON FRONT PAGE

  • technocrit@lemmy.dbzer0.com · 8 days ago

    Why would they “prove” something that’s completely obvious?

    The burden of proof is on the grifters who have overwhelmingly been making false claims and distorting language for decades. Unfortunately the grifters and these researchers are the same people.

    • yeahiknow3@lemmings.world · 8 days ago

      They’re just using the terminology that’s widespread in the field. In a sense, the paper’s purpose is to prove that this terminology is unsuitable.

      • technocrit@lemmy.dbzer0.com · 8 days ago

        I understand that people in this “field” regularly use pseudo-scientific language (I actually deleted that part of my comment).

        But the terminology has never been suitable, so it shouldn’t be used in the first place. It presupposes the very hypothesis that they’re supposedly “disproving”. They’re feeding into the grift because that’s what the field is. That’s how they all get paid the big bucks.

    • TheRealKuni@midwest.social · 8 days ago

      Why would they “prove” something that’s completely obvious?

      I don’t want to be critical, but I think if you step back a bit and look at what you’re saying, you’re asking why we would bother to experiment and prove what we think we know.

      That’s a perfectly normal and reasonable scientific pursuit. Yes, in a rational society the burden of proof would be on the grifters, but that’s never how it actually works. It’s always the doctors disproving the cure-all, not the snake oil salesmen failing to prove their own product.

      There is value in this research, even if it fits what you already believe on the subject. I would think you would be thrilled to have your hypothesis confirmed.

        • Hoimo@ani.social · 8 days ago

          I think if you look at child development research, you’ll see that kids can learn to do crazy shit with very little input, waaay less than you’d need to train a neural net to do the same. So either kids are the luckiest neural nets and always make the correct adjustment after failing, or they have some innate knowledge that isn’t pattern-based at all.

          There are even some examples in linguistics specifically, where children tend towards certain grammar rules despite all the evidence in their language pointing to another rule. Pure pattern-matching would find the real-world rule without first modelling a different (universally common) rule.

      • limelight79@lemmy.world · 8 days ago

        Yep. I’m retired now, but before retiring a month or so ago I was working on a project that, back in 2020, relied on several hundred people, and the question we kept getting was, “Why can’t AI do it?”

        The people I worked with are continuing the research and putting it up against the human coders, but… there was definitely an element of “AI can do that, we won’t need people next time.” I sincerely hope management listens to reason. Our decisions would potentially lead to firing people, so I think we were able to push back on “AI can make all of these decisions”… for now.

        The AI people were all in; they were ready to build an interface that told the human what the AI would recommend for each item. Errrm, no, that’s not how an independent test works. We had to reel them back in.

  • minoscopede@lemmy.world · 7 days ago

    I see a lot of misunderstandings in the comments 🫤

    This is a pretty important finding for researchers, and it’s not obvious by any means. This finding is not showing a problem with LLMs’ abilities in general. The issue they discovered is specifically for so-called “reasoning models” that iterate on their answer before replying. It might indicate that the training process is not sufficient for true reasoning.

    Most reasoning models are not incentivized to think correctly, and are only rewarded based on their final answer. This research might indicate that’s a flaw that needs to be corrected before models can actually reason.
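
    To make “rewarded only on the final answer” concrete, here’s a rough sketch (my own illustration, not from the paper; `step_scorer` is a hypothetical stand-in for a learned verifier):

    ```python
    def outcome_only_reward(trace: str, answer: str, reference: str) -> float:
        """Outcome-based reward: the reasoning trace is never inspected."""
        return 1.0 if answer.strip() == reference.strip() else 0.0

    def process_reward(trace: str, answer: str, reference: str, step_scorer) -> float:
        """Hypothetical process-based reward: intermediate steps are scored too."""
        steps = [s for s in trace.split("\n") if s.strip()]
        step_score = sum(step_scorer(s) for s in steps) / max(len(steps), 1)
        answer_score = 1.0 if answer.strip() == reference.strip() else 0.0
        return 0.5 * step_score + 0.5 * answer_score  # arbitrary 50/50 weighting

    # A model can earn full outcome reward with a nonsense trace:
    trace = "step 1: the moon is cheese\nstep 2: therefore 12 * 12 = 144"
    print(outcome_only_reward(trace, "144", "144"))  # 1.0 despite the bogus reasoning
    ```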

    • theherk@lemmy.world · 7 days ago

      Yeah these comments have the three hallmarks of Lemmy:

      • “AI is just autocomplete” mantras.
      • Apple is always synonymous with bad and dumb.
      • Rare pockets of really thoughtful comments.

      Thanks for being at least the last of those.

    • technocrit@lemmy.dbzer0.com · 7 days ago

      There’s probably a lot of misunderstanding because these grifters intentionally use misleading language: AI, reasoning, etc.

      If they stuck to scientifically descriptive terms, it would be much more clear and much less sensational.

    • Zacryon@feddit.org · 7 days ago

      Some AI researchers found it obvious as well, in the sense that they’d suspected it and had some indications. But it’s good to see more data affirming this assessment.

      • jj4211@lemmy.world · 7 days ago

        Particularly to counter some more baseless marketing assertions about the nature of the technology.

      • kreskin@lemmy.world · 7 days ago

        Lots of us who have done some time in search and relevancy early on knew ML was always largely breathless, overhyped marketing. It was endless buzzwords and misframing from the start, but it raised our salaries. Anything the execs don’t understand is profitable and worth doing.

        • wetbeardhairs@lemmy.dbzer0.com · 7 days ago

          Machine-learning-based pattern matching is indeed very useful and profitable when applied correctly. It can identify (with confidence levels) features in data that would otherwise take an extremely well-trained person, and even then it only handles the cursory search that takes the longest, before presenting the highest-confidence candidates to a person for evaluation. Think: scanning medical data for indicators of cancer, reading live data from machines to predict failure, etc.

          And what we call “AI” right now is just a much, much more user-friendly version of pattern matching: the primary feature of LLMs is that they natively interact with plain-language prompts.
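
          As a toy sketch of that triage pattern (my own illustration, with scikit-learn’s bundled dataset standing in for the medical or telemetry data above): score everything, surface the high-confidence candidates, leave the rest for a human pass.

          ```python
          from sklearn.datasets import load_breast_cancer
          from sklearn.ensemble import RandomForestClassifier
          from sklearn.model_selection import train_test_split

          # Train a classifier, then use its confidence scores to prioritise human review.
          X, y = load_breast_cancer(return_X_y=True)
          X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
          clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

          probs = clf.predict_proba(X_test)[:, 1]   # confidence that a sample is class 1
          THRESHOLD = 0.9                           # arbitrary "confident enough" cutoff
          flagged = [i for i, p in enumerate(probs) if p >= THRESHOLD]
          review = [i for i, p in enumerate(probs) if p < THRESHOLD]
          print(f"{len(flagged)} high-confidence candidates, {len(review)} left for a person")
          ```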

        • Zacryon@feddit.org · 7 days ago

          Ragebait?

          I’m in robotics and find plenty of use for ML methods. Think of image classifiers: how would you approach those without oversimplified problem settings?
          The same goes for control and coordination problems, which can become NP-hard. ML methods aren’t optimal, but they’re quite solid at learning patterns in high-dimensional NP-hard settings, often beating hand-crafted suboptimal solvers on computation effort versus solution quality, and beating (asymptotically) optimal solvers on runtime while still producing “good enough” solutions. (To be fair, suboptimal solvers make that trade-off as well, but since ML methods can outperform them, I see ML as an attractive middle ground.)
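
          For the image-classifier point, a minimal sketch of what the learned approach looks like in practice (assuming torchvision ≥ 0.13; `example.jpg` is a hypothetical input). Writing equivalent hand-crafted rules would be intractable.

          ```python
          import torch
          from PIL import Image
          from torchvision import models

          # Pretrained ImageNet classifier; the "rules" live in the learned weights.
          weights = models.ResNet18_Weights.DEFAULT
          model = models.resnet18(weights=weights).eval()
          preprocess = weights.transforms()

          img = Image.open("example.jpg")  # hypothetical input image
          with torch.no_grad():
              probs = model(preprocess(img).unsqueeze(0)).softmax(dim=-1)
          conf, idx = probs.max(dim=-1)
          print(weights.meta["categories"][idx.item()], f"confidence={conf.item():.2f}")
          ```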

    • Knock_Knock_Lemmy_In@lemmy.world · 7 days ago

      When given explicit instructions to follow, the models failed because they had not seen similar instructions before.

      This paper shows that there is no reasoning in LLMs at all, just extended pattern matching.
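
      For reference, the kind of explicit procedure at issue is short. If I remember the paper right, one of its puzzles is Tower of Hanoi, whose full algorithm fits in a few lines and can be handed to the model verbatim; a minimal Python version:

      ```python
      def hanoi(n: int, source: str, target: str, spare: str, moves: list) -> None:
          """Standard recursive Tower of Hanoi: move n disks from source to target."""
          if n == 0:
              return
          hanoi(n - 1, source, spare, target, moves)   # move n-1 disks out of the way
          moves.append((source, target))               # move the largest disk
          hanoi(n - 1, spare, target, source, moves)   # move the n-1 disks back on top

      moves: list = []
      hanoi(3, "A", "C", "B", moves)
      print(len(moves), moves)                         # 2**3 - 1 = 7 moves
      ```

      As I read the paper, even with the algorithm spelled out in the prompt, accuracy still collapses once the disk count grows.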

      • MangoCats@feddit.it · 7 days ago

        I’m not trained or paid to reason, I am trained and paid to follow established corporate procedures. On rare occasions my input is sought to improve those procedures, but the vast majority of my time is spent executing tasks governed by a body of (not quite complete, sometimes conflicting) procedural instructions.

        If AI can execute those procedures as well as, or better than, human employees, I doubt employers will care if it is reasoning or not.

          • MangoCats@feddit.it · 7 days ago

            Well - if you want to devolve into argument, you can argue all day long about “what is reasoning?”

            • technocrit@lemmy.dbzer0.com · 7 days ago

              This would be a much better paper if it addressed that question in an honest way.

              Instead they just parrot the misleading terminology that they’re supposedly debunking.

              How dat collegial boys club undermines science…

            • Knock_Knock_Lemmy_In@lemmy.world · 7 days ago

              You were starting a new argument. Let’s stay on topic.

              The paper implies “reasoning” is the application of logic. It shows that LRMs are great at copying logic but can’t follow simple instructions that haven’t been seen before.

    • REDACTED@infosec.pub · 7 days ago

      What confuses me is that we seemingly keep pushing away what counts as reasoning. Not too long ago, some smart algorithms or a bunch of if/then instructions for software were officially, by definition, software/computer reasoning. Logically, CPUs do it all the time. Suddenly, when AI is doing that with pattern recognition, memory and even more advanced algorithms, it’s no longer reasoning? I feel like at this point a more relevant question is “What exactly is reasoning?”. Before you answer, understand that most humans seemingly live by pattern recognition, not reasoning.

      https://en.wikipedia.org/wiki/Reasoning_system
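
      The classic rule-based sense of “reasoning system” that link describes is roughly this (a toy forward-chaining sketch of my own, not anything from the article):

      ```python
      # Forward chaining: apply if/then rules to known facts until nothing new derives.
      facts = {"socrates_is_human"}
      rules = [
          ({"socrates_is_human"}, "socrates_is_mortal"),
          ({"socrates_is_mortal"}, "socrates_will_die"),
      ]

      changed = True
      while changed:
          changed = False
          for conditions, conclusion in rules:
              if conditions <= facts and conclusion not in facts:
                  facts.add(conclusion)
                  changed = True

      print(facts)  # both conclusions derived by chaining the rules
      ```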

      • technocrit@lemmy.dbzer0.com · 7 days ago

        Sure, these grifters are shady AF about their wacky definition of “reason”… But that’s just a continuation of the entire “AI” grift.

      • MangoCats@feddit.it · 7 days ago

        I think as we approach the uncanny valley of machine intelligence, it’s no longer a cute cartoon but a menacing, creepy, not-quite imitation of ourselves.

      • stickly@lemmy.world · 7 days ago

        If you want to boil down human reasoning to pattern recognition, the sheer amount of stimuli and associations built off of that input absolutely dwarfs anything an LLM will ever be able to handle. It’s like comparing PhD reasoning to a dog’s reasoning.

        While a dog can learn some interesting tricks and the smartest dogs can solve simple novel problems, there are hard limits. They simply lack strong metacognition and the ability to make simple logical inferences (e.g., why they fail at the shell game).

        Now we make that chasm even larger by cutting the stimuli to a fixed token limit. An LLM can do some clever tricks within that limit, but it’s designed to do exactly those tricks and nothing more. To get anything resembling human ability you would have to design something to match human complexity, and we don’t have the tech to make a synthetic human.

    • Tobberone@lemm.ee · 7 days ago

      What statistical method do you base that claim on? The results presented match expectations given that Markov chains are still the basis of inference. What magic juice is added to “reasoning models” that allow them to break free of the inherent boundaries of the statistical methods they are based on?

      • minoscopede@lemmy.world · 7 days ago

        I’d encourage you to research this space and learn more about it.

        As it is, the statement “Markov chains are still the basis of inference” doesn’t make sense, because Markov chains are a separate thing. You might be thinking of Markov decision processes, which are used in training RL agents, but that’s also unrelated, because these models are not RL agents; they’re supervised learning agents. And even if they were RL agents, the MDP describes the training environment, not the model itself, so it’s not really used for inference.
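
        To make the distinction concrete (a toy sketch of my own, not anything from the paper): a first-order Markov chain conditions only on the previous token, while an autoregressive LLM conditions on the whole context window.

        ```python
        import random

        random.seed(0)

        # First-order Markov chain: the next token depends only on the current token.
        bigram = {"the": {"cat": 0.5, "dog": 0.5}, "cat": {"sat": 1.0},
                  "dog": {"sat": 1.0}, "sat": {"down": 1.0}}

        def markov_step(current: str) -> str:
            options = list(bigram[current])
            return random.choices(options, weights=[bigram[current][o] for o in options])[0]

        # Toy "LLM" step: the continuation is a function of the entire context so far.
        def full_context_step(context: tuple[str, ...]) -> str:
            rules = {("the", "cat", "sat"): "down",
                     ("the", "dog", "sat"): "up"}   # same last token, different output
            return rules.get(context, markov_step(context[-1]))

        print(markov_step("sat"))                        # always "down", whatever came before
        print(full_context_step(("the", "dog", "sat")))  # "up": earlier tokens matter
        ```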

        I mean this just as an invitation to learn more, and not pushback for raising concerns. Many in the research community would be more than happy to welcome you into it. The world needs more people who are skeptical of AI doing research in this field.

    • AbuTahir@lemm.ee (OP) · 7 days ago

      Cognitive scientist Douglas Hofstadter (1979) showed reasoning emerges from pattern recognition and analogy-making - abilities that modern AI demonstrably possesses. The question isn’t if AI can reason, but how its reasoning differs from ours.

  • atlien51@lemm.ee · 8 days ago

    Employers who are foaming at the mouth at the thought of replacing their workers with cheap AI:

    🫢

    • technocrit@lemmy.dbzer0.com · 7 days ago

      The funny thing about this “AI” griftosphere is how grifters will make some outlandish claim and then different grifters will “disprove” it. Plenty of grant/VC money for everybody.

    • jj4211@lemmy.world · 7 days ago

      Without explicit, well-researched material, the marketing presentation gets to stand largely unopposed.

      So this is good even if most experts in the field consider it an obvious result.

  • hornedfiend@sopuli.xyz · 8 days ago

    While I hate LLMs with a passion, and my opinion of them boils down to “glorified search engines and data scrapers”, I would ask Apple: how sour are the grapes, eh?

    edit: wording

  • ZILtoid1991@lemmy.world · 8 days ago

    Thank you, Captain Obvious! Only those who think LLMs are like “little people in the computer” didn’t know this already.

    • TheFriar@lemm.ee · 8 days ago

      Yeah, well, there are a ton of people literally falling into psychosis, led on by LLMs. So unfortunately it’s not that many people who already knew it.

  • skisnow@lemmy.ca · 8 days ago

    What’s hilarious/sad is the response to this article over on Reddit’s “singularity” sub, in which all the top comments are from people who’ve obviously never got all the way through a research paper in their lives, trashing Apple and claiming its researchers don’t understand AI or “reasoning”. It’s a weird cult.

  • flandish@lemmy.world · 8 days ago

    stochastic parrots. all of them. just upgraded “soundex” models.

    this should be no surprise, of course!

    • finitebanjo@lemmy.world · 8 days ago

      That’s not really a valid argument for why, but yes, the models that use training data to assemble statistical models are all bullshitting. TBH idk how people can convince themselves otherwise.

      • turmacar@lemmy.world · 8 days ago

        I think because it’s language.

        There’s a famous anecdote about Charles Babbage presenting his difference engine (a gear-based calculator): someone asked, “If you put in the wrong figures, will the correct ones be output?”, and Babbage couldn’t understand how anyone could so thoroughly misunderstand that the machine is just a machine.

        People are people; the main thing that’s changed since the cuneiform copper customer complaint is our materials science and our ability to network. Most things that people interact with every day, most people just assume work the way they appear to on the surface.

        And nothing other than a person could do math problems or talk back to you, so people assume that means intelligence.

        • leftzero@lemmynsfw.com · 8 days ago

          “if you put in the wrong figures, will the correct ones be output”

          To be fair, an 1840 “computer” might be able to tell there was something wrong with the figures and ask about it or even correct them herself.

          Babbage was being a bit obtuse there; people weren’t familiar with computing machines yet. Computer was a job, and computers were expected to be fairly intelligent.

          In fact, I’d say that if anything this question shows the questioner understood enough about the new machine to realise it was not the same as the computers they knew, that it lacked many of their abilities, and that they were just looking for Babbage to confirm their suspicions.

          • turmacar@lemmy.world · 8 days ago

            “Computer” meaning a mechanical/electro-mechanical/electrical machine wasn’t used until roughly after WWII.

            Babbage’s difference/analytical engines weren’t confusing because people called them computers; they didn’t.

            “On two occasions I have been asked, ‘Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?’ I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.”

            – Charles Babbage

            If you give any computer, human or machine, random numbers, it will not give you “correct answers”.

            It’s possible Babbage lacked the social skills to detect sarcasm. We also have several high profile cases of people just trusting LLMs to file legal briefs and official government ‘studies’ because the LLM “said it was real”.

            • AppleTea@lemmy.zip · 7 days ago

              What they mean is that before Turing, “computer” was literally a person’s job description. You hand a professional a stack of calculations with some typos, and part of the job is correcting those. When a newfangled machine comes along with the same name as the job, among the first things people are gonna ask about is where it falls short.

              Like, if I made a machine called “assistant”, it’d be natural for people to point out and ask about all the things a person can do that a machine just never could.

        • finitebanjo@lemmy.world · 8 days ago

          I often feel like I’m surrounded by idiots, but even I can’t begin to imagine what it must have felt like to be Charles Babbage explaining computers to people in 1840.

      • intensely_human@lemm.ee · 8 days ago

        They aren’t bullshitting because the training data is based on reality. Reality bleeds through the training data into the model. The model is a reflection of reality.

        • finitebanjo@lemmy.world · 8 days ago

          An approximation of reality with more than a 1-in-20 error rate, one that produces massive amounts of tokens in quick succession, is a shit representation of reality, inferior in every way to human accounts, to the point of being unusable in the industries where it’s promoted.

          And that error rate can only spike when the training data itself contains errors, which will only grow as it samples its own content.

      • Encrypt-Keeper@lemmy.world · 8 days ago

        TBH idk how people can convince themselves otherwise.

          They don’t convince themselves. They’re convinced by the multi-billion-dollar corporations pouring unholy amounts of money into not only the development of AI, but its marketing. Marketing designed not only to convince them that AI is something it’s not, but also that anyone who says otherwise (like you) is just a luddite who’s going to be “left behind”.

        • leftzero@lemmynsfw.com · 8 days ago

          LLMs are also very good at convincing their users that they know what they are saying.

          It’s what they’re really selected for. Looking accurate sells more than being accurate.

          I wouldn’t be surprised if many of the people selling LLMs as AI have drunk their own kool-aid (of course most just care about the line going up, but still).

        • Blackmist@feddit.uk · 8 days ago

          It’s no surprise to me that the person at work who is most excited by AI, is the same person who is most likely to be replaced by it.

          • Encrypt-Keeper@lemmy.world · 8 days ago

            Yeah, the excitement comes from the fact that they’re thinking of replacing themselves and keeping the money. They don’t get to “Step 2” in their heads lmao.

    • intensely_human@lemm.ee · 8 days ago

      Computers are better at logic than brains are. We emulate logic; they do it natively.

      It just so happens there’s no logical algorithm for “reasoning” a problem through.