• I Cast Fist@programming.dev · 22 hours ago

    Anthropic made lots of intriguing discoveries using this approach, not least of which is why LLMs are so terrible at basic mathematics. “Ask Claude to add 36 and 59 and the model will go through a series of odd steps, including first adding a selection of approximate values (add 40ish and 60ish, add 57ish and 36ish). Towards the end of its process, it comes up with the value 92ish. Meanwhile, another sequence of steps focuses on the last digits, 6 and 9, and determines that the answer must end in a 5. Putting that together with 92ish gives the correct answer of 95,” the MIT article explains.

    But here’s the really funky bit. If you ask Claude how it got the correct answer of 95, it will apparently tell you, “I added the ones (6+9=15), carried the 1, then added the 10s (3+5+1=9), resulting in 95.” But that actually only reflects common answers in its training data as to how the sum might be completed, as opposed to what it actually did.
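
    To make the contrast concrete, here's a toy Python sketch (my own invention, not Anthropic's actual circuits) of the two parallel paths the researchers describe, next to the textbook carry method Claude claims to use. The band width and helper names are assumptions for illustration only.

    ```python
    # Toy sketch: one fuzzy path estimates rough magnitude, one precise path
    # pins down the last digit; intersecting them yields the exact answer.

    def fuzzy_magnitude_path(a: int, b: int) -> range:
        """Approximate path: '40ish + 60ish' gives a coarse band near the sum."""
        estimate = round(a, -1) + round(b, -1)    # 40 + 60 = 100
        return range(estimate - 5, estimate + 5)  # any width-10 band works

    def last_digit_path(a: int, b: int) -> int:
        """Precise path: only the ones digits matter (6 + 9 ends in 5)."""
        return (a % 10 + b % 10) % 10

    def two_path_add(a: int, b: int) -> int:
        """Intersect the paths: the one number in the band with that last digit."""
        digit = last_digit_path(a, b)
        return next(n for n in fuzzy_magnitude_path(a, b) if n % 10 == digit)

    def claimed_carry_method(a: int, b: int) -> int:
        """What Claude *says* it did: add the ones, carry, add the tens."""
        ones = (a % 10) + (b % 10)            # 6 + 9 = 15
        carry, ones_digit = divmod(ones, 10)  # carry 1, keep 5
        tens = (a // 10) + (b // 10) + carry  # 3 + 5 + 1 = 9
        return tens * 10 + ones_digit

    print(two_path_add(36, 59))          # 95
    print(claimed_carry_method(36, 59))  # 95 -- same answer, different process
    ```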

    Another very surprising outcome of the research is the discovery that these LLMs do not, as is widely assumed, operate by merely predicting the next word. By tracing how Claude generated rhyming couplets, Anthropic found that it chose the rhyming word at the end of verses first, then filled in the rest of the line.
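
    A crude way to picture that plan-then-fill process in code (the rhyme table and line template below are made up for the sketch; this is an analogy, not Claude's real mechanism):

    ```python
    # Toy plan-then-fill generation: pick the line-ending rhyme word FIRST,
    # then write the rest of the line around it -- the opposite of strict
    # left-to-right, one-word-at-a-time composition.

    RHYMES = {"light": ["night", "bright", "kite"]}  # invented rhyme table

    def couplet(first_line: str) -> tuple[str, str]:
        end_word = first_line.split()[-1]  # "light"
        rhyme = RHYMES[end_word][0]        # step 1: choose "night" up front
        second_line = f"and wandered slowly home into the {rhyme}"  # step 2: fill in
        return first_line, second_line

    for line in couplet("we walked along in fading light"):
        print(line)
    ```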

  • cholesterol@lemmy.world · 2 days ago

    you can’t trust its explanations as to what it has just done.

    I might have had a lucky guess, but this was basically my assumption. You can’t ask LLMs how they work and get an answer coming from an internal understanding of themselves, because they have no ‘internal’ experience.

    Unless you make a scanner like the one in the study, non-verbal processing is as much of a black box to their ‘output voice’ as it is to us.

    • cley_faye@lemmy.world · 1 day ago

      Anyone who has used them for even a limited amount of time can tell you that the thing can give you a correct, detailed explanation of how to do something, then provide a broken result. And vice versa. Digging in by asking more questions has zero chance of being useful.

  • Technoworcester@lemm.ee · 2 days ago

    ‘is weirder than you thought’

    I am as likely to click a link with that line as one with ‘this one weird trick’ or ‘side hustle’.

    I would really like it if headlines treated us like adults and got rid of clickbaity lines.

    • BackgrndNoize@lemmy.world · 2 days ago

      But then you wouldn’t need to click on their ad-infested shite website, where 1–2 paragraphs’ worth of actual information is stretched into a giant essay so that they can show you more ads the longer you scroll.

      • EpeeGnome@lemm.ee · 1 day ago

        Well, I’m doing my part against them by refusing to click on any bait headlines, but I fear it’s a lost cause anyway.

        • BeardedGingerWonder@feddit.uk · 1 day ago

          I try to just ignore it and read what I’m interested in regardless. From what I hear about the YouTube algorithm, for instance, clickbait titles are a necessity more than a choice for YouTubers: if they don’t use them, they get next to no early engagement and the algorithm buries the video, which can impact the channel in general.

      • Tony Wu@lemmy.world · 2 days ago

        It really is quite unfortunate; I wish titles did what titles are supposed to do instead of being bait. But you’re right, even when consciously trying to avoid clicking, sometimes curiosity gets the best of me. But I am improving.

  • perestroika@lemm.ee · 2 days ago

    Wow, interesting. :)

    Not unexpectedly, the LLM failed to explain its own thought process correctly.

    • shneancy@lemmy.world · 1 day ago

      tbf, how do you know what to say and when? or what 2+2 is?

      you learnt it? well so did AI

      i’m not an AI nut or anything, but we can barely comprehend our own internal processes, it’d be concerning if a thing humanity created was better at it than us lol

      • El Barto@lemmy.world · 15 hours ago

        You’re comparing two different things.

        Of course I can reflect on how I came up with a math result.

        “Wait, how did you come up with 4 when I asked you 2+2?”

        You can confidently say: “well, my teacher said it once and I’m just parroting it.” Or “I pictured two fingers in my mind, then pictured two more fingers and then I counted them.” Or “I actually thought that I’d say some random number, came up with 4 because it’s my favorite digit, said it and it was pure coincidence that it was correct!”

        Whereas it seems like Claude can’t do this.

        Of course, you could ask me “what’s the physical/chemical process your neurons follow for you to form those four fingers you picture in your mind?” And I would tell you I don’t know. But again, that’s a different thing.

  • dkc@lemmy.world · 2 days ago

    The research paper looks well written, but I couldn’t find any information on whether it is going to be published in a reputable journal and peer reviewed. I have little faith in private businesses that profit from AI providing an unbiased view of how AI works. The first question I’d like answered is whether Anthropic’s marketing department reviewed the paper, and whether they offered any corrections or feedback. We’ve all heard the stories about the tobacco industry paying for papers to be written about the benefits of smoking and refuting health concerns.

    • StructuredPair@lemmy.world · 2 days ago

      A lot of AI research isn’t published in journals but is instead either posted to a corporate website or put up on the arXiv. There are some AI journals, but the AI community doesn’t particularly value them (and threw a bit of a fit when they came out). In my opinion this article is mostly marketing and doesn’t show anything that should surprise anyone familiar with how neural networks generically work.

  • Not_mikey@slrpnk.net · 2 days ago

    Another very surprising outcome of the research is the discovery that these LLMs do not, as is widely assumed, operate by merely predicting the next word. By tracing how Claude generated rhyming couplets, Anthropic found that it chose the rhyming word at the end of verses first, then filled in the rest of the line.

    If the LLM already knows the full sentence it’s going to output from the first word it “guesses”, I wonder if you could short-circuit it and have it output the full sentence at once instead of running a cycle for each word. That could maybe cut down on LLM energy costs.

    • funkless_eck@sh.itjust.works · 2 days ago

      Interestingly, this is also a technique used when improvising songs; it’s called Target Rhyming.

      The most effective way is to do A / B^1 / C / B^2 rhymes. You pick the B^2 rhyme, let’s say “ibuprofen”, and you get all of A and B^1 to think of the rhyme:

      Oh it’s Christmas time
      And I was up on my roof when
      I heard a jolly old voice
      Ask me for ibuprofen

      And the audience thinks you’re fucking incredible for complex rhymes.

    • angrystego@lemmy.world · 2 days ago

      I don’t think it knows the full sentence; it just doesn’t search for the words in the order they will appear in the sentence. It finds the end words first to make the poem rhyme, then looks for the rest of the words. I do it this way as well, just like many other people trying to write any kind of rhyming text.

  • Imgonnatrythis@sh.itjust.works · 3 days ago

    “Ask Claude to add 36 and 59 and the model will go through a series of odd steps, including first adding a selection of approximate values (add 40ish and 60ish, add 57ish and 36ish). Towards the end of its process, it comes up with the value 92ish. Meanwhile, another sequence of steps focuses on the last digits, 6 and 9, and determines that the answer must end in a 5. Putting that together with 92ish gives the correct answer of 95,” the MIT article explains.

    That is precisely how I do math. Feel a little targeted that they called this odd.

    • Echo Dot@feddit.uk · 2 days ago

      But you’re doing two calculations now, an approximate one and another one on the last digits. Since you’re going to do the approximate calculation anyway, you might as well just do the accurate calculation and be done in one step.

      This solution, while it works, has the feeling of evolution. No intelligent design, which I suppose makes sense considering the AI did essentially evolve.

    • JayGray91@lemmy.zip · 2 days ago

      I think it’s odd in the sense that it’s supposed to be software, so it should already know what 36 plus 59 is in a picosecond instead of doing mental arithmetic like we do.

      At least that’s my takeaway

      • shawn1122@lemm.ee · 2 days ago

        This is what the ARC-AGI test by Chollet has also revealed about current AI / LLMs. They have a tendency to approach problems with this trial-and-error method and can be extremely inefficient (in their current form) with anything involving abstract or deductive reasoning.

        Most LLMs do terribly at the test, with the most recent breakthrough coming from reasoning models. But even the reasoning models struggle.

        ARC-AGI is simple, but it demands a keen sense of perception and, in some sense, judgment. It consists of a series of incomplete grids that the test-taker must color in based on the rules they deduce from a few examples; one might, for instance, see a sequence of images and observe that a blue tile is always surrounded by orange tiles, then complete the next picture accordingly. It’s not so different from paint by numbers.

        The test has long seemed intractable to major AI companies. GPT-4, which OpenAI boasted in 2023 had “advanced reasoning capabilities,” didn’t do much better than the zero percent earned by its predecessor. A year later, GPT-4o, which the start-up marketed as displaying “text, reasoning, and coding intelligence,” achieved only 5 percent. Gemini 1.5 and Claude 3.7, flagship models from Google and Anthropic, achieved 5 and 14 percent, respectively.

        https://archive.is/7PL2a
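
        For anyone who hasn’t seen ARC: here’s a made-up mini-task in the same spirit (far simpler than real ARC-AGI puzzles). The rule below is hard-coded for illustration; on the actual test, inferring the rule from a few examples is the whole challenge, and that’s the part models struggle with.

        ```python
        # Hypothetical ARC-style rule: every blue cell ("B") must be
        # surrounded by orange cells ("O").

        def apply_rule(grid: list[list[str]]) -> list[list[str]]:
            h, w = len(grid), len(grid[0])
            out = [row[:] for row in grid]
            for y in range(h):
                for x in range(w):
                    if grid[y][x] == "B":
                        for dy in (-1, 0, 1):      # paint the 8 neighbours orange
                            for dx in (-1, 0, 1):
                                ny, nx = y + dy, x + dx
                                if (dy or dx) and 0 <= ny < h and 0 <= nx < w:
                                    out[ny][nx] = "O"
            return out

        grid = [[".", ".", ".", "."],
                [".", "B", ".", "."],
                [".", ".", ".", "."]]
        for row in apply_rule(grid):
            print(" ".join(row))
        # O O O .
        # O B O .
        # O O O .
        ```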

        • Goretantath@lemm.ee · 1 day ago

          It’s funny, because I approach life with a trial-and-error method too. It’s not efficient, but I get the job done in the end. I always see others who don’t, who give up, like all the people bad with computers who ask the company’s tech support to fix a problem instead of thinking about it for two seconds, and I wonder where life went wrong.

      • sapetoku@sh.itjust.works · 2 days ago

        A regular AI should use a calculator subroutine, not try to discover basic math every time it’s asked something.
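
        A minimal sketch of that idea (the CALC(...) tag and function names here are invented; real systems wrap the same pattern in JSON “tool use” / function-calling APIs): when the model emits a calculation request, exact code answers it instead of the network’s learned arithmetic.

        ```python
        import ast
        import operator as op

        OPS = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul, ast.Div: op.truediv}

        def calculator(expr: str) -> float:
            """Exactly and safely evaluate a basic arithmetic expression."""
            def ev(node: ast.AST) -> float:
                if isinstance(node, ast.Constant):  # a literal number
                    return node.value
                if isinstance(node, ast.BinOp):     # e.g. 36 + 59
                    return OPS[type(node.op)](ev(node.left), ev(node.right))
                raise ValueError("unsupported expression")
            return ev(ast.parse(expr, mode="eval").body)

        def route(model_output: str) -> str:
            """If the model emitted a CALC(...) tool call, trust the tool."""
            if model_output.startswith("CALC(") and model_output.endswith(")"):
                return str(calculator(model_output[5:-1]))
            return model_output

        print(route("CALC(36 + 59)"))  # 95
        ```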

      • Imgonnatrythis@sh.itjust.works · 2 days ago

        Fascist. If someone does maths differently than your preference, it’s not “weird shit”. I’m facile with mental math despite what’s perhaps a non-standard approach, and it’s quite functional to be able to perform simple to moderate levels of mathematics mentally without relying on a calculator.

          • Imgonnatrythis@sh.itjust.works · 1 day ago

            Thought police mate. You don’t tell people the way they think is weird shit just because they think differently than you. Break free from that path.

            • Lemminary@lemmy.world · 1 day ago

              The reply was literally “*I* use a calculator” followed by “AI should use one too”. Are you suggesting that you’re an LLM or how did you cut a piece of cloth for yourself out of that?

              • GSV_Sleeper_Service@lemmy.world · 24 hours ago

                Calling someone a fascist for that is obviously a bit OTT, but you’ve ignored the “do weird shit” part of the response, so it wasn’t literally what you said. Taking the full response into account, you can easily interpret it as: “I don’t bother with mental maths but use a calculator instead; anyone who isn’t like me is weird as shit.”

                That is a bit thought police-y

                • ClamDrinker@lemmy.world · 22 hours ago

                  Except as you demonstrated, it requires quite a few leaps of interpretation, assuming the worst interpretations of OP’s statement, which is why it’s silly. OP clearly limited their statement to themselves and AI.

                  Now if OP said, “everyone should use a calculator or die”, maybe then it would have been a valid response.

                • Lemminary@lemmy.world · 22 hours ago

                  I didn’t ignore it, I just interpret it differently as in, “I don’t need to do this unusual stuff everyone does without a calculator”. Calling something weird doesn’t necessarily mean it’s off-color or that it’s a trait the other person has. In my use case, weird just means unexpected or counterintuitive, and maybe complex enough that I can’t bother with describing it properly. I know because I use it that way too. Weird doesn’t have to mean a third eye on your face every time. I mean, doing the weird math thing is taught in school as a strategy.

                  I do want to mention that this isn’t the first time I’ve seen a visceral reaction to a passing comment. I usually see this from marginalized groups, and I can assure you, both Kolanki and I are part of those too. And knowing his long comment history, I sincerely doubt he meant anyone is weird as shit.

                  And even if it’s a bit thought-policey, how does that warrant calling someone a fascist and going off on them like that? That’s also a bit weird (as in odd).

        • I am talking about the AI. It’s already a computer. It shouldn’t need to do anything other than calculate the equations. It doesn’t have a brain, it doesn’t think like a human, so it shouldn’t need any special tools or ways to help it do math. It is a calculator, after all.

        • artichoke99@lemm.ee · 2 days ago

          OK, but the LLM is evidently shit at math, so its “non-standard” approach should still be adjusted.

  • hersh@literature.cafe · 3 days ago

    But here’s the really funky bit. If you ask Claude how it got the correct answer of 95, it will apparently tell you, “I added the ones (6+9=15), carried the 1, then added the 10s (3+5+1=9), resulting in 95.” But that actually only reflects common answers in its training data as to how the sum might be completed, as opposed to what it actually did.

    This is not surprising. LLMs are not designed to have any introspection capabilities.

    Introspection could probably be tacked onto existing architectures in a few different ways, but as far as I know nobody’s done it yet. It will be interesting to see how that might change LLM behavior.

    • kshade@lemmy.world · 2 days ago

      I’m surprised that they are surprised by this as well. What did they expect, and why? How much of this is written to imply LLMs - their business - are more advanced/capable than they actually are?

    • Singletona082@lemmy.world · 2 days ago

      Then take that concept further, and let it keep introspecting and inspecting how it comes to the conclusions it does and eventually…

  • moonlight@fedia.io · 3 days ago

    The math example in particular is very interesting, and makes me wonder if we could splice a calculator into the model, basically doing “brain surgery” to short circuit the learned arithmetic process and replace it.

    • Not_mikey@slrpnk.net · 2 days ago

      I think a lot of services are doing this behind the scenes already. Otherwise chatgpt would be getting basic arithmetic wrong a lot more considering the methods the article has shown it’s using.

    • Nougat@fedia.io · 3 days ago

      That math process for adding the two numbers - there’s nothing wrong with it at all. Estimate the total and come up with a range. Determine exactly what the last digit is. In the example, there’s only one number in the range with 5 as the last digit. That must be the answer. Hell, I might even use that same method in my own head.

      The poetry example, people use that one often enough, too. Come up with a couple of words you would have fun rhyming, and build the lines around those words. Nothing wrong with that, either.

      These two processes are closer to “thought” than I previously imagined.

      • moonlight@fedia.io · 3 days ago

        Well, it falls apart pretty easily. LLMs are notoriously bad at math. And even if it was accurate consistently, it’s not exactly efficient, when a calculator from the 80s can do the same thing.

        We have setups where LLMs can call external functions, but I think it would be cool and useful to be able to replace certain internal processes.

        As a side note though, while I don’t think that it’s a “true” thought process, I do think there’s a lot of similarity with LLMs and the human subconscious. A lot of LLM behaviour reminds me of split brain patients.

        And as for the math aspect, it does seem like it does math very similarly to us. Studies show that we think of small numbers as discrete quantities, but big numbers in terms of relative size, which seems like exactly what this model is doing.

        I just don’t think it’s a particularly good way of doing mental math. Natural intuition in humans and gradient descent in LLMs both seem to create layered heuristics that can become pretty much arbitrarily complex, but it still makes more sense to follow an exact algorithm for some things.

        • dual_sport_dork 🐧🗡️@lemmy.world · 3 days ago

          when a calculator from the 80s can do the same thing.

          1970s! The little blighters are even older than most people think.

          Which is why I find it extra hilarious / extra infuriating that we’ve gone through all of these contortions and huge wastes of computing power and electricity to ultimately just make a computer worse at math.

          Math is the one thing that computers are inherently good at. It’s what they’re for. Trying to use LLMs to perform it half-assedly is a completely braindead endeavor.

  • Pennomi@lemmy.world · 3 days ago

    This is great stuff. If we can properly understand these “flows” of intelligence, we might be able to write optimized shortcuts for them, vastly improving performance.

    • LarmyOfLone@lemm.ee · 2 days ago

      Better yet, teach the AI to write code that replaces specific optimized AI networks. Then automatically profile, optimize, and unit test!

    • MTK@lemmy.world · 3 days ago

      That has always been the case. Even basic programs need debugging sometimes, so we developed debuggers.

      • LarmyOfLone@lemm.ee · 2 days ago

        Not really. When you program, you break the problem down into many smaller subprograms and then codify them. There are errors that need debugging, but never “how does this part of the program I wrote work?”. Reading someone else’s code is less fun than writing your own, but you can still understand it.

        There are some cases, like detergents: apparently until recently we didn’t know exactly how they work. But human-engineered tools are not comparable to this.