AbuTahir@lemm.ee to Technology@lemmy.world · English · edited 1 month ago
Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. They just memorize patterns really well. (archive.is, 350 comments)
Knock_Knock_Lemmy_In@lemmy.world · 1 month ago
This paper does provide a solid proof by counterexample of reasoning not occurring (failing to follow an algorithm) when it should. The paper doesn't need to prove that reasoning never has or will occur. It only demonstrates that current claims of AI reasoning are overhyped.

Communist@lemmy.frozeninferno.xyz · edited 1 month ago
It does need to do that to meaningfully change anything, however.

Knock_Knock_Lemmy_In@lemmy.world · 1 month ago
Other way around. The claimed meaningful change (reasoning) has not occurred.

Communist@lemmy.frozeninferno.xyz · 1 month ago
Meaningful change isn't happening because of this paper either. I don't know why you're playing semantic games with me, though.

Knock_Knock_Lemmy_In@lemmy.world · 1 month ago
> I don't know why you're playing semantic games

I'm trying to highlight the goal of this paper. This is a knock-them-down paper by Apple, justifying (to its shareholders) its non-investment in LLMs. It is not a build-them-up paper aiming for meaningful change or a better AI.