AbuTahir@lemm.ee to Technology@lemmy.world · English · edited 1 month ago
Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. They just memorize patterns really well. (archive.is)
Knock_Knock_Lemmy_In@lemmy.world · 1 month ago
Other way around. The claimed meaningful change (reasoning) has not occurred.
Communist@lemmy.frozeninferno.xyz · 1 month ago
Meaningful change isn't happening because of this paper either. I don't know why you're playing semantic games with me, though.
Knock_Knock_Lemmy_In@lemmy.world · 1 month ago
> I don't know why you're playing semantic games

I'm trying to highlight the goal of this paper. This is a knock-them-down paper by Apple, justifying (to their shareholders) their non-investment in LLMs. It is not a build-them-up paper aiming for meaningful change or a better AI.