schizoidman@lemm.ee to Technology@lemmy.world · English · 1 month ago
DeepSeek's distilled new R1 AI model can run on a single GPU | TechCrunch (techcrunch.com)
20 comments
Irdial@lemmy.sdf.org · edited · 1 month ago
On my Mac mini running LM Studio, it managed 1702 tokens at 17.19 tok/sec and thought for 1 minute. If accurate, high-performance models could run more readily on consumer hardware, I would use my 3060 as a dedicated inference device.
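For anyone wanting to reproduce a tok/sec figure like the one above, here is a minimal sketch that times a completion against LM Studio's local OpenAI-compatible server (default http://localhost:1234/v1). It assumes the `openai` Python package is installed and a model is already loaded in LM Studio; the model name below is a placeholder, not the exact build the commenter used.

```python
# Minimal sketch: measure completion tokens per second against a local
# LM Studio server exposing an OpenAI-compatible API (default port 1234).
import time
from openai import OpenAI

# LM Studio ignores the API key, but the client requires a non-empty value.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

start = time.time()
resp = client.chat.completions.create(
    model="deepseek-r1-distill-qwen-8b",  # placeholder: use your loaded model's identifier
    messages=[{"role": "user", "content": "Briefly explain KV caching."}],
)
elapsed = time.time() - start

# usage.completion_tokens counts generated tokens, including any reasoning output.
tokens = resp.usage.completion_tokens
print(f"{tokens} tokens in {elapsed:.1f}s ({tokens / elapsed:.2f} tok/sec)")
```

Wall-clock timing like this includes prompt processing, so the number will read slightly lower than the generation speed LM Studio reports in its own UI.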