NinjaZ@infosec.pub to Technology@lemmy.world · English · 17 hours ago
China scientists develop flash memory 10,000× faster than current tech (interestingengineering.com)
cross-posted to: technology@lemmy.ml
gravitas_deficiency@sh.itjust.works · 12 hours ago
You can get a Coral TPU for 40 bucks or so. You can get an AMD APU with an NN-inference-optimized tile for under 200. Training can be done with any relatively modern GPU, with varying efficiency and capacity depending on how much you want to spend. What price point are you trying to hit?
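For anyone curious what "hardware ML support for local inference" on a Coral TPU looks like in practice, here is a minimal sketch of image classification with Google's pycoral library; the model, labels, and image file names are placeholders, and you would need an Edge TPU-compiled .tflite model.

```python
# Minimal sketch: image classification on a Coral Edge TPU with pycoral.
# Assumes an Edge TPU-compiled .tflite model; file names below are placeholders.
from PIL import Image
from pycoral.utils.edgetpu import make_interpreter
from pycoral.utils.dataset import read_label_file
from pycoral.adapters import common, classify

# Load the model onto the Edge TPU and allocate input/output tensors.
interpreter = make_interpreter("mobilenet_v2_quant_edgetpu.tflite")  # placeholder model
interpreter.allocate_tensors()
labels = read_label_file("imagenet_labels.txt")  # placeholder labels file

# Resize the input image to the model's expected dimensions and run inference.
image = Image.open("test.jpg").resize(common.input_size(interpreter), Image.LANCZOS)
common.set_input(interpreter, image)
interpreter.invoke()

# Print the top-3 predictions with their scores.
for c in classify.get_classes(interpreter, top_k=3):
    print(f"{labels.get(c.id, c.id)}: {c.score:.4f}")
```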
boonhet@lemm.ee · 11 hours ago
> What price point are you trying to hit?
With regards to AI? None, tbh. With this super fast storage I have other cool ideas, but I don't think I can get enough bandwidth to saturate it.
gravitas_deficiency@sh.itjust.works · 10 hours ago
You're willing to pay $none to have hardware ML support for local training and inference? Well, I'll just say that you're gonna get what you pay for.
bassomitron@lemmy.world · 9 hours ago
No, I think they're saying they're not interested in ML/AI. They want this super fast memory available for regular servers for other use cases.