Machine learning pays my bills, and I never had a choice of graphics card brand. To be sure, I wanted an AMD card for the open-source drivers, but CUDA remains essential to me. AMD's ROCm support is a joke and isn't anywhere close to an alternative. Researchers release code that only runs on CUDA for good reason. To say that I don't get to complain is going too far.
Exactly. You'd think that with the two things they're really competitive on being raw FLOPS and memory, they'd be a viable option for ML and scientific compute, but they're such a pain to work with that they're practically irrelevant.
You get to complain to Nvidia, not Linux developers and maintainers.
That’s true, but it also wasn’t fair to be a Wayland detractor then.
Nvidia needed to do the work to make that combination viable, and the delay in doing so was no one's fault but Nvidia's.