
The point is that an AMD GPU is far from useless. The only thing that it DOES lack is out-of-the-box support from major Python/R/whatever libraries. Why? Not because AMD GPUs don't work, but because most (perhaps all) of these high-level libraries rely on underlying performance libraries provided by Nvidia.

Despite all the talk about autodiff this or that, the stuff that matters is implemented by hand by Nvidia and Intel engineers and then high level libraries build on top. AMD is simply lagging in providing low-level C libraries and GPU kernels for that.

For example, let me chip in with the libraries I develop, in Clojure, no less. They support BOTH Nvidia GPU AND AMD GPU backends. Most of the stuff is equally good on AMD GPU and Nvidia GPU. With less fuss than in Julia and Python, I'd argue.

Check out Neanderthal, for example: https://neanderthal.uncomplicate.org
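To give a flavor, here is roughly what the CPU (MKL-backed) side looks like. This is a minimal sketch, with namespaces and constructors as I recall them from the Neanderthal docs, so check there for the exact, current API:

    (require '[uncomplicate.neanderthal.core :refer [mm dot]]
             '[uncomplicate.neanderthal.native :refer [fge fv]])

    ;; Single-precision matrices in native memory, computed by MKL on the CPU.
    (def a (fge 2 3 [1 2 3 4 5 6]))     ; 2x3 matrix (column-major)
    (def b (fge 3 2 [7 8 9 10 11 12]))  ; 3x2 matrix

    (mm a b)                     ; matrix multiplication (BLAS gemm under the hood)
    (dot (fv 1 2 3) (fv 4 5 6))  ; vector dot product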

Top performance on Intel CPU, Nvidia GPU, AND AMD GPU, from Clojure, with no overhead, faster than NumPy, etc. You can even mix all three in the same thread with the same code.
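On a GPU, the same operation goes through the OpenCL backend (which is what AMD cards use). Again, a sketch rather than copy-paste material; the with-default / with-default-engine / clge pattern follows the published tutorials, but double-check the details there:

    (require '[uncomplicate.commons.core :refer [with-release]]
             '[uncomplicate.clojurecl.core :refer [with-default]]
             '[uncomplicate.neanderthal.core :refer [mm transfer! transfer]]
             '[uncomplicate.neanderthal.native :refer [fge]]
             '[uncomplicate.neanderthal.opencl :refer [with-default-engine clge]])

    (with-default                        ; default OpenCL platform, context, queue
      (with-default-engine               ; Neanderthal's OpenCL engine on that queue
        (with-release [gpu-a (clge 2 3)  ; 2x3 matrix allocated on the device
                       gpu-b (clge 3 2)]
          (transfer! (fge 2 3 [1 2 3 4 5 6]) gpu-a)   ; copy host data to the GPU
          (transfer! (fge 3 2 [7 8 9 10 11 12]) gpu-b)
          (transfer (mm gpu-a gpu-b)))))              ; multiply on the GPU, copy the result back

The point being: mm is the same function in both snippets; only the place where the matrices live changes.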

Lots of tutorials are available at https://dragan.rocks

I'm writing two books about that:

Deep Learning for Programmers: An Interactive Tutorial with CUDA, OpenCL, MKL-DNN, Java, and Clojure [1]

and

Numerical Linear Algebra for Programmers: An Interactive Tutorial with GPU, CUDA, OpenCL, MKL, Java, and Clojure [2]

Drafts are available right now at https://aiprobook.com

[1] https://aiprobook.com/deep-learning-for-programmers

[2] https://aiprobook.com/numerical-linear-algebra-for-programme...


