> Let's check in with OpenCL and see how far it got disrupting CUDA.
That's entirely the fault of AMD and Intel fumbling the ball in front of the other team's goal.
For ages, the only accelerated backend supported by PyTorch and TF was CUDA. Whose fault was that? Then there was buggy support for a subset of operations for a while. Then everyone stopped caring.
Why I think it will go differently this time: Nvidia's competitors seem to have finally woken up and realized they need to support high-level ML frameworks. "Apple Silicon" is essentially fully supported by PyTorch these days (via the "mps" backend). I've heard OpenCL works well now too, but I have no hardware to test it on.
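For what it's worth, using the mps backend is basically a one-line device swap; here's a minimal sketch, assuming a recent PyTorch build with MPS support compiled in:

```python
import torch

# Prefer Apple's Metal (mps) backend when available, otherwise fall back to CPU.
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

# Ordinary PyTorch code runs unchanged once tensors and models are moved to the device.
x = torch.randn(4, 4, device=device)
model = torch.nn.Linear(4, 2).to(device)
print(model(x))
```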