Dunno, almost all of the people I know anywhere in the ML space are on the C and Rust end of the spectrum.
Lack of types, lack of static analysis, lack of... well, everything Python doesn't provide (and fights users on) costs too much developer time. It is a net negative to continue pouring time and money into anything Python-based.
The sole exception I've seen in my social circle is those working at companies that don't directly do ML, but provide drivers/hardware/supporting software to ML people in academia, and have to try to fix their cursed shit for them.
Also, fwiw, there is no reason why Triton is Python. I dislike Triton for a lot of reasons, but it's just a matmul kernel DSL; there is nothing inherent in it that has to be, or benefits from, being Python. It takes DSL in, outputs shader text, then has the vendor's API run it (i.e., CUDA, ROCm, etc.). It, too, would benefit from becoming Rust.
Yet it was created for Python. Someone took that effort and did it. No one took that effort in Rust. End of the story of crab's superiority.
The Python community is constantly creating new, great, highly usable packages that become de facto industry standards, maintaining old ones for years, and producing tutorials, trainings, and docs. Commercial vendors ship Python APIs to their proprietary solutions. The Rust community, meanwhile, is going through forums and social media telling them that they should use Rust instead, or that they "cheated" because those libraries are really C/C++ libraries (and BTW those should be done in Rust as well, because safety).
> Dunno, almost all of the people I know anywhere in the ML space are on the C and Rust end of the spectrum.
I wish this were broadly true.
But there's too much legacy Python sunk cost for most people. There's just so much inertia behind Python for people to abandon it and try to rebuild an extensive history of ML tooling.
I think ML will fade away from Python eventually but right now it's still everywhere.
A lot of what I see in ML is all focused around Triton, which is why I mentioned it.
If someone wrote a Triton impl that is all Rust instead, that would do a _lot_ of the heavy lifting on switching... most of the hard code people write is in the Triton DSL, not in Python; the Python is all boring glue that calls Triton funcs. That changes the cost argument for a lot of people, but sadly not all.
Okay. Humor me.
I want to write a transformer-based classifier for a project. I am accustomed to the pytorch and tensorflow libraries. What is the equivalent using C?
It could be written in a mix of COBOL and APL. No one cares.
People saying "oh, those Python libraries are just C/C++ libraries with a Python API, every language can have them" run into one problem: no other language actually has them (with such extensive documentation, tutorials, etc.).
Scroll up the thread: the other poster was asking whether you can use pytorch and tensorflow from C. Both are C++ libraries, so accessing them from C/C++ is pretty trivial and has first-class support.
> Okay. Humor me. I want to write a transformer-based classifier for a project. I am accustomed to the pytorch and tensorflow libraries. What is the equivalent using C?
Use the C++ API in libtorch or TensorFlow. If you actually mean C, and not C++, then you would need a shim wrapper; C++ -> C shims are pretty easy to write.
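FWIW, the shim pattern is boilerplate-y but mechanical: hide the C++ object behind an opaque handle and expose free functions with C linkage. A minimal sketch, where `Classifier` is a made-up stand-in for whatever C++ class (e.g. something from libtorch) you would actually wrap, and the `classifier_*` names are hypothetical:

```cpp
#include <algorithm>
#include <iterator>
#include <vector>

// --- C++ side: the library-ish class we want to expose to C ---
class Classifier {
public:
    // Toy "inference": return the index of the largest input value.
    int predict(const std::vector<float>& x) const {
        return static_cast<int>(
            std::distance(x.begin(), std::max_element(x.begin(), x.end())));
    }
};

// --- C side: an opaque handle plus free functions with C linkage ---
extern "C" {

typedef struct classifier_t classifier_t;  // opaque to C callers

classifier_t* classifier_new(void) {
    return reinterpret_cast<classifier_t*>(new Classifier());
}

int classifier_predict(classifier_t* h, const float* data, int n) {
    std::vector<float> x(data, data + n);
    return reinterpret_cast<Classifier*>(h)->predict(x);
}

void classifier_free(classifier_t* h) {
    delete reinterpret_cast<Classifier*>(h);
}

}  // extern "C"
```

A real header would declare the `classifier_*` functions inside `#ifdef __cplusplus` / `extern "C"` guards so the same header works from both C and C++ translation units.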
PyTorch also supports C++ and Java, TensorFlow does C++ and Java as well, Apple exposes its ML libraries via Swift, Microsoft exposes its AI stack via .NET and Java too, and then there's Julia, with Mojo coming along.
TensorFlow is a C++ library with a Python wrapper, yet nobody (obvious exaggeration) actually uses TensorFlow (or Torch) from C++ for ML R&D.
It's like people just don't get it. The ML ecosystem in Python didn't just spring from the ether. People badly wanted to interface in Python; that's why you have all these libraries with substantial code in another language, yet development didn't just shift to that language.
If Python were fast enough, most people would be fine ditching the C++ backends and having everything in Python, but the reverse isn't true. The C++ interface exists, and no one is using it.
> However, people are definitely using it, as Android doesn't do Python, and neither does ChromeOS.
That's not really a reason to think people are using it for that when things like onnxruntime and executorch exist. In fact, they are very likely not using it for that, if only because the torch runtime is too heavy for distribution on the edge anyway (plus Android can run Python).
Regardless, that's just inference of existing models (which, yes, I'm sure happens in other languages), not research and/or development of new models (what /u/airza was concerned about), which is probably 99% in Python.
Well, onnxruntime also has polyglot bindings, and is yet another way to avoid Python.
Yes, you can package Python alongside your APK, if you feel like having fun compiling it with the NDK and running stuff even more slowly on phone ARM chipsets over Dalvik JNI than it already runs on desktops.