I have been playing around with Slang, which is supposed to be more cross-platform. It has a neural rendering slant, and I have yet to fully test it on all platforms, but I think it's a welcome move to consolidate all these APIs. https://shader-slang.org/
AFAIK a very large part of Slang consists of massive third-party libraries written in C++; the Slang-specific Rust code would just be a very thin layer on top of millions(?) of lines of C++ code that has grown over decades and is maintained elsewhere.
(fwiw I've been considering writing the custom parts of the Sokol shader compiler in Zig instead of C++, but that's just a couple of thousand lines of glue code on top of massive C++ libraries (SPIRV-Tools, SPIRV-Cross, glslang and Tint), and those C++ APIs are terrible to work with from non-C++ languages.)
As far as developer friction for integration into asset workflows goes, that's exactly where I would prefer Zig over Rust (but a simple build.zig already goes most of the way without porting any code to Zig).
It is according to Khronos, anyway, for those who aren't already deeply invested in HLSL.
Khronos has been quite vocal that there is no further development on GLSL; they see that as a community effort and only provide SPIR-V.
This is how vendor-specific tooling eventually wins out. They kind of got lucky that AMD decided to offer Mantle as the basis for Vulkan and that LunarG is doing the SDK, and now NVIDIA has contributed Slang; otherwise they would still be arguing about OpenGL vNext.
This is a longer and deeper conversation, but I think it's on topic for the original article, so I'll go into it a bit. The tl;dr is developer friction.
By all means, if you're doing a game (or another app with similar build requirements), figure out a shader precompilation pipeline so you can compile down to the lowest portable IR for each target and ship that in your app bundle. Slang is meant for that, and this pipeline will almost certainly contain other tools written in C++ or even without source available (DXC, the Apple shader compiler tools, etc).
There are two main use cases where we want different pieces of a shader to come from different sources of truth and link them together downstream. One is integrating samplers for (vello_hybrid) sparse strip textures so those can be combined with user paint sources in the user's 2D or 3D app. The other is that we're trying to make the renderer more modular so we have separate libraries for color space conversion and image filters (blur, etc). To get maximal performance, you don't want to write out the blur result to a full-resolution texture, but rather have a function that can sample from an intermediate result. See [1] for more context and discussion of that point.
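To make that concrete, here is a minimal sketch of the string-pasting flavor of this in Rust. The WGSL snippets and names (`BLUR_LIB`, `paint_source`) are hypothetical stand-ins, not actual Vello code; the point is only that the filter library and the user's paint source are separate pieces of text that get assembled into one module.

```rust
// Minimal sketch: assemble one WGSL module from pieces owned by different
// sources of truth. All names here are hypothetical, not Vello's real code.

/// WGSL provided by a reusable filter library: a blur that calls whatever
/// `paint_source` the downstream user links in, instead of reading a
/// pre-rendered full-resolution intermediate texture.
const BLUR_LIB: &str = r#"
fn blur(uv: vec2<f32>) -> vec4<f32> {
    var acc = vec4<f32>(0.0);
    for (var i = -2; i <= 2; i++) {
        acc += paint_source(uv + vec2<f32>(f32(i) * 0.001, 0.0));
    }
    return acc / 5.0;
}
"#;

/// WGSL coming from the user's app: their own paint source.
const USER_PAINT: &str = r#"
fn paint_source(uv: vec2<f32>) -> vec4<f32> {
    return vec4<f32>(uv, 0.0, 1.0);
}
"#;

/// Paste the pieces plus an entry point into a single WGSL module.
fn assemble_shader(entry_point: &str) -> String {
    format!("{USER_PAINT}\n{BLUR_LIB}\n{entry_point}")
}

fn main() {
    // The app would hand this combined source to its shader compiler of choice.
    let wgsl = assemble_shader(
        "@fragment fn fs_main(@location(0) uv: vec2<f32>) -> @location(0) vec4<f32> { return blur(uv); }",
    );
    println!("{wgsl}");
}
```

The appeal is that the blur library never has to know where `paint_source` comes from; the cost is that name collisions or missing declarations only surface when the combined module is compiled.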
Stitching together these separate pieces of shader is a major potential source of developer friction. There is a happy path in the Rust ecosystem, albeit with some compromises, which is to fully embrace WGSL as the source of truth. The pieces can be combined with string-pasting (as sketched above), though we're looking at WESL as a more systematic approach. With WGSL, you can either do all your shader compilation at runtime (using wgpu for native), or do a build.rs script invoking naga to precompile. See [2] for the main PR that implements the latter in vello_hybrid. In the former case, you can even have hot reloading of shaders; this is implemented in Vello main but not (yet) in vello_hybrid.
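For the build.rs route, the rough shape is something like the sketch below. This is a generic illustration of driving naga from a build script, not the code from [2]: the shader path is made up, the exact module paths and validator options vary between naga versions, and the `wgsl-in`/`spv-out` cargo features need to be enabled.

```rust
// build.rs sketch: precompile a WGSL shader to SPIR-V with naga.
// Generic illustration only; naga's API details shift between versions.
use std::{env, fs, path::Path};

fn main() {
    // Hypothetical shader path, just for the sketch.
    let source = fs::read_to_string("shaders/sparse_strip.wgsl")
        .expect("read shader source");

    // Parse WGSL into naga's IR.
    let module = naga::front::wgsl::parse_str(&source).expect("parse WGSL");

    // Validate and collect the module info the back ends need.
    let info = naga::valid::Validator::new(
        naga::valid::ValidationFlags::all(),
        naga::valid::Capabilities::all(),
    )
    .validate(&module)
    .expect("validate module");

    // Emit SPIR-V words and write them next to the build artifacts.
    let spv = naga::back::spv::write_vec(
        &module,
        &info,
        &naga::back::spv::Options::default(),
        None,
    )
    .expect("write SPIR-V");

    let out = Path::new(&env::var("OUT_DIR").unwrap()).join("sparse_strip.spv");
    let bytes: Vec<u8> = spv.iter().flat_map(|w| w.to_le_bytes()).collect();
    fs::write(&out, bytes).expect("write output");

    println!("cargo:rerun-if-changed=shaders/sparse_strip.wgsl");
}
```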
To get the same quality of developer experience with Slang, you'd need an implementation in Rust. I think this would be a good thing for Slang.
I've consistently underestimated the importance of developer friction in the past. As a contrast, we're also doing a CPU-only version of Vello now, and it's absolutely night and day, both for development velocity and attracting users. I think it's possible the GPU world gets better, but at the moment it's quite painful. I personally believe doing a Rust implementation of the Slang compiler would be an important step in the right direction, and is worth funding. Whether the rest of the world agrees with me, we'll see.
> The pieces can be combined with string-pasting, though we're looking at WESL as a more systematic approach.
> To get the same quality of developer experience with Slang, you'd need an implementation in Rust. I think this would be a good thing for Slang.
WESL has the opposite problem: it doesn't have a C++ implementation. IMO, the graphics world will largely remain C++-friendly for the foreseeable future, so if an effort like WESL wants to succeed, they will need to provide a C++ implementation (even more so than Slang needs to provide a Rust one).
You're probably right about this. In the short to medium term, I expect that the Rust and C++ sub-ecosystems will be making different sets of choices. I don't know of any major C++ game or game-adjacent project adopting, say, Dawn for their RHI (render hardware interface) to buy into WebGPU. In the longer term, I expect the ecosystems to start blending together more, especially as C++/Rust interop improves (it's pretty janky now).
Long story short: you want to compose shaders at runtime and need a compilation pipeline for that. So what you really need is a C interface to the Slang transpiler that is callable from Rust.
Rewriting the whole Slang pipeline in Rust is a fool's errand.
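For what it's worth, the binding side of that is not much code. Below is a rough sketch of what calling such a C interface from Rust could look like; the `my_slang_*` symbols are entirely made up for illustration (Slang's actual API is a COM-style C++ interface, so the real work is in whatever C shim or generated bindings you put in front of it).

```rust
// Sketch of calling a hypothetical C wrapper around the Slang compiler from Rust.
// None of these symbols are Slang's real API; they stand in for a hand-written
// or bindgen-generated C shim in front of the C++ compiler.
use std::ffi::{CStr, CString};
use std::os::raw::c_char;

extern "C" {
    // Hypothetical: compile Slang source to target-specific shader text.
    fn my_slang_compile(source: *const c_char, target: *const c_char) -> *mut c_char;
    // Hypothetical: free a string returned by `my_slang_compile`.
    fn my_slang_free(out: *mut c_char);
}

/// Safe Rust wrapper over the hypothetical C entry points.
fn compile_to_target(source: &str, target: &str) -> Option<String> {
    let source = CString::new(source).ok()?;
    let target = CString::new(target).ok()?;
    unsafe {
        let out = my_slang_compile(source.as_ptr(), target.as_ptr());
        if out.is_null() {
            return None;
        }
        let compiled = CStr::from_ptr(out).to_string_lossy().into_owned();
        my_slang_free(out);
        Some(compiled)
    }
}
```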
I haven't looked into the source for your project, but am curious if you are integrating any kind of existing engine/backend (Polars is what I am thinking) into it, or if that is even possible.
Not as of now. We first want to be a first-class spreadsheet engine that implements 90% of Excel functions and features like array functions, LAMBDA, ...
A goal of IronCalc is to make things like integrating Polars trivial for a developer.
Foundational maybe isn't the best label for this kind of model. My understanding of foundation models is that they are made to be a baseline which can be further fine-tuned for specific downstream tasks. This seems more like an already fine-tuned model, but I haven't looked carefully enough at the methodology to say.
I don't think it's just a buzzword here. They claim it's useful across a range of tasks, and that's the key part imo.
Now, "predictions for parts of drug discovery" isn't the widest range, so perhaps you need to consider "foundation" as somewhat context dependent, but I don't think it's a wild claim. Neither "foundation" nor "fine tuned" are really better than each other, but those are probably the two ends of a spectrum here.
My get-out clause here is that someone with a better understanding of the field may say these are actually extremely narrowly trained things, and the tests are equivalent to multiple different coding problem challenges rather than programming/translation/poetry/etc.
It’s a bit like referring to a famous person’s red carpet attire as “off-the-shelf [designer name]”. It downplays the effort that went into it more than anything.