Wow, I'm really excited for this! I'm actually currently rendering a MapLibre-based web map of some drone orthomosaics I've built on my personal site[1], which is using Rust on the backend to dynamically assemble the Javascript code running on the page for each new dataset.
It's not the prettiest, and writing the initial JavaScript code that I'm using as the template for adding data is kind of a pain, so possibly being able to write the entire application in native Rust would be really awesome.
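For anyone curious, the JavaScript the backend assembles per dataset boils down to something like this sketch (the tile URL, source/layer ids, and coordinates are placeholders, not the real setup):

```typescript
import maplibregl from 'maplibre-gl';

// Minimal sketch: show one drone orthomosaic as a raster tile layer.
// The tile endpoint, ids, center and zoom are placeholders.
const map = new maplibregl.Map({
  container: 'map',                                   // id of the <div> hosting the map
  style: 'https://demotiles.maplibre.org/style.json', // any base style works here
  center: [-105.0, 40.0],
  zoom: 15,
});

map.on('load', () => {
  map.addSource('ortho', {
    type: 'raster',
    tiles: ['https://example.com/tiles/{z}/{x}/{y}.png'], // placeholder tile endpoint
    tileSize: 256,
  });
  map.addLayer({ id: 'ortho-layer', type: 'raster', source: 'ortho' });
});
```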
I find it interesting that the Web platform ended up being the catalyst to finally get all the OS and GPU vendors to sit down and write a single interoperable API. The existence of wgpu is a very nice outcome for all graphics programmers, even those who don't work on the Web.
Considering it's a web API for JavaScript, not something provided by the OS/drivers, WebGPU is hardly any different from WebGL. It still has to be built on top of the native OpenGL/Vulkan/Metal/DirectX APIs. As such it really hasn't moved the needle at all with regard to API interoperability; it has only provided yet another cross-platform abstraction (at least outside the web).
From an operational standpoint, however, WebGPU is a much better fit for Vulkan/Metal/DirectX than OpenGL is, since it (mostly) exposes abstractions that all three actually support efficiently (and more generally, was designed with modern GPUs in mind, so unless GPU design fundamentally changes it's unlikely that it will require doubling or tripling the API surface to keep up, unlike OpenGL). That represents a pretty significant improvement over OpenGL (particularly the older versions of OpenGL that you have to use if you actually care about portability), and also over solutions like MoltenVK (it's hard to efficiently implement a lower-level API on top of a higher-level one).
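To make that concrete: the central abstraction is the immutable pipeline object with all the render state baked in up front, which maps almost directly onto what Vulkan, Metal and DX12 already do, instead of OpenGL's mutable global state. A minimal sketch against the browser's WebGPU API (the WGSL source and its entry points are assumed):

```typescript
// Minimal sketch of WebGPU's explicit model: the whole render state is baked
// into an immutable pipeline object up front, rather than mutated piecemeal
// through global state as in OpenGL.
declare const shaderCode: string; // assumed: WGSL with vs_main/fs_main entry points

const adapter = await navigator.gpu.requestAdapter();
if (!adapter) throw new Error('WebGPU not available');
const device = await adapter.requestDevice();

const module = device.createShaderModule({ code: shaderCode });
const pipeline = device.createRenderPipeline({
  layout: 'auto',
  vertex: { module, entryPoint: 'vs_main' },
  fragment: {
    module,
    entryPoint: 'fs_main',
    targets: [{ format: navigator.gpu.getPreferredCanvasFormat() }],
  },
  primitive: { topology: 'triangle-list' },
});
```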
This doesn't invalidate your point; I just think when you say "yet another cross-platform abstraction" you're implying that this is an xkcd "now there are 15 standards" situation, when in reality the competition is basically just ancient versions of OpenGL. And OpenGL is effectively deprecated: Khronos has no plans to release any more major updates. Unfortunately, without the browser use case bringing many of the stakeholders on board, there is not a lot of money in this kind of work, which is why there aren't any serious competitors.
WebGPU as proposed by Apple was basically Metal. Naturally other vendors weren't keen on implementing this, so they arrived at a compromise which is a bastard child of Metal and Vulkan.
They couldn't even adopt an existing shader language and invented a new one. It goes as far as inventing new and unprecedented language constructs: https://github.com/gpuweb/gpuweb/issues/569
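For reference, the construct at issue there is WGSL's loop with a continuing block. A rough sketch (the body and the bound are arbitrary), written as the string that would eventually be handed to the shader-module API:

```typescript
// Rough illustration of WGSL's loop/continuing construct from the linked
// issue; the continuing block runs at the end of every iteration, before
// control returns to the top of the loop. In practice this string would be
// passed to device.createShaderModule({ code: wgsl }).
const wgsl = `
  fn count_to_four() -> i32 {
    var i: i32 = 0;
    loop {
      if (i >= 4) { break; }
      // loop body goes here
      continuing {
        i = i + 1;
      }
    }
    return i;
  }
`;
```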
Shader language squabbles and throwaway accusations aside (for my own purposes I don't really care about WGSL, since there are transpilers back and forth to other shader languages for native use, but loop/continue is not exactly an example of some crazy unprecedented language construct), Metal isn't a cross-platform standard of any sort. It pretty much explicitly exists so Apple can control it (it would be nice if it were Metal-on-Vulkan instead, but I think that ship has sailed). I don't really see how the existence of Metal is a relevant example of what I'm talking about: a portable modern graphics abstraction that is efficient on modern hardware. It's just not true that there are a bunch of those out there; when you're talking about production-quality stuff, there's basically just WebGPU.
Even if your argument were that Vulkan was supposed to be that standard (which it isn't in practice), Vulkan is pretty much impossible to use safely without building a higher-level abstraction on top. What was desperately needed was a standard higher-level API that programmers with limited resources (i.e. not Epic, browser vendors, etc.), or programmers who need to run in sandboxed environments (mostly browsers), can reasonably target, and I think WebGPU serves that purpose nicely. Exposing Vulkan (or DirectX 12, or any similarly low-level API) directly to browsers would have been pretty much a non-starter from a memory safety perspective; WebGPU does not exist solely because Apple didn't like Vulkan.
You argued that this isn't the xkcd situation.... and proceeded to literally describe "Situation: There are now 15 standards". But sure, you don't care because "transpilers exist". I wish people cared.
> WebGPU does not exist solely because Apple didn't like Vulkan.
Apple released the first version of Metal before Vulkan was even a thing on anyone's radar:
- Metal release date: June 2014.
- The Khronos Group began its project to create a next-generation graphics API (what became Vulkan) in July 2014.
But sure, "Apple doesn't like Vulkan".
And the whole "WebGPU does not exist solely because Apple didn't like Vulkan" is a sentence that makes literally zero sense. The pre-WebGPU prototype was literally proposed as a joint effort by Apple and Mozilla, and it was Apple's idea to create a working group to work on the new graphics API.
It makes sense. The standardization effort is pretty involved; I don't know what besides the Web would have been able to twist the vendors' arms into doing it.
WebGL 2.0 is a subset of OpenGL ES 3.0, meaning 2011 mobile GPUs.
WebGPU 1.0 is a subset of Vulkan/Metal/DX12, meaning 2016 GPUs. And on top of that, you get a new C++/Rust-flavoured shading language, throwing away all shaders created for WebGL.
If you want top graphics on the Web, the answer is, surprise surprise, server-side rendering to textures with native 3D APIs, alongside data streaming.
If you know of a way to portably expose ray tracing even on just the GPUs that actually support it, feel free to tell me (or anyone, for that matter). AFAIK abstracting over the different stuff Nvidia and AMD call "ray tracing" is currently an open research problem. Of course you could respond that portability isn't important if you only care about delivering the best graphics you can on given hardware, but if you're really willing to write totally different pipelines for newer AMD cards, newer Nvidia cards, older cards, consoles, etc., you are probably not in the target audience for WebGPU.
(BTW, the biggest reason stuff like wgpu-rs doesn't support geometry shaders on non-Metal is that they're horribly slow on a bunch of GPUs, with vendors indicating that they found it super hard to profitably support them in hardware. That's why Metal removed them entirely. I think all the features of geometry shaders that are actually reliably fast, as well as the stuff required to integrate with tessellation shaders, will almost certainly come eventually in other forms. This is an example of why if you care about performance and portability you shouldn't just dump every feature under the sun into your abstraction layer).
First, you're talking about a full-fledged game engine (and one that's not actually free or usable by many projects, BTW!) by a huge team that can optimize individually for each platform (and doesn't actually support some platforms at all, e.g. Mac; you may not care about supporting such platforms, but that doesn't mean everyone can afford that luxury). I am talking about a portable low-level graphics API that smaller development teams can use to build their own stuff without needing to rewrite their pipelines for each new target platform.
Second, you're also talking about a game engine that does not portably expose ray tracing GPU features; stuff like Nanite that optimistically takes advantage of the hardware where it's available to accelerate ordinary software rasterization is not even close to the same thing. This is a really weak argument. The truth is that neither you, nor I, nor (right now) anyone else knows how to expose that hardware portably, it's all incredibly custom and will probably need big rewrites for each new hardware release. You're complaining about an API not doing something that nobody actually knows how to do yet.
Finally, not every application wants to or is able to be rendered in the cloud. It's incredibly expensive to rent out GPUs if you're not being subsidized, and in fact even most AAA attempts haven't worked out economically. Plus not every application can deal with the server latency, or wants to be beholden to an always-online server, or has clients who can tolerate the bandwidth required, or a myriad of other things. So that isn't a solution to a portable low-level graphics API either.
I'm not saying WebGPU is perfect, but driveby comments like yours that imply it's missing obvious stuff or handicapped or whatever are pretty annoying. It's a very good attempt at a very difficult goal (efficient, portable, low-level graphics) that has value to a lot of people. It isn't going to be everything to everybody (and neither is UE5, incidentally). When there is a portable way to expose raytracing functionality I have no doubt it will come to WebGPU, but until then it's basically a meaningless gotcha.
> mapr is a portable and performant vector maps renderer. We aim to support the web, mobile and desktop applications. This is achieved by the novel WebGPU specification.
badass, perfect. get that hardware cranking! another very excellent use for wgpu!
I'm guessing that's a placeholder for later. But I did find some sort of demo in the parent directory of the API docs site: https://maxammann.org/mapr/webgl/
`+ tr` selects the tr element immediately following the matched element (the adjacent-sibling combinator). The HTML on the page uses three tr's per post. The first (with class "athing") contains the title. The second contains the post karma, submitter name, and comments link. The third is a spacer. So you'd want to block all three.
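If you're doing it from a userscript rather than a pure CSS rule, a sketch of hiding all three rows looks something like this (the title-match condition is just an example):

```typescript
// Hide a Hacker News story row plus its two following sibling rows
// (the subtext row and the spacer row). The match condition is just an example.
function hideStory(titleRow: Element): void {
  const subtextRow = titleRow.nextElementSibling;   // karma, submitter, comments link
  const spacerRow = subtextRow?.nextElementSibling; // empty spacer row
  for (const row of [titleRow, subtextRow, spacerRow]) {
    if (row instanceof HTMLElement) row.style.display = 'none';
  }
}

document.querySelectorAll('tr.athing').forEach((row) => {
  if (row.textContent?.includes('Example Title To Hide')) hideStory(row);
});
```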
[1] https://cmoran.xyz/geospatial