> The vast majority of consumer hardware will _never_[0] be exposed to this category of attack
Arbitrary code execution is common in the browser (JavaScript and WebAssembly). Really, this is any case where you don't entirely trust every program running on the device (e.g. smartphones).
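To make "this category of attack" concrete: the primitive is just untrusted code timing its own memory accesses. A minimal sketch of my own (not from the thread or the paper; the names are made up, real attacks need far more care, and browsers have since coarsened performance.now() to blunt exactly this):

```js
// Illustrative sketch only: time a single array access to distinguish a
// cache hit from a miss. The names (probe, timeAccess) are hypothetical.
const probe = new Uint8Array(64 * 1024);

function timeAccess(index) {
  const t0 = performance.now();
  const value = probe[index];        // hit vs. miss changes the latency
  const t1 = performance.now();
  return { value, elapsed: t1 - t0 };
}

// With a sufficiently fine-grained timer, `elapsed` leaks cache state,
// which is the building block of Spectre-style side channels.
console.log(timeAccess(0).elapsed);
```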
> Thanks to a combination of reasonable software mitigations
That is time protection. Restricting access to system timers wasn't enough here; mitigations also had to prevent user-created high-resolution timers, so useful features like SharedArrayBuffer had to be disabled to block the creation of synthetic timers.
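For concreteness, here is a minimal sketch (file layout and names are my own) of the kind of synthetic timer that made SharedArrayBuffer a problem: a worker increments a shared counter in a tight loop, and the main thread reads that counter as a clock with far finer resolution than the coarsened performance.now().

```js
// worker.js -- the counting thread (hypothetical file name)
onmessage = (e) => {
  const counter = new Uint32Array(e.data);  // view over the shared buffer
  while (true) {
    Atomics.add(counter, 0, 1);             // ticks as fast as the core allows
  }
};

// main thread -- read the shared counter as a high-resolution clock
const sab = new SharedArrayBuffer(4);
const counter = new Uint32Array(sab);
const worker = new Worker('worker.js');
worker.postMessage(sab);                    // a SharedArrayBuffer is shared, not copied

function syntheticNow() {
  return Atomics.load(counter, 0);          // resolution ~ one loop iteration of the worker
}
```

This is why SharedArrayBuffer was only re-enabled behind cross-origin isolation, rather than being restored once the built-in timers were coarsened.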
> The two options are performance or security, pick one.
That is not what the paper found: "Across a set of IPC microbenchmarks, the overhead of time protection is remarkably small on x86 [1%], and within 15% on Arm."
I think if there's a segmentation to be made, it's "general-purpose, untrusted computing" and "trusted high-performance computing". The second category would be the preserve of such projects as physics simulations and render farms.
> The second category would be the preserve of such projects as physics simulations and render farms.
or smoothly scrolling canvas apps rendering at 60 fps in browsers. I think there can't be an apartheid between untrusted and trusted, because developers would push the user to make their software trusted to get maximum performance (and the users would just agree).
> developers would push the user to make their software trusted
I don't see it going this way. This is comparable to virtual memory and MMUs. When there is support in hardware, the speed benefit of not using it is negligible (as shown by the 1% overhead measured in the research).
When it is not needed, there is a benefit to not implementing it in hardware, saving power and die area. For example, GPUs (traditionally) and crypto mining hardware do not employ MMUs.