
You can, but the issue here is the use case and performance: this is meant to be used by streamers, so any performance impact on the game must be negligible.

The normal ALUs (aka CUDA Cores) and the Tensor Cores on RTX cards can run concurrently without blocking and without requiring context switching, and for the most part, outside of the register file / cache within an SM and GPC, they aren't competing for resources.

Sure, no model runs only on the Tensor Cores and will need some generic ALU usage, but a well-optimized model spends only a fraction of its execution time on those standard cores.
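
To make that concrete, here's a minimal sketch (illustrative only, obviously not NVIDIA's RTX Voice code) of issuing a Tensor Core WMMA kernel and a plain FP32 kernel on separate CUDA streams, so the hardware scheduler is free to keep both kinds of units busy at once:

  // A minimal sketch, not NVIDIA's code: one WMMA kernel that runs on the Tensor
  // Cores and one ordinary FP32 kernel, issued on separate CUDA streams so the
  // scheduler can keep both kinds of units busy. Assumes sm_70 or newer, e.g.
  // build with: nvcc -arch=sm_75 concurrent.cu -o concurrent
  #include <cuda_fp16.h>
  #include <mma.h>
  using namespace nvcuda;

  // One warp multiplies a single 16x16x16 half-precision tile on the Tensor Cores.
  __global__ void wmma_tile(const half *a, const half *b, float *c) {
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> fa;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> fb;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> acc;
    wmma::fill_fragment(acc, 0.0f);
    wmma::load_matrix_sync(fa, a, 16);
    wmma::load_matrix_sync(fb, b, 16);
    wmma::mma_sync(acc, fa, fb, acc);
    wmma::store_matrix_sync(c, acc, 16, wmma::mem_row_major);
  }

  // Ordinary FP32 work that runs on the regular CUDA cores.
  __global__ void saxpy(float s, const float *x, float *y, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = s * x[i] + y[i];
  }

  int main() {
    const int N = 1 << 20;
    half *a; half *b; float *c; float *x; float *y;
    cudaMalloc(&a, 256 * sizeof(half));
    cudaMalloc(&b, 256 * sizeof(half));
    cudaMalloc(&c, 256 * sizeof(float));
    cudaMalloc(&x, N * sizeof(float));
    cudaMalloc(&y, N * sizeof(float));  // inputs left uninitialized: this only
                                        // demonstrates scheduling, not results

    cudaStream_t s_tensor, s_alu;
    cudaStreamCreate(&s_tensor);
    cudaStreamCreate(&s_alu);

    // Both launches are asynchronous and on different streams; neither blocks
    // the other, and neither forces a context switch.
    wmma_tile<<<1, 32, 0, s_tensor>>>(a, b, c);
    saxpy<<<(N + 255) / 256, 256, 0, s_alu>>>(2.0f, x, y, N);

    cudaDeviceSynchronize();
    return 0;
  }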

Optimizing their model to work across multiple generations of GPUs with different compute capabilities (if you look at CUDA compute capabilities, the same operation may be a native instruction on one architecture and a multi-instruction sequence on another) without a severe impact on performance, which would lead to "OMG RTX VOICE MAKES YOUR GAME LAG!!!!!" posts on social media and streamers trashing the software and the brand, isn't easy.
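
For illustration, the kind of runtime dispatch such an app ends up doing looks roughly like this (a sketch, not NVIDIA's actual logic; the path names in the printouts are made up): query the compute capability and pick a code path accordingly.

  // A sketch of capability-based dispatch, not NVIDIA's actual logic.
  #include <cstdio>
  #include <cuda_runtime.h>

  int main() {
    cudaDeviceProp prop{};
    cudaGetDeviceProperties(&prop, 0);
    // Tensor Cores exist from Volta (compute capability 7.0) onwards.
    bool has_tensor_cores = prop.major >= 7;
    if (has_tensor_cores)
      printf("sm_%d%d: using the FP16 / Tensor Core inference path\n",
             prop.major, prop.minor);
    else
      printf("sm_%d%d: falling back to the FP32 CUDA-core path\n",
             prop.major, prop.minor);
    return 0;
  }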

So you launch it first on hardware where you can guarantee it will run well without adverse impact, then release it to a broader audience with a huge disclaimer, often to showcase just how much better your new hardware is, as in the case of DXR ray tracing, which now runs on essentially all DX12 GPUs from NVIDIA.

People in general are clueless about how GPUs actually work, and they tend to sensationalize everything (especially when it comes to Team Green vs Team Red). Just look at the nonsense that came out after Crytek demoed their "ray tracing" on VEGA: people were basically saying RTX is a scam.

The reality is quite different: Crytek used a hybrid model. They weren't running GI via ray tracing; they were using SVOGI, and they implemented very rudimentary ray tracing for reflections, with heavy limits on how many BVH instances you can have at the same time and at what range objects can fall into the BVH.

So yes, it can run, for example, on the Xbox One X, but at <30 fps, with low-quality SVOGI and up to 5 objects reflected at any one time via RT.

NVIDIA isn't limiting consumer-grade cards artificially; the FP64 units simply aren't there in the smaller dies. AMD used to do it during the days of GCN, as their big dies still had the full FP64 ALUs.
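
You can check this yourself: the FP32:FP64 throughput ratio is a property the CUDA runtime reports for the die, not a driver toggle. A tiny sketch:

  #include <cstdio>
  #include <cuda_runtime.h>

  int main() {
    int ratio = 0;
    // The runtime reports the single-to-double precision performance ratio
    // of the device itself.
    cudaDeviceGetAttribute(&ratio, cudaDevAttrSingleToDoublePrecisionPerfRatio, 0);
    printf("FP32:FP64 throughput ratio on device 0 = %d:1\n", ratio);
    return 0;
  }

A Turing GeForce card reports 32:1, while a big compute die like GV100 reports 2:1, because the latter actually has the FP64 ALUs on the die.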

The only feature NVIDIA is currently limiting on their GPUs is SR-IOV support, which, what do you know, is also disabled on AMD consumer cards, and has been ever since SR-IOV support was introduced with VEGA.

Even the "it's-definitely-not-the-founder-edition" VEGA FE, which was quickly re-marketed at launch as a prosumer card and sold at a huge markup (compared to the Vega 56/64 that came later), despite the fact that its driver support was essentially killed within 6 months, didn't enable it.

Posts like this are why we can't get good stuff, because people will always complain about why something doesn't work and then go into conspiracy theories when something does work only on newer hardware.



AFAIK NVIDIA also limits the number of concurrent NVENC streams on consumer cards, and a patch to remove that restriction is available:

https://github.com/keylase/nvidia-patch



