
You can change a single bit in RTX Voice and it works on other cards.

Seems to pretty heavily imply that it was an entirely artificial requirement.
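(For what it's worth, the workaround that circulated was reportedly just an edit to the installer's config, not a driver patch: extract the setup package and delete the constraint block that checks for an RTX GPU. A rough sketch of that kind of edit; the file path and element name here come from community guides, not anything official:)

    # Hypothetical sketch of the community-reported workaround: strip the GPU
    # constraint block from the extracted installer manifest so setup no longer
    # refuses to run on non-RTX cards. Path and element name are assumptions.
    import xml.etree.ElementTree as ET

    MANIFEST = r"C:\temp\NVRTXVoice\NvAFX\RTXVoice.nvi"  # per community guides

    tree = ET.parse(MANIFEST)
    root = tree.getroot()

    # Remove every <constraints> element (said to contain the RTX-only check).
    for parent in root.iter():
        for child in list(parent):
            if child.tag.lower() == "constraints":
                parent.remove(child)

    tree.write(MANIFEST, encoding="utf-8", xml_declaration=True)
    print("constraint block removed; re-run setup.exe from the extracted folder")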

Also it's NVidia, all of the home grade cards are artificially limited by drivers.



I see people are responding about performance considerations, but the other aspect is support. It may just be that NVIDIA does not believe supporting RTX Voice on GTX cards is worth the time and resources. Because even if they put in a disclaimer about "We haven't tested this on every GTX card, so you may notice poor performance or bugs and we won't support that", they'll get plenty of people complaining to them about poor performance or bugs. It happens all the time, and it's often a big concern in choosing which products to support.

Yes, I agree that initially choosing not to support GTX cards was a money-based decision, but sometimes there are additional (somewhat more reasonable) factors that contribute to that money-based decision, which people often seem to leave out.


> Also it's NVidia, all of the home grade cards are artificially limited by drivers.

You know that all products with any electronics in them do this, right? It's not an Nvidia thing by any stretch of the imagination. Your phone and your computers all have software feature toggles for features meant for future hardware. Your OS has features disabled by design. Unless you're driving a 30 year old car, it's probably true that your car has features turned off that are meant for other models. Since you're on HN, there's a reasonable chance that the company you work for releases a product with "artificially limited" features. (Of course I have no idea what you do or who you work for, just making the point that hidden features are so common, the odds of me guessing right are quite good.)

There are good and legitimate reasons why products turn features off, especially when the features were designed for specific hardware, with specific specs in mind, and work best with specific hardware support. Since the fallback could hurt performance, battery life, thermals, noise, safety, and, last but not least, user experience, it's pretty understandable why some products have features disabled, especially when the features were developed after the product was released, right?


> Also it's NVidia, all of the home grade cards are artificially limited by drivers.

In some ways yes, but in the way this is usually meant (DP flops) this hasn't been true for many generations. GeForce chips simply only have one DP ALU for every 32 SP ALUs, while the HPC accelerators have one DP ALU for every two SP ALUs.
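To put numbers on that ratio: peak throughput scales roughly as ALU count x 2 (FMA) x clock, so the DP:SP split shows up directly in the specs. A back-of-the-envelope sketch (the ALU counts and clocks below are illustrative round numbers, not exact product specs):

    # Back-of-the-envelope peak FLOPS: ops/s ~= ALUs * 2 (FMA) * clock.
    # ALU counts and clocks are illustrative round numbers, not real specs.
    def peak_tflops(sp_alus, dp_per_sp, clock_ghz):
        sp = sp_alus * 2 * clock_ghz / 1000.0               # SP TFLOPS
        dp = sp_alus * dp_per_sp * 2 * clock_ghz / 1000.0   # DP TFLOPS
        return sp, dp

    for name, alus, ratio, clock in [
        ("GeForce-class die, 1 DP per 32 SP", 4096, 1 / 32, 1.5),
        ("HPC accelerator, 1 DP per 2 SP",    5120, 1 / 2,  1.4),
    ]:
        sp, dp = peak_tflops(alus, ratio, clock)
        print(f"{name}: ~{sp:.1f} SP TFLOPS, ~{dp:.2f} DP TFLOPS")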


The parent is probably referring to the Quadro (cad workstation) vs Geforce (gaming) split, not the Geforce vs HPC. There's a history of limiting Geforce performance for CAD workflows in software, and flashing the firmware of Geforce cards to unlock much better "professional" performance.


The flashing didn't increase performance; it allowed you to use Quadro-certified drivers, which are now available for GeForce too (minus the official certification and support, IIRC). I think they're called Creator drivers or something like that.

This also affected only a tiny subset of CAD/professional imaging products, which were out of reach of consumers anyhow, often due to the massive PITA and cost of certifying your hardware and drivers for those products.

Quadro wasn't running faster in Blender or 3ds Max, not even in other Autodesk apps (in fact it was often slower due to lower clock frequencies). The apps where it made a difference (as became clear when those drivers were finally released for GeForce and Titan cards) were only the likes of CATIA and Siemens NX, hardly consumer/prosumer software.

I'm sorry, but if you are a seven-figure-a-year license holder for Siemens NX, spending $2000-3000 extra on each GPU every 3 years isn't going to be an issue. Not to mention that you aren't going to be using GeForce GPUs anyhow, as the drivers are still not certified (which is required in that industry), and even more likely you'll be buying certified workstations from the likes of HP and Dell.


This is absolutely not true: gaming-focused GeForce cards have intentionally firmware-crippled 64-bit float performance, and the Quadro firmware flashes unlocked it.


Gaming GeForce cards don’t have FP64 silicon, and neither do the Quadros which use the same dies.


Not currently, but this was true in the 6xx series for instance.


No, it wasn't. GK104, the biggest GeForce 6xx die, only had 1/16th-rate FP64, on both the Quadro and GeForce cards.

The Titan / Titan Black had 1/3-rate FP64, just like the Quadro K6000, which used the same die.


Seems a bit of a weak argument to be talking about something released 8 years earlier?


You can, but the issue here is the use case and performance: this is meant to be used by streamers, hence any performance impact on the game must be negligible.

The normal ALUs, aka CUDA cores, and the Tensor cores on RTX cards can run concurrently without blocking and without requiring context switching, and for the most part, outside of the register file / cache in that SM and GPC, they aren't competing for resources.

Sure, no model runs only on the Tensor cores and will probably require some generic ALU usage, but optimized models spend only a fraction of their execution time on those standard cores.

Optimizing their model to work on multiple generations of GPUs with different compute capabilities (if you look at CUDA compute capabilities, you have native and multi-instruction operations constantly being switched in and out) without a severe impact on performance, the kind that would lead to "OMG RTX VOICE MAKES YOUR GAME LAG!!!!!" posts on social media and streamers trashing the software and the brand, isn't easy.
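(To make that concrete, here is roughly what capability-gated dispatch looks like at the framework level: query the device's compute capability and only take the Tensor-Core/FP16 path where the hardware has Tensor Cores, otherwise fall back to plain FP32 on the CUDA cores. This is a generic PyTorch-style sketch with a placeholder model, not anything from RTX Voice.)

    # Sketch of capability-gated dispatch: use the Tensor-Core-friendly FP16
    # path only on GPUs whose compute capability indicates Tensor Cores
    # (>= 7.0, i.e. Volta/Turing and later); otherwise fall back to FP32.
    # The model below is a toy placeholder, not the RTX Voice network.
    import torch

    def run_inference(model: torch.nn.Module, audio: torch.Tensor) -> torch.Tensor:
        if not torch.cuda.is_available():
            return model(audio)  # CPU fallback

        major, minor = torch.cuda.get_device_capability()
        device = torch.device("cuda")
        model = model.to(device)
        audio = audio.to(device)

        if major >= 7:  # Tensor Cores available: run in mixed precision
            with torch.autocast(device_type="cuda", dtype=torch.float16):
                return model(audio)
        return model(audio)  # older GPUs: plain FP32 on the CUDA cores

    # Usage with a toy stand-in for a denoising network:
    model = torch.nn.Sequential(torch.nn.Linear(512, 512), torch.nn.ReLU(),
                                torch.nn.Linear(512, 512))
    out = run_inference(model, torch.randn(1, 512))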

So you launch it first on hardware where you can guarantee it will run well without adverse impact, then release it to a broader audience with a huge disclaimer, often to showcase just how much better your new hardware is, as in the case of DXR ray tracing, which now runs on essentially all DX12 GPUs from NVIDIA.

People in general are clueless about how GPUs actually work, and they tend to sensationalize everything (especially when it comes to Team Green vs Team Red). Just look at the nonsense that came out after Crytek demoed their "ray tracing" on Vega: people were basically saying RTX is a scam.

The reality is quite different: Crytek used a hybrid model. They weren't running GI via ray tracing; they were using SVOGI, and they implemented very rudimentary ray tracing for reflections, with heavy limits on how many BVH instances you can have at the same time and at what range objects can fall into the BVH.

So yes, it can run on the Xbox One X, for example, but at <30 fps, with low-quality SVOGI and up to 5 objects able to be reflected at any time via RT.

NVIDIA isn't limiting home-grade cards artificially; the FP64 units simply aren't there in the smaller dies. AMD used to do it during the days of GCN, as their big dies still had full FP64 ALUs.

The only feature that NVIDIA is currently limiting on their GPUs is SR-IOV support, which, what do you know, is also disabled on AMD consumer cards, and has been ever since SR-IOV support was introduced with Vega.

Even the "it's-definitely-not-the-founder-edition" VEGA FE which was quickly re-marketed at launch as a prosumer card despite the fact that it's driver support was essentially killed within 6 months which was sold at a huge markup (compared to Vega 56/64 which came later) didn't enable it.

Posts like this are why we can't have good stuff: people will always complain about why something doesn't work, and then go into conspiracy theories when something works only on newer hardware.


AFAIK NVIDIA also limits the number of concurrent NVENC streams, and a patch to remove the restriction is available:

https://github.com/keylase/nvidia-patch
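(The limit is easy to reproduce: on an unpatched consumer driver, concurrent NVENC encodes beyond the allowed session count fail at encoder open. A rough test sketch, assuming an ffmpeg build with h264_nvenc is on the PATH; the exact cap varies by driver version:)

    # Launch several concurrent NVENC encodes; on an unpatched consumer driver
    # the ones beyond the session limit fail when opening the encoder.
    # Assumes ffmpeg with h264_nvenc on PATH; the cap depends on the driver.
    import subprocess

    N_SESSIONS = 6  # try more sessions than the consumer limit

    cmd = ["ffmpeg", "-y", "-f", "lavfi", "-i",
           "testsrc=duration=10:size=1280x720:rate=30",
           "-c:v", "h264_nvenc", "-f", "null", "-"]

    procs = [subprocess.Popen(cmd, stdout=subprocess.DEVNULL,
                              stderr=subprocess.DEVNULL)
             for _ in range(N_SESSIONS)]
    results = [p.wait() for p in procs]
    ok = sum(1 for r in results if r == 0)
    print(f"{ok}/{N_SESSIONS} concurrent NVENC sessions succeeded")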



