I'd really not call the Radeon VII a "fantastic card" by any stretch of the imagination, outside of specialized use cases like FP64 compute. Its only selling points are that and the enormous 1 TB/s of memory bandwidth, paired with 16 GB of VRAM, which was large for the time. A friend summarized it as "I can push 1 TB/s of garbage to nowhere!".

Sure, if those things are relevant to you, it was a killer card, but most consumers judge GPUs first and foremost by their gaming performance, and there the VII was a total dud. The 5700 XT was nearly as fast in games with far less hardware, because getting proper utilization out of GCN/CDNA is a pain compared to RDNA.

RDNA2 completed the gaming-oriented flip with Infinity Cache (also useful outside of games, but still of limited use in GPGPU), which allowed AMD to offer gaming performance competitive with Nvidia using far less hardware, a big difference from the Vega era.



> I'd really not call the Radeon VII a "fantastic card" by any stretch of the imagination

An argument about whether the Radeon VII was a "fantastic card" misses the point; it could have been any other GPU. OP's main point was that AMD had a years-long policy of not officially supporting any of its consumer-grade GPUs for compute applications, and as a result lost critical opportunities during that period - even a small market share is better than none. OP mentioned the Radeon VII because it was the only exception, and it only happened because there was strong support from some groups within AMD - which didn't last long. After RDNA was released, compute support on consumer GPUs was put on pause again until several months ago. There's a big difference between "not optimally designed for compute" and "no official compute support at all".

The outcome is a high barrier to entry for developers and power users with desktop GPUs. You have to figure out everything by yourself, which by itself isn't unusual - it's the norm in the FOSS world. The problem is that in a community-driven project, you would at least get help or a hint from peers or project maintainers who are familiar with the system. Not so for AMD. If something doesn't work, no AMD developers will provide any help or even hints - and by not providing support, the "community" never really gets established. The ROCm GitHub Issues page was (is?) basically a post-apocalyptic world abandoned by AMD, full of user self-help threads with no input from developers. Almost nothing works and nobody knows why. Even when a reported problem is a general one, it would still be ignored or closed if you weren't running an officially supported system and GPU.
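To give a sense of the probing users are left to do on their own: a minimal sketch (my own illustration, assuming a working HIP install and `hipcc`, not anything from ROCm's docs) that dumps the gcnArchName string ROCm uses when deciding whether a GPU is on the supported list - gfx906 in the case of the Radeon VII / MI50:

    #include <hip/hip_runtime.h>
    #include <cstdio>

    int main() {
        int count = 0;
        if (hipGetDeviceCount(&count) != hipSuccess || count == 0) {
            fprintf(stderr, "no HIP-capable device found\n");
            return 1;
        }
        for (int i = 0; i < count; ++i) {
            hipDeviceProp_t prop;
            hipGetDeviceProperties(&prop, i);
            // gcnArchName (e.g. "gfx906") is the architecture string the
            // ROCm stack matches against its official support list.
            printf("device %d: %s (%s), %.1f GiB VRAM\n",
                   i, prop.name, prop.gcnArchName,
                   prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
        }
        return 0;
    }

If your gfx target isn't on the list, an issue about it is exactly the kind that tends to get closed as unsupported.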

One time, I needed to understand a hardware feature of an AMD GPU, and I eventually had to read the ROCm and Linux kernel source code before finding an answer. For most users, that's completely unacceptable.

> Sure, if those things are relevant to you, it was a killer card

Disclosure: I use the Radeon VII for bandwidth-heavy simulations, so I have a pro-Radeon VII bias. But I don't think that bias affected my judgement here.
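For context on what "bandwidth-heavy" means here, the canonical example is a STREAM-style kernel whose throughput is set almost entirely by memory bandwidth. A rough HIP sketch (sizes and names are my own illustration, not OP's actual code):

    #include <hip/hip_runtime.h>
    #include <cstdio>

    // STREAM-style triad: a[i] = b[i] + s * c[i].
    // Two reads and one write per element with almost no arithmetic,
    // so the measured GB/s approaches the card's memory bandwidth.
    __global__ void triad(double* a, const double* b, const double* c,
                          double s, size_t n) {
        size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
        if (i < n) a[i] = b[i] + s * c[i];
    }

    int main() {
        const size_t n = 1ull << 26;             // 64M doubles, 512 MiB per buffer
        const size_t bytes = n * sizeof(double);
        double *a, *b, *c;
        hipMalloc(&a, bytes);
        hipMalloc(&b, bytes);
        hipMalloc(&c, bytes);
        hipMemset(b, 0, bytes);
        hipMemset(c, 0, bytes);

        hipEvent_t t0, t1;
        hipEventCreate(&t0);
        hipEventCreate(&t1);
        hipEventRecord(t0);
        triad<<<(n + 255) / 256, 256>>>(a, b, c, 3.0, n);
        hipEventRecord(t1);
        hipEventSynchronize(t1);

        float ms = 0.0f;
        hipEventElapsedTime(&ms, t0, t1);
        // Three buffers of traffic per element: read b, read c, write a.
        printf("effective bandwidth: %.0f GB/s\n", 3.0 * bytes / (ms * 1e6));

        hipFree(a); hipFree(b); hipFree(c);
        return 0;
    }

On a kernel like this, the VII's 1 TB/s of HBM2 is the whole story; shader count barely matters.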


Also, it's basically an MI50 in a trench coat. You could buy a 32 GiB version labeled MI60, initially with a single mini-DisplayPort output, which IMO makes it classify as a "graphics card" rather than a "compute accelerator".



