
At most points in the product stack, memory frequency increased by enough to compensate for the narrower bus. Dropping to a narrower bus and putting more RAM on each channel also allowed roughly 50% increases in memory capacity instead of waiting for a doubling to become economical. And architecturally, the 4000 series has an order of magnitude more L2 cache than the previous two generations (2–6MB up to 24–72MB), so those cards are less sensitive to DRAM bandwidth.
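As a rough sketch of the capacity side of that trade-off (the 1 GB and 2 GB package sizes below are just illustrative figures, not tied to any particular card):

    # GDDR6-class memory attaches one 32-bit-wide DRAM package per 32 bits of bus.
    def capacity_gb(bus_width_bits, gb_per_package):
        packages = bus_width_bits // 32
        return packages * gb_per_package

    print(capacity_gb(256, 1))  # 8 GB  (256-bit bus, 1 GB packages)
    print(capacity_gb(192, 2))  # 12 GB (192-bit bus, 2 GB packages) -- a 50% step up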


Seems to have gone badly for the 4060 and 4060 Ti specifically, though, as they're at the same performance level as the previous-gen 3060.

People could buy a second-hand 3070 for less money.


Yeah, those are the chips where memory bandwidth actually regressed for two generations in a row. Going from 256-bit to 192-bit to 128-bit in the xx60 segment would have been reasonable if they'd used the faster DRAM that the more expensive cards get, but the 4060 also got a much smaller memory frequency boost than its bigger siblings.


With respect: Have you actually measured performance or are you merely quoting Nvidia marketing?


There's not much measurement needed for peak DRAM bandwidth: bit rate times bus width is pretty much the whole story when comparing GPUs of similar architecture using the same type of DRAM. That's not to say DRAM bandwidth is the only performance metric that matters for GPUs (which is why a DRAM bandwidth regression doesn't guarantee worse overall performance), but there's no need to further justify the arithmetic that says whether a higher bit rate compensates for a narrower bus.
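For concreteness, a minimal sketch of that arithmetic (the data rates and bus widths below are approximate published GDDR6 figures, meant only as rough stand-ins for the xx60-class parts discussed above):

    def peak_bandwidth_gb_s(data_rate_gbps_per_pin, bus_width_bits):
        # total bits per second across the bus, converted to gigabytes per second
        return data_rate_gbps_per_pin * bus_width_bits / 8

    # Approximate figures, for illustration only:
    print(peak_bandwidth_gb_s(14, 256))  # ~448 GB/s (256-bit at 14 Gbps)
    print(peak_bandwidth_gb_s(15, 192))  # ~360 GB/s (192-bit at 15 Gbps)
    print(peak_bandwidth_gb_s(17, 128))  # ~272 GB/s (128-bit at 17 Gbps)
    # The modest data-rate bumps nowhere near offset the width cuts here,
    # whereas 21 Gbps on a 192-bit bus (~504 GB/s) does beat 14 Gbps on 256-bit.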

If you were specifically referring to the performance impact of the big L2 cache increase: I don't know how big a difference that made, but it obviously wasn't zero.



