125 - 190W thermals.... That seems like a lot, almost into server territory.
Sacrificing ~5% "game performance" (such a nebulous number given how varied the CPU/GPU load can be between games) for being able to sit comfortably in the same room seems like a no brainer.
This still feels like a halo product? Except does the i5 or such compete well enough with Ryzen for that to work?
I'm curious to see how AMD responds. I wonder if their chiplet method lends itself well to launching their own hybrid E/P core architecture alongside "3D V-cache".
There are rumours going around that the GeForce 4090 is going to have a max TDP of 650W.
It seems insane but if true then consider that the combination of CPU and GPU is going to be 840W, and that is before the rest of the system is taken into account.
At some point we are going to need to start worrying about installing dedicated venting for computers to the outside of the house.
At some point you need to start integrating desktop computers into your HVAC designs.
At some point you need to start having dedicated electric circuits installed like for your oven.
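For a sense of scale, here's a rough sketch with the rumored figures and a standard US 15 A / 120 V branch circuit; the non-CPU/GPU draw and PSU efficiency are my own assumptions, not measurements.

```
# Back-of-the-envelope: rumored next-gen system power vs. a standard US circuit.
# The "rest of system" draw and PSU efficiency are assumptions, not measurements.

cpu_w = 190            # rumored 12900K peak package power (from the thread)
gpu_w = 650            # rumored "4090" max TDP (from the thread)
rest_w = 150           # rough guess: board, RAM, drives, fans (monitor excluded)
psu_efficiency = 0.90  # assume an 80+ Gold-ish supply at this load

dc_load = cpu_w + gpu_w + rest_w
wall_draw = dc_load / psu_efficiency

circuit_w = 120 * 15                 # 15 A breaker at 120 V
continuous_limit = circuit_w * 0.8   # typical 80% rule for continuous loads

print(f"DC load: {dc_load} W, at the wall: {wall_draw:.0f} W")
print(f"15 A circuit: {circuit_w} W, continuous limit: {continuous_limit:.0f} W")
# Roughly 1100 W at the wall -- one such PC plus monitors and anything else on
# the same circuit starts to make a dedicated run look less like a joke.
```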
My current computer (5950X with 3080) was already hot enough when gaming that I found it physically uncomfortable to have it under my desk and had to move it out from under there. My legs were burning up.
Undervolting the 3080 can help a lot with thermals and sacrifices minimal performance up to a point. I undervolted my 3090 and it improved thermals by like 5-10°C depending on the workload, which lowered the fan noise and, as a bonus, reduced the coil whine. I don't have numbers on the power draw, but it should be a significant difference.
Undervolting can even improve performance: my stock 5700 xt runs at 1.2V and reaches 110°C hotspot. It then drops the frequency a few hundred MHz to stay below the temperature threshold.
By running it at 1.1V it reaches 90°C hotspot, below the 110°C threshold so it runs at maximum frequency constantly.
Many GPUs (and CPUs) have their stock voltage set quite high because it improves yield for the manufacturer. Unless a GPU is really bottom of the barrel in the silicon lottery, it can usually be undervolted without issues. This is, however, less true for flagship models, which are already pushing the silicon as far as it can go.
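A rough way to see why a small undervolt buys so much: dynamic switching power scales roughly with f·V², so dropping from 1.2 V to 1.1 V at the same clock is already about a 16% cut, before you even count the thermal-throttling gains described above. A toy sketch (the clock values are placeholders, not real numbers for any card):

```
# Toy model of dynamic power: P ~ C * f * V^2 (the constant C cancels out).
# Illustrative only -- real GPUs also have leakage power, and the actual
# voltage/frequency curve is more complicated than this.

def relative_dynamic_power(freq_ghz, volts, ref_freq_ghz, ref_volts):
    """Dynamic power relative to a reference operating point."""
    return (freq_ghz * volts**2) / (ref_freq_ghz * ref_volts**2)

stock     = (1.8, 1.2)  # GHz, V -- hypothetical stock point
undervolt = (1.8, 1.1)  # same clock, 0.1 V lower

rel = relative_dynamic_power(*undervolt, *stock)
print(f"Same clock at 1.1 V: ~{rel:.0%} of stock dynamic power")  # ~84%
```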
> Many GPUs (and CPUs) have their stock voltage quite high because it improves yield for the manufacturer.
The higher voltages don't improve yields since manufacturers usually bin parts by testing them at higher voltages, decreasing them until the part passes the whole test suite for a grade.
The primary pressure on stock voltage is headline numbers like core count and clock speed, which consumers have been trained to compare by decades of marketing. This is especially obvious with laptops and desktops geared at gamers, which will often ship with an even higher voltage than the recommended stock Intel profile, to the detriment of performance and efficiency, just to be able to list higher numbers on the spec page at the expense of the usability of the machine.
I ran a beefy (3 power inputs) 3080 undervolted; you don't really lose more than a few percentage points of performance, and it ran great on my 650W PSU with a 3700X. I much prefer a little performance hit to loud fans.
I did RMA it once. The new one had the same issue and it took a month to get to me. So I didn't want to go through that again. The coil whine also only happens on certain games. Most of the time there isn't any
Make sure you take the case cover off and get a paper towel roll and put it up to your ear. See if the sound actually is the GPU or the PSU. I've had this a time or two, swore it was the GPU and it wasn't. RMA'd the PSU and my "GPU coil whine" was gone.
It really is insane, and limits upgradability of even brand new machines.
Case in point: I have a newly built Ryzen 5900X tower that reuses a GPU from an old build. I chose a high-quality 80+ Platinum 750W PSU thinking that would give more than enough headroom for upgrading the GPU at some point, but if the next generation of GPUs is as power hungry as rumored, I'll be limited to current-generation cards unless I buy a new PSU.
I hope efficiency comes to be a focal point again soon, because if the trend continues you’ll need a 1kW PSU just to comfortably accommodate current enthusiast parts and have any hope of upgrading in the future, which is ridiculous.
With what the GPU will cost you will not see the price of a better PSU as more than a speedbump. You can also sell your old one to recoup some of it.
Even so, 1 kW+ personal computers are a bit strange; back in the day, the very largest workstations (think IRIX fridges) were in that domain.
What would be nicer would be if manufacturers aimed for minimal power consumption with the same performance as a previous generation to give people the option.
Something like a QX9770 + SLI 680s could use similar power to a 12900k + 4090, so the "halo product" now occupies a similar market position to 10 years ago. The difference now is that, as more and more people who grew up gaming are now employed professionals, there are more people looking at what would have been the ultimate top-end setups back then.
More interesting than this is what's caused the performance expectation of gamers to rise faster than Moore's Law.
It took us almost 10 years to move the standard from 1024x768 to 1920x1080 (~2.5x increase in pixels/sec), yet in the past 5 years we've gone from 1920x1080@60Hz to 4k@120Hz (~8x increase).
I'm not sure if it's because graphics aren't naturally improving the same way they were in the 00s and early 2010s, or streaming culture showing things off with the highest resolutions and framerates possible, but it's absolutely a modern phenomenon and I'm not sure how long it can last.
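For reference, the raw pixel-throughput numbers behind that comparison, assuming 60 Hz for the older resolutions (which is what the comparison implies):

```
# Pixels per second for a few display targets.
# The 60 Hz refresh for the older modes is an assumption.

def pixels_per_sec(w, h, hz):
    return w * h * hz

xga_60  = pixels_per_sec(1024, 768, 60)
fhd_60  = pixels_per_sec(1920, 1080, 60)
uhd_120 = pixels_per_sec(3840, 2160, 120)

print(f"1024x768@60  -> 1920x1080@60 : {fhd_60 / xga_60:.1f}x")   # ~2.6x
print(f"1920x1080@60 -> 3840x2160@120: {uhd_120 / fhd_60:.1f}x")  # 8.0x
```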
I don't think 4k120 today is in the same market position as 1080p60 5 years ago. 5 years ago 1080p60 was the bar, if your device couldn't do 1080p60, it wasn't a gaming product, it was a business product. I don't think 4k120 is there, I think we're just at the bar where 1440p60 is considered at that level. In that time, the higher end target has moved from 1440p60 or 4k30 to 1440p120 or 4k60, which means the halo products now have to promise 4k120 to keep ahead of the merely high end.
I think the current increase in interest in high refresh rate gaming came from the shift away from TN panels. Once IPS became reasonably available, companies with TN products couldn't push them as premium products from an image quality perspective, but high refresh rate was something they could still market with TN, and there was a window where IPS and VA couldn't do that. So they did, gamers got exposed to high refresh rates, and then moved those expectations to other products.
Also tech like gsync/freesync meant that if your monitor had a higher refresh rate than your hardware could produce, you no longer had to deal with tearing.
I think monitor tech is leading GPU tech at the moment, with 1440p240 and 4k120 monitors. There are certainly _some_ games where those resolution/framerate combinations are achievable, mostly esports titles and older/smaller titles, but for the most part, no (also, for me at least, past like 90Hz you're into diminishing returns territory). But now that people have these monitors capable of X resolution and Y refresh rate, they want to have their cake and eat it too, wanting games that have modern effects for higher fidelity and yet also higher fps to take advantage of their monitors.
>I don't think 4k120 today is in the same market position as 1080p60 5 years ago.
Yeah, I think you're actually right there.
People talk about high-end monitor technology much more than they used to, but I'm not sure how many ordinary people actually have them.
>Once IPS became reasonably available, companies with TN products couldn't push for them as premium products from an image quality perspective.
That's really interesting. I recently changed monitors from 4k60 to 1440p/144Hz under the assumption that the increased frequency would make more of a difference than the additional resolution.
Immediately noticed an inferior quality picture (fewer pixels aside) - perhaps that's why!
I bought my first 4k@120 monitor this year. Huge TV that doubles as PC screen. The first time I've bought a TV and the first time I've spent that much on a monitor. My graphics card can't keep up (I'd need a GPU as expensive as the screen) but playing games at 1080p works fine. The 4k resolution is most appreciated when I do desktop stuff.
I think 4k@120 monitors are entering more people's price range and many will choose one on their next upgrade cycle.
4k120 is interesting because gaming consoles have it, but an extremely expensive GPU from a few years ago like 2080Ti doesn't (with HDMI you need HDMI 2.1).
Well, consoles "have" it, in the same way that last gen consoles "have" 4k60 or PS3/360 had 1080p60. They have a video interface capable of transmitting such a signal, but not hardware capable of producing that in basically all titles. Also the 2080 Ti had a 4k120 output over DisplayPort 1.4a .
4k120 is not really the standard in any way. The newest gen of consoles does 4k30 or lower, with dynamic resolution adjustments to reach 60fps in some games. Yes, HDMI 2.1 supports 4k120, but no modern system can reach that in a modern, graphically intensive game.
1080p60-144 or 1440p60-144 is still what the majority of gamers play at, especially on consoles.
I think high res displays on smartphones and then laptops and TVs have ultimately driven this from the industry side, but most gamers still don’t care that much about 4K and there is probably not much coming after it because even now it’s hard to make out the differences in fast moving environments unless you sit really close to your big 4K TV.
I'm not sure which setup and games you are thinking about, but if I'm not mistaken, right now we are still struggling with 1440p@60Hz for Ubisoft games and other AAAs.
Maybe this is driven by the increase in disposable income.
Many avid gamers likely want the best possible rig (in terms of computing power) they can comfortably afford. Hardware vendors try to meet this demand.
Powerful setups are useless without games that benefit from power. This creates a demand for super highres graphics, even though those might not add much to the actual gameplay experience.
> There are rumours going around that the GeForce 4090 is going to have a max TDP of 650W.
This is kind of a move of desperation. GPUs are massively parallel so it's straightforward to make performance scale linearly with power consumption. They could always have done this.
Nvidia is used to not having strong competition. Now AMD and Intel are both gunning for them and the market share is theirs to lose. Things like this are an attempt to stay on top. But the competition could just do the same thing, so what good is it?
GeForce uses Samsung 8nm, which is obviously behind its competitors, but GeForce is still competitive. That means the design itself is great. Nvidia is losing the fab race, or doesn't want to pay more for an advanced TSMC node.
They all just use whoever they want. Not long ago AMD was on Samsung/GloFo 14nm and Nvidia on TSMC 12nm. GeForce 4090 is supposed to be on TSMC 5nm.
The RTX 3080 (Samsung 8N) is slower than the RX 6900 (TSMC N7) and has a 20W higher TDP. The RTX 3080 Ti and RTX 3090 are faster but have a 50W higher TDP. Their design isn't magic, they're just compensating for the worse process technology by using more power.
I have been keeping my gaming rig on the outside terrace/balcony, covered from sunlight and rain, for the last 6 years. I run the cables under the window sill, using a powered USB hub for peripherals. For convenience, enable the BIOS option for "wake from sleep with USB keyboard". This removes the heat source from the room, and the fan noise is gone too.
This starts to get really tricky when you pair a multi-thousand-dollar class computer that pumps out a kilowatt of heat under load with equally good monitor(s). I had a hard enough time finding quality DP cables getting 4k144 HDR to go 2 meters across the desk; none of the 3-meter ones I tried at the time worked. 8k120 HDR to the TV was the same story on HDMI.
>10 Gbps USB/Thunderbolt are also a pain longer than a desk's length away but not the end of the world. Things like the power button are the easiest, just pop them into some single pin jumper extenders and you're good to go for any reasonable distance.
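To put numbers on why those DP cables are so finicky: 4k144 at 10-bit already exceeds DP 1.4's HBR3 payload uncompressed, so the link runs every lane at its top 8.1 Gbps rate and leans on DSC or chroma subsampling to close the gap. A quick estimate, active pixels only:

```
# Why long passive DP cables struggle at 4k144 HDR: the uncompressed payload
# alone is past DP 1.4's HBR3 limit. Active-pixel math only (blanking ignored).

def video_gbps(w, h, hz, bits_per_channel):
    return w * h * hz * bits_per_channel * 3 / 1e9

uhd_144_hdr  = video_gbps(3840, 2160, 144, 10)  # ~35.8 Gbps uncompressed
hbr3_payload = 25.92                            # Gbps (4 lanes x 8.1 Gbps, 8b/10b)

print(f"4k144 10-bit RGB: {uhd_144_hdr:.1f} Gbps vs HBR3 payload {hbr3_payload} Gbps")
# Every lane runs flat-out at 8.1 Gbps, leaving little margin for a marginal
# cable -- consistent with 2 m cables working where 3 m ones did not.
```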
Oooh this is fantastic! I knew about optical TB/type C cables (with the downside they don't provide power but that can easily be worked around by using a self powered TB dock at the remote side) but I never thought to look for active optical DP cables! The price isn't even bad given the lengths - $90 for a 20 meter HBR3 cable is pretty reasonable really. Thanks!
My 3080ti actually heats up the large basement office about at the same rate as my 1.5kw solid state amateur radio amp. Pretty fast. Sucked in the summer but looking forward to the winter!
I have dedicated circuits for both my gaming pc (120v) and the amp (230v). It didn’t cost all that much while we were doing electric work anyway.
My 3080 with a 3900 had a similar problem. I solved it by using two extra fans positioned outside the case, tucked between the side panel of my table and a wall, just under the desk top. They take the hottest air (since it's right below the desk plane) and blow it along the wall to the side. I have them running at, I think, 350 rpm (14 cm fans), so you can't hear them at all and it doesn't feel like any significant amount of air is moving, but the temperature under my desk dropped to comfortable, almost ambient levels.
The fans are plugged directly into the motherboard via the extension cables included with the extra fans I bought, with the cable pulled through the opening for a second GPU's connectors.
Sadly, not rumours. Next-gen video cards from AMD/Nvidia are going to cost a lot of money and eat many, many watts. I mean, we can live with that waste heat during the winter, but I am not looking forward to having a PC spewing 1 kW next to me in my gaming room.
This seems crazy to me. I run an entire home office, sometimes with a reasonably powerful main workstation (enough for things like 3D graphics work), a second PC or laptop, and various server and networking equipment all running at once, and unless we’re also using something power-intensive like a laser printer the entire room doesn't draw 1kW!
FWIW I consider the thermals a feature, not a bug. Helps save on the heating bill, plus my house is already very well insulated. I have noticed that my Radeon 5500 XT has better thermals when passed thru to a Windows VM. It would seem that Linux kernel 5.15 has better thermals/fan control for amdgpu, I’m on a slightly older version.
I don't think most consumers would really care about power consumption at full load. From looking at some benchmarks, Intel seems to have okay thermals for gaming or light productivity tasks (thanks to the P-E core split). If you're doing any sort of rendering or number-crunching you're in trouble (unless you have a liquid cooler, which to be honest really sucks), but most people aren't that kind of person.
Plenty of air coolers can dissipate 180W. For example, almost all graphics cards come with air coolers that are capable of dissipating their full TDP, which is often much higher than 180W. Something like this:
> Would be able to dissipate 180W probably without even throttling up the fans that much.
That's very likely. I have an NH-D14 with only the center 140 mm fan installed on an i7-3930k overclocked to 4.3 GHz, and it barely ramps up the fan. The loudest fan in my computer is the PSU (an old 600 or 650 W Seasonic). It runs in a "silence-oriented" Define R3 case with closed door.
Intel announces 135 W TDP for that CPU. Under load, OCCT says 160 W.
Ahaha yeah I have a Define S case. Unfortunately my current Noctua NH-U9S is not up to the task posed by the new processor, as my temps are now in the high 80s/low 90s while doing CPU-intensive gaming. So I've ordered a U14S which should be nice.
I only have one GPU and not a particularly hot one (rx5600) at that, but I've found that moving it to a lower slot helped a bit. On the first slot it practically touched the CPU's heatsink.
That's definitely something that should be taken into account, especially on newer "consumer" CPUs which may actually not even have enough lanes for a GPU and NVMe drive [0].
In my case, the CPU has 40 PCIe lanes, and all PCIe 16x slots on the motherboard are electrically 16x.
Not just Noctua! Heat pipes are an amazing technology and there are a bunch of companies that have made really exciting thermal management products with them.
> Intel seems to have okay thermals for gaming or light productivity tasks
That's not what you buy an i7 for though, is it?
An i7 is typically for "I have work to do when not gaming" or content creators starting out, whereas an i5 is more the category for gaming and light productivity / "the family PC".
Getting an i7 exclusively for gaming seems more for bragging rights on "fastest CPU".
Depends on the type of game. There was a period of time in the 2010s where CPUs were rarely the bottleneck in gaming. But with the advent of better graphics techniques and more powerful graphics cards, along with more ambitious scenes and simulations, there are many cases today where a strong CPU is required for gaming.
A recent example is Battlefield 2042. When I first got the game I had an AMD 3700X, which is no slouch of a CPU. But it could only drive the game at about 70fps, no matter the resolution. After an upgrade to a 5900X, I can run the game at 110-120fps. The strongest i5s would likely struggle to hit a stable 60fps on this game.
So what you're really saying is that the 3700X had absolutely no issues whatsoever running that game. Yes, it's nicer to play at 120fps, but it's not like you couldn't play the game because of the CPU. I know someone who had to upgrade from an older 4-core i7 to a modern CPU because Horizon 5 was actually stuttering. But the CPU "limiting" you when you're comfortably above 60fps, and you aren't playing competitive esports, is not really a limit, just like my car accelerating really poorly past 150mph is not really a limit in any real sense of the word.
I mean, that's your personal opinion on >60 FPS gaming, it's not universally shared. I for one enjoy even single player games at the highest FPS I can get, which is why I have a 1080p 240 hz monitor. If the parent likes their games higher than 60 FPS then to them their 3700x really was a limiter in their enjoyment, something which the 5900x would not be for them.
> So what you're saying really is that the 3700X had absolutely no issues whatsoever running that game.
Perhaps to specifications that would satisfy someone else. My requirements are ~120fps and 4k. For those requirements, no existing i5 would pass muster.
> But the CPU "limiting" you when you're comfortably above 60fps
I never used the word "limiting" in my comment. Not sure what you're quoting from. That being said, the 3700X was objectively limiting my framerates. It's not a value judgment, it's just an objective fact that such CPUs are inadequate to satisfy my preferences, and those of many other PC gamers. If you have different preferences, that's fine, but it's not really relevant when I'm talking about my own.
Of course, and I myself play at 144Hz and would probably do the same. I think I just took issue with the statement that most gamers are CPU limited nowadays; while in a strictly technical sense that's true, I don't think that a few-years-old CPU being able to run games at a solid 60fps+ is a problem in any sense. Again, it's not like it physically can't run the game, it just doesn't run it "well enough" for some people.
> I just took issue with the statement that most gamers are CPU limited nowadays
I didn't say most gamers are CPU limited, though. I said "Depends on the type of game . . . there are many cases today where a strong CPU is required for gaming." I didn't say most cases.
Conversely, it would be fine to observe that many console gamers have historically been satisfied with 30fps, and therefore PC gamers ought to only "require" a strong i5 from two generations back. While you're at it, you could also say that console gamers game at 1080p, so PC players should be satisfied with that as well. And you'd be right, under a certain configuration of preferences, and a certain interpretation of the word "require."
70 fps would be "absolutely no issues" if it were a 95th percentile min. as an average figure, it leaves room for meaningful improvement in a multiplayer shooter (you are definitely dipping below the refresh rate of a 60hz monitor).
or from a different perspective, if a new CPU can give you a 70% framerate increase, you either spent too much money on your GPU, or you spent too little on the original CPU. bottlenecks that severe are a sign that you have misallocated your budget.
All the games I play are CPU bound. I have a 3090 and I still can’t play World of Warcraft (a 16 year old game) with the settings on max without serious FPS drops in major cities because most of the game is bound by single threaded cpu far more than gpu.
It's really concerning for Intel. Their high margin parts are in the data center space, which is extremely sensitive to perf/watt. If they're getting perf by throwing extra watts at the problem, are the data center parts going to be competitive? If not, it doesn't bode well for Intel, as that gravy train money is part of what lets them spend a stupid amount on R&D. They could starve themselves out of the investment money they need in an increasingly competitive market.
TSMC is set to have full production of N3 by then too, and they have actually been meeting their public statements about process timelines. Intel unfortunately has an increasingly uphill battle.
Can Intel make it to the next node, though? They were stuck at 14nm for a heck of a long time, and I don't know if that was a one-off or something systemic.
My setup has a PSU of just 550W and a max power draw less than 200W (around 80W in regular office apps) while allowing me to play any title at the native resolution of the monitor. It is great to have the best eye candy in 4k, but is it worth the price in $$$ and climate warming?
This was hard to read. I felt like the content was a little repetitive - maybe for SEO purposes? And full of over the top superlatives, mostly focused on the performance per dollar, and mostly ignoring the increased costs of motherboards (DDR5 aside) over the prior generation or the current generation of the competitor's product. [0] Not to mention focusing on an OS that basically no one uses right now.
I'm sorry but it reads more like an ad. And just because it often needs to be said, I have no brand loyalty here, and mostly think brand loyalty in this space is kind of dumb.
[0]: An example:
```Given its more amenable $409 price tag, it is quite shocking to see the Core i7-12700K deliver such a stunning blow to the $549 Ryzen 9 5900X in threaded work, highlighting the advantages of the x86 hybrid architecture.```
I'd love to save $140 and get better performance, but where am I getting a motherboard that doesn't eat up most of that savings? Maybe it exists, but it's not mentioned. That's fine, it's a CPU review, but then leave the dollar figures out of it if they aren't complete.
The motherboard price situation was similar when Zen 3 CPUs came out. In a few more months, B- and H- series motherboards should come out with more reasonable prices.
At the moment, if you're living a US city with a Microcenter, you can apparently get Asus Z690M-Plus Prime DDR4 for $170 if you buy it together with a CPU.
True. This made me remember that I complained about the same thing when I founded and ran a competitor site in the late 90s/early 2000s. Of course, tomshardware is still around and I can’t even find mine on archive.org!
At least to me, it always seems like the differences in gaming performance between mid-to-high end CPUs are relatively small, and frequently dominated by the differences between GPUs. Given a fixed budget, does it actually make sense from a gaming point-of-view to put the money into a high-end CPU any more?
(Aside from gaming, there's definitely plenty of wins from going to higher end CPUs, but I'm curious about the gaming case specifically.)
It depends on what sort of games you will be playing. For some simulation and related games such as Factorio and the like the CPU can make a large difference.
At the high end, factorio is one of those games (the other being flight sim) that also gains a lot from faster memory iirc.
Though at the low end, factorio is also a really well optimised game and can run a 100spm factory at 60ups pretty comfortably on a mid range cpu. It's only the large multiplayer games or megabases where it starts to hit performance limits.
Cities Skylines is probably the best example of a game where the average player has big gains in performance in regular play they could get from a cpu upgrade.
I think you're underestimating factorio significantly. There are 20kspm bases that are run at 60 ups. I doubt most people will even feel the game performance start to degrade before 1kspm.
There are, but megabase (>1k spm) territory is where players start making design decisions around ups impact like avoiding large logistics zones or avoiding heat pipes.
I had the understanding that once you move from 1080p to 4K then the GPU becomes the bottleneck for most AAA games. Pro gamers will probably stick to 1080p and a 144hz monitor.
CS:GO is also almost 10 years old, running on the venerable Source engine. While it was a nice graphical update over CS: Source, it wasn't exactly winning any beauty contests.
Not really true in my experience for multiplayer games. Pretty much every multiplayer game is CPU and memory bound and any GPU limitation can be mitigated by just lowering settings. Nothing can be done to compensate for weaker CPU performance unfortunately. If I had to skimp on GPU or CPU I would pick GPU every time.
It depends on how much the game is CPU heavy, specially single core performance on some games makes a big difference. High end CPUs tend to have a higher single core performance.
Only true if you are not playing esports games and trying to push maximum frame rate, such as for a 360hz display, or one of the upcoming 480hz ones.
Fwiw my simple reaction time is 220ms on a basic 60hz display vs 170ms on a top end 360hz, so being able to push those frames has a significant quality of life delta, even though I’m unable to hit 360hz in most games due to cpu bottlenecks on my system.
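A very simplified model of what the display chain alone contributes, assuming an average scanout wait of half a refresh period plus one frame of render time; it explains only part of the measured 50 ms gap, with input sampling, frame queuing, and measurement noise making up the rest:

```
# Rough latency contribution of the render + display chain.
# Simplified: one frame of render time plus an average scanout wait of half a
# refresh period; ignores input sampling, OS/compositor, and panel response.

def display_chain_ms(refresh_hz, game_fps):
    return 1000 / game_fps + (1000 / refresh_hz) / 2

low  = display_chain_ms(60, 60)     # ~25 ms
high = display_chain_ms(360, 240)   # ~5.6 ms (assumes the game holds 240 fps)

print(f"60 Hz @ 60 fps  : ~{low:.1f} ms")
print(f"360 Hz @ 240 fps: ~{high:.1f} ms")
```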
Wow, this seems like an impressive step forward for the CPU industry and the kind of change we've been hoping for since Ryzen started giving Intel a run for its money. I can only hope that AMD continues to stay competitive and keep Intel on its toes.
I’m not sure about how much progress was actually made. To me, it seems like they pocketed some extra performance that was previously “unused” due to lower max TDP specs.
These benchmarks suggest they are much slower at bringing the chips to market than AMD, but they’re able to match or beat performance dollar-for-dollar.
Assuming the chips and motherboards are actually available (at normal prices), and that the production systems will perform in the same way as the benchmark systems.
Pricing as a measure of performance per dollar is a bit complicated, because AMD increased pricing on the current Ryzen CPUs relative to the previous generation, while Intel, in its current position, is lowering pricing to remain competitive. So AMD is reducing pricing accordingly, and it's not clear what the margins look like (the 5800X in particular is an awkward chip to produce and may now even be forced to sell at a loss).
There are many factors to consider in TCO, like performance per watt, combined with the motherboard ecosystem, which has been a historical AMD weak point. So it's tough to compare CPU pricing on an apples-to-apples basis even if the CPUs were otherwise exactly the same in performance and price.
AM4 _and_ prior. Motherboard quality has varied considerably across different manufacturers compared to Intel's partners. One other example of recent annoyances is the launch situation with B450 and X570, and now with B550 and X570S existing years later. Gigabyte is one manufacturer that has been swapping out parts for lower-spec ones; they don't get away with that in Intel's partner network, but somehow AMD isn't penalizing Gigabyte, at least publicly, to keep this from happening.
For a long time AMD motherboards just had less features. Unsure if he means that specifically, but it was a pain point when I was building an AM4 system.
- The Intel processors with DDR4 are doing better than with DDR5. This is unlike some of the other benchmarks we have seen. What's happening here?
- In Blender, the 12900k impressively got really close in performance to the 5950x ($200 more expensive) and beat the 5900x (similar price). That's... somewhat unexpected.
- I wonder whether their cooling could keep up all the way through. They mention a custom water cooling loop. What is the performance with a 'typical' cooling setup? How much performance is lost if these CPUs have to throttle to something you can expect a typical boxed cooler to handle?
- Will you actually be able to buy these high-end CPUs? Or are they pure marketing that won't be profitable for Intel and will only be available in small quantities?
tomshardware is all about performing benchmarks exactly to their corporate masters' specifications from product briefings.
Here https://www.tomshardware.com/reviews/hot-spot,365-4.html they "tested" throttling in a somewhat famous burning-Athlon video (it was later revealed they used a board with a disabled/non-functioning thermal cutout) during the Intel Pentium 4 payola timeframe. Some quotes:
"AMD did not bless the Thunderbird core with ANY thermal protection whatsoever."
In reality, AMD socket certification required a thermal cutout, same as Intel's for the Pentium 3. AMD processors do include a thermal diode, just like Intel's.
"Intel's older processor is also equipped with a thermal diode and a thermal monitoring unit"
is a lie. Pentium 3 thermal throttling is performed by the BIOS, just like in the Athlon's case. Serendipitously, tomshardware picked a broken Siemens motherboard for the AMD system, as recommended by Intel. Imagine that.
Tomshardware eventually published a non-retraction retraction after AMD pointed out the Siemens lie/defect (not adhering to socket certification requirements) and showed a proper AMD setup safely shutting down. Of course you can't read that one because it's buried, excluded from the Wayback Machine, and dead-linked: http://www.tomshardware.com/column/01q4/011029/index.html
One possible explanation I haven’t seen in the comments (or gotten to yet) is the massive increase in revenue related to graphics/GPU, partly from greater GPU as compute (ai/deep learning/non-display gpu tasks), as well as crypto mining.
(And a larger gaming market/audience over the past four or so years- gaming on mobile / live game streaming)
More $ / TAM = more R&D and competition in the space. (And more power/capabilities available for gaming may just be a nice byproduct )
Feels to me like we're starting to hit the brick wall with x86-64. The only way we can squeeze more performance out of these chips is to make them physically larger, squeeze the components into a tighter space, and draw more and more power while generating more and more heat.
As much as I dislike Apple's overall business practices, they are 100% correct about ARM being the future of high performance, high efficiency computing.
And we need high efficiency in a world that is becoming starved of energy as more and more people get connected every day.
The M1 is no small chip itself, but it is very good on power/efficiency while still having top class performance. AMD's solution, also from last year (but on a worse TSMC node than the M1), isn't that far behind on power/efficiency and holds about the same per-core performance.
Intel, which hit a brick wall about 10 years ago when the competition disappeared due to AMD's Bulldozer, is finally responding to both of those chips from last year, and while it has pulled off some decent performance improvements for once, it definitely dipped into the power bucket for a lot of that gain. I wouldn't count that as proof that x86-64 can no longer compete in high performance, high efficiency computing. I'd hold my judgement until Zen 4 and the "M2" (or whatever it will be called) release around this time next year, as each seems to be a really big update. Who knows, maybe Intel will have something new by then too, now that they have actual competition again.
                  Size (mm^2)   TDP (W)
  Apple APU
    M1                119          18
    M1 Pro            251          ??
    M1 Max            425          90 (peak, not TDP)
  AMD CPU/APU
    4800U             156
    5800U             180
    5800X             206
    5950X             286
    Epyc Rome        1008
  Intel CPU/APU
    Ice Lake          123
    Tiger Lake        146
    Comet Lake        206
    Rocket Lake       276
    Alder Lake S      209
    Alder Lake M      215
  AMD GPU
    Vega 20           331         300
    Navi 14           158         150 (80 mobile)
    Navi 10           251         235 (120 mobile)
    Navi 23           237         160 (100 mobile)
    Navi 22           335         230 (145 mobile)
    Navi 21           520         300 (no mobile)
  Nvidia GPU
    TU106             445         175
    TU104             545         250
    TU102             754         280
    GA106             276         170 (115 mobile)
    GA104             392         290 (150 mobile)
    GA102             628         350 (no mobile)
Nothing comes close to the M1 when looking at the combination of performance, TDP, and die area.
M1 Pro has worse multi-threaded integer performance than a 5950, but better floating point performance. It does this in a 251mm^2 die vs a 286mm^2 die AND while most of the die is used up by the GPU.
M1 Max has the equivalent of GA104 tacked on to its 8 CPU cores (plus 2 little cores). AMD's equivalent would be a 5800H plus a Navi 22 mobile. Together, those would use more than double the power while being 100mm^2 larger.
Keeping this to a CPU vs CPU architecture performance conversation since the comment was about x86 vs ARM not who has the best overall chip:
- GPU on the M1 Pro/Max is really impressive but irrelevant
- M1 Pro/Max came out a year later than the CPUs they are being compared to here
- The M1 takes 133% of the transistor count on a generation-newer TSMC node to get as far past the 5800 as it does
So from a CPU architecture perspective, I really can't write off x86 as having hit a brick wall to the point that it will never be able to keep up with ARM; that's just not what the M1 has shown.
From an overall perspective, though, yes, the M1, particularly the Pro/Max with their larger GPUs (can't wait for mine to arrive!), are hands down the best overall systems in their classes right now. I just don't think that has as much to do with the CPU being ARM instead of x86 as GP made it out to be.
Yeah, this all looks pretty desperate to me. There is no way they can keep increasing the watt usage like this.
It makes their performance look good relative to M1 for now. But what happens when Apple push out their next chip design? Is Intel going to up the watt usage by a huge amount again?
Nah, this is a dead end. Alder Lake is a blind alley.
The A15 shows that M2 should be scaring AMD/Intel.
A15 performance cores had significant gains and really improved integer performance all while reducing power consumption slightly.
A15 efficiency cores are even more amazing. They got something like 30-40% faster. They are now roughly as fast per clock as the big cores in x86 systems while still using only a fraction of the power of the big cores. This roughly translates into 8 full-size x86 cores, but with 4 of those being even faster.
Then there's the massive cache (32mb on mobile and probably 48/64 on M2) to further push down power consumption. They'll also be bumping the GPU by 30% (more if it doesn't throttle).
AMD really needs to either pick back up their ARM designs or start heavy investment into RISC-V designs.
Apple is on N3 in risk production. AMD is releasing N5 chips in a handful of months. The only reason that AMD hasn't been on N5 for a year is that a new Intel competitor has a monopoly on the node. Not exactly a point in Intel's favor.
And I'll believe that they're off of the node formerly known as 10nm when I see it. It's been close to ten years straight that Intel has been slipping their public deadlines for bringing a new node up.
So, yeah, for a company that used to be a full node ahead of everyone, there's some serious stagnation going on.
Intel's "7" is just their 10nm node that they renamed (that was originally supposed to launch in 2018 and was delayed to 2020). Intel "4" (their 7nm node) was supposed to be out already, but now looks like it will be 2023 at the earliest.
It proves that they keep shifting goalposts. Once Ryzen started to dominate, they were against benchmarks[1]. Now they've released a CPU that is only good in benchmarks (expensive platform, poor per-watt performance, limited software support, further fragmentation around AVX-512).
Alder Lake performance numbers are for desktop with 3x the actual power usage. Alder Lake M goes down to 6P+8E cores for the big chip and 2P+8E cores for the low-power chip.
A15 has already been out and spoiled what's coming in M2.
Size
M1 is 119mm^2 and M2 is probably at most around 130-140mm^2 (depending on cache and RAM controllers) which is still smaller than 5800U at 180mm^2 and much smaller than Alder Lake mobile at 215mm^2.
Cache
A14 and M1 have 16mb of SLC (system level cache). A14 has 8mb L2 while M1 has 12mb. A15 doubled to 32mb SLC and jumped from 8 to 12mb L2. I suspect M2 keeps the 32mb. 12 or 16mb of cache will depend on the performance increase, but I suspect they'll stick with 12mb.
5800U has 4mb of L2 cache and 16mb of L3 cache. Alder Lake desktop has 14mb L2 cache and 30mb L3 cache (dropping to 9.5mb L2 and 20mb L3 on the 12600 and 7.5mb L2 and 18mb L3 on the leaked 12400).
Power
Peak power for the M1 is around 15-20 W absolute max. A15 improved power efficiency 17% over A14, so we could actually see power usage improve. Those massive caches may slightly increase chip power, but they reduce memory accesses, which radically decreases total system power. Running the chip at far lower peak frequencies further helps keep cache power consumption under control.
5800U can hit upwards of 50-60w peak power to keep the stated turbo speeds. Current Intel chips do this too and I suspect that Alder Lake won't be any different here.
CPU Performance
A15 P-core increases 8-10% (up to 37% on some things) and a lot of that was integer improvements. A15 E-core increased performance by about 30%. Their actual performance is close to Zen 2. Overall, performance in A15 was around 20% better than A14. GPU performance also jumped 32%.
M1 already handily beats the 5800U in pretty much everything. I'd bet heavily that M1 beats the little Alder Lake. I'm less sure about the bigger Alder Lake because I don't know what frequencies it will be able to sustain (I'd bet it still beats out M2, though).
GPU Performance
M1 shares SLC between the CPU and GPU. This works much like AMD's Infinity Cache to reduce necessary bandwidth and allows a pretty big increase in GPU performance over what would otherwise be possible.
AMD hit the iGPU wall a few years ago. They've slowly increased GPU resources as DDR bandwidth has increased, but that's about it. I'm really looking forward to Zen 4/5 + RDNA 2/3 + Infinity Cache, but I don't know when that will actually arrive. I've seen iGPU tests of the UHD 770 on Alder Lake desktop and it's not even as fast as last-gen Iris Xe. The mobile variant will have 3x the GPU resources, but it still only looks to be 30% faster than last-gen, and then only if it isn't bandwidth limited. This seems possible with DDR5, but I'd guess it's capped at last-gen performance on DDR4 systems.
It looks like to get the best performance out of these newer CPUs, you need to run Windows 11. (Linux will have support for them in the near future, I believe.)
OK, Intel has reclaimed the performance crown, at least for the moment, though AMD will not be sitting still.
But Intel's QA has been slipping for a long time, with buggy instructions in shipped chips. (Anybody remember transactional memory?) Have they turned that around, yet? That seems harder than getting better performance on a new process node.
Energy prices aren't really a factor for high end gaming CPUs. It's $409 for the CPU alone, but a game that maxes out a single core, played 8 hours a day for a year, costs the average American only ~$30 in power for the CPU+motherboard+RAM,
so even if the alternative were $0 of power, it's not really a factor: https://www.techpowerup.com/review/intel-core-i7-12700k-alde....
I mean, yeah, if you're on a budget and plan to sit there with a ~230 watt all-core AVX stress test running 24/7/365 for a couple of years, it might matter, but in that case you're probably not looking for the fastest high frequency gaming CPU in the first place. Or you're building a custom laptop and would like a long-lasting battery; sure, bad idea to stick a desktop gaming CPU in. Same when it comes to picking a CPU for the cheap family computer that needs to cost, in total, what this CPU alone does.
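For reference, the arithmetic with illustrative inputs; the 75 W platform draw and $0.14/kWh rate are assumptions chosen to land near that ~$30 figure, not measured numbers.

```
# Yearly electricity cost of a gaming load -- the wattage and rate below
# are illustrative assumptions, not measurements.

watts = 75            # assumed CPU+motherboard+RAM draw in a one-core-heavy game
hours_per_day = 8
rate_per_kwh = 0.14   # roughly an average US residential rate

kwh_per_year = watts * hours_per_day * 365 / 1000
print(f"{kwh_per_year:.0f} kWh/year -> ${kwh_per_year * rate_per_kwh:.0f}/year")  # ~219 kWh, ~$31

# Even the worst case above, ~230 W all-core 24/7/365:
print(f"${230 * 24 * 365 / 1000 * rate_per_kwh:.0f}/year")  # ~$282
```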
Alder Lake supports DDR4 and the performance difference is minuscule.
The downside of LGA 1700 is that budget boards don't exist right now. You can pick up a B550 board any day of the week for under $100, sales as low as $60. Z690 is all $200+.
The Z690 is a nicer platform for sure, especially for peripheral connectivity.
The timings on DDR5 are very loose right now. Don't expect much real gains for another year or two as it matures.
If you have a specific task that is extremely memory bandwidth constrained, it's great. But then again, you are probably looking at something with 8 channels if it is that big of an issue to you.
I'm using 8-channel DDR4-3200. Would I get any benefit from 4-channel DDR5-5xxx? Is 8-channel DDR4-3200 roughly equivalent to 4-channel DDR5-6400, or am I missing something?
You're not missing much. Alder Lake (2ch DDR5-4800) achieved pretty impressive memory bandwidth, but it is still eclipsed by 4ch DDR4-3200 on Threadripper. If you could get 2ch DDR5-6400, it would be the same bandwidth as the TR4.
What I don't understand well enough to know the impacts, is how latency plays into specific workloads.
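The peak-bandwidth arithmetic behind those numbers: each 64-bit channel moves 8 bytes per transfer (DDR5 splits a DIMM into two 32-bit subchannels, but the total width per "channel" in the usual sense is still 64 bits). The 4ch DDR5-5200 line is a hypothetical config for comparison.

```
# Theoretical peak memory bandwidth: channels * MT/s * 8 bytes per 64-bit channel.

def peak_gbs(channels, mt_per_s):
    return channels * mt_per_s * 8 / 1000

configs = {
    "8ch DDR4-3200": peak_gbs(8, 3200),  # 204.8 GB/s
    "4ch DDR4-3200": peak_gbs(4, 3200),  # 102.4 GB/s (Threadripper-style)
    "2ch DDR5-4800": peak_gbs(2, 4800),  #  76.8 GB/s (Alder Lake launch kits)
    "2ch DDR5-6400": peak_gbs(2, 6400),  # 102.4 GB/s
    "4ch DDR5-5200": peak_gbs(4, 5200),  # 166.4 GB/s (hypothetical)
}
for name, bw in configs.items():
    print(f"{name}: {bw:.1f} GB/s")
```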
Eventually yes but the kits available right now are not very impressive. Early adopter tax definitely. You’ll probably get at least 2x faster memory kits eventually, I’m not sure how high DDR5 can scale
You can get Z690 DDR4 boards. That's what I had to resort to so we could test our software on the i5-12600k. Getting a CPU cooler with an LGA 1700 compatible bracket was actually the hardest part.
One downside of this current Alder Lake release might be the fact that the Z690 motherboards are much more expensive than the AMD's current mainstream motherboards. The CPU might be a tad cheaper but overall it might not be worth it (well, until more affordable motherboards appear).