The Intel Comet Lake Core i9-10900K, i7-10700K, i5-10600K CPU Review (anandtech.com)
83 points by willis936 on May 20, 2020 | 182 comments


Money quote:

>"As mentioned, 10th Gen Comet Lake is, by and large, the same CPU core design as 6th Gen Skylake from 2015. This is, for lack of a better explanation, Skylake++++ built on 14++. Aside from increasing the core count, the frequency, the memory support, some of the turbo responses, and enabling more voltage/frequency customization (more on the next page), there has been no significant increase in IPC from Intel all while AMD has gone from Excavator to Zen to Zen 2, with sizable IPC increases and efficiency improvements. With Intel late on 10nm, Comet Lake does feel like another hold-over until Intel can either get its 10nm process right for the desktop market or backport its newer architectures to 14nm; so Intel should be trying its best to avoid a sixth generation of the same core design after this."


Intel iterates more on its naming scheme than its CPUs these days. At least it fits well with the "it's over 9000" meme, which would make a great commercial for these chips. It is impressive they managed to make a chip that uses 250 watts; I never managed to get my overclocks that high.

And I guess all those warnings they gave us about "overvolting" for decades were bullshit ¯\_(ツ)_/¯ . Everything comes overclocked from the factory now. They're using the same tech as the Haswell days. Back then they told us core voltages over 1.2 V were unsafe, and now they run 1.3 V from the factory. Lol

Sorry, I just love throwing shade at Intel; they've been a predatory monopoly for ages. Purposely turning off ECC on everything except ungodly expensive server chips. Changing sockets on a schedule to force you to buy more shit, sometimes by moving a single pin. Forcing vendors to sign agreements not to use AMD. Allying with MS to corner the market with Wintel. A "Management Engine" that can't be shut off. Attempting to segment the 64-bit market with Itanium, leaving consumers on 32-bit seemingly forever. Encouraging developers to use their compiler, which purposely cripples non-Intel CPUs. Showing demos at trade shows that are secretly water-cooled and overclocked.

Intel is a shitty company. I'm glad their foray into mobile devices flopped. I hope they lose tons of market share.


While true, I am still impressed by the performance Intel's engineers are able to extract without an architecture or process change. The i9-10900K beats the i7-6950X by a good margin in most tests, both being 10-core CPUs.


The 6950X was Broadwell-E, not Skylake. Usually the prosumer chips on the bigger socket are the last-gen architecture (or a generation ahead branding-wise, depending on how you look at it). So despite the 6700K being Skylake, the 6950X was not.

The 7900X is a 10c/20t Skylake part and would be a better comparison in this case.


Same reason the 5775C, while being 14nm, isn't Skylake either, doesn't overclock past 4.2 GHz, and performs worse than a 6700K/7700K.


Right. Same manufacturing node but different microarchitecture.


This is true in a way, but at the same time they mostly achieve it through overclocking. The cost is massive power consumption. I'm curious how one of their brand new chips would perform next to one from 5-7 years ago overclocked to the same core speed.


I would imagine that this is what Jim Keller is working day and night at Intel to avoid. I’m really looking forward to what comes out of his work at Intel. Competition is good.


I don't think so. I'm not convinced he's been given design or architecture duties.

At his last few employers, he was mostly a glorified project manager/lead first and an engineer second.

Intel has more than enough big-name architects and designers to deliver new architectures.

The reason they cannot do it now is that they locked themselves into an iterative mode of operation through years of low-risk, cash-cow projects.

If they can't stop iterating, they have no alternative but to keep milking Skylake, just like AMD did with Nulldozer.


Interesting! Do you have any anecdotes that you can share about his recent roles at AMD and Tesla? From public information, he was the designer of the Zen microarchitecture at AMD.


The Zen architecture team lead was Mike Clark; the engineering lead was Suzanne Plummer.


Here's an interesting profile on Jim Keller that just appeared in Fortune (I read it using reader mode): https://fortune.com/longform/microchip-designer-jim-keller-i...


I don't think so. Intel's problem here is not that they were unable to design new architectures with all the cool features. Intel's problem here is that they were unable to actually manufacture that design.

Jim Keller is a chip designer. But even if he manages to design something that is leaps and bounds ahead of everyone else, it won't matter if Intel is unable to manufacture it.


Totally understand that Intel's manufacturing problems are a really important factor here, but GP was talking about their lack of IPC gains, which is exactly the kind of thing I would imagine they hired Jim to solve.


I think the confusion is because the chip design is tied to the manufacturing process. Intel likely has IPC gains in their new architecture for 10nm, but they can't manufacture that yet, and they can't just build it on 14nm instead without significant work to port the design to the new process node.


Thanks for clarifying things; as a software guy this is all in the realm of "magic" to me :) Do you have any pointers to layperson things that I can read about microarchitecture design improvements and process nodes?


For beginners, just going through the list of articles on AnandTech would be good enough for most. From IPC and uArch to fabs, it would take at least a few weekends to get a meaningful understanding. After that, WikiChip is a more intermediate-level site.

To add some information to your parent: it's not quite that Intel can't manufacture it yet. They have it on mobile, called Ice Lake, with Tiger Lake coming later this year. So mobile is two generations ahead of its desktop counterpart.


Well, this has been the discussion about porting it to 14++, but presumably they need higher density to cram a bunch of extra transistors into the design. Hindsight at this point, but a big rich company like Intel should have had a fallback plan to port it to 7nm (or whatever) at TSMC a couple of years ago to keep the pipeline full.

It would have been a huge black eye, but at least they would have been moving forward while they straighten out their own manufacturing story.


My outsider's perception is that he has single-handedly created the competition for both teams.

Is he truly that good?


It's not all bad.

> On the face of it, the Core i9-10900K with its ability to boost all ten cores to 4.9 GHz sustained (in the right motherboard) as well as offering 5.3 GHz turbo will be a welcome recommendation to some users. It offers some of the best frame rates in our more CPU-limited gaming tests, and it competes at a similar price against an AMD processor that offers two more cores at a lower frequency.

If you don't mind your CPU drawing 254W of power.

> For recommendations, Intel’s Core i9 is currently performing the best in a lot of tests, and that’s hard to ignore. But will the end-user want that extra percent of performance, for the sake of spending more on cooling and more in power?

Personally, I prefer cool-running hardware as much as possible. I don't need the absolute most powerful if the cost, heat, and noise move the balance too far.

But I hope we see Intel succeed in improving their process and innovating a bit more instead of just shoving more cylinders in the engine and adding nitrous.

Disclaimer: Happy AMD Ryzen CPU owner


The whole situation is reminiscent of the FX-9370, where AMD didn't have a new architecture yet and just cranked their existing chips up to insane amounts of power draw to eke out the last bit of performance.

Of course, the 9370 spawned all sorts of memes about how it would set your house on fire. I wonder if the i9-10900K will get the same.

Hopefully, like with Ryzen following up on the 9370, the next line of chips from Intel after this will be game changers.


Please someone correct me if I'm wrong, but AFAICS the core voltage of the i9 is 1.52 V. So with 254 W, that thing really draws 167 amps? I guess every μΩ of resistance counts here, or the board just explodes?
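
A quick back-of-the-envelope in Python to check that, and to show why every μΩ matters. The 1.52 V and 254 W figures are from the discussion above; the 1 mΩ path resistance is purely illustrative:

    # Sanity-check the current implied by the quoted figures
    P = 254.0   # package power in watts (from the review)
    V = 1.52    # core voltage in volts (from the comment above)
    I = P / V
    print(f"current: {I:.0f} A")            # ~167 A

    # Power burned in the delivery path per milliohm at that current
    R = 1e-3    # 1 mOhm of path resistance (illustrative)
    print(f"I^2*R loss: {I**2 * R:.1f} W")   # ~28 W per mOhm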


That's already been a problem for a long time. When CPU speed changes, the transient currents are ridiculous, and the dI/dt is bananas, too. This is one reason why, a few generations ago, Intel went with a fully integrated voltage regulator. The mainboard people were unable to build good, efficient VRMs off-chip.


Maybe it's not possible to regulate such a high current at gigahertz speeds off-chip? The board traces and pins might have inductance high enough to negate the smoothing capacitors at these speeds?


Yeah, I mean there's no way a part on the other side of a CPU socket is doing anything in the gigahertz range. Those frequencies are covered by layers and capacitors in or on the package itself. What's hard about modern VRMs is being able to step from Icc(max) to zero while overshooting < 25 mV, or the forward load step from nothing to ~200 A in an instant without dropping. The specs are in Intel's VRxx docs (e.g. VR13).


> I guess every μΩ of resistance counts

More importantly, every μH of inductance counts. Running 200 A into the chip is not that difficult; the real difficulty is the high di/dt: how to maintain a nicely regulated voltage when the current consumption suddenly rises from 10 A to 200 A within a few microseconds.


> μH

It's a typo, should be nH. 1 μH is a huge value.
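
To put rough numbers on why even nanohenries matter, here's a minimal sketch. The load step is the parent's 10 A to 200 A figure; the 1 nH inductance and 1 μs timescale are illustrative, not measured:

    # Voltage sag across a parasitic inductance during a load step: V = L * di/dt
    L = 1e-9      # 1 nH of loop inductance (illustrative)
    di = 190.0    # load step from 10 A to 200 A (per the parent comment)
    dt = 1e-6     # over one microsecond (illustrative)
    sag = L * di / dt
    print(f"sag: {sag * 1000:.0f} mV")  # ~190 mV per nH, huge against a ~1.3 V rail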


They do. CPUs tend to run at pretty high core voltages to achieve their high clock speeds compared to GPUs, so GPUs consume even more current: 300 W at ~1 V = 300 A.

In AMD's case the CPU is supplied with a common core voltage that's regulated by a per-core ultra-fast LDO. There is also a power capacitor (MIM cap) on the core die.


Can someone calculate the watts/meter^2 on this chip, for science?

Eyeballing the chip, it looks like ~5 cm^2? This would mean it puts out 500,000 W/m^2, or about 50 times the heat of the surface of the sun. Do we have cameras fast enough to see this explode if the heatsink falls off?


>This would mean it puts out 500,000watt-meter^2 or about 50 times the heat of the surface of the sun

Where are you getting this figure? A random website[1] says the heat on the surface of the sun is 62,499,432 W/m^2.

Also, the die sizes are on page 2: 198.4 mm^2 for the 10-core part. Divide the total power (254 W) by that and you get 1,280,236 W/m^2.
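
For anyone who wants to reproduce both numbers, here's a minimal sketch. Die area and power are from the review as quoted above; the solar surface flux is computed from the Stefan-Boltzmann law rather than taken from the website, as an assumption for comparison:

    # CPU power density vs. the Sun's surface, per the figures above
    P_cpu = 254.0              # watts (review's peak measurement)
    area = 198.4e-6            # die area in m^2 (198.4 mm^2, review page 2)
    cpu_flux = P_cpu / area
    print(f"CPU: {cpu_flux:,.0f} W/m^2")        # ~1,280,000 W/m^2

    sigma, T = 5.67e-8, 5778   # Stefan-Boltzmann constant, solar surface temp (K)
    sun_flux = sigma * T**4
    print(f"Sun: {sun_flux:,.0f} W/m^2")        # ~6.3e7 W/m^2
    print(f"ratio: {cpu_flux / sun_flux:.3f}")  # ~0.02, i.e. ~2% of the Sun's surface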

[1] https://www.pveducation.org/pvcdrom/properties-of-sunlight/t...


Excellent, thanks! And no, that number was from a random website :). Looking at it, I think it's the solar irradiance at the Earth's surface. 1000 watts per square meter is way too low; dunno what I was thinking.


This is the CPU power density infographic you are looking for: https://i0.wp.com/semiengineering.com/wp-content/uploads/201...


It's pretty funny that after 20 years we're off the right side of this graph by 100x, right? Current chips are somewhere between "internal combustion engine" and "surface of the sun".


Oof, that CPU is going to run hot at full load. I imagine running an air cooler is impossible on that one, making this a no-go for me.


The Prescott-era chips used to get complaints about the amount of heat they produced; "PresHot", "Toaster", etc. were slung about. A quick bit of searching dug up that it had a TDP of 115 W... and now we're talking about chips that more than double that. Yikes.


Prescott had one core, though; this has ten. 115 W was with an order of magnitude fewer transistors. Even back then, that was terrible performance.


The TDP on this chip is only 125 W; it's just that TDP has started to lose its meaning. Intel's 125 W figure is the sustained limit (PL1), while the short-duration limit (PL2) is around 250 W, and many boards ignore the limits entirely. These days the peak power consumption is "as much as we can get into the chip without exceeding temperature limits".


enthusiast cooling tech has advanced quite a bit, too.


Looking at the reviews of the Noctua air cooler, it does better than those giant water coolers.


Air coolers can do a better job than AIOs depending on the situation, for sure. I use a Noctua air cooler on my 3900X, and it's great. But when I had the same cooler on a 2700X, it was awful, and moving to a Corsair H60 (120mm radiator with a single fan) was a huge improvement.

...because the 3900X is in a full-tower Fractal Design case and the 2700X is in a cramped 4U HTPC case by Silverstone, which is rackmounted in a Gator travel case. There's just no airflow inside that case, so the AIO's ability to move heat out is critical.


Well, it doesn't really bother me if they have an end-to-end pipeline. Why shouldn't the top-of-the-line CPU consume 250 watts? If AMD "overclocked" their existing design up to 250 W and 5+ GHz, it would just be another part in the product line.

If you want the absolute best perf, then you buy it. Otherwise you buy one of the less expensive, lower-wattage parts.


Because Zen 2 chips can't be overclocked much, while Intel's *lake chips can, and Intel needs to ship virtually overclocked chips to stay competitive with Ryzen.


Overclocking has always been about eating into the design/binning margins. So, sort of by definition, if a CPU doesn't "overclock" it's because it's already overclocked.

What Intel is doing here isn't "overclocking"; they are just tightening their CPU margins to the same levels AMD has.

In other words, Intel isn't overclocking their CPUs any more than AMD is. The only real difference is the fmax of the design, which in this case appears higher for the Intel parts.

Now, if you want to argue about how efficient a design is at a certain power level, that is more logical. And it's a semi-interesting case (i.e. downclock the Intel part slightly so that the performance on a given benchmark is identical to the AMD's, then measure the power draw).


Indeed. The CPUs aren't that much off in terms of price now, but between motherboards, cooling solutions, and power consumed, I would guess an Intel build is going to cost hundreds of dollars more than an equivalent AMD build.


It's becoming really difficult to figure out whether an X-core i5 is better or worse (even for a specific purpose) than a Y-core i7 or any other combination of [model, clock_speed, num_cores].

Last time I bought a machine I cut this Gordian Knot by not giving a shit, which has to count as some kind of failure of branding.


After GHz stopped mattering (and maybe to some extent it never did)... I lost track of all things CPU and what matters.

Anytime I looked into it I felt like I got a lot of truisms and mixed advice, and the PC enthusiast crowd seems equivalent to the 'pixel peeper' crowd of photography, obsessed with misc stats that I'm not sure will matter to me, or anyone.


These days, choose a processor that meets your core count needs and budget. Some applications are primarily single-threaded, so for those you might want a processor with fewer, faster cores. For parallelizable tasks like compiling code (up until the linking stage), more cores is better.

It might also be worth buying an AMD just to support them, but OEMs still mostly favor Intel. Performance per dollar is usually a lot better on the AMD side.


And sometimes your new machine is slower than the old one. I've seen that a couple of times with laptops people purchased. They replaced a 2-3 year old laptop that cost $2500 with one that cost $1200 (but lighter and thinner), and the new one had the same number of cores and ran slower.


Depends on the form factor. On desktop, yes, AMD might be better for your money, but as the owner of a Ryzen 3500U laptop I'd recommend Intel to anyone. For some reason Chrome is more janky than on my old i5-4300U, and it lags when playing 1080p x265 files in every player I've tried.


Can you check chrome://gpu and make sure that Video Decode: Unavailable does not appear? Video decode should be handled by your GPU, not the CPU. If this is Linux, then you might not be using the GPU at all, which would explain both symptoms.


Hardware-protected video decode and out-of-process rasterization are the only ones unavailable. Everything else is hardware accelerated. Another thing I notice is that the GPU stays at a steady 20% and rarely ever shoots up, which I believe is what contributes to the video player lag. No matter what I try in Windows 10 (performance mode, gaming mode), it still won't go beyond 20% when playing videos.


The new Ryzen 4000 laptop APUs are quite an improvement over last year's. It's important to note that the APUs of each generation are always an architectural generation behind (so the 3500U is actually Zen+ architecture, not Zen 2).


They might be, but I am talking about the situation RIGHT NOW. LTT also talks about this here: https://www.youtube.com/watch?v=Nfz46HXvPLc It was very noticeably janky out of the box; with updates it became better, but IMO it's still not at Intel's level yet. Also, battery life is better on the Intel side and will only improve with the 10nm chips they are shipping now. I have yet to try a Linux distro on this machine, but I'm looking for something with decent touch support, so it's all Windows 10 right now.


> I lost track of all things CPU and what matters.

The things that matter vary based on your use case. Your best bet would be to find benchmarks for an application that matters to you.

If that's too much work, AMD's Ryzen platform is the best general-purpose platform at this point.


Some of the biggest factors for the average person that get disregarded are video cards and cooling. Modern CPUs absolutely need good cooling or they will throttle and create stutters and hiccups. A separate video card makes high-res interfaces more fluid (4K hooked up to a monitor or TV) _and_ takes that heat off the CPU, which is usually trying to boost and throttle inside a laptop or poorly cooled PC.


Noctua makes some really good heatsinks. It's my understanding that most users don't need liquid cooling. When would you say someone needs to switch to liquid cooling?


My guess is that a big Noctua heatpipe heatsink would keep a CPU from throttling most of the time and cool it well, but it would probably be loud when it has to get rid of a lot of heat. The great thing about liquid coolers is they can get rid of more heat with less fan speed and end up quieter (if the pump isn't loud).


You also need good air flow inside the case. Something like big tower with plenty of empty space and additional case fans. If your case is tiny, liquid cooling might be better.


Intel's marketing isn't helping any. They have Comet Lake and Ice Lake parts all in the "10th gen" bag, which is misleading at best.


When I last bought a laptop, NotebookCheck's [1] CPU ranking was helpful.

[1] - https://www.notebookcheck.net/Mobile-Processors-Benchmark-Li...


I agree, but I thought it was just a factor of me getting older. 25 years ago we used to sit around at lunch and discuss the latest CPU technology and argue which one was better. Now it just doesn't matter (for the most part) apart from supporting one manufacturer over another or choosing based on price.

I don't have the time or inclination for digging into the minutiae around CPUs and their names. It's no longer fun or really even necessary, unless you're into HFT or some other activity that requires every last processor cycle. Meh.


I don't know why Intel has continued down this path so aggressively. The article says that they introduced 32 new processors. This is a small incremental update to keep the same architecture and process size from lagging behind the rest of the ecosystem, and now there are 32 new CPUs that don't perform much better than what came out half a decade ago.


What do you expect Intel to do? Keep selling the same CPUs? Declare themselves bankrupt? Switch to TSMC for chip production? They are doing what they can in this situation. And I'm sure that their sales are far from zero.


I think there is a middle ground where they don't release 32 brand new CPUs. Have you looked at the lineup of Intel CPUs? There are now hundreds with extremely minor differences.


I think it's about the silicon lottery. Good dies go into high-end CPUs; bad dies go into cheap CPUs.


It's really not that hard for the consumer parts. Until you get into the HEDT range, each processor has better single-threaded performance than all the price tiers below it, and possibly more cores too. This gets more complicated if you want to compare across multiple generations, but that's always the case.


> Through our tests, we saw the Intel Core i9-10900K peak at 254 W during our AVX2-accelerated y-cruncher test. LINPACK and 3DPMavx did not push the processor as hard.

That's a lot of power. Granted, on non-AVX code the max they measured looks closer to 190 W, but that's still dramatically more than the 140 W max that the 16-core Ryzen needs.


I'd prefer the CPU to draw as much power as it wants if that lets it perform a given task faster. So I'm not sure that's really an argument against Intel. If it wants to draw a kilowatt for an extra 20%, go for it.


That is a lot, but not beyond what most PSUs can handle these days. Also, it's only when you're stressing all 10 cores. But it is significant.


Certainly true, but as the article notes, it means that you're going to need to account for a good cooling setup when you build one of these systems. The last few systems I've built (Haswell i7 and Skylake i7) have used relatively inexpensive air coolers, which have been quiet and effective. That's just not going to cut it here.


A buyer might need to ask themselves if they're good with a liquid-cooled setup. Because moving past air-cooled is expensive & potentially messy.


Considering an NH-U14S can cool the 3990X, a 280 W TDP chip, large air coolers won't have a problem with this chip.


I very much doubt these processors actually require liquid cooling.


Nah, they'll just throttle.


The test bed for this review uses a "Thermalright TRUE Copper" cooler, which looks to be an air cooler from the mid-to-late 2000s. Needless to say, AnandTech probably didn't run their benchmarks with insufficient cooling.


What do you mean? 250W is nuts.


The test bed in this review uses a decade-old air cooler, and the "real world" power draw was measured at 125-150 W, peaking at 200 W.


Liquid cooling is neither expensive nor messy in the current age.

In Europe I can find a half-decent 120mm AIO for less than $50 ATM.

It's only expensive and messy if you go balls-to-the-wall with a custom loop.


I avoid liquid cooling because it is well known that even a high-quality CLC will lose coolant through the tubing over the course of years. At about 5-7 years you will need to crack open the CLC and pour some more DI water in. Thermal paste replacement is a bit less messy, and even then the failure of thermal paste can be more graceful if you make good decisions (i.e. avoid liquid metal).

I consider myself a power user and I'm closing in on six years on my current build. I'm not sure I want to bother with CLC when air has almost as good thermal performance for less money and maintenance.


While I agree with you, if you're building a small ITX system, 120mm AIOs are the only option that fits in a tight space and offers good cooling, since ultra-slim air coolers are too weak for any performance CPU.


A 1x120mm AIO will have trouble rejecting 250 watts of heat while staying under 80°C.


I am not worried about the PSU; I'm worried about the VRMs (and their cooling).

Maybe I'm also a little worried about PSU fans spinning more, as my current PSU doesn't spin its fan unless it's over 45% load.


Then this CPU is not for you.

It's for people who want maximum performance outside of Cinebench and rendering videos; the presumption is that you can get a motherboard that actually supports the CPUs it lists on the box, power demands and all.


This is a /relatively/ fair comment, in isolation.

But a couple of things stick out.

1) Intel generally hides power draw by talking averages, not totals.

2) Power/Performance ratios are generally used to compare against AMD, and in this case, AMD is winning.


Power/performance ratios were used against AMD when AMD was already lagging behind in performance, and power was actively hindering performance (via the heat generated, which was limiting the headroom for clock speeds).

No one complains when a chip is performing extremely well and drawing a lot of power, like the i9 clearly is; they complain when a chip is drawing a lot of power and underperforming.

-

And honestly, the whole TDP debate is a joke. Everyone wants their own deeply flawed benchmark: some people want it to be TDP with AVX-512 instructions, which is not a realistic workload for most people; some people want what Intel puts on the box.

TDP is like process nodes. It used to be easy to just compare two numbers, but with complex boosting rules, increased core counts, weird interactions with AVX and all those cores, it's pointless to just compare.

What matters in non-commercial consumer land is whether it can be cooled by a reasonable cooling solution. When you're talking about a $100 CPU, "reasonable" is what usually comes in the box. When you're talking $500+, I would say a 2-fan AIO or a high-end air cooler is what's reasonable (that's why there's no cooler in the box once you get to this performance point), and reviews are showing it performs just fine with both.

Actually... as if to make my point about TDP, some reviewers were finding this not to be the case. It turns out some mobo manufacturers enabled "MCE", which essentially throws voltage at the CPU to try to get higher clocks with more cores enabled. It doesn't follow Intel's specs to do this either: https://www.anandtech.com/show/6214/multicore-enhancement-th...

So just like almost every "automatic overclocking" solution that's ever shipped, it throws the TDP completely out the window. If you expected Intel's TDP to be correct you'd be sorely disappointed, but if you based the TDP on what those motherboards dump into the chip you'd also have an unfair comparison. Yet I promise people will be hollering from the mountaintops that the numbers those reviewers found are the real numbers...


If people want different things, then it makes sense to split the term.

Using the same term for different things is obviously confusing.

For me, I want to know what kind of PSU/VRM solution I need if I want to use a CPU.


TDP is never really going to tell you that; you need to look at each component's quality relative to other components of the same type.

There are a lot of "1000W" PSUs out there that will blow up at any load within a year or two (or just come apart: https://ae01.alicdn.com/kf/U7f175c32ac9844f7ac869f1285e1c284...), meanwhile a quality 500W PSU will hum along for years under components whose combined manufacturer-stated TDP is well above that number.

Same with motherboards: if it's a quality motherboard and it supports a certain CPU, it's going to work well. Any crappy motherboard can claim it will handle any CPU.

VRMs are pretty disproportionately marketed anyway; unless you're chasing records with LN2, VRMs aren't going to matter very much. People like Buildzoid have gotten people whipped into a frenzy over them, but the extra $50-100 people are spending on VRMs is much better spent in so many other places (or just kept in your pocket), especially when you consider that boards supporting overclocking almost always have good-enough VRMs in this day and age.


I was humming and hawing over moving back to Intel until I read this. So the conclusion I took from this is to wait for Zen 3 to come out and then use the price dumping on Zen 2 cores to get a 65W TDP Ryzen 7 3700X for considerably less coin than a 9700K.

Then do the same with Zen 3 when Zen vNext comes out, because I'm not buying a new board and socket just to get a minuscule increase in grunt this time or next time.


I guess you've got to give the Skylake architecture some credit. Their 5-year-old uarch is still pretty competitive. This is almost like the Pentium 4 vs. AMD64 days. Wonder if Intel will pull out a Core 2 Duo.


Skylake (and its derivatives) is only competitive if you disregard power usage.

For environments where performance/watt matters (e.g., servers, laptops, etc.), Skylake is significantly behind Zen 2.


Sure, but none of these desktop processors are used in any of those environments.


Skylake (or the Intel platform generally) still has an advantage in the laptop/tablet area because it's better optimized for idle power usage than Ryzen.


The 4xxxHS parts seem to be rather competitive there as well. Note that these are monolithic Zen 2 APUs; I suspect this is the same silicon that will ship as 4xxxG parts on the desktop.


Skylake is 4 years older than Zen 2, though.


It doesn't really matter if the Skylake architecture is still what Intel is offering in its contemporary products.


> The new CPUs have the LGA1200 socket, which means that current 300-series motherboards are not sufficient, and users will require new LGA1200 motherboards. This is despite the socket being the same size.

Well that is disappointing.


With Intel's track record, I would expect nothing less.


Seems like a dumb move, but I'm sure they have their reasons.

I bought a Z390 mobo last year with some very beefy VRMs. It was near the end of the cycle for the 9th-gen parts, so I was thinking I would consider upgrading whenever they released the next line of chips. I would honestly consider buying the 10900K if it were compatible with my motherboard, especially now that I'm compiling code at home, but I'm not willing to also pay $200-300 for another high-end motherboard.

I sort of suspect they looked at the last-gen motherboards and worried that they wouldn't all be able to handle the increased power demands. Maybe they decided not to support them to avoid making people look up each SKU to see if it could handle the new processors.


> seems like a dumb move, but I'm sure they have their reasons.

Uhm, Intel has been doing this forever. One generation of boards for one generation of CPUs. At most two.


> but I'm sure they have their reasons.

Making money by selling new chipsets.


The list of reasons is pretty short when they are not adding support for any new interconnects or changing the microarchitecture.


How come the fastest desktop CPUs from Intel don't have AVX-512?

The Xeon CPUs have AVX-512.

Laptop CPUs like the i3-8121U have AVX-512.

Why not desktop too? This has been going on for years.


Because the ring-bus architecture doesn't support AVX-512, but the ring bus has better latency, which is why many prefer it for gaming or audio production work.


This is still using the Skylake microarchitecture from 2015. I'm sure if Intel were shipping the design they wanted to be shipping, it would have AVX-512.


Perhaps this tells us something about the relative size of the high-performance desktop market compared to the market for MacBook Pros, which have Ice Lake CPUs with AVX-512.


Why speculate? If we take i9s to be roughly equivalent to the high-end CPUs we find in MacBook Pros, and the market distribution of cheap-to-expensive CPUs is similar in both desktop and notebook segments, then we can simply look at the earnings reports.

https://s21.q4cdn.com/600692695/files/doc_financials/2019/20...

From page 34:

6% of Intel's Desktop platform volume is $705Mn, so total volume is $11.75Bn. 5% of Intel's Notebook platform volume is $1080Mn, so total volume is $21.6Bn.
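The implied totals, spelled out (percentages and dollar figures as quoted from the report above):

    # Back out total platform volumes from the quoted percentages
    desktop_total = 705e6 / 0.06     # $705M is 6% of desktop volume
    notebook_total = 1080e6 / 0.05   # $1,080M is 5% of notebook volume
    print(f"desktop:  ${desktop_total / 1e9:.2f}B")       # $11.75B
    print(f"notebook: ${notebook_total / 1e9:.2f}B")      # $21.60B
    print(f"ratio: {desktop_total / notebook_total:.2f}") # ~0.54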

So the desktop market is slightly larger than half the size of the notebook market, but $12Bn is a lot of money to leave on the table and turn your back on. Especially since an aged microarchitecture threatens all CPU-related revenue streams.


What desktop applications use AVX-512?


If it were more available, more would.


Each time I see these kinds of reviews, I take comfort in knowing my eight-year-old i7-3770K is still chugging along at a 4.7 GHz OC. It's matched with 16 GB of RAM and a 980 Ti, and it handles most modern games at 60 fps just fine, albeit on lower settings. I also sometimes boot into Ubuntu to write some code while my girlfriend wants to use the MacBook, and I mostly just use VSC and write Node or Go apps.

Maybe in the future I'll just get a newer GPU, but for my needs of primarily gaming and coding I don't yet see a reason to upgrade the CPU and motherboard.


This is not an unreasonable spot to be in, but it's going to change in the next year or so I think for most gaming stuff.

The Ivy Bridge stuff is a little on the pokey side now; my i5-3570K was in my HTPC until earlier this year, when I replaced it with an i7-6700K. But a higher-end Ivy Bridge will still be OK for most games for a little while yet. And a 980Ti is still a great card; I upgraded to a 2070 Super because I needed Turing NVENC, but the 980Ti was playing everything great at 1080p and most things very well at 1440p. As things start to target the PS5 and Xbox Series X as their lead platforms, however, I would expect that to change. The PS5 will field eight Zen 2 cores, and that's going to be a significant step up. Modern game engines are already really happy with multicore systems (the 3900X in my desktop gets put to work by Doom Eternal) and this will only increase. Increases in expected throughput are going to put the 980Ti in a bad spot, too.

That said, Ivy Bridge and Haswell really have to take the crown for "longest-lived worthwhile CPUs". That HTPC (which was formerly my desktop, and had 32GB of RAM because of it) is now a NAS, and it's great. I wonder how long it'll last.


"That said, Ivy Bridge and Haswell really have to take the crown for "longest-lived worthwhile CPUs"".

Sandy Bridge era CPUs (i5-2500/k, i7-2600/k, etc) were the ones that really shook up the market. Much lower power envelope and a large jump in performance from Nehalem. Ivy Bridge was the Tick-Tock iteration of SB, providing higher power usage/heat output (counterintuitively), with about a 5% IPC increase IIRC, based on the famous 3D Tri-Gate process. It was widely considered a bust at the time.


Well, this is the problem with Intel. I'm typing this on a 5930K, a machine that happily overclocks to an all-core 4.5 GHz without even bumping the voltage. I run it with custom turbo steps (the minimum clock rate is now its previous 3.7 GHz factory turbo) up to around 4.7 with a liquid cooler.

Back when Broadwell came out, I looked at upgrading just the CPU, but clock-for-clock it didn't add anything, so I didn't. Then Intel changed the socket, effectively stranding me. So every year or so the machine gets a new GPU, faster NVMe, etc., but the CPU is still a 5930K.

I did something similar with an AM2 motherboard, but AMD kept releasing cores that were backwards compatible with that socket, and after they stopped I went another generation before swapping it. Something like 8 years with the same motherboard (and in the end I was running a CPU that wasn't even officially supported on it) and the same copy of Windows XP.

I don't really mind spending the money on faster hardware, but I really hate spending weeks stabilizing a complete motherboard/GPU/driver setup, installing applications, etc. If I could plug something like the i7-10700K into this motherboard I would buy it right now.

Bottom line, I think Intel is shooting themselves in the foot by changing the motherboard socket again. Sure, people will buy it, but there is a subset of real enthusiasts who might have just upgraded an early low/midrange Skylake to one of these newer higher-end ones. Instead of being a $300-400 upgrade, it's a new motherboard, RAM, etc. So they might as well just buy an AMD.


IIRC Ivy Bridge has almost the same CPU core architecture as Sandy Bridge, so there was almost no IPC improvement. Efficiency and GPU performance were the main things advertised.


I was thinking that, but I know a lot of folks still running Ivy Bridge and don't know anybody still running Sandy Bridge!


Ivy Bridge was a minimal upgrade over SB: higher power consumption with a roughly 5% IPC improvement under certain workloads. It was the trial for the Tri-Gate process, although it ended up being a bust (for that gen, at least; I'm not too up to date with the newer iterations).

Still running a slightly OC'ed i7-2700K here, and a Radeon 290X. In terms of performance, older games run fine in my Eyefinity triple-monitor setup, but newer ones I've been playing on only one screen recently, which means it's time for an upgrade. A 9-year-old CPU and a 7-year-old GPU performing this well so many years later just goes to show how slowly CPU and GPU improvements have iterated.


These CPUs don't have PCIe 4.

With Microsoft and Sony going all-in with their consoles and high-speed Gen4 SSDs, I imagine there are going to be a lot of faster drives on the market soon.

Unless it's an upgrade, I feel buying this CPU for a new build is a bad idea.

If I needed Intel I'd wait until their next generation at least.


> These CPUs don't have PCIe 4.

I'm fine with PCIe 4 not hitting mainstream for a few more years. AMD X570 motherboards all integrate noisy and failure-prone active fans directly on the motherboard because the chips get so hot. The only one I know about that didn't was wildly expensive, basically all heatsink, and apparently no longer in production.


I think the demand for PCI express 4.0 is going to be a lot higher than you realize.

The new PS5 is going to have SSD speeds of over 5000 MB/s, and it's coming out soon.

PC gamers are going to be jealous until they are able to have speeds like that. After all, they are supposed to be the "PC Master Race". But the only SSDs on the PC that can come close to the PS5's speed are PCIe 4.0 devices.


> The new PS5 is going to have SSD speeds of over 5000 MB/s, and it's coming out soon.

That's not notably different from 3500 MB/s in any meaningful sense. Loading-speed returns diminish very rapidly because what you experience is time, which is the inverse of throughput.
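
A quick illustration of the diminishing returns (the 10 GB load size is just an example):

    # Load time is size/throughput, so gains shrink as the inverse
    size = 10_000.0                  # MB to load (illustrative)
    for mbps in (500, 3500, 5000):   # SATA SSD, PCIe 3 NVMe, PS5-class
        print(f"{mbps} MB/s -> {size / mbps:.1f} s")
    # 500 MB/s -> 20.0 s; 3500 -> 2.9 s; 5000 -> 2.0 s:
    # the jump from 3500 to 5000 MB/s saves under a second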


Motherboards used to have issues with the fan running when it didn't need to, but to my knowledge that's been fixed. Unless you're running massive amounts of data through the chipset it doesn't get hot and on most motherboards the fan should stay off.


If you're not running truly massive amounts of data through it, then PCIe 4.0 provides no benefit over PCIe 3.0 anyway.


The first PCIe slot, at least one of the NVMe slots, and a fair amount of the IO are connected directly to the CPU; they don't touch the chipset. There's plenty of benefit you can get by using just those. If you're really trying to get the most out of PCIe 4, then you're on the wrong platform anyway. Threadripper has way more IO going directly to the CPU, and IIRC some Epyc motherboards have all IO going directly to the CPU.


My Asus X570 board's fan is not noisy. Even with the case open it doesn't stand out over my low speed, quiet case fans.


I noticed mine, and I'm definitely not alone. YMMV.


> The one I know about that doesn't is basically all heatsink.

I'd love to see that one.


That would be the Gigabyte AORUS X570 XTREME: https://www.gigabyte.com/Motherboard/X570-AORUS-XTREME-rev-1...


Not only are high-speed PCIe 4 SSDs starting to hit the market, but the newest Radeon 5600/5700 (XT and non-XT variants) GPUs support PCIe 4, and I imagine the next-gen Nvidia 3XXX GPUs will too. Given that these new Intel CPUs require new mobos, it seems wild to me that Intel wouldn't include it. I agree: if you need an Intel CPU, it seems like waiting is your best bet right now.


I have a Corsair MP600 and it is fast. It benchmarked in the top 2% of all drive benchmarks. I wouldn't waste my money on an outdated CPU that doesn't support PCIe 4.


It's ridiculous that a brand-new 10600K has at best only 33% better single-thread performance than my seven-year-old 4670K.


Outside of specialized instruction sets, I wouldn't expect major leaps in single-threaded performance from either Intel or AMD for the foreseeable future.


Is that because more cores for a given transistor count better addresses today's computing needs or because we've hit a limit where adding more transistors to a single core doesn't give us more performance anymore?


It's because the design techniques that allow a core to run more instructions per clock -- wider execution units, shorter pipelines, etc. -- also tend to reduce the maximum achievable clock speed.


I have a six-year-old i5-2500, and a 10700K is quicker but still not twice as fast as the 2500. It's really not worth the upgrade.

https://cpu.userbenchmark.com/Compare/Intel-Core-i5-2500-vs-...


I concur. Dual Xeon E5-2687W here (2×8 cores, Sandy Bridge). Turbo 3.8 GHz, 3.4 all-core. [eBay: ~$300 each, mid 2016]

It's still more expensive to buy a brand-new Threadripper or 16-core Intel (sic) than this bargain I got some 4 years ago.

I will probably get a Zen 3 platform once each CCX contains 8 cores (4 is just too limited for my virtualization needs; I kinda like that my Xeons are "monolithic").

Obviously this is for programming and learning shit, not gaming, but I can still manage a steady 60 fps with a beefy GPU.


And interestingly enough, the 10600K has a 20% higher base clock than the 4670K (4.1 GHz vs 3.4 GHz), and a 26% higher boost clock (4.8 GHz vs 3.8 GHz). That alone probably accounts for the majority of the difference. It's a shame that Intel has barely been able to increase IPC at all in the past decade or so.


In what benchmark?


Cinebench. The Geekbench results are even closer.


Yeah, frankly I'm sticking with my old 4th-gen Intel too. I'm not doing anything hardcore that needs 10 cores. I'm sure glad about the AMD resurgence and am expecting some response from Intel now.


I want to upgrade, but there just isn't a reason to yet. I'm still on a 4930K.


I mean I'd love to build a new computer, but there's just no point. The only genuinely difficult thing I want my CPU to do is game console emulation, and for that single-thread performance is king and I can't justify the expense for a 1/3 performance increase regardless of how long it's been.


Does anyone know of a good article that explains why Intel is having these problems with crappy 5-year-old CPUs? Is 10nm going to be competitive with AMD when it finally comes out?


The Intel part is on top of every single-threaded benchmark in this article, with a 5-year-old microarchitecture. Whatever you want to say about their failure to scale out the core count or to shrink the lithography, they are still in an enviable position of being able to spend the massive lead they had banked. What's going to be really interesting from here on is the cadence of innovation from both camps. Will Intel ship real product and leap ahead again? Will they ship a new product that's a turd? Will AMD ship another generation of improvements while Intel grinds out another Skylake SKU?


Considering we've been hearing that story for the last 5 years (and they have 10nm now, just apparently not at a volume/price point Intel is happy with), I wouldn't bet on 10nm turning things around for Intel. In the most optimistic scenario for Intel, AMD will be talking about 5nm by the time Intel can stop talking about 14nm.


For servers, maybe. I don't think it will be for frequency-bound tasks (even with the increased IPC). Intel expects to be in a better position when they ship their 7nm node in a couple years.


At this point I think they'll just skip 10nm for desktop CPUs and move on to 7nm.


Color me massively confused, but why on earth does the chipset have a 2.5G Ethernet interface that doesn't support 5G/10G?


They can do it this way and only use a single PCIe lane; 2.5G works over existing Cat5e cable plants and 10G doesn't; their 10G and 40G controllers cost $40 and this one costs $2?


PCIe gen3 is nearly a GB/sec in each direction per lane (actual bandwidth obviously less). That is nearly enough to reach line rate with 10Gbit Ethernet, particularly if TSO is employed to remove the framing/etc. overhead. Getting 750 MB/sec out of a 10G interface because it's constricted by PCIe is a lot better than getting 220 MB/sec out of 2.5G. Consider the difference when running a VM image from a NAS.
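
Rough numbers behind that (the 8 GT/s per-lane rate and 128b/130b encoding are from the PCIe 3.0 spec; the rest is straightforward division):

    # PCIe 3.0: 8 GT/s per lane with 128b/130b encoding
    lane_bytes = 8e9 * (128 / 130) / 8   # ~0.985 GB/s per lane, per direction
    tengbe = 10e9 / 8                    # 10GbE line rate: 1.25 GB/s
    print(f"PCIe 3.0 x1: {lane_bytes / 1e9:.3f} GB/s")
    print(f"10GbE:       {tengbe / 1e9:.3f} GB/s")
    # One gen3 lane covers ~79% of 10GbE line rate, vs. the
    # ~0.31 GB/s a 2.5GbE port can ever deliver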

Further, the point of the multimode 1/2.5/5/10 PHYs is that they do line-quality detection and downgrade. So you plug in Cat5, and it downgrades to 2.5 if the line quality is bad. If someone runs a 10-foot Cat6A cable, they are going to get the full speed. But in actuality, as I said on this board before, 10G requires Cat6A only for 100-meter runs. If you're in a house, small office, etc. where the runs are 5-10 meters or less, Cat5e works just fine. I've seen switch vendors mention that quality Cat5e can be good up to 45 m, which is farther than Cat8 is specified for 40GBASE-T.

It just looks like more product segmentation, since the 10GBASE-T spec has been around for about 15 years now.

edit: When it comes to UTP, the frequency response and crosstalk specifications effectively end up being per foot. The line attenuation and noise increase with cable length until the signal falls below the threshold required to reconstruct it at the far end, given a standard-compliant part. The length and cabling requirements also include the possibility of loss from additional wall jacks or patch panels. Many of the bulk Cat5e cable packs were sold for a long time as 350 MHz (or similar), which in name is better than basic Cat6 at 250 MHz, because it's actually pretty hard to screw up the twist/etc. requirements of the cable. The problems happen when it is poorly terminated. So the whole thing ends up being a bit of a crapshoot whether any given cable works to specification (most don't), but because the runs are far below the maximum line lengths it never matters. Plus, in the decade(s) since the specs were initially created, semiconductor advances have improved receiver sensitivity and the efficiency of the drive signal, meaning that if one went back and tweaked the 1G spec, it's quite likely the max transmission lengths could at least be doubled by specifying it against Cat5e with tighter margins.

This is also what allows 10GBASE-T SFPs today, which weren't initially possible (that, and mandating that they aren't compliant to the full 100 m).


It's not product segmentation. They cut the NIC price to $2 and increased the speed by 2.5x. This is really the only part of the platform that is dramatically faster than it was two years ago. Most AMD motherboards are still coming with 1 Gbps Ethernet, no matter how many nonsense words are in the product name.


The pricing difference is part of the product segmentation, because 10G+ has been reserved for "enterprise" parts despite not being much different at this point. There are ARM SoCs out there where the bare SoC is in the $40 ballpark and comes with 10G integrated. 10G isn't inherently more expensive; it's just been a decision not to repeat the "mistake" of the 100M-to-1G transition, where the price went from $$$$ to 1/10$ in the space of a year.


I don't understand what we are talking about. None of the CPUs in this article have integrated NICs. The NIC is a little pea-sized peripheral device on a PCIe port. If the motherboard makers thought anyone needed a desktop platform with 10GbE, they'd ship that. But 10GbE controllers cost a lot of money, draw a lot of power, and take more PCIe lanes. I think they are banking on the idea that people who want it will be happy enough with expansion cards or Thunderbolt peripherals.


I was conflating CPU+chipset with an ARM SoC.

A 10G port takes the same PCIe lane as a 1G port, particularly if you have PCIe gen4+ or are willing to potentially lose a bit with gen3. Similarly with power: with EEE, it doesn't burn the power if it's not needed, so for users on short runs or with 1G switches, the power utilization will be the same as a 1G port. And given that we are talking about 200 W parts, a watt or two for 10G isn't going to be missed.

And probably more of the motherboard manufacturers would put 10G NICs on board if there weren't a dearth of 10G PCIe NIC chips from manufacturers not already selling NICs as high-margin products. I.e., Intel selling its 10G part for more than the rest of the parts on the motherboard is going to keep motherboard manufacturers from putting it on every board.

It's sort of been a game of chicken, which will explode violently when the patents expire and a company like Etron or ASMedia starts producing them (both of which have shown plenty of in-house capability for producing high-speed PHY interfaces).


I think you severely underestimate the cost difference between 2.5G/5G SerDes and 10G, and the cut-throat nature of the PC business, where $1 of BOM cost is a world of difference.

There is also the question of the ecosystem, like switches. The TCO of 10G is still pretty much out of reach for most consumers.


The difference between CPUs at similar price points is getting to be negligible.

https://images.anandtech.com/graphs/graph15785/116023.png

For 50% of the price, you're only losing 30% in performance.


I'd always make a point of adding in the motherboard and RAM cost as a baseline here. Say an extra $300 (assuming we're not going completely low end).

So instead of $262 vs $488, your comparison should be $562 vs $788. Which means you pay ~40% more for ~40% more performance from your motherboard+RAM+CPU (the CPU being the dominant performance factor among these components); worked out below.
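
A minimal sketch of that arithmetic, using the $300 platform baseline above and the parent's "losing 30%" performance figure:

    # Platform cost shrinks the relative price gap between CPUs
    platform = 300.0              # mobo + RAM baseline (assumption above)
    i5, i9 = 262.0, 488.0         # CPU prices from the parent comments
    price_ratio = (i9 + platform) / (i5 + platform)
    perf_ratio = 1 / (1 - 0.30)   # "losing 30%" means the i9 is ~1.43x
    print(f"price: {price_ratio:.2f}x, perf: {perf_ratio:.2f}x")
    # price: 1.40x, perf: 1.43x -- roughly proportional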


Any clue on why that $1700 CPU is performing so poorly relative to the rest in the chart?


It's a 10-core from the time (2016) when that core count was somewhat exotic.


> We know that AMD’s Zen 2 has a slight 10-15% IPC advantage over 9th Gen Coffee Lake,

How did they calculate that? I had only heard that Zen 2's IPC is 15% up from Zen+. And from the single-threaded benchmarks, Zen 2 doesn't seem 10% ahead of Skylake.


I’m more interested in the Xeon W-1260P, which seems to be an i9-10900K with ECC enabled. It should work very well for my engineering software (which uses Intel math libraries that are slow on AMD). Hopefully they come out with it soon.


Those are pretty interesting, especially with 40 PCIE lanes.

I wonder if (like the Z490) the W480 motherboards will support PCIe Gen4, and whether there will be Rocket Lake Xeon W-1300 CPUs.


According to [0] it has 16 lanes. They are probably counting chipset lanes, which are fake lanes.

0: https://ark.intel.com/content/www/us/en/ark/products/199336/...


Ah, I see. Their marketing material referred to "total platform lanes"[0]. That's less helpful.

[0] https://www.servethehome.com/wp-content/uploads/2020/05/Inte...


If you want more PCIe lanes in a Xeon, you need a Cascade Lake part like the W-2275: 48 lanes, and 4-channel memory too.


Oh certainly. 16 CPU lanes is perfectly reasonable. I just feel somewhat misled by the marketing material.


W480 is still PCIe 3.0, from what I have read. I'll probably end up getting one of these:

https://www.asus.com/uk/Motherboards/Pro-WS-W480-ACE/


Nice. What interests you about these Xeon CPUs specifically?


The software I use uses between 2 and 4 cores; I'd say 70% of the time it's 1 core. Thus these new processors with 5.3 GHz on a single core should work well. I also have another application that can use 8, with diminishing returns past 2.

I thought I’d have to settle for the non-ECC i9-10900K, and risk memory errors for long simulations.

The Xeons have always been slow on clock speed, and high on cores. My ideal setup is a gamer PC with ECC, and this W-1290P seems to fit that.

I can’t use AMD as those Intel math libraries are 30% slower on AMD (equivalent clock and cores).


Cool, thanks for sharing.


Thanks for pointing it out. I was trying to figure out if we had to settle for the Core i3-10100E if we want ECC. Apparently not. The W-1290 has higher clocks than anything in this article, plus ECC.


You caught my typo. I meant W-1290P. Should be about $50 more than the i9-10900K.


$50 + a motherboard you can't buy yet.


The price is usually the problem.


Frankly these look competitive with AMD price wise.


Are they? The i5-10600K sells for $262. Its closest competitors are the Ryzen 3600/3600X, which currently sell for $172/$200 respectively. For $62 (~30%) more, is it really 30% faster than AMD? There are lower-tier SKUs (e.g. the i5-10500) that match AMD's price point, but I doubt they match the 3600/3600X's performance. At the high end, the i9-10900K ($488) is competing against the Ryzen 3900X, which currently sells for $400 and has 20% more cores.


It's the motherboard prices that will make the difference. Intel motherboards are always super overpriced, and you have to buy a new mobo with every generation.


That, and you also need to factor in the cooler. The consumer Ryzen chips all come with an adequate cooler, which can save you a significant amount over the Intel chips, especially considering the extra heat output you'd need to handle.


Keep in mind that the prices quoted in the article for Intel are 1ku (per-1000-unit) prices; I'd assume that the retail price will be higher unless Intel does some retail promotion.


Probably not, but for certain use cases Intel is just way easier and less trouble (building a Hackintosh with Thunderbolt 3 connections for a UA audio interface, for one). And yeah, not everyone has such special needs, but some of us do, even though we may have one or more "real" Macs at home.


That's because the AMD prices in the article are from each processor's release date, not today.


The big price drops with the current generation did help a lot in getting their offerings up to par. But I'd worry that the picture changes once you factor in the extra cooling you need to keep these in turbo, plus the motherboard: the cheapest Intel LGA 1200 board I can find is $150, while the cheapest AM4 is $50.


The i9-10900T is interesting: 10C/20T at a 35 W TDP.

But I'm afraid it's going to be one of those parts reserved for OEMs, like the 45 W Ryzen 3600 (non-X).


I wonder if they will dare to go to i11?




