If that's really true, a 16-core AMD part outperforming an 18-core Intel processor that costs twice as much, that's fabulous news for all consumers. Hopefully it will stop Intel from setting absurd prices for mid-range processors and generally push the industry forward.
If anything, the 3D chiplet tech will make their chips even more expensive, lower-yield, and more likely to overheat.
HBM memory for instance is still quite expensive, and even 3D NAND was more expensive initially. Now it's harder to tell because the whole market crashed and they reached a limit for density anyway.
I agree with you, I don't see how Intel will cool a 3D chip unless the bottom package is extremely simple. Even then, it looks like that design would be more expensive than the multilayer mini-board + ball grid array AMD explained in their presentation.
Between this and the 10nm delays caused by insisting on larger dies, it really feels like Intel's management is letting perfect be the enemy of good, whereas AMD is making smarter choices about where to compromise.
By the time Intel releases their awesome 10nm 3D chiplet stack, AMD will likely have moved onto 5nm compute chiplets with a 7nm IO chiplet. It's not clear to me how Intel will catch up in the next 5 years or so.
Uh - Intel is releasing Foveros this year. Assume it's going to be in a Surface type of device, with little/big cores for x86. AMD will not be on 5nm. You seem confused about the status of the industry.
Point being, you were wrong in the statements you have been making: 5nm will not be out by the time Foveros is introduced. I did not state Foveros is a competitor to a 128-thread Epyc processor.
Wanna guess how much the first 60 MHz Pentium chip cost in 1993? How about the first 90 MHz Pentium in 1994? The first Pentium II in 1997? The price of the fastest Athlon K7 in August 1999? All more than $750, even before adjusting for 20 years of inflation.
I think it's aimed at people who do specialised tasks such as video editing without the extra expense of moving to HEDT hardware.
Plus, these chips with two CCXs also have double the PCIe lanes: 40! So a number of NVMe drives, GPUs, 10GbE NICs, etc. can run together without fighting over lanes (and that's before counting the doubled bandwidth of PCIe 4.0).
It does still feel weird calling a 16-core/32-thread CPU with 72MB of cache 'consumer'.
There is a little confusion over whether it is 40 lanes or just feels like 40 lanes. PCIe is a serial interface, so each lane only needs an additional differential pair per direction. So it is feasible, as CPUs generally have spare pins.
As DDR5 is coming out next year, that will mean a new socket, limiting the upgrade path for the CPU, RAM & motherboard. Still, 16 cores at ~4.5GHz shouldn't be a problem for the near future (maybe even 5 years). Same goes for the PCIe bandwidth.
Edit: Just done some checking, and it appears the 3950X has 24 PCIe lanes (16+4+4), but they are twice as fast, so not far behind the current 2nd-generation Threadripper!
There is no confusion. It has x16 lanes for graphics/slots, x4 dedicated lanes for NVMe storage, and x4 lanes for the chipset.
The chipset multiplexes up to x16 lanes of "stuff" onto the x4 chipset lanes from the CPU.
All of this is physically determined by the pinout of the socket and none of this can change unless AMD moves to a new socket. What did change is the speed of the lanes - x4 lanes on 4.0 is twice as fast as x4 lanes on 3.0.
AMD, like Intel, likes to pretend that chipset lanes "count" as full CPU lanes, arriving at a total of 36 effective lanes. But that's nothing new either.
That's correct. However, since Zen 2 supports PCIe 4.0, which doubles the bandwidth of the previous generation, those multiplexed x4 lanes can now theoretically carry twice as much data. And it's not as if the "stuff" that gets multiplexed over that x4 link (USB, SATA, some x1 cards like sound, Wi-Fi, and Ethernet) suddenly needs twice the bandwidth, so in practice that x4 works like an x8 did on previous-generation motherboards. Great deal I say :)
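To put rough numbers on that (back-of-the-envelope only: these are spec signaling rates after 128b/130b encoding, and real throughput is lower once protocol overhead is counted):

```cpp
#include <cstdio>

int main() {
    // Per-lane, per-direction bandwidth after 128b/130b encoding (GB/s).
    // Real-world throughput is lower once protocol overhead is counted.
    const double gen3 = 8.0  * (128.0 / 130.0) / 8.0;  // 8 GT/s  -> ~0.98 GB/s
    const double gen4 = 16.0 * (128.0 / 130.0) / 8.0;  // 16 GT/s -> ~1.97 GB/s

    printf("x4 uplink, PCIe 3.0: %.2f GB/s each way\n", 4 * gen3);
    printf("x4 uplink, PCIe 4.0: %.2f GB/s each way\n", 4 * gen4);
    return 0;
}
```

So the same x4 chipset uplink goes from roughly 3.9 GB/s to roughly 7.9 GB/s each way, which is where the "works like an x8" framing comes from.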
It's a halo product for their AM4 platform. I've heard people recommend midrange Intel CPUs over equivalent AMD parts because you could upgrade to a top-of-the-line CPU later on that platform, but you couldn't go as high on Ryzen.
Does anyone honestly upgrade CPUs? I’ve built machines with upgradability in mind for decades but I can’t say I’ve ever actually done it. Every time I want to upgrade, inevitably enough time has passed that there’s a new socket out and I have to replace the motherboard anyway.
Looks like I was for some reason confusing PCIe 4.0 compatibility (which will not be supported even on top-end mobos) with CPU compatibility. Can't edit the post though.
I've only done it once before with an early Pentium, I believe from 133 to 300 MHz? I plan to do it again with my Ryzen 7 1900 as soon as the 3000-series hits the market.
I'm probably going to buy it; I do 3D animation/simulation work that gets huge benefits from parallelism, and it sounds like the Ryzen 9 beats everything other than the top-line Threadripper 2990WX. Even then, the 2990WX isn't a clear winner: it has 32 cores but considerably lower clocks (3GHz base, 4.2GHz boost), and it's super pricey at $1.7k. $750 for 16 cores at 3.3/4.3GHz is incredible.
Gamer and programmer here! Speaking from the perspective of a gamer with a "large but not infinite" budget over the past ~8 years: I play Counter-Strike, where any fps stutter is unacceptable, but I also enjoy prettier games like BF5. My current system is an i7 8700k, 32GB of RAM (just because), and a 1080 Ti.
Intel has always been the go-to. The #1 priority is single-threaded performance, first and foremost. Second is having at least 4 cores: most modern games can utilize at least 4, but it's also important to give the OS and other programs like Discord plenty of cores.
While Ryzen Gen 1 and Gen 2 have been amazing values, for gaming performance Intel has still been king. When you compare AMD to Intel FPS for FPS, Intel nearly ALWAYS wins.
CSGO is especially reliant on single-threaded performance, but this goes for most games. It's worth noting too that while games can use multiple cores, I don't believe most engines scale well past 8 cores.
Historically the only reason Intel has won on absolute top performance gaming FPS is because their raw single-threaded performance has beaten AMD due to most games still being bad / ineffective with multiple threads. For the first time in many processor generations this may actually not be true because of Intel’s stumble in their 10 nm transition.
That changed slightly with Ryzen: AMD closed the gap on single-threaded IPC (close enough, anyway) but the new issue with Zen 1 and Zen+ was memory/cache/inter-CCX latencies. Zen+ solved most of the memory latency issues but hadn't fixed cache/CCX latencies much.
Supposedly Zen 2 solved most of that (and some game benchmarks, like CSGO, suggest it really did). We'll see how it actually pans out, since there's still the issue of inter-CCX latency (and now cross-chiplet latency on top).
It doesn't solve all of it however. If your program has more than "$number_of_cores / 2" threads, you'll cross the CCX boundary at some point(s). On Zen 2, that instead changes to "$number_of_cores / 4" (CCX boundary) or "$number_of_cores / 2" (chiplet boundary).
Inter-CCX communication requires hopping over the Infinity Fabric bus, which (in the case of Zen 1; there are no newer benchmarks) increases thread-to-thread latency from ~45us to ~131us. I'm sure it was reduced in Zen+ and is probably closer to 100us by now. However, I'm not sure whether inter-chiplet communication will cost the same (e.g. its own IF link) or more (IO-die overhead).
Hopefully someone runs the same inter-thread communication benchmarks on Zen 2.
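In the meantime, here's a minimal sketch of the usual ping-pong approach for anyone who wants to try. It's Linux-specific (pthread_setaffinity_np), and the core IDs below are placeholders; check your actual topology with `lscpu -e` or hwloc, then compare a same-CCX pair against a cross-CCX/cross-chiplet pair:

```cpp
#include <atomic>
#include <chrono>
#include <cstdio>
#include <pthread.h>
#include <sched.h>
#include <thread>

static std::atomic<int> flag{0};

// Pin the calling thread to one logical CPU (Linux/glibc specific).
static void pin_to_cpu(int cpu) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

int main() {
    // PLACEHOLDER core IDs: pick two cores on the same CCX, then two on
    // different CCXs/chiplets, and compare the results.
    const int cpu_a = 0, cpu_b = 4;
    const int iters = 1000000;

    std::thread t([&] {
        pin_to_cpu(cpu_b);
        for (int i = 0; i < iters; i++) {
            while (flag.load(std::memory_order_acquire) != 1) {}
            flag.store(0, std::memory_order_release);
        }
    });

    pin_to_cpu(cpu_a);
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < iters; i++) {
        flag.store(1, std::memory_order_release);
        while (flag.load(std::memory_order_acquire) != 0) {}
    }
    auto end = std::chrono::steady_clock::now();
    t.join();

    double ns = std::chrono::duration<double, std::nano>(end - start).count();
    printf("round trip: %.0f ns (one-way hop is roughly half)\n", ns / iters);
}
```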
Recovering CSGO player here. I got a beefy box (TR 1950X, dual 1070 Ti's, NVMe, etc.) for ML and crypto mining, and gaming inevitably followed. That plus low-ping internet immediately boosted my ELO ranking, and I started having more fun. Life in general became less fun, since my sleep was suffering. That and the toxic CSGO community have kept me away, but I still relish the palpable advantage I enjoyed with better gear.
The trick is to play random matches and gradually add people you enjoy playing with. We started doing this a year ago, and now we have a small Discord server with a few dozen people who are all fun to play with. It's best to recognize when things are frustrating but not verbalize it 24/7, as it lowers the team's morale.
as another CSGO player, I am seriously considering one of the high end ryzen 3000 cpus. if the performance is as advertised, it looks like amd will be the single thread king for at least 6 months.
Before you pull the trigger, wait and see what the latency between the chiplets/memory does to framerates. We'll know once benchmarks are out, but remember to look not just at average framerate but at minimums too; you can have a high average framerate with terrible stuttering.
I doubt gamers will be a big market for that chip. You don't get a whole lot of increased capability / FPS with a high end chip compared to a mid range chip when the GPU is generally the limiter. But I do think they are going to sell a ton of 3600-3800 chips.
Came to say the same... the 3600 (non-X) is extremely competitive in gaming, and is pretty likely to have some good overclocking headroom with a good water cooler. Personally, I'm very much looking forward to the 3950X, and it will probably be my choice (even though that means waiting yet another 2 months to upgrade). Unless something significant happens soon with the next Threadripper, the 3950X is likely to be a very sweet spot that carries me for 5 years or more.
I've said in other comments that my 4790K is getting a bit old at this point: not slow for most stuff, but definitely hungry for more cores for a lot of tasks, and I'm looking to break past 32GB of RAM. I'd also been considering Epyc or even Xeon, as older/used Xeons can be very well priced. Guess I'm waiting until September.
> I've said in other comments that my 4790K is getting a bit old at this point: not slow for most stuff, but definitely hungry for more cores for a lot of tasks, and I'm looking to break past 32GB of RAM. I'd also been considering Epyc or even Xeon, as older/used Xeons can be very well priced.
I’m in nearly the exact same boat. I’d like to have ECC RAM the second time around for my home server, which the Zen chips reportedly support, though I don’t see many people using it. I’d also like better power usage. I think I’m going to wait one more year.
Just got a used Dell, dual 8-core CPUs and 128GB ECC ... main purpose is for a NAS and it'll sit in the garage because of the noise. I may look into what CPU upgrades are available and maybe throw some heavier workloads at it.
For now, planning on just playing around with it. I haven't decided if I'll be running Windows or Linux as the base OS yet.
It's a standard 2U enclosure... haven't tested the power usage, but it's a relatively current Intel CPU (E5000 series iirc), so should idle reasonably well.
Well, at first gamers said dual-core chips were useless. Then that quad-core chips were useless. Now they're testing the waters with octa-core chips.
Game developers have always made good use of the available resources, and they'll use the extra power here too. The newest techniques, like work-stealing queues, can scale to a large number of cores.
So games and gamers will use the extra cores. It's a much smaller jump from 4 cores to 16 than it was from 1 to 2.
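For illustration, a minimal sketch of the simplest version of that pattern. This is not a real work-stealing deque, just a shared-counter chunking scheme, but it shows how per-frame work naturally scales to whatever core count is present:

```cpp
#include <algorithm>
#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

int main() {
    const int num_items = 100000;  // e.g. entities to update this frame
    const int chunk = 256;         // grab work in chunks to limit contention
    std::atomic<int> next{0};
    std::vector<float> results(num_items);

    // Spin up one worker per hardware thread: 4, 8, or 16, whatever exists.
    unsigned workers = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> pool;
    for (unsigned w = 0; w < workers; w++) {
        pool.emplace_back([&] {
            for (;;) {
                int begin = next.fetch_add(chunk);
                if (begin >= num_items) break;
                int end = std::min(begin + chunk, num_items);
                for (int i = begin; i < end; i++)
                    results[i] = i * 0.5f;  // stand-in for real per-entity work
            }
        });
    }
    for (auto& t : pool) t.join();
    printf("processed %d items on %u threads\n", num_items, workers);
}
```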
In (recent) games made with Unity, a lot of workloads, like scheduling the GPU, are offloaded to separate threads with (almost) no developer intervention. Future games will extensively use the job system, which provides safe and efficient multithreading. Not sure where Unreal and the other leading engines stand, but things are looking very good for owners of high-core-count CPUs.
gamers will probably not be a big market for that chip, but it might be appealing for gamers with a large budget. unless intel has something big hidden up their sleeves (doubtful when they don't even plan to release their next mobile line until holiday 2019), that 16-core chip will likely have the best single-threaded performance on the market. plus it has to be a highly binned part to have the same TDP as the 12-core model even with a slightly higher boost clock. I for one am very interested to see overclocking results.
Just look at the cost of gaming GPUs (including the cost of watercooling?). Not to mention that a CPU can have a slower upgrade cycle than a GPU (since a CPU upgrade usually means upgrading the motherboard, possibly the RAM, and who knows what else while you're there), so staying on a top GPU is not at all cheap in the long run.
No it's not worth it IMO, but some people spend crazy amounts chasing a few extra fps.
More recently, the 9900k's MSRP was $500 but it was sold for $600 at launch due to scarcity. People wondered who would even buy that given its price but gamers (myself included) happily did and it sold out for months.
Yeah, but if you want the best consumer chip on the market, you're probably a consumer with special needs: someone who encodes/renders video, an engineer/scientist who needs tons of computation/simulation/visualization, a gamer who plays CPU-intensive games like Factorio/Dwarf Fortress/RimWorld, etc. In that niche, $750 is still very much a consumer price. The end computer setup will cost around $2k-$3k, which is pretty normal for this kind of niche consumer machine.
What really matters (for people who do the CAPEX and OPEX math on their assets, not gamers) is the performance/power ratio. Without that I don't see AMD eating much of Intel's lunch ($35B vs $208B market cap).
The Zen 2 16-core chip is 105W TDP. The chips it's wiping the floor with are 165W TDP. TDP doesn't necessarily correlate with real-world usage, but benchmarks show AMD chips run much closer to their TDP than Intel chips do, so the gap is probably actually wider. The strength of Intel chips is being able to pump a lot of power through them to hit higher clock rates.
It sounds like you're saying performance/power is a benefit for Intel, possibly based upon the history of AMD chips, but that line of thought has been wrong since the Ryzen architecture.
> benchmarks show AMD chips run much closer to their TDP than Intel chips do, so the gap is probably actually wider
AMD gives their TDP with turbo enabled (similar to real usage); Intel gives TDP at base clock, with no turbo.
There is still some variance for both between rated and real TDP, but the core of the difference is well established, and it dates back almost a dozen CPU generations, to when Intel had to guzzle power like crazy to superclock its chips in the vague hope of competing with AMD's products of the time (and it never reverted that once it took the lead back with the Core architecture).
It's kind of similar to the whole "Intel wants comparisons done with SMT off" thing: since the last 15 years belonged to Intel, the whole methodology is biased toward them... yet they still massively lose those comparisons.
This is accurate; several "95W" TDP Intel chips will happily guzzle upwards of 200W for sustained periods (provided they don't downclock due to heat).
No, this is absolutely not accurate. It only happens because motherboard defaults run all cores at turbo speeds simultaneously, along with automatic overclocking behaviors that are on by default.
The reasoning doesn't matter; what matters is what the average consumer sees. If most consumer motherboards do it "wrong", Intel should quote those numbers instead of the less common but "correct" case.
And almost every benchmark will be run on one of those motherboards. Or do you have a list of curated benchmarks that were done with the TDP limits enforced per spec?
This is wrong; Intel gives their TDP assuming spec-compliant turbo behavior. The problem is that nearly every popular motherboard out there enables turbo on all cores simultaneously ("MultiCore Enhancement" on ASUS boards, for example), which blows the TDP out of spec massively.
As a small-form-factor enthusiast, I can attest to this with utmost confidence. The chips will run at their expected TDP when configured as the factory specifies; that's just not the default on almost any enthusiast board from the well-known companies. In the case of ASUS it can actually be a bit of a battle to get things running as Intel specifies, between MCE and the automatic overclocking behaviors.
> The problem is that nearly every popular motherboard out there enables turbo on all cores simultaneously ("MultiCore Enhancement" on ASUS boards, for example), which blows the TDP out of spec massively.
If that's the case, then the performance is "massively blown out" too, since essentially all the benchmarks around are run on popular motherboards.
AnandTech did a test some time ago with a real, fixed 95W TDP[¹], and it ain't pretty.
It's definitely good for Intel that "every popular motherboard" is, uh, guilty of going out of spec, otherwise, the popular opinion of Intel chips would be significantly lower.
Regardless, I'm also not really convinced that this can be considered "cheating" by the motherboards. According to the official Intel page [²]:
> The processor must be working in the power, temperature, and specification limits of the thermal design power (TDP)
so ultimately, it's the CPU that sets the performance/consumption ceiling.
And you can do the same on quite a few AMD boards with "Precision Boost Overdrive", which gives you a 300W TDP on 1950x if your cooling can handle it.
Or you can ignore turbo and flat overclock the thing, which depending on workload will produce better results.
I think even against 1st-gen Zen, Intel had the performance/watt advantage, though the gap was smaller. It's Zen 2 that has completely obliterated Intel in performance/watt, which is almost a bigger shock than AMD taking the performance crown. Taking both the performance and efficiency crowns in a single generation, especially when it's not even a ground-up new design, is beyond impressive. Of course it was only feasible on the foundation built up by Zen 1, but it's still very encouraging after the long stagnation in desktop processors.
It's going to depend heavily on which specific CPU you're talking about and which specific workload, but at least on Handbrake the Ryzen 2700 was the performance/watt king per legitreviews: https://legitreviews.com/wp-content/uploads/2018/05/performa...
Maybe Intel took that back with their lower-clocked 8c/16t chips, dunno; this isn't something that comes up all that much in consumer reviews. But there's at least no significant gap in either direction, it's pretty much a wash.
The dual EPYC 7601 used 100W less than the Xeon competition in POV-Ray while also being the fastest system by a substantial margin, which puts performance, power, and performance/watt all firmly in the EPYC 7601's column on that test. And Intel took it back on MySQL. So a 50/50 split.
There are a lot of factors to unpack here, but the 8700K has two fewer cores than the 2700X, which is why the 8700K comes out behind. The direct comparison is the 9900K, but the 9900K ships with significantly higher stock clocks (4.7GHz all-core), which also reduces its perf/watt.
When limited to its "official" 95W TDP, the 9900K does about 4.3 GHz and has a higher perf/watt than Ryzen (both higher performance and lower power consumption).
So basically you are in a situation where the Ryzen draws less power at stock and has slightly higher efficiency at stock, but has a much lower clock ceiling, while the 9900K ships with much higher clocks and worse efficiency but has a much lower power floor if you pull the clocks back to 2700X levels.
Of note, the 2700X is actually pulling ~130W under AVX loads (33W more than the 95W-limited 9900K).
The Stilt noted that the default power limit AMD ships is 141.75W and the 2700X will run it for an unlimited amount of time (whereas Intel at least claims PL2 obeys a time limit, although in practice all mobo companies violate the spec and boost for an unlimited amount of time as well). So really "TDP" is a joke all around these days. Nobody really respects TDP limits when boosting, and it doesn't directly correspond to base clocks either (both 9900K and 2700X can run above baseclocks at rated TDP). It is just sort of a marketing number.
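For reference, the spec'd behavior is roughly: the chip may draw up to PL2 as long as a moving average of power stays under PL1 (the rated TDP), with tau setting the averaging window. A toy simulation under that model (the EWMA form here is a simplification of the spec, and tau=28s is an assumed common default):

```cpp
#include <cstdio>

int main() {
    // Figures from the 9900K discussion above; tau is an assumed common
    // default, and the EWMA form is a simplification of Intel's spec.
    const double pl1 = 95.0;   // sustained limit, watts (the advertised TDP)
    const double pl2 = 210.0;  // short-term turbo limit, watts
    const double tau = 28.0;   // averaging time constant, seconds

    double avg = 0.0;          // exponentially weighted moving average of power
    for (int t = 0; t < 60; t++) {
        // Spec-ish behavior: draw PL2 while the average is under PL1,
        // then fall back to PL1. Boards that ignore tau just hold PL2.
        double draw = (avg < pl1) ? pl2 : pl1;
        avg += (1.0 / tau) * (draw - avg);
        if (t % 5 == 0)
            printf("t=%2ds  draw=%3.0f W  avg=%5.1f W\n", t, draw, avg);
    }
}
```

Under this model a spec-compliant board lets the chip pull 210W for the first ~15-20 seconds of a load, then settles back toward 95W; boards that extend tau indefinitely just hold PL2 forever, which is the behavior being described in this thread.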
Epyc is a different matter, and once again more cores translate into better efficiency than fewer, higher-clocked cores. But the gotcha there is that Infinity Fabric isn't free either: the fabric alone pulls more than 100W on Epyc chips (literally half of the total power!).
Similarly, the 2700X spends 25W on its Infinity Fabric, while an 8700K spends only 8W. So Infinity Fabric pulls roughly 3x as much power as Intel's ringbus. This hits the consumer chips a lot harder; the mesh on Skylake-X and Skylake-SP is closer to Infinity Fabric power levels (but still lower).
Plus, GF 14nm wasn't as good a node as Intel's 14nm, so Ryzen was starting from a worse node.
Bottom line: core for core, power efficiency on first-gen Ryzen and Epyc was inferior, but of course Epyc lets you have more cores than Xeon. The Ryzen consumer platform's efficiency was strictly worse than Intel's, though.
And that goes double for laptop chips, which are the one area that Intel still dominates. Raven Ridge and Picasso are terrible for efficiency compared to Intel's mobile lineup. And AMD mobile won't be moving to 7nm until next year.
Because of that whole "nobody obeys TDP, and it doesn't correspond to base clocks or any other performance level" situation, we'll just have to wait for reviews and see what Zen 2 and Epyc are actually like. I'm really interested in the Infinity Fabric power consumption; that's potentially going to be the limitation as we move to 7nm and core power drops while AMD scales chiplet count up further.
I somehow completely missed this coverage of Infinity Fabric power usage. I wonder whether the IF share of total power stays the same in this generation or has been reduced; if not, improving IF power usage would remain a viable opportunity to make these chips even more power efficient. Given the IF numbers, it's clear I was even more uninformed about the power usage of first-gen Zen cores than I thought.
This is good to know. Are there any reputable benchmarks that show those advantages? Something like FLOPS/watt on some LAPACK or Tensorflow test, or amount of joules to compile the Linux kernel, or anything of this sort?
> It's Zen 2 that has completely obliterated Intel in performance/watt, which is almost a bigger shock than AMD taking the performance crown.
Why is this shocking? Zen 2 is 7nm and Intel's latest is at 14nm. It would be a far bigger shock if they didn't beat Intel in performance/watt. Zen 2 vs whatever Intel releases on 10nm in the next ~6-18 months is a much more interesting comparison.
AMD wasn't really a consideration except on a budget until they launched the Athlon in the late 90s. The success of the Athlon was as much about Intel's fumble with NetBurst as it was about the Athlon being a solid competitor.
It took Intel almost a decade to roll out Core and in that time AMD failed to capture the market despite making tremendous gains and legitimizing itself.
Ultimately AMD fumbled with the Bulldozer/Excavator lines of CPUs and lost almost everything they had gained.
The reasons AMD couldn't capture the market are complex but the short answer is that Intel influences every aspect of a computer from software, to compilers, to peripherals, to firmware.
> It took Intel almost a decade to roll out Core and in that time AMD failed to capture the market despite making tremendous gains and legitimizing itself.
And by "AMD failed" you mean Intel used illegal means to stop them, right?
The US, Japanese, and Korean fair trade commission equivalents all either blamed Intel or fined them. The EU was still too young in that area to act in time, but in 2009 it handed Intel one of its biggest fines ever, €1.06 billion, for what they did, along with an appropriate "oh, and if you do it again we won't be late, and we won't be so nice".
Calling it "AMD failed to capture the market" is technically true, but that's one funny point of view.
> Ultimately AMD fumbled with the Bulldozer/Excavator lines of CPUs
I've heard this baseless assertion before but so far I've never heard any semblance of support. Why do you believe that AMD "fumbled" with their Bulldozer line?
Not Ryzen-related, but it seems you're pretty up to speed with AMD products; does that include Radeon as well? I have a MBP and I am considering a Radeon VII for my external GPU (currently a GTX 1080, but only usable in Windows. Thanks, Mojave). My main concern is thermals and noise. Does it perform on par with Nvidia there, a little worse, or considerably so? Power draw I'm not that concerned with.
thanks for that. That's a huge bummer. Really wish Apple wouldn't force the Metal issue with Nvidia. Yeah, it'd be nice and all, but as a user, I'm fine with the various scripts I have to run after macOS updates to get the card running again but they just nixed that outright. Oh well, hopefully AMD can solve the fan problems or Nvidia and Apple can work something out, either or.
The Radeon 5700 and 5700 XT are supposed to be competitive with the RTX 2060 and RTX 2070 at slightly lower prices. Only reference cards right now, but things might be looking up once OEMs have a chance to put better coolers on instead of AMD's reference blower.
I'm planning to hold out for next gen when they get ray tracing hardware to be a bit more future proof (my GTX 970's not dead yet), but since I'm thinking of trading my Wintendo out for a Mac + eGPU setup it's nice to see that AMD could actually be a good GPU option now.
Those were just announced this week, so keep an eye out for 3rd party benchmarks soon.
Will probably pull the trigger on a Radeon VII myself, only because of the better Linux drivers and the possibility of hackintosh usage. As for my current system, I did a mid-cycle GPU upgrade (GTX 1080) and added NVMe a couple years ago. Still running a 4790K with 32GB of RAM, which does great for most stuff, but not so much for encoding or dev work (a couple of DBs and services in the background).
Sadly they both appear to have a total board power 50W higher than NVIDIA's comparable model(s), so NVIDIA might still win out on power. But we'll have to wait for third-party benchmarks to confirm that.
I would wait the month or so for the Navi cards to show up and see how they do on thermals and whether the application performance is to your liking; Navi is aimed at midrange cards (says the PR), but getting performance similar to your 1080 is possible.
AMD's recent cards have a reputation for shipping "hot/high-power" at stock and then doing much better when undervolted. Navi gets the die shrink, so the results for both power and thermals are likely to be even better, but benchmarking needs to be done before we have a full picture of what's changed.
It looks like the latest AMD cards are a bit more power hungry than their NVidia counterparts. On performance, the Radeon VII seems closely aligned with the RTX 2080 (not Ti), the RX 5700 XT is around the RTX 2070, and the RX 5700 sits above the RTX 2060. Depending on your workload, and whether it plays to AMD's strengths, it could be good to great. If you don't actually care about RTX features (and the slow framerates that come with them), then you're better off with AMD for the price difference, even considering the extra power draw.
I guess you are not aware that, at this point, everything suggests the upcoming AMD 7nm processors are significantly more efficient than similar-performance Intel processors.
The Ryzen processor is 105W, vs. 165W for the significantly slower Intel processor. Additionally, AMD's TDP numbers are much more accurate in terms of real peak usage than Intel's. So the Zen 2 processors will almost certainly have a much better performance/power ratio than the corresponding Intel parts going forward. That was definitely not the case for AMD in previous generations.
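Once reviews land, checking that ratio is trivial arithmetic; the only catch is dividing by measured package/wall power rather than the TDP label. A sketch with placeholder numbers (not real benchmark results):

```cpp
#include <cstdio>

struct Result {
    const char* name;
    double score;      // benchmark score, higher is better
    double watts;      // MEASURED package/wall power, not the TDP label
};

int main() {
    // Placeholder numbers purely for illustration; substitute real
    // measurements from reviews once Zen 2 parts are benchmarked.
    const Result chips[] = {
        {"16-core Ryzen (105W TDP)", 9000.0, 140.0},
        {"18-core Intel (165W TDP)", 8500.0, 230.0},
    };
    for (const Result& c : chips)
        printf("%-26s  %.1f points/W\n", c.name, c.score / c.watts);
    return 0;
}
```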
In that case, Intel should be in big trouble, because the advertised TDP seems to be less than half the power required to reach the chips' advertised performance:
> In this case, for the new 9th Generation Core processors, Intel has set the PL2 value to 210W. This is essentially the power required to hit the peak turbo on all cores, such as 4.7 GHz on the eight-core Core i9-9900K. So users can completely forget the 95W TDP when it comes to cooling.
The Core i9-9980XE pulls from 199W->245W depending on the workload and AVX instructions being used under stock settings. The Ryzen is listed as a 105W part, although when overclocked, I'm sure it will pull more than that.
> AMD chip at 105w (and AMD give real tdp), while the Intel chip is at 185w (and Intel give tdp in non turbo mode).
Both AMD & Intel list TDP for all cores active at base clock frequencies. The major difference is that Intel heavily leverages what they call all-core boost to never actually run at their base clock, allowing them to list ridiculously low base frequencies. For example, the i9-9900K's base frequency is listed at 3.6GHz, but the all-core turbo frequency is a whopping 4.7GHz. That difference is how you end up with a CPU that expects 210W of sustained power delivery (the 9900K's PL2 spec) even though its TDP is only 95W.
AMD doesn't (didn't?) have an all-core boost concept, so their base clocks are just higher, making their TDP number closer to real-world. But it's still technically a base-clock number and not a boost number, so you will still see power draw in excess of TDP.
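As a sanity check on those two numbers: dynamic power scales roughly as f·V², and near the top of the V/f curve voltage rises roughly linearly with frequency, so power goes roughly as f³. Under that admittedly crude assumption, the 9900K's figures line up surprisingly well:

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // Crude model: P ~ f * V^2, and V rises roughly linearly with f near
    // the top of the V/f curve, so P scales roughly with f^3.
    const double base = 3.6;   // GHz, 9900K base clock (the TDP-rated point)
    const double boost = 4.7;  // GHz, all-core turbo
    const double tdp = 95.0;   // watts at base clock

    double est = tdp * std::pow(boost / base, 3.0);
    printf("estimated all-core turbo power: %.0f W\n", est);  // ~211 W
    return 0;
}
```

That estimate comes out around 211W, right next to the 210W PL2 figure quoted above.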