
AMD Ryzen early benchmark


Schatten

Recommended Posts

8 hours ago, xKiLlFrEnZyx said:

What are the 2 pictures showing?

I would say that what you are looking at is a comparison of CPUs using the i5 6600 as the "zero" baseline, with everything else shown as a percentage more or less in performance. One chart uses traditional CPU benchmarks, the other uses game performance as the benchmark.
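If it helps to see the arithmetic, here's a minimal sketch of how that kind of baseline-relative chart gets built. The scores below are made-up placeholders, not the actual chart data:

# Minimal sketch of a baseline-relative comparison, with made-up scores.
scores = {
    "i5 6600": 100.0,             # the "zero" baseline
    "Hypothetical CPU A": 123.0,  # placeholder, not real data
    "Hypothetical CPU B": 87.0,   # placeholder, not real data
}

baseline = scores["i5 6600"]
for cpu, score in scores.items():
    delta = (score / baseline - 1.0) * 100.0  # percent above/below the baseline
    print(f"{cpu}: {delta:+.1f}%")

The baseline CPU always prints +0.0%, which is why the i5 6600 sits at "zero" in both pictures.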


4 hours ago, Gremlich said:

I would say that what you are looking at is a comparison of CPUs using the i5 6600 as the "zero" baseline, with everything else shown as a percentage more or less in performance. One chart uses traditional CPU benchmarks, the other uses game performance as the benchmark.

Ok yeah, I see it now... I was staring at it and was like, wait, what?!

Thanks!


Apparently, not having PCI-e 4.0 on a mobo is no big deal. Any benefit actually seems negligible, to the point where we won't notice. Read about PCI-e scaling here:

https://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_1080_PCI_Express_Scaling/

Last page of the report, if you're lazy:

Although we approached this review with little hopes of a revelation, we must admit we're a bit surprised at just how little the situation has changed with high-end graphics cards saturating the PCI-Express bus. The GeForce GTX 1080 is about 30% faster than the Radeon R9 Fury X from last year's PCI-Express scaling review. We were expecting whatever little performance drops we saw last year as we lowered PCI-Express bandwidth to become more pronounced since the GTX 1080 is a faster card being tested on newer games. This, however, doesn't seem to be the case.

When averaged across all our games, you lose virtually no performance as you go down from PCI-Express 3.0 x16 to PCI-Express 3.0 x8, no matter the resolution. Performance doesn't even drop with newer DirectX 12 and Vulkan games, including titles like "DOOM," which are known to utilize virtual texturing ("mega textures," an API feature analogous to Direct3D tiled-resources). If anything, mega textures has reduced the GPU's bandwidth load on the PCI-Express bus. There is, similarly, no noticeable performance loss (1-2%) between PCI-Express 3.0 x16 and PCI-Express 2.0 x16. This should come as a relief to both those gaming on older platforms such as Intel "Sandy Bridge" and AMD FX and those considering the expensive Intel HEDT platform just for its PCI-Express 3.0 x16 assurance when installing two cards in SLI. There are certain tests that load the PCI-Express bus more than others. "Far Cry: Primal" posts steeper performance losses as we switch between gen 3.0 x16 and gen 2.0 x8 at 1080p resolution. 

Performance losses begin to be noticeable as you get down to PCI-Express 2.0 x8, PCI-Express 3.0 x4, and below. Even here, the frame-rate drops are within 5-10% of PCI-Express 3.0 x16. If that makes a difference between "playable" and "slideshow" for you, you have something to consider. PCI-Express 1.1 x16 still has sufficient bandwidth with performance similar to PCI-Express 2.0 x8. As you switch to gen 1.1 x8 and gen 1.1 x4, the performance loss begins to become more noticeable. Even in the slowest PCI-Express mode, the GTX 1080 isn't much slower than a GTX 1070 running at Gen 3.0 x16.

An interesting trend we noticed is that frame-rate losses are more pronounced at the lower 1920 x 1080 resolution rather than the higher Ultra HD resolution. This is due to the higher frame rate at the lower resolution, which requires more PCIe bandwidth. The frame rates in such cases, with the GTX 1080, are still too high for you to worry about. Perhaps it makes a difference for some if they're gaming on fast 144 Hz monitors. 

We expected the chipset-linked PCI-Express 3.0 x4 (physical x16) slot to be the weakest option for you since this setup is sub-optimal and feeds on your chipset's bus bandwidth. The performance loss for this option is there with 5-8%, which isn't that significant. We'd still not recommend using such a slot since it slows down other devices in your system, such as your SSD, and conversely, other bandwidth-hungry devices can slow down your graphics card, causing frame-rate drops. It's also important to mention that installing graphics cards in such slots could cause the graphics card to heat up by limiting airflow due to its position at the very bottom.

Just out of curiosity, we investigated the impact of switching between the three PCI-Express generations on power draw. We are happy to report that switching to older PCI-Express generations made no difference to power draw. The measurements only differ in percentage, and to an expected extent due to slightly different frame rates which have the GPU stay idle a bit longer.

We hope our data helps settle a lot of flame wars on the forums. PCI-Express 3.0 x8 is just fine for this generation of GPUs, and so is PCI-Express 2.0 x16. You lose about 4% in performance at PCI-Express 2.0 x8 and PCI-Express 3.0 x4, but that's no deal breaker. If you're still gaming on a PCI-Express 1.1 platform, congratulations, your future-proofing has worked for many years, but it may now be time to upgrade.
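For context on why those gaps are so small, here's a quick back-of-the-envelope sketch of the raw one-direction bandwidth each link configuration in that review provides. The per-lane rates are the standard published figures; the rest is simple multiplication:

# Approximate usable bandwidth per lane, in MB/s, per PCIe generation.
# Gen 1.1/2.0 use 8b/10b encoding; gen 3.0 uses 128b/130b, hence ~985 not 1000.
PER_LANE_MBPS = {"1.1": 250, "2.0": 500, "3.0": 985}

def link_bandwidth_gbps(gen: str, lanes: int) -> float:
    """Total one-direction link bandwidth in GB/s."""
    return PER_LANE_MBPS[gen] * lanes / 1000.0

configs = [("3.0", 16), ("3.0", 8), ("2.0", 16), ("3.0", 4),
           ("2.0", 8), ("1.1", 16), ("1.1", 8), ("1.1", 4)]
for gen, lanes in configs:
    print(f"PCIe {gen} x{lanes}: ~{link_bandwidth_gbps(gen, lanes):.1f} GB/s")

Note how gen 3.0 x8 (~7.9 GB/s) lands right next to gen 2.0 x16 (8.0 GB/s), while gen 3.0 x4, gen 2.0 x8, and gen 1.1 x16 all cluster around 4 GB/s, which lines up with the review treating those tiers as roughly interchangeable.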

 


Alright. So according to this article, AMD Ryzen outperforms its Intel price equivalents, by quite a lot even. In fact, this article from Forbes.com and this article from TechPowerUp even claim that the Ryzen 7 1700X outpaces the 5960X and almost manages to keep up with the i7 6900K. That's a $389 CPU keeping up with a $1,000 CPU. And the 1800X is apparently yet to come.

If this is true (it might still be a little less impressive with retail samples as opposed to engineering samples!), then Intel is in seriously hot water, and I cannot wait to see what AMD has for Vega. So well done, AMD, you finally did it.
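To put the value claim in rough numbers, using only the prices quoted above and assuming the near-equal performance the articles describe:

# Rough perf-per-dollar comparison using the prices quoted above.
# Assumes roughly equal multi-threaded performance, per the linked articles.
cpus = {"Ryzen 7 1700X": 389, "i7 6900K": 1000}
relative_perf = 100.0  # treat both as ~100% of each other for this rough cut

for name, price in cpus.items():
    print(f"{name}: {relative_perf / price:.3f} perf points per dollar")
# Under that assumption the 1700X delivers roughly 2.5x the value.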


No, they have given out the 1700, 1700X, and 1800X, with pricing. However, that still doesn't show actual workload performance. I'm still not looking to jump over. Intel (and Nvidia for GPUs) keeps prices high when they can. They haven't moved normal consumers past 4 cores/8 threads yet; they've had 6/12 and 8/16 under the Extreme lineup for years (and probably will for years to come).

 


My Fire Strike stress test with an FX-8350 at 4.334 GHz, 2x GTX 970 SSC at 1440p, and 32 GB RAM, using an Arctic Cooling Freezer 240 closed-loop cooler that I couldn't even hear running. The GTX 970s ran at 69-75 °C and the CPU never got above 35 °C while running a steady 60 FPS.

[Screenshot: Fire Strike stress test result at 4.3 GHz]


1 hour ago, Gremlich said:

My Fire Strike stress test with an FX-8350 at 4.334 GHz, 2x GTX 970 SSC, and 32 GB RAM, using an Arctic Cooling Freezer 240 closed-loop cooler that I couldn't even hear running. The GTX 970s ran at 69-75 °C and the CPU never got above 35 °C.

[Screenshot: Fire Strike stress test result at 4.3 GHz]

Looks like Intel has their work cut out for them... about damn time.


21 hours ago, Caldon said:

Uh, Tom? This particular bench was done with an FX-8350 :D

But you're right. Intel finally has to get off its arse and compete again.

I forgot to point out that it was running a steady 60 FPS, which is where I cap my frame rate, even on a 1440p monitor.


Why do we care about an FX8350?  Its performance is well understood.

When you have Ryzen benchmarks, post em up.

It looks to me like the 1400X will be the one mid-range gamers should pick. Let's see how it performs at stock and overclocked speeds in games. Not video encoding, but games. Although its video encoding performance looks very impressive, it's not what most people are here for.


23 hours ago, Boildown said:

Why do we care about an FX8350?  Its performance is well understood.

When you have Ryzen benchmarks, post em up.

It looks to me like the 1400X will be the one mid-range gamers should pick. Let's see how it performs at stock and overclocked speeds in games. Not video encoding, but games. Although its video encoding performance looks very impressive, it's not what most people are here for.

The reason I posted it is that there are STILL backers who cannot afford a brand-new, current-technology rig and may just want to wait a little while longer.

I found out that not all chipsets on the new AM4 mobos will support overclocking, specifically the A320 series mobos (if they are even available), so do some research if you plan to OC.

http://www.pcworld.com/article/3175005/computers/amd-ryzen-motherboards-explained-the-crucial-differences-in-every-am4-chipset.html

From that article: B350 mobos are unlocked but do not support multiple GPUs, and apparently only AMD's X370 and X300 chipsets support two PCIe 3.0 x8 graphics card slots. A rough summary follows below the image.

[Image: AM4 motherboard chipset feature comparison]
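Here's that chart condensed into a lookup table, based on the PCWorld article above; treat the exact feature flags as approximate:

# Rough AM4 chipset summary per the PCWorld article above (approximate).
AM4_CHIPSETS = {
    "X370": {"overclocking": True,  "dual_gpu_x8": True},
    "X300": {"overclocking": True,  "dual_gpu_x8": True},
    "B350": {"overclocking": True,  "dual_gpu_x8": False},
    "A320": {"overclocking": False, "dual_gpu_x8": False},
}

def supports_oc(chipset: str) -> bool:
    return AM4_CHIPSETS[chipset]["overclocking"]

print(supports_oc("A320"))  # False: check the chipset before buying for OC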


Really, it's as expected. The good news is that modern games will get the most benefit, as they're far more multithreaded than older games. And if you like to Twitch stream on a single-PC setup, this should still end up faster than something like an i5 for simultaneous gaming and encoding on the same CPU. If you want to build a dedicated Twitch box, well, this guide is still probably a better value: https://www.reddit.com/r/Twitch/comments/47bzdc/budget_friendly_secondary_streaming_pc_guide, so AMD has problems competing there, too. These new CPUs are good for hybrid solutions, and for when people want something new.

Also, Ryzen requires Windows 10; earlier Windows versions aren't supported.

TL;DR: Ryzen is good when gaming is a secondary concern, and passable if you'll only play GPU-bound older games. That's still a huge market, and I'd expect Ryzen and AMD to do really well. But if you're going to play Star Citizen, an i5 or i7 will handily beat the equally priced AMD Ryzen alternatives.

 


  • 4 weeks later...

Finally saw someone post their log file from an 1800X CPU on the Open Broadcaster forums. Only 0.2% duplicated frames on the Slow preset. That's just as legit as advertised. I asked them to post a log file from a longer encode to get more data, but it looks good so far. Not a gaming CPU (for the money), but beastly for a dedicated encoding PC.
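For anyone wondering what that 0.2% means: OBS counts frames the encoder had to duplicate because it fell behind, so the figure is just duplicated frames over total frames. The counts below are invented for illustration, not taken from that log:

# What a "0.2% duplicated frames" OBS log line boils down to.
# These counts are invented for illustration, not from the actual log.
total_frames = 216_000   # e.g. one hour of 60 fps output
duplicated_frames = 432  # frames repeated because encoding fell behind

pct = duplicated_frames / total_frames * 100
print(f"{pct:.1f}% duplicated")  # prints 0.2% -> the Slow preset is keeping up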

