Witcher 3

The Witcher 3: Wild Hunt is the third and final game detailing the story of Geralt of Rivia by CD Projekt RED. The game is set in the aftermath of the events of The Witcher 2: Assassins of Kings. The plot combines several storylines, the two main ones being Nilfgaard’s invasion of the Northern Kingdoms and Geralt’s own quest to eliminate the Wild Hunt and other monsters roaming the lands. The Witcher 3 is a visually beautiful game that is a real challenge for GPUs to max out. (Summary via Wikipedia.)

Settings:

  • Nvidia Hairworks – Off
  • Shadow/Terrain/Water/Grass Density/Texture Quality – Ultra
  • Motion Blur – On
  • Blur – On
  • AA – Off
  • Bloom – On
  • Sharpening – Off
  • Ambient Occlusion – SSAO
  • Vsync – Off
  • Max FPS – Unlimited
  • Number of Background Characters – Ultra
  • Display – Fullscreen

First off, note that in this test AA is off. This is the first game we’ve seen where the GPUs still cannot handle the load even with AA disabled. Nvidia does better across the board, while AMD’s results suggest that triple CFX is a bit broken. This is a shame, because 2-way CFX is not at all close to 60 FPS and could use the extra power.

[w3_fps – average FPS chart]

While the 2-way scaling is acceptable for both AMD and Nvidia, the 3-way scaling for both is definitely subpar.
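Multi-GPU scaling can be quantified as the ratio of the n-GPU frame rate to n times the single-GPU frame rate. A minimal sketch of that calculation (the FPS values below are hypothetical placeholders for illustration, not our measured results):

```python
def scaling_efficiency(single_fps: float, multi_fps: float, n_gpus: int) -> float:
    """Fraction of ideal linear scaling achieved by an n-GPU setup."""
    return multi_fps / (single_fps * n_gpus)

# Hypothetical placeholder numbers, for illustration only.
single, two_way, three_way = 20.0, 36.0, 42.0
print(f"2-way: {scaling_efficiency(single, two_way, 2):.0%}")    # 90%
print(f"3-way: {scaling_efficiency(single, three_way, 3):.0%}")  # 70%
```

Anything well below ~90% for 2-way, or dropping further as cards are added, is the "subpar" pattern described above.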

[w3_scl – multi-GPU scaling chart]

Given AMD’s performance I really expected VRAM to be the cause; however, VRAM was very much in check:

[w3_vram – VRAM usage chart]

Once we look at the usage plots, we see some gnarliness going on when CrossFire is used:

[w3_1f / w3_2f / w3_3f – Fury X GPU utilization plots, 1/2/3 cards]

The card utilization was dropping to 0% so often that it was really hurting FPS. This showed through in the quality of the experience, where stuttering was observed. Quite simply put, the AMD experience was unplayable. Having said all of that, while the Nvidia performance was better, it still wasn’t that great:

[w3_1t / w3_2t / w3_3t – Titan X GPU utilization plots, 1/2/3 cards]
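Drop-to-zero events like the ones in those utilization plots can be counted programmatically from a logged trace. A minimal sketch, assuming a plain list of per-sample utilization percentages (the trace below is made up for illustration, not logged data from this test):

```python
def count_stalls(utilization: list[float], threshold: float = 5.0) -> int:
    """Count contiguous runs where GPU utilization falls below the threshold."""
    stalls, in_stall = 0, False
    for u in utilization:
        if u < threshold and not in_stall:
            stalls += 1
            in_stall = True
        elif u >= threshold:
            in_stall = False
    return stalls

# Made-up trace: utilization collapsing to 0 between bursts of work.
trace = [98, 97, 0, 0, 96, 95, 0, 94, 93]
print(count_stalls(trace))  # 2
```

A high stall count over a run correlates with exactly the kind of stuttering described above, even when average FPS looks tolerable.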

With SLI, something beyond GPU and CPU clock speed starts limiting the FPS. The game was essentially maxing out at 45 FPS even though the CPU and GPU could push harder. With quad-channel memory on X99 it’s unlikely to be memory throughput, so my suspicion falls on the SLI bridge and the PCIe bus.
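A rough back-of-envelope supports suspecting the bridge: in alternate-frame rendering the slave card’s finished frame has to be copied to the display card, and at 5K that is a lot of data. A sketch of the numbers (the resolution is from this test; the bridge and PCIe figures are nominal published rates, and real AFR transfer paths and overheads are more complicated than this):

```python
# 5K frame: 5120 x 2880 pixels at 4 bytes per pixel
width, height, bytes_per_pixel = 5120, 2880, 4
frame_bytes = width * height * bytes_per_pixel  # ~59 MB per frame

fps = 45  # the observed FPS ceiling
transfer_gbs = frame_bytes * fps / 1e9  # GB/s just to move finished frames

print(f"Per-frame payload: {frame_bytes / 1e6:.0f} MB")
print(f"At {fps} FPS: {transfer_gbs:.2f} GB/s of frame traffic")
# The legacy SLI bridge is nominally on the order of 1 GB/s, while
# PCIe 3.0 x16 is ~15.75 GB/s, so a bridge-side limit is plausible.
```

Around 2.65 GB/s of raw frame traffic comfortably exceeds what the classic SLI bridge is rated for, which is consistent with the 45 FPS wall.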

Witcher 3 at 5K is simply going to need some settings turned down to be playable.

13 COMMENTS

  1. To be honest, I started reading this article thinking “well, how good can it be? ERs are watercooling guys, so, nah! Can’t expect too much.” But after reading the whole of it, I have to say this is by far one of the best comparative reviews I’ve ever seen, and it really met all my expectations. Hats off to you guys!

    Say, pre-release rumors pitched the Fury X as a Titan X competitor, but AMD later scaled that claim back to the 980 Ti. So I think a comparison with the 980 Ti, along with clock-for-clock performance metrics, would have been a better comparison (and a seat-clenching brawl) than this, but nonetheless, this is way too good also!

    About the vRAM cap on the Fury X in a few games there: it also suffers similar performance degradation at 4K and below. And as you may have seen in other reviews, the Fury X does worse than the competition below 4K, and even worse at 1080p. I’ve dug through every review out there and haven’t found the reason behind this. What could the reason be?

    And that scaling on nVidia – you know Nvidia claims their SLI bridge isn’t a bottleneck, and so it is proved here :P. When AMD introduced XDMA, I had this belief: AMD took the right path, bridges will run short of bandwidth sooner or later, and PCIe already has the room to accommodate the bridge traffic. So XDMA did (well, now it is “does” :D) make sense!

    But it’s sad to see AMD not delivering a smoother experience overall. If they could handle that, it would certainly have deserved the choice over any of those higher average FPS numbers from the Titan X. But I think AMD is working on Win10 & DX12 optimized drivers and trying to mend it with bandages for now.

    My only complaint was the choice of the Titan X over the 980 Ti and the lack of clock-for-clock numbers, but other than that, this is a great review! Hats off to you again!

    • Agreed with Frozen Fractal. I was more than pleasantly surprised by the quality of this review, and I hope you continue to do more of these in the future. A 980 Ti would have been nice to see too, given the price.

      Keep up the great work!

    • Thanks! I agree the 980 Ti would have been a better comparison – then the prices would have lined up. “Sadly” we only had Titan X’s, and we weren’t willing to go out and buy another 3 GPUs to make that happen. However, if anyone wants to send us some, we’d gladly run ’em haha. I was simultaneously impressed with AMD’s results and saddened that, after all the work on frame pacing, things still aren’t 100% yet. Hopefully

  2. Question… you overclocked the Titan X to 1495MHz, which is “ok” for a water cooling build. I won’t complain… though I’m surprised that’s all you were able to achieve, as I can pull that off on air right now (blocks won’t arrive until next week). Main question though… why wasn’t the memory overclocked? A watercooled Titan X has room to OC the memory by 20%, bumping the bandwidth up to about 400GB/s, which brings it quite a bit closer to the 512GB/s of the HBM1 in the Fury X.

    • Although we didn’t mention it, the memory was running a mild OC at 7.4Gbps, up from the stock 6gbps – so yes, about the same as your 20% 🙂

      • This is what concerns me about your understanding of overclocking. The Titan X memory is 7GHz at stock. At 7.4GHz you’re only running a 5.7% OC on the memory…

        • Hah, you’re right – I was looking at the original Titan’s stock memory settings, not the Titan X’s. Yes, it could have been pushed harder. Still, the single Titan X was really good compared to the Fury X – the issue it had was scaling. So unless scaling is significantly affected by memory bandwidth, I don’t think it changes the overall results much. When we re-run with overclocked Furies, we’ll spend a bit more time on the Titan X overclock too.

          • Don’t get me wrong… SLI scaling is definitely an issue and will still exist regardless. But you’d be surprised how important memory clocks can be depending on the test you’re running. I found this out when I was running the original GTX Titan in tri-SLI in the overclock.net Unigine Valley bench competition and came in second place. Leave the official redo for your next review of Fury X OC vs Titan X OC, as you mentioned. But to satisfy your own curiosity, try a higher memory clock and see what happens. If you’re looking to squeeze every last ounce out of your Titan X, you should check out http://overclocking.guide/nvidia-gtx-titan-x-volt-mod-pencil-vmod/ as well.

            My Phobya nanogrease should be coming in tomorrow so I’ll finally be putting my blocks on as well. I’m going to compare my performance results with yours next time you run your benches. So make sure you do a good job. 😉
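The arithmetic in the thread above is easy to sanity-check. A quick sketch of both numbers – the bandwidth figures (384-bit bus and 7 Gbps stock per-pin data rate are the Titan X’s published memory specs) and the OC percentage:

```python
def gddr5_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Memory bandwidth in GB/s: bus width times per-pin data rate, over 8 bits/byte."""
    return bus_width_bits * data_rate_gbps / 8

stock = gddr5_bandwidth_gbs(384, 7.0)        # 336 GB/s at stock
oc_20 = gddr5_bandwidth_gbs(384, 7.0 * 1.2)  # ~403 GB/s with a 20% memory OC

# The overclock actually run in the review, as a percentage over stock:
oc_percent = (7.4 / 7.0 - 1) * 100           # ~5.7%, as pointed out in the thread

print(round(stock), round(oc_20), round(oc_percent, 1))  # 336 403 5.7
```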

  3. Excellent review. I am also running an Asus Rampage V Extreme with a 5960X OC’d to 4.4GHz, so your data really was telling. I’m running a single EVGA GTX 980 Ti SC under water. I previously ran two Sapphire Tri-X OC R9 290s under water but opted to go with the single card.
    Did you use a modded Titan X BIOS? What OCing tool did you use to OC the Titan X? I would like to try to replicate the parameters and run the single 980 Ti to see how close I am to your single Titan X data. Thank you.

  4. Eh, I don’t really see the point of running AA at 5K 😛 Too bad it’s 5K, btw – 4K is more reasonable. Too bad the Fury X has problems with the Nvidia titles (W3, for example).
    But man, the scaling and texture compression on AMD cards are absolutely amazing. If only they weren’t bottlenecked by HBM1’s 4GB of VRAM.
