Battlefield 4

Battlefield 4 is a 2013 first-person shooter video game developed by Swedish video game developer EA Digital Illusions CE (DICE) and published by Electronic Arts. It is the sequel to 2011’s Battlefield 3 and was released on October 29, 2013 in North America (description via Wikipedia). We ran the game at “Ultra” settings at each respective resolution, with MSAA off, in single-player mode.

  • Graphics Quality – Ultra
    • Texture Quality – Ultra
    • Texture Filtering – Ultra
    • Lighting Quality – Ultra
    • Effects Quality – Ultra
    • Post Process Quality – Ultra
    • Mesh Quality – Ultra
    • Terrain Quality – Ultra
    • Terrain Decoration – Ultra
    • AA Deferred – 4x MSAA
    • AA Post – High
    • Ambient Occlusion – HBAO
  • V-Sync – Off

Let’s take a look at the FPS performance across all 6 setups (min, average, max):

bf4_fps

At first glance we see a lot of red.  However, that is mostly due to the maximum frame rates, which is the data set we care least about.  If we focus on the averages instead, we see that the Titan X always beats the same number of Fury X cards.  This is a surprise, as this game has generally favored AMD in the past.

The other detail is just how low the Fury X’s minimum frame rates are.  Minimums this low can sometimes indicate too little VRAM.  Compare this to Nvidia’s performance, which is much more tightly grouped, indicative of a smoother experience.

Let’s take a look at scaling (100% is the ideal gain when adding a second card, and 50% when adding a third, relative to the two-card result):
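Under that definition (each added card’s FPS gain relative to the previous configuration), the metric works out as below; the FPS numbers are hypothetical for illustration, not taken from our charts:

```python
def scaling_gain(fps_prev, fps_curr):
    """Percent FPS gain from adding one more GPU, relative to the previous
    configuration: ideal is +100% for the 2nd card and +50% for the 3rd
    (3x a single card is 1.5x of two cards)."""
    return (fps_curr - fps_prev) / fps_prev * 100.0

# Hypothetical averages: 40 FPS (1 GPU), 72 FPS (2 GPUs), 95 FPS (3 GPUs).
print(f"2-way: +{scaling_gain(40, 72):.0f}%")  # +80% of an ideal 100%
print(f"3-way: +{scaling_gain(72, 95):.0f}%")  # +32% of an ideal 50%
```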

bf4_scl

AMD clearly has a slight advantage here.  While it starts out behind due to the Titan X’s overclock, its better scaling claws some of that loss back.  However, AMD is really held back by those minimum frame rates.  Let’s take a look at VRAM usage:

bf4_vram

Nvidia is using significantly more VRAM than AMD.  AMD is capped about 100MB below its total VRAM (Windows keeps a portion for itself) and is therefore very likely VRAM limited; the stuttering that produces shows up in the data as lag spikes (very low minimum frame rates).
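As a rough sketch of how those spikes can be flagged in a frame-time log (the threshold and the trace below are illustrative assumptions, not our actual capture):

```python
def lag_spikes(frame_times_ms, threshold_ms=100.0):
    """Indices of frames slower than the threshold (100 ms = below 10 FPS):
    the stalls that crater the minimum frame rate even when the average
    looks healthy."""
    return [i for i, t in enumerate(frame_times_ms) if t > threshold_ms]

# Steady ~16 ms (~60 FPS) frames with one 250 ms VRAM stall at index 5.
trace = [16.0] * 5 + [250.0] + [16.0] * 5
print(lag_spikes(trace))  # [5]
```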

Let’s take a look at the GPU usage:

bf4_1f

With the single GPU we clearly see GPU utilization drop to 0 at the same time as the FPS drops to 0.  This is a clear indicator of a VRAM issue.

bf4_2f

In Crossfire it seems quite common for GPUs to occasionally drop to 0% utilization, so not all of these red spikes are due to VRAM stuttering (though some of them are).

bf4_3f

Things get messier as we add in the third GPU.  It’s hard to see, but the FPS trace seems to have even more valleys than with the single GPU.
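One way to quantify those valleys from a utilization log (the log format and numbers here are assumed for illustration; our plots sample per-GPU load over the run):

```python
def zero_util_fraction(samples):
    """Fraction of samples where a GPU sat at 0% load -- a rough proxy for
    how often the card stalled (VRAM stutter or Crossfire idling)."""
    return sum(1 for s in samples if s == 0) / len(samples)

# Hypothetical 10-sample utilization trace (% busy) with two full stalls.
gpu0 = [98, 97, 0, 99, 96, 0, 98, 97, 99, 98]
print(f"{zero_util_fraction(gpu0):.0%} of samples at 0% load")  # 20%
```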

As for Nvidia – the plots are much smoother:

bf4_1t

However, one Titan X is not enough to deliver a playable experience at these settings.

bf4_2t

With the second GPU it’s clear that the first card is doing the heavy lifting.  Roughly 10% of the total GPU resources are being wasted due to inefficient drivers.  What is clear, however, is just how well Nvidia performs at avoiding low minimum frame rates.

bf4_3t

With a third Titan X there is still one card running about 20% light.  There’s also a section where none of the cards is fully utilized.  This is a little strange, but there is also some odd CPU-usage behaviour across the board, so it may simply not have been a particularly “good” run.

We want to stress that these benchmarks were run in campaign mode; your results may vary in multiplayer.  It’s tough to declare a game “playable”, since everyone has a different opinion on what frame rate that requires.  We think 30 FPS is the bare minimum, with 60 FPS being preferable.  For Battlefield 4, then, we have to consider Nvidia the clear winner, with 2-3 GPUs recommended!

13 COMMENTS

  1. To be honest, I started reading this article thinking “well, how good can it be? ERs are watercooling guys, so, nah! can’t expect too much.” But after reading the whole of it, I have to say this is by far one of the best comparative reviews I’ve ever seen, and it really met all my expectations. Hats off to you guys!

    Say, the Fury X was claimed in its pre-release rumors to be a Titan X competitor, but then AMD shrunk that note to the 980 Ti. So I think a comparison with the 980 Ti would’ve been a better comparison (and a seat-clenching brawl) than this, along with clock-for-clock performance metrics, but nonetheless, this is way too good also!

    About the VRAM capping on the Fury X in a few games there: it also suffers similar performance degradation at 4K and below. And as you may have seen in other reviews, the Fury X does worse than the competition below 4K, and even worse at 1080p. I’ve dug through every review out there yet haven’t found the reason behind this. What could the reason be?

    And that scaling on Nvidia – you know Nvidia claims their SLI bridge isn’t a bottleneck, and so it is proved here :P. When AMD introduced XDMA, I had this belief: AMD took the right path, bridges will run short of bandwidth sooner or later, and PCIe already has the room to accommodate the bridge traffic. So XDMA did (well, now it is “does” :D) make sense!

    But it’s sad to see AMD not delivering the smoother experience overall. If they could handle that, it would certainly deserve the nod over the Titan X’s higher average FPS. But I think AMD is working on Win10 & DX12 optimized drivers and trying to mend it with bandages for now.

    My only complaint was the choice of the Titan X over the 980 Ti and the missing clock-for-clock numbers, but other than that, this is a very great review! Hats off to you again!

    • Agreed with Frozen Fractal. I was more than pleasantly surprised with the quality of this review, and I hope you continue to do more of these in the future. A 980 Ti would have been nice to see too, given the price.

      Keep up the great work!

    • Thanks! I agree the 980 Ti would have been a better comparison – then the prices would have lined up. “Sadly” we only had Titan Xs, and we weren’t willing to go out and buy another 3 GPUs to make that happen. However, if anyone wants to send us some, we’d gladly run ’em, haha. I was simultaneously impressed with AMD’s results and saddened that, after all the work on frame pacing, things still aren’t 100% yet. Hopefully

  2. Question…you overclocked the Titan X to 1495MHz, which is “ok” for a water cooling build. I won’t complain…though I’m surprised that’s all you were able to achieve, as I can pull that off on air right now (blocks won’t arrive until next week). Main question though…why wasn’t the memory overclocked? A watercooled Titan X has room to OC the memory by 20%, bumping the bandwidth up to about 400GB/s, which brings it quite a bit closer to the 512GB/s of the HBM1 in the Fury X.

    • Although we didn’t mention it, the memory was running a mild OC at 7.4Gbps, up from the stock 6Gbps – so yes, about the same as your 20% 🙂

      • This is what concerns me about your understanding of overclocking. The Titan X memory is 7GHz at stock. At 7.4GHz you’re only running a 5.7% OC on the memory…

        • Hah, you’re right – I was looking at the original Titan’s stock memory settings, not the Titan X’s. Yes, it could have been pushed harder. Still, the single Titan X was really good compared to the Fury X – the issue it had was scaling. So unless scaling is significantly affected by memory bandwidth, I don’t think it changes the overall results much. When we re-run with overclocked Furys we’ll spend a bit more time on the Titan X overclock too.

          • Don’t get me wrong…SLI scaling is definitely an issue and will still exist regardless. But you’d be surprised how important memory clocks can be depending on the test you’re running. I found this out when I was running the original GTX Titan Tri-SLI in the overclock.net Unigine Valley bench competition and came in second place. Leave the official redo for your next review of Fury X OC vs Titan X OC, as you mentioned. But to satisfy your own curiosity, try a higher memory clock and see what happens. If you’re looking to squeeze every last ounce out of your Titan X, you should check out http://overclocking.guide/nvidia-gtx-titan-x-volt-mod-pencil-vmod/ as well.

            My Phobya nanogrease should be coming in tomorrow so I’ll finally be putting my blocks on as well. I’m going to compare my performance results with yours next time you run your benches. So make sure you do a good job. 😉
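For those following the bandwidth math in this thread, the figures fall directly out of the Titan X’s 384-bit GDDR5 bus and the effective per-pin rate (units are GB/s, not Gbps):

```python
def gddr5_bandwidth_gb_s(effective_gbps, bus_width_bits=384):
    """Memory bandwidth in GB/s: per-pin effective rate (Gbps) times
    bus width (bits), divided by 8 bits per byte."""
    return effective_gbps * bus_width_bits / 8

print(gddr5_bandwidth_gb_s(7.0))  # stock Titan X: 336.0 GB/s
print(gddr5_bandwidth_gb_s(7.4))  # the mild OC here: 355.2 GB/s
print(gddr5_bandwidth_gb_s(8.4))  # a 20% OC: 403.2 GB/s, vs 512 GB/s HBM on Fury X
```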

  3. Excellent review. I am also running an Asus Rampage V Extreme with a 5960X OC’d to 4.4GHz, so your data really was telling. I’m running a single EVGA GTX 980 Ti SC under water. I previously ran 2 Sapphire Tri-X OC R9 290s under water but opted to go with the single card.
    Did you use a modded Titan X BIOS? What OCing tool did you use to OC the Titan X? I would like to replicate the parameters and run the single 980 Ti to see how close I am to your single Titan X data. Thank you.

  4. Eh, I don’t really see the point of running AA at 5K 😛 Too bad it’s 5K, btw; 4K is more reasonable. Too bad the Fury X has problems with Nvidia titles (W3, for example).
    But man, the scaling and texture compression on AMD cards are absolutely amazing. If only they weren’t bottlenecked by HBM1’s 4GB of VRAM.