For a long time, if you wanted to cool both your CPU and your GPU(s), there was a standard suggested loop order, and it went something like this:

Radiator -> CPU -> GPU -> Reservoir -> Pump

If you had multiple GPUs or multiple radiators things could get more complicated, but the idea was that the CPU received the coldest coolant it could possibly get.  Everything ran in series, with the exception of multiple GPUs, which might be plumbed in series or parallel.  In practice though, performance was often secondary to ease of loop building, so the actual order sometimes changed even if the components stayed in series.  Typically this only cost around 1C of performance when the CPU was fed with “warm” coolant.

However recently we’ve been seeing a few more builds with the CPU and GPU running in parallel.  This can give a build some pleasingly clean lines, which for many users is more important than performance.  Here is an example of such a beautiful build from Ruben Van Leusden:

Initially when I saw builds like this I loved the look, but I presumed it was a bad idea.  A low restriction GPU block coupled with a high restriction CPU block would mean that most of the coolant would flow through the GPU rather than the CPU.  However, the overall flow rate would also increase, because the restriction of the two parts of the loop that are probably the worst offenders would drop significantly once they were in parallel.

For example at 1GPM the restriction of the EK Supremacy is 0.65 PSI while the EK 980 block measures 0.6 PSI.  In series at 1GPM they would have a combined restriction of 1.25 PSI, while in parallel, with that same 1GPM split between the two blocks, the combined restriction would be roughly 0.3 PSI.  Naturally it is more complicated than that, because the pump's flow rate would increase to compensate for the reduced overall restriction.
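As a rough sanity check on those numbers, here is a minimal sketch of the series vs parallel combination, treating each block's rated pressure drop at 1GPM as a linear hydraulic resistance in PSI per GPM. Real restriction curves are closer to quadratic, so this is only an approximation; the figures are just the ones quoted above.

```python
# Rough series vs parallel restriction estimate.
# Assumes each block behaves like a linear hydraulic resistance (PSI per GPM),
# which is a simplification of real restriction curves.

cpu_block = 0.65   # EK Supremacy, PSI at 1 GPM
gpu_block = 0.60   # EK 980 block, PSI at 1 GPM

total_flow = 1.0   # GPM through the whole loop

# Series: the full flow passes through both blocks, so the drops add up.
series_drop = (cpu_block + gpu_block) * total_flow

# Parallel: the flow splits in inverse proportion to each branch's restriction,
# and both branches see the same pressure drop.
parallel_resistance = 1.0 / (1.0 / cpu_block + 1.0 / gpu_block)
parallel_drop = parallel_resistance * total_flow

print(f"Series:   {series_drop:.2f} PSI at {total_flow} GPM")    # ~1.25 PSI
print(f"Parallel: {parallel_drop:.2f} PSI at {total_flow} GPM")  # ~0.31 PSI
```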

It wasn’t clear which factor would win, so given that we didn’t actually know, we should test some setups and gather some data before treating our assumptions as fact.  +1 for science.

For our setup we used the EK EVO X99 CPU block and the EK 980TI GPU block, as this type of tubing layout seems to occur mainly among EK users.  To keep the setup and testing simple, no GPU was fitted to the GPU block.  For this test we are only concerned with CPU performance, not GPU performance, because watercooled CPUs typically run far hotter than watercooled GPUs.  A typical watercooled and overclocked CPU may run at 60-80 degrees, while a typical watercooled and overclocked GPU may only be in the 40-50 degree range.

We started by testing the GPU block in series with the CPU block and then in parallel with it.  Coolant temperature was measured at the inlet of the CPU block to avoid any loop order issues.  The CPU block was mounted once to an overclocked 5820K and remained mounted for the duration of the tests, so any large performance differences can be attributed to the change in flow between the two setups.  From previous data in 2012 we know that CPU block performance starts to drop off significantly below 1GPM:

In order to make this test more of a “worst case”, our standard dual DDC pump – the MCP35X2 – was turned down to 31% PWM.  This gave us roughly 0.8GPM when set up for series and 1.1GPM when set up for parallel.  However, that 1.1GPM is split between the CPU and GPU blocks, so the actual flow through the CPU block is much less – roughly half of it, or about 0.55GPM, given that the restriction of both blocks is similar.

So here is the data – like our standard CPU block tests, this is the average of 15 minutes’ worth of data logged every second.  The 15 minute window was selected from a 2 hour run by automatically choosing the 15 minute segment with the lowest standard deviation.
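For anyone curious how that kind of segment selection can be automated, here is a minimal sketch using a rolling standard deviation, assuming the log is a simple per-second CSV. The file name and column names are hypothetical, not our actual logging tool.

```python
import pandas as pd

# Hypothetical per-second log with "time" and "cpu_temp" columns.
log = pd.read_csv("bench_log.csv", parse_dates=["time"]).set_index("time")

WINDOW = 15 * 60  # 15 minutes of 1 Hz samples

# Rolling standard deviation over every possible 15 minute window.
rolling_std = log["cpu_temp"].rolling(WINDOW).std()

# The window ending at the index with the smallest std is the steadiest segment.
end = rolling_std.idxmin()
steady = log.loc[:end].tail(WINDOW)

print(f"Best 15 min segment ends at {end}, mean = {steady['cpu_temp'].mean():.2f} C")
```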

Here we see 2.5 degrees worse performance when running the CPU in parallel, due to the lower flow through the CPU block.  This is a similar performance hit to running the CPU block in its inferior orientation.  If you combine both you might see a 5C hit (even more at lower flow rates, or less at higher flow rates).

Some people will care about 2.5 degrees, some people won’t.  At this point I decided to increase the number of GPU blocks.  As I didn’t have an identical GPU block, I used a Bitspower GTX 980 block with a restriction of 0.56 PSI at 1GPM.  This is roughly similar to the EK block’s restriction, so in the parallel case the flow through the CPU block should drop from ~50% to ~33% of the total flow rate, while the total flow will increase slightly.  In the series case, flow should also increase slightly, as the GPU blocks are still in parallel with each other and so the total loop restriction decreases.
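The same linear-resistance approximation used earlier gives a rough idea of how the flow divides between the three parallel branches. This is only a sketch using the quoted 1GPM restriction figures; a real loop's split also depends on fittings and tubing.

```python
# Approximate flow split between parallel branches, assuming each branch's
# restriction is linear in flow (a simplification). Flow divides in inverse
# proportion to restriction, so lower-restriction branches take more flow.

branches = {
    "EK Supremacy (CPU)": 0.65,  # PSI at 1 GPM
    "EK 980 (GPU)":       0.60,
    "Bitspower GTX 980":  0.56,
}

conductance = {name: 1.0 / r for name, r in branches.items()}
total = sum(conductance.values())

for name, g in conductance.items():
    print(f"{name}: {g / total:.0%} of total flow")
# The CPU branch works out to roughly 31%, close to the ~33% estimate above.
```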

If we compare this data to the original, we see the effects are similar but greater:
Series is slightly better still, as the flow rate is a little higher, while parallel is worse because the local flow through the CPU block is lower again.  The difference is now 4 deg C.

Conclusion

This all matches our expectations, and we now have two data points showing how much this might affect your CPU cooling.  So the question remains – is this a bad idea?  Our answer would be that it depends.  2.5C is not a bad enough result to really cause concern.  However it could be worse than that, for example:

  • If you are running a weak pump, it would be a bad idea
  • If you are running a highly restrictive CPU block combined with a low restriction GPU block, or multiple GPU blocks in parallel, then it would be a bad idea unless your flow rate is high enough to compensate

Of course – if you are not running a high overclock on a high power CPU, then your CPU temps will be much lower and you probably won’t care.

There may also be other, larger factors keeping your CPU temperatures from being optimal, for example: poor TIM choice or application, a poor choice of water block, incorrect orientation of the CPU block, incorrect jetplate use, low flow rates, high coolant temperatures due to a lack of radiators or airflow, or a non-soldered CPU IHS with stock TIM.

Different people also have different comfort levels with CPU temperature.  Some people are OK with 90C on their CPU, while others are concerned by 70C.  So in reality all that matters is your own setup and your comfort with your temperatures.  If you have a setup like this, simply look at your CPU temps and see if they are acceptable to you; if they aren’t and you want to improve performance, you can either change your loop design or increase your pump power.  The bottom line for me is that we now know just how big the performance decrease is, and personally it’s not enough for me to advise anyone against doing parallel builds.