This page goes into the details of the setup used for testing and gives a more thorough analysis of the thermal results; the individual block pages contain a simpler analysis.
As in the CPU block roundup, the same 3930K @ 4.7GHz with 16GB of DDR3 clocked at 1600 CL9 was used, on the same Rampage IV Extreme motherboard with the EVGA Titan in PCIe slot 1. The rig was powered by a Corsair AX1200, with cooling provided by an EK Supremacy block and a HWLabs Black Ice GTX 560 with 2150rpm Gentle Typhoon fans mounted using BGears 120mm -> 140mm adapters. Flow rates were measured with a King Instruments rotameter, and plenty of Koolance QD4/VL4N disconnects were used to make swapping components easy. Flow rate was altered by changing the PWM control of an MCP35x2; for the lower flow rates, one of the two pumps was left unpowered. The TIM used was Arctic Cooling MX2 because it is easy to use, quick to cure and relatively forgiving of a poor mount. To avoid curing issues, the Titan was burned in overnight before testing began.
GPU core temperature, Titan core clocks and power usage were recorded with EVGA Precision X, set to log every second. Each datapoint was allowed exactly 40 minutes to stabilize before data was logged for 20 minutes. Coolant temperature was logged, again every second, using Dallas probes coupled with a Crystalfontz data logger.
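The settle-then-log schedule above can be sketched as follows. This is a minimal illustration only, assuming a plain per-second list of temperature samples; the helper name and the trace are hypothetical, not part of the actual test tooling.

```python
# Minimal sketch of the logging schedule (hypothetical helper; assumes a
# plain per-second list of temperature samples).
SETTLE_S = 40 * 60   # 40 minutes allowed to stabilize, discarded
LOG_S = 20 * 60      # 20 minutes actually logged

def datapoint_mean(samples):
    """Average only the 20-minute logging window after the settle period."""
    window = samples[SETTLE_S:SETTLE_S + LOG_S]
    return sum(window) / len(window)

# A trace that has settled to a steady 45.0 by the time logging starts:
trace = [60.0] * SETTLE_S + [45.0] * LOG_S
print(datapoint_mean(trace))  # 45.0
```

Discarding the warm-up period matters because coolant and core temperatures drift for a long time after load is applied; averaging only the settled window keeps each datapoint comparable.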
The Titan suffered from throttling whereby overclocks would mysteriously reduce themselves despite neither the power nor the thermal limit being reached. Moving to Naennon’s 150% power limit BIOS removed the throttling and enabled me to test at 1123MHz and run a higher-than-stock 123% power usage. Loading was provided by FurMark.
Thermal Results – GPU Core
Thermal results on the GPU cores were very similar, particularly at high flow; the major separation came at low flow. Plotting performance against pump setting is the best way to compare block performance, as it takes into account the effect of each block's varying restriction. Here it can be seen that the XSPC block is the clear winner across the range of interest:
It should be noted that pump setting 129 corresponds to a loop flow of approximately 1GPM, while the highest and lowest settings correspond to around 2GPM and 0.35GPM respectively. This can be seen if we plot the same data against flow rate, although such plots can easily lead to misinterpretation of the data because the best-performing block may no longer correspond to the lowest or highest line:
For most builds it is often recommended to attain a loop flow of 1-1.5GPM in order to avoid the “knee of the curve” seen in CPU block performance data. Assuming such a loop with only one GPU block, it is useful to look at just one pump setting, for example the data from pump setting 129, which equates to roughly 1GPM:
It can be seen that the differences are small: less than 2°C across the entire range of blocks. Given a margin of error of perhaps 0.5°C, it can be said that for most loops with a single GPU the choice of block makes little difference, and the end user may choose a block based on secondary characteristics.
However, not all users buy only one GPU; some are known to run multiple cards in parallel. Running GPU blocks in parallel causes a major reduction in flow through each block: a single loop containing a CPU and four GPUs in parallel that is running 1.2GPM will only push about 0.3GPM through each GPU block. If we look at the lowest pump setting it can be seen that the performance spread becomes much wider:
In this case the EK blocks perform around 4.5°C worse than the XSPC block. In a loop running four GPUs in series, however, this would not be a problem.
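The parallel flow split mentioned above is just a division of the loop flow, assuming each parallel branch has roughly equal restriction. A hypothetical helper, for illustration only:

```python
# Hypothetical sketch: flow through each GPU block when blocks are run in
# parallel, assuming equal restriction (and thus an even split) per branch.
def per_block_flow(loop_flow_gpm, parallel_blocks):
    """Per-branch flow in GPM when the loop flow splits evenly."""
    return loop_flow_gpm / parallel_blocks

print(per_block_flow(1.2, 4))  # 0.3 GPM, as in the four-GPU example above
```

Blocks in series see the full loop flow instead, which is why the series configuration avoids the low-flow penalty.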
Thermal Results – VRAM
VRAM and VRM temperatures are hard to measure accurately, and in this test they were not even taken as accurately as they could have been. I experimented with an IR “laser” temperature probe that was not data logged. This meant measuring the VRAM chips on the back of the Titan, and it also meant that no form of backplate could be fitted. Because of all this, these readings were far more sensitive to error than the core temperatures. There was also no comparable reference for the water temperature, so the readings were taken relative to ambient temperature, introducing even more error. This large error can be seen in the raw data:
However, patterns can still be seen, and if we average all the data points we get a clearer, easier-to-read plot:
EK is at the top of this chart, owing I believe to the very thin thermal pads it uses. For that reason I expected Aquacomputer to perform best, as it uses no thermal pads at all, relying only on TIM to interface the memory chips to the block; it is not known why it didn’t top the chart. The Swiftech block uses very thick, gummy pads and, as expected, does poorly. The differences are still not that large, but they are larger than the differences in GPU core temperature.
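The averaging step used for the VRAM plot can be sketched as below: the repeated noisy spot readings per block are collapsed into one mean delta-over-ambient figure. The numbers here are made up for illustration, not the article's data.

```python
# Sketch of the averaging used for the VRAM plot: repeated noisy spot
# readings per block are collapsed into a single mean value.
# The numbers below are illustrative only, in degrees C over ambient.
from statistics import mean

readings = {
    "Block A": [11.2, 12.8, 11.9, 12.1],
    "Block B": [14.1, 13.4, 14.9, 14.0],
}

averaged = {block: round(mean(vals), 1) for block, vals in readings.items()}
print(averaged)  # {'Block A': 12.0, 'Block B': 14.1}
```

Averaging cannot remove a systematic bias (such as the missing water-temperature reference), but it does suppress the random scatter visible in the raw plot.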
Thermal Results – VRM
Similarly to the VRAM, the VRM temperatures had a much larger margin of error as they were measured in a similar way. However, the VRMs were covered by the block, so they were measured by recording the PCB temperature underneath where the VRMs are soldered. This is again far from ideal, and the nearby inductors, which get hot when there is no airflow, can contribute significantly to this temperature. The results are similar to the VRAM results but, because the power dissipated is larger, are of a greater magnitude:
Again, averaging these results makes it much easier to see what is going on:
Here again the EK shines, not just because of the thin thermal pads but also because EK supplies thermal pads to cool the nearby inductors as well.
Restriction Results
To measure restriction, or in other words how hard it is for a pump to push water through a block, each block is taken out of the loop and run on a separate setup where flow is varied while both flow and the pressure drop across the block are recorded and plotted:
If such a plot means nothing to you, you can either read this guide on how to use them or take the summary: the Swiftech is the least restrictive, with EK and Koolance in close second and third places. The Aquacomputer block, on the other hand, is much more restrictive and would therefore not be a good choice for running multiple GPUs in series. Both EK blocks are virtually identical.
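Pressure drop through a block grows roughly with the square of flow rate, so a measured restriction curve can be summarized by a single coefficient k in dP ≈ k·Q². A minimal least-squares sketch under that assumption, using made-up numbers rather than the measured curves:

```python
# Hypothetical sketch: summarize a restriction curve dP ~= k * Q^2 with a
# single least-squares coefficient k (PSI per GPM^2). Data is illustrative.
def fit_restriction_coefficient(flows_gpm, drops_psi):
    """Least-squares fit of dP = k * Q^2 through the origin."""
    num = sum(dp * q * q for q, dp in zip(flows_gpm, drops_psi))
    den = sum(q ** 4 for q in flows_gpm)
    return num / den

flows = [0.5, 1.0, 1.5, 2.0]      # GPM
drops = [0.25, 1.0, 2.25, 4.0]    # PSI; exactly k = 1 for this made-up data
print(fit_restriction_coefficient(flows, drops))  # 1.0
```

A larger k means a more restrictive block: at the same flow it costs the pump more head, which is why a high-k block is a poor fit for long series chains.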