Using the Maximus IV Extreme we're able to test both setups on the same board, which helps eliminate everything else from impacting our results. We've got exactly the same motherboard, chipset, CPU, VGA setup etc.; everything is identical except the slots our cards sit in when we run the tests.
Above you can see the information from the ASUS Maximus IV Extreme manual regarding the VGA setup. This is page 47, and you can see that when using one card they recommend PCIE_X16/8_1 (Slot 1), which will result in the card running at x16 via the "Native" chip, which in this case is the Intel P67 chip. Going for two cards, they say to use the same slot again along with PCIE_X8_3 (Slot 3), which will result in an x8 / x8 setup, again via the "Native" P67 chip.
If you're going for a triple VGA setup, you can see we continue to use the first slot, PCIE_X16/8_1, but for the other two cards we need to use PCIE_X16_2 (Slot 2) and PCIE_X16_4 (Slot 4), which has each card run via the "NF200" chip.
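To keep the manual's recommendations straight, here's a quick sketch of the slot table as we read it from page 47. This is just our transcription of the manual, not anything queried from the board, and the manual doesn't spell out which chip Slot 1 routes through in the triple-card case, so that entry only reflects what's stated for the single and dual configurations:

```python
# ASUS Maximus IV Extreme recommended slots per number of cards installed,
# as transcribed from page 47 of the manual.
RECOMMENDED_SLOTS = {
    1: ["PCIE_X16/8_1"],                              # Slot 1
    2: ["PCIE_X16/8_1", "PCIE_X8_3"],                 # Slots 1 and 3
    3: ["PCIE_X16/8_1", "PCIE_X16_2", "PCIE_X16_4"],  # Slots 1, 2 and 4
}

# Routing chip per slot, only where the manual states it outright.
SLOT_CHIP = {
    "PCIE_X16/8_1": "Native P67",
    "PCIE_X8_3": "Native P67",
    "PCIE_X16_2": "NF200",
    "PCIE_X16_4": "NF200",
}

def chips_for(card_count):
    """Return the set of routing chips involved for a given card count."""
    return {SLOT_CHIP[slot] for slot in RECOMMENDED_SLOTS[card_count]}
```

So by the book, one or two cards stay entirely on the native P67 lanes, and only a triple setup touches the NF200.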
Well, that seems kind of dumb; why don't we just use Slots 2 and 4 and get x16 / x16 via the NF200 instead of x8 / x8 via the Native P67 chip? Exactly, why not? That's exactly what we're going to do today.
What we've got is the most optimized dual card setup we've tested as far as I'm concerned: the HD 6990 and HD 6970, three GPUs across two video cards. HD 6990s in CrossFireX, while faster, are only ever so slightly ahead for the most part and not worth the money over the HD 6990 + HD 6970 setup. We haven't tested the GTX 590 in SLI, hence why we're saying the most optimized dual card setup we have tested.
Sliding our cards into Slot 1 and Slot 3, as ASUS recommend, we have our setup running at x8 / x8 via the "Native" Intel P67 chip. If we go into the BIOS we can see this via the GPU.DIMM option.
Once we've tested with that setup, we move our cards to Slot 2 and Slot 4 - the setup that ASUS don't recommend, and the one that, according to them, will run slower. Looking in the BIOS again, we can see we're now running at x16 / x16 via the NF200 chip.
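The BIOS readout isn't the only way to confirm the negotiated link width; on Windows a tool like GPU-Z reports it, and on Linux `lspci -vv` prints it on each device's `LnkSta` line. As a minimal sketch, here's how you could pull the width out of such a line - note the sample string below is illustrative of the `lspci -vv` format, not output captured from our test rig:

```python
import re

def link_width(lnksta_line):
    """Extract the negotiated PCIe link width (e.g. 16) from an lspci LnkSta line."""
    m = re.search(r"Width\s+x(\d+)", lnksta_line)
    return int(m.group(1)) if m else None

# Illustrative line in the format `lspci -vv` prints for a PCIe device:
sample = "LnkSta: Speed 5GT/s, Width x16, TrErr- Train- SlotClk+ DLActive-"
print(link_width(sample))  # 16
```

A card that has negotiated x8 would show `Width x8` here instead, so it's an easy way to double-check that the slots really are running at the widths the BIOS claims.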
To help remove any CPU limitations, we pushed our 2600K up to 5GHz via a 50x multiplier. Apart from that, there are no other real surprises; we're using the ultra-sleek CL7 Ripjaws X kit from G.Skill running at 2133MHz DDR, and everything is installed on our Kingston SSD.
So with everything running as it should, it's time to check out our benchmark lineup, which today concentrates on some of the more intensive tests, including 3DMark 11, Heaven 2.5, Lost Planet 2 and Aliens vs. Predator. This helps make sure we're taxing our GPUs as much as possible. Along with these, we've included just two less intensive benchmarks: the older 3DMark Vantage and Street Fighter IV.
So let's find out: is running via the Native Intel chip at x8 / x8 faster than running through the higher-speed x16 / x16 NF200? Or is it just a bunch of marketing mumbo jumbo?
Let's get started!