With any system you will want to see a combination of synthetic and real-world testing. Synthetic tests give you a static, easily repeatable method that can be compared across multiple platforms. For our synthetic tests we use Everest Ultimate, SiSoft Sandra, FutureMark's 3DMark Vantage and PCMark Vantage, Cinebench and HyperPi. Each of these covers a different aspect of performance, or a different angle on a certain type of performance.
Memory is a big part of current system performance. In most systems, slow or flaky memory will impact almost every type of application you run. To test memory we use a combination of SiSoft Sandra, Everest and HyperPi 0.99.
Version and / or Patch Used: 2010c 1626
Developer Homepage: http://www.sisoftware.net
Product Homepage: http://www.sisoftware.net
Buy It Here
The X58 Extreme6 is about average when it comes to memory performance, although we did get much better results with our Corsair memory at higher clock speeds and memory frequencies. Still, these results will differ from system to system, as each overclock is going to be a little different. If we can combine this with good HDD performance, then the Extreme6 could more than make up for average memory scores.
Version and / or Patch Used: 5.30.1983
Developer Homepage: http://www.lavalys.com
Product Homepage: http://www.lavalys.com
Buy It Here
Everest Ultimate is a suite of tests and utilities that can be used for system diagnostics and testing. For our purposes here we use their memory bandwidth test and see what the theoretical performance is.
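At its core, a memory bandwidth test of this kind just times large block transfers and divides data moved by elapsed time. As a rough, hypothetical illustration (this is not Everest's actual method), a minimal sketch in Python:

```python
import time

def copy_bandwidth_mb_s(size_mb=64, runs=3):
    """Time full copies of a large buffer and report the best MB/s seen."""
    buf = bytearray(size_mb * 1024 * 1024)  # well beyond CPU cache size
    best = float("inf")
    for _ in range(runs):
        start = time.perf_counter()
        bytes(buf)  # one full read of the source plus a write of the copy
        best = min(best, time.perf_counter() - start)
    return size_mb / best  # MB moved per second on the fastest run

print(round(copy_bandwidth_mb_s(16, 2)), "MB/s")
```

Real tools use hand-tuned SSE/streaming copies to approach the theoretical limit; a Python copy like this lands well below it, but the measurement principle is the same.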
Stock Memory Performance
Overclocked Memory Performance
Here we once again see that the memory performance scores are average, but that is not a bad thing. If we see good HDD scores, then we could still have balanced performance across the rest of our testing suite.
Version and / or Patch Used: 0.99
Developer Homepage: http://www.virgilioborges.com.br
Product Homepage: http://www.virgilioborges.com.br
Download It Here
HyperPi is a front end for SuperPi that allows multiple concurrent instances of SuperPi to be run, one on each core recognized by the system. It is very dependent on CPU, memory and HDD speed; the faster these components, the faster it is able to calculate Pi to the selected length.
For our testing we use the 32M run. This means that each of the four physical and four logical cores of the i7, and each of the four physical cores of the i5, is calculating Pi out to 32 million decimal places. Each "run" includes a comparison step to ensure accuracy, and any stability or performance issue in the CPU-memory-HDD loop mentioned above will cause errors in the calculation.
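SuperPi itself uses a Gauss-Legendre (AGM) iteration, but the basic idea of computing Pi to a fixed number of decimal places with big-integer arithmetic can be sketched with the simpler Machin formula (a stand-in for illustration, not SuperPi's actual algorithm):

```python
def arctan_inv(x, one):
    """Integer arctan(1/x), scaled by `one`, via the alternating Taylor series."""
    total = term = one // x
    n, sign, x2 = 3, -1, x * x
    while term:
        term //= x2               # next odd power of 1/x
        total += sign * (term // n)
        sign, n = -sign, n + 2
    return total

def pi_digits(digits):
    """floor(pi * 10**digits) using Machin's formula with guard digits."""
    one = 10 ** (digits + 10)     # 10 guard digits absorb truncation error
    pi = 4 * (4 * arctan_inv(5, one) - arctan_inv(239, one))
    return pi // 10 ** 10

print(pi_digits(20))  # first digits of pi: 3.14159265358979...
```

A 32M SuperPi run does this at a vastly larger scale, which is why working sets spill out of cache into main memory and the result becomes so sensitive to memory speed and stability.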
One thing I like about HyperPi is that it loads up the whole system. If you know the averages for a certain piece of the puzzle, you can often find the weak link. What we are seeing here is an indication that our real-world memory performance could be worse than Sandra and Everest were able to show. We can also expect to see performance issues with applications like LightWave 3D and perhaps AutoGK.
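HyperPi's trick of loading every core is easy to reproduce in principle: launch one CPU-bound worker per core and time them all together. A minimal, hypothetical sketch (the busy-work sum stands in for a SuperPi instance):

```python
import multiprocessing as mp
import time

def busy_work(n):
    """CPU-bound stand-in for one SuperPi instance."""
    return sum(i * i for i in range(n))

def load_all_cores(n=2_000_000):
    """Run one worker per detected core and time the whole batch."""
    cores = mp.cpu_count()
    start = time.perf_counter()
    with mp.Pool(cores) as pool:
        results = pool.map(busy_work, [n] * cores)  # one job per core
    return cores, time.perf_counter() - start, results

if __name__ == "__main__":
    cores, elapsed, _ = load_all_cores()
    print(f"{cores} workers finished in {elapsed:.2f}s")
```

If one batch finishes noticeably slower than the per-core average, something in the CPU-memory-storage chain is the bottleneck, which is exactly the weak-link reasoning described above.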