With any system you will want to see a combination of synthetic and real-world testing. Synthetics give you a static, easily repeatable testing method whose results can be compared across multiple platforms. For our synthetic tests we use Everest Ultimate, SiSoft Sandra, FutureMark's 3DMark Vantage and PCMark Vantage, Cinebench and HyperPi. Each of these covers a different aspect of performance, or a different angle on a certain type of performance.
Memory is a big part of current system performance. In most systems slow or flaky memory will impact almost every type of application you run. To test memory we use a combination of SiSoft Sandra, Everest and HyperPi 0.99.
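As a rough illustration of what a memory bandwidth test does under the hood, the sketch below times a large buffer copy and converts the result to GB/s. This is a simplified stand-in, not how Sandra or Everest actually measure; the buffer size is an arbitrary choice of ours.

```python
import time

# 64MB is an assumed size - large enough to spill out of the CPU caches
# so we measure main memory rather than cache bandwidth.
SIZE = 64 * 1024 * 1024

src = bytearray(SIZE)

start = time.perf_counter()
dst = bytes(src)          # one full read of src plus one full write of dst
elapsed = time.perf_counter() - start

# Count both the read and the write as bytes moved.
bandwidth_gbs = (2 * SIZE) / elapsed / 1e9
print(f"~{bandwidth_gbs:.1f} GB/s (copy bandwidth, very rough)")
```

A dedicated benchmark would use streaming stores, multiple threads and several access patterns, but the unit conversion is the same.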
Version and / or Patch Used: 2010c 1626
Developer Homepage: http://www.sisoftware.net
Product Homepage: http://www.sisoftware.net
Buy It Here
You know, many people have claimed that there is no benefit to triple-channel memory. That might be true for day to day applications, but as you can see above, the extra channel gives us some great memory performance. The P6X58D-E yields a little over 22GB/s in raw memory bandwidth. This could bode very well for many of our more memory dependent tests later.
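For context, the theoretical peak for a triple-channel DDR3 setup is straightforward to work out: channels × transfer rate × bus width. The quick sketch below assumes DDR3-1600 modules (the rated speed is our assumption, not a detail from the test setup), which would put a measured 22GB/s at roughly 60% of the theoretical ceiling.

```python
# Theoretical peak bandwidth for a triple-channel DDR3 configuration.
# DDR3-1600 is an assumed module speed for illustration only.
channels = 3
transfers_per_sec = 1600 * 10**6   # DDR3-1600 = 1600 MT/s
bus_width_bytes = 8                # each channel is 64 bits wide

peak_gbs = channels * transfers_per_sec * bus_width_bytes / 10**9
print(f"Theoretical peak: {peak_gbs:.1f} GB/s")  # 38.4 GB/s
```

Real-world synthetic results always land well below this figure because of refresh cycles, command overhead and controller efficiency.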
Version and / or Patch Used: 5.30.1983
Developer Homepage: http://www.lavalys.com
Product Homepage: http://www.lavalys.com
Buy It Here
Everest Ultimate is a suite of tests and utilities that can be used for system diagnostics and testing. For our purposes here we use its memory bandwidth test to see what the theoretical performance is.
Stock Memory Performance
Overclocked Memory Performance
Everest backs up the performance we saw with Sandra. The P6X58D-E certainly has some room to maneuver in terms of memory performance.
Version and / or Patch Used: 0.99
Developer Homepage: http://www.virgilioborges.com.br
Product Homepage: http://www.virgilioborges.com.br
Download It Here
HyperPi is a front end for SuperPi that allows multiple concurrent instances of SuperPi to be run, one on each core recognized by the system. It is very dependent on the CPU-to-memory-to-HDD loop; the faster these components, the faster it is able to calculate Pi to the selected length.
For our testing we use the 32M run. This means that each of the four physical and four logical cores of the i7 (and each of the four physical cores of the i5) is trying to calculate Pi out to 32 million decimal places. Each "run" is compared against the others to ensure accuracy, and any stability or performance issue in the loop mentioned above will cause errors in the calculation.
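SuperPi is based on the Gauss-Legendre algorithm; as a much slower but easy-to-read illustration of the kind of arbitrary-precision arithmetic this workload hammers the CPU and memory with, the sketch below computes Pi with Machin's arctangent formula using plain Python integers. The function names and digit count are ours, purely for demonstration.

```python
def arctan_inv(x, prec):
    """arctan(1/x) as a scaled integer with `prec` decimal digits."""
    one = 10 ** prec
    power = one // x          # the 1/x term
    total = power
    x2 = x * x
    n = 1
    sign = 1
    while power:
        power //= x2          # next odd power of 1/x
        n += 2
        sign = -sign
        total += sign * (power // n)
    return total

def machin_pi(digits):
    """Pi to `digits` decimal places via pi = 16*atan(1/5) - 4*atan(1/239)."""
    prec = digits + 10        # guard digits to absorb rounding error
    pi = 16 * arctan_inv(5, prec) - 4 * arctan_inv(239, prec)
    return pi // 10 ** 10     # drop the guard digits

print(machin_pi(50))  # 3141592653..., decimal point omitted
```

Scaling this to 32 million places is exactly why SuperPi runs stress the whole CPU-memory pipeline: the intermediate integers grow to tens of megabytes each.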
OK, let's start off by saying that while the stock numbers here are not extraordinary, they are still good. Each physical CPU core is running two full instances of SuperPi 32M. That is a lot of numbers being crunched.
When we kicked the CPU up to 4.3GHz things really took off. We were shocked to see the extreme drop in time; so much so that we ran the test three additional times to be sure of the accuracy.