With any system you will want to see a combination of synthetic and real-world testing. Synthetics give you a static, easily repeatable testing method whose results can be compared across multiple platforms. For our synthetic tests we use Everest Ultimate, SiSoft Sandra, Futuremark's 3DMark Vantage and PCMark Vantage, Cinebench, as well as HyperPi. Each of these covers a different aspect of performance, or a different angle on a certain type of performance.
Memory is a big part of current system performance. In most systems, slow or flaky memory will impact almost every type of application you run.
To test memory we use a combination of SiSoft Sandra, Everest and HyperPi 0.99.
Version and / or Patch Used: 5.02.1789
Developer Homepage: http://www.lavalys.com
Product Homepage: http://www.lavalys.com
Buy It Here
Everest Ultimate is a suite of tests and utilities that can be used for system diagnostics and testing. For our purposes here we use its memory bandwidth test to see what the theoretical performance is.
[Graph: Everest memory bandwidth — Core i5 750 (stock and @ 4230MHz) vs. Core i7 870 (stock and @ 4030MHz)]
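Everest's exact methodology is proprietary, but the basic idea behind a memory bandwidth test can be sketched in a few lines of Python. This is a crude stand-in, not Everest's method; the buffer size and run count are arbitrary choices:

```python
import time

def copy_bandwidth_gbs(size_mb=256, runs=5):
    """Rough memory bandwidth probe: time a large buffer copy.

    A toy stand-in for a dedicated tool like Everest; results are
    affected by the allocator, CPU caches and whatever else is running.
    """
    buf = bytes(size_mb * 1024 * 1024)          # source buffer of zeros
    best = float("inf")
    for _ in range(runs):
        start = time.perf_counter()
        dst = bytearray(buf)                    # forces a full copy (memcpy)
        best = min(best, time.perf_counter() - start)
        del dst
    # a copy reads the buffer once and writes it once, so count it twice
    return (2 * size_mb / 1024) / best          # GB/s

if __name__ == "__main__":
    print(f"~{copy_bandwidth_gbs():.1f} GB/s memory copy bandwidth")
```

Taking the best of several runs, rather than the average, filters out interruptions from background tasks.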
The dual-channel memory controller on the Core i5 is good, but as we see in our other testing below, it cannot keep up with the triple-channel controller on the Core i7.
Version and / or Patch Used: 2009 SP3c
Developer Homepage: http://www.sisoftware.net
Buy It Here
Sandra is interesting when it comes to memory bandwidth testing. As we see here, Lynnfield and the Core i7 9xx series score almost the same. However (and this is exactly why we run both synthetic and real-world tests), our later testing showed that in memory-intensive applications the dual-channel memory controller in Lynnfield was not as robust as the triple-channel one in Nehalem.
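The channel-count difference has a simple theoretical ceiling: each 64-bit DDR3 channel moves 8 bytes per transfer, so peak bandwidth scales with the number of channels. A quick back-of-the-envelope check, assuming DDR3-1333 on both platforms (which may not match the exact kits used here):

```python
def peak_bandwidth_gbs(channels, transfers_per_sec, bus_bytes=8):
    """Theoretical peak = channels x bus width (bytes) x transfer rate."""
    return channels * bus_bytes * transfers_per_sec / 1e9

# DDR3-1333 performs 1333 million transfers per second on a 64-bit bus.
dual   = peak_bandwidth_gbs(2, 1333e6)   # dual channel (Lynnfield)
triple = peak_bandwidth_gbs(3, 1333e6)   # triple channel (Core i7 9xx)
print(f"dual-channel:   {dual:.1f} GB/s")    # prints ~21.3 GB/s
print(f"triple-channel: {triple:.1f} GB/s")  # prints ~32.0 GB/s
```

Real-world results sit well below these peaks, but the 50% headroom advantage is why the triple-channel controller pulls ahead once applications get memory-intensive.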
Version and / or Patch Used: 0.99
Developer Homepage: http://www.virgilioborges.com.br
Product Homepage: http://www.virgilioborges.com.br
Download It Here
HyperPi is a front end for SuperPi that allows multiple concurrent instances of SuperPi to be run, one on each core the system recognizes. It is very dependent on the CPU-to-memory-to-HDD chain; the faster these components, the faster it can calculate Pi to the selected length. For our testing we use the 32M run. This means that each of the eight threads (four physical cores plus Hyper-Threading) on the i7, and each of the four physical cores on the i5, tries to calculate Pi out to 32 million decimal places. Each "run" includes a comparison pass to ensure accuracy, and any stability or performance issue in the chain mentioned above will cause errors in the calculation.
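SuperPi itself uses an FFT-based algorithm, but the overall idea, one independent Pi computation per recognized core, with the results cross-checked for agreement, can be sketched with Machin's formula and Python's multiprocessing module. This is a toy illustration, not HyperPi's code, and the digit count is scaled far down from the 32M run:

```python
import multiprocessing as mp
import time

def machin_pi(digits):
    """Pi to `digits` decimal places via Machin's formula:
    pi = 16*arctan(1/5) - 4*arctan(1/239), in integer arithmetic."""
    def arctan_inv(x):
        one = 10 ** (digits + 10)        # 10 guard digits
        total = term = one // x
        n, sign = 3, -1
        while term:
            term //= x * x               # next power of 1/x^2
            total += sign * (term // n)  # alternating Taylor series
            n, sign = n + 2, -sign
        return total
    pi = 16 * arctan_inv(5) - 4 * arctan_inv(239)
    return pi // 10 ** 10                # drop the guard digits

if __name__ == "__main__":
    digits = 1000
    n = mp.cpu_count()                   # one instance per logical core
    start = time.perf_counter()
    with mp.Pool(n) as pool:             # like HyperPi: n concurrent runs
        results = pool.map(machin_pi, [digits] * n)
    assert len(set(results)) == 1        # every instance must agree
    print(f"{n} x {digits} digits in {time.perf_counter() - start:.2f}s")
```

Running one instance per logical core is what makes this a stability test as much as a speed test: a marginal overclock that survives a single-threaded run will often produce a mismatched result once every thread is hammering the memory subsystem at once.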
Very interesting numbers here, as we see the non-Hyper-Threaded Core i5 leave everything behind. My thought is that running two instances of SuperPi on each physical core is simply too much.