Test System Methodology
We are using a new approach to storage testing here at TweakTown for our Enterprise Test Bench, and it will apply to both HDD and SSD testing. Our new methods debut with this Toshiba HDD, so comparisons to other HDDs will follow as we test new models and drives from other manufacturers.
Our new approach centers on providing results beyond the typical averages recorded in most test scenarios. The problem with average results is that they do little to indicate the variability experienced during the testing period. Maximum-latency measurements are similarly limited: they list only the single highest I/O, without showing whether such outlying I/Os occur once or many times during the test session, so they say little about the overall distribution of read and write activity.
Predictability of service is best measured as a function of performance over time. We will provide several measurements of each tested setting to give a better view of the device's extended performance. Each value is measured fifty times, with ten-second intervals between measurements. The line that extends between the individual points reflects the average speed for that variable over the test.
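As a rough sketch of this sampling scheme, the following Python fragment collects fifty readings per tested setting and computes the average that the line on our charts represents. The `measure_iops` function and its numbers are placeholders for illustration, not our actual benchmark harness.

```python
import random

SAMPLES = 50          # fifty measurements per tested setting
INTERVAL_S = 10       # ten seconds between measurements (sleep omitted here)

def measure_iops():
    # Placeholder for a real benchmark run; returns a simulated
    # 4K random IOPS reading with some run-to-run variability.
    return random.gauss(400, 25)

def sample_setting():
    # Collect all fifty samples; the real harness would
    # time.sleep(INTERVAL_S) between measurements.
    return [measure_iops() for _ in range(SAMPLES)]

random.seed(0)
samples = sample_setting()
average = sum(samples) / len(samples)  # the line drawn through the scatter
print(len(samples), round(average, 1))
```

Plotting all fifty points per setting, rather than only `average`, is what exposes the scatter discussed below.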
Above is an example of the 4K random performance of the Toshiba MK1401GRRB that we are testing today. The top line of results on this scatter chart shows performance with the HDD's write caching enabled, and the bottom line shows performance with caching turned off. This chart actually consists of 800 separate measurements of the HDD's performance.
Typical testing would reveal that the average speed is much higher with caching enabled, which this chart shows as well. What typical testing does not reveal is the much tighter, less variable performance at higher queue depths with write caching enabled. That low variability says a great deal about the quality of the solution: even two devices with similar average speeds can differ widely in overall performance, which is illustrated well by how much 'scatter' there is in the returned measurements.
In deployment, many applications can hang or lag while waiting for a single I/O to complete. This type of testing illustrates the performance variability to expect in such scenarios. As we test more HDDs for comparison, we will be able to examine many aspects of performance that are not typically measured.
For instance, many would assume that the higher results achieved with caching enabled are merely the result of the write I/O being absorbed entirely by the cache. This is not the case: the results hold steady over an extended period, even after more data has been written than the cache can hold. The better performance is more likely due to the HDD using the cache as a staging area for random writes, converting them to sequential writes when it commits them to the platters. This write combining yields much higher performance under random workloads, typically the weakness of any HDD. All further tests of the Toshiba in this evaluation are conducted with write caching enabled.
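A toy model makes the write-combining idea concrete. This is our illustration of the general technique, not Toshiba's actual firmware logic:

```python
def flush_cache(cache):
    """Toy model of write combining: random writes accumulate in the
    cache and are flushed to the platters in LBA order, so the heads
    sweep mostly sequentially instead of seeking for every I/O."""
    ordered = sorted(cache.items())        # sort pending writes by LBA
    cache.clear()                          # cache is empty after the flush
    return ordered                         # what gets written, in order

cache = {}
for lba in (9041, 12, 7730, 512, 3304):   # random 4K writes arriving
    cache[lba] = b"\x00" * 4096           # latest data wins per LBA
flushed = flush_cache(cache)
print([lba for lba, _ in flushed])        # flush order: ascending LBAs
```

The writes arrive scattered across the disk, but leave the cache sorted, trading many long seeks for one mostly sequential sweep.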
Toshiba does not publish overall performance expectations for its HDDs in terms of IOPS or MB/s, quoting only the expected seek latency instead.
We can see that this is a 6Gb/s SAS HDD with a power draw of 4.5 watts for the 300GB model and 4.3 watts for the 147GB model.