RAID Performance Testing
It's possible to connect eight SATA SSDs to most modern enthusiast-class motherboards, but using sixteen requires an add-in adapter card. One option is an HBA (Host Bus Adapter), but that simply presents the drives to the system; configuration and management then fall to software running in Windows.
A hardware RAID adapter, on the other hand, adds logic between the array of drives and the operating system. A processor on the card handles the RAID calculations, and a large DRAM cache accelerates performance without taxing other system components.
We pulled our Areca ARC-1883ix-24 RAID controller out of storage for this review. The ARC-1883 series features a dual-core RAID-on-Chip processor with a PCIe 3.0 x8 host interface. Our card was fitted with a 4GB DDR3-1866 buffer.
We tested arrays with eight and sixteen drives. The eight-drive array ran in RAID 0 and RAID 5 with a 64KB stripe size. The sixteen-drive arrays ran in RAID 0 and RAID 5 with a 64KB stripe size, plus an additional RAID 0 configuration with a 1024KB stripe size.
Sequential Read Performance
In theory, the 1024KB stripe should deliver outstanding sequential performance, but that isn't what we found in practice. The general rule with RAID configurations has always been to use a smaller stripe size for better random performance and a larger stripe size for better sequential performance. We saw stronger sequential results in our HighPoint SSD6540 enclosure review with a 512KB stripe.
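The trade-off comes down to how a stripe size maps logical offsets onto drives. A minimal sketch of the RAID 0 mapping (simplified; real controllers may lay data out differently) shows why a very large stripe can hurt parallelism for moderate request sizes:

```python
# Simplified RAID 0 address mapping: which drive holds a given logical
# offset. Illustrative only; actual controller layouts may differ.
def raid0_target(offset, stripe_size, n_drives):
    stripe = offset // stripe_size
    return stripe % n_drives  # drive index holding this offset

KB = 1024
# A 1MB sequential read with a 64KB stripe touches all 16 drives...
drives_64k = {raid0_target(o, 64 * KB, 16) for o in range(0, 1024 * KB, 64 * KB)}
# ...but with a 1024KB stripe the same read lands entirely on one drive.
drives_1m = {raid0_target(o, 1024 * KB, 16) for o in range(0, 1024 * KB, 64 * KB)}
print(len(drives_64k), len(drives_1m))  # → 16 1
```

With very deep queues or very large transfers, a big stripe can still keep every drive busy, which is why the "larger stripe for sequential" rule doesn't always hold on flash.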
The Areca RAID controller is the bottleneck in this storage system, so it's essentially what we're testing today, though the other components still play a role. The additional hardware simply shows what's possible with the Icy Dock enclosures.
Sequential Write Performance
The difference between eight and sixteen drives shows up when writing data. There is a small difference reading data, but writes show a much larger gap between the arrays and fully populated enclosures. The one thing you will notice is the diminishing return on investment: eight drives in RAID 0 net us 2,769 MB/s in sequential writes at queue depth 2, but doubling the drive count only adds around 1,000 MB/s. The gap widens at higher queue depths. This is a case of needing the right workload to take advantage of the performance available.
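The diminishing return is easy to quantify from the QD2 numbers above (the 16-drive figure below is taken as roughly 1,000 MB/s above the 8-drive result, per the text):

```python
# Scaling efficiency of doubling the array, using the review's QD2 figures.
eight_drive = 2769                   # MB/s, 8-drive RAID 0 sequential write
sixteen_drive = eight_drive + 1000   # roughly 1,000 MB/s more with 16 drives

per_drive_8 = eight_drive / 8
per_drive_16 = sixteen_drive / 16
scaling = sixteen_drive / (2 * eight_drive)  # 1.0 would be perfect scaling

print(f"{per_drive_8:.0f} MB/s per drive with 8 drives")    # → 346 MB/s
print(f"{per_drive_16:.0f} MB/s per drive with 16 drives")  # → 236 MB/s
print(f"scaling efficiency: {scaling:.0%}")                 # → 68%
```

Roughly a third of the second enclosure's potential write throughput is lost to the controller bottleneck at this queue depth.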
Random Read Performance
When there wasn't a solid state option and the world ran on hard disk drives, strong random read performance was right around 200 IOPS at queue depth 1. It was easy to increase low queue depth random read performance; the bar was already very low.

With flash, the baseline has increased dramatically. RAID doesn't increase random read performance over a single drive. In fact, RAID controllers are not as efficient as Intel's PCH SATA ports, so queue depth 1 performance actually decreases.
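The reason extra drives don't help at QD1 is that with only one outstanding request, IOPS is purely the inverse of per-request latency (Little's law). A quick sketch with illustrative latencies (not measurements from this review):

```python
# At queue depth 1, IOPS = QD / latency = 1 / latency. Drive count is
# irrelevant; only the round-trip time of a single request matters.
def qd1_iops(latency_us):
    return 1_000_000 / latency_us

print(round(qd1_iops(10_000)))  # → 100   (hard drive, ~10 ms seek)
print(round(qd1_iops(100)))     # → 10000 (SATA SSD, ~100 µs)
# A RAID controller inserts extra latency into the path, so QD1 IOPS
# drops below what the drive achieves on a direct PCH SATA port.
```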
Random Write Performance
The random write tests let us see the RAID 5 write penalty: every small write forces the controller to read and update parity, which reduces performance. The massive DRAM buffer on our Areca controller mitigates the issue, so our loss is smaller than with some other cards on the market.
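A back-of-the-envelope model shows why RAID 5 random writes fall behind RAID 0. Each sub-stripe write costs four drive I/Os (read old data, read old parity, write new data, write new parity); the per-SSD figure below is an assumption for illustration, not a result from this review:

```python
# RAID 5 small-write penalty model (illustrative, ignores controller
# caching, which is exactly what the Areca's 4GB DRAM buffer hides).
def raid0_write_iops(drives, iops_per_drive):
    return drives * iops_per_drive

def raid5_write_iops(drives, iops_per_drive, penalty=4):
    # 4 I/Os per logical write: read data, read parity, write data, write parity
    return drives * iops_per_drive / penalty

per_drive = 80_000  # assumed per-SSD random write IOPS
print(raid0_write_iops(8, per_drive))  # → 640000
print(raid5_write_iops(8, per_drive))  # → 160000.0
```

A large write-back cache lets the controller coalesce small writes into full-stripe writes, which sidesteps the read-modify-write cycle entirely; that is why well-cached cards lose less here.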
70% Read Sequential Performance
Mixing reads and writes is a difficult workload for consumer-grade storage. This is one of the tests where more drives give users access to higher performance. Even with just 30% of the workload coming from writes, there is a very large performance gap between the eight- and sixteen-drive arrays.
70% Read Random Performance
In mixed random workloads, the RAID level used to build the array has more impact on performance than the number of drives.
Last updated: Sep 24, 2019 at 12:26 am CDT