CIFS Performance Testing
We tested the QSAN XN8012R in three configurations, all built on a base of twelve Seagate IronWolf Pro 12TB drives. The first test uses just the twelve HDDs. The second adds a Micron 9300 Series NVMe SSD as a read cache. The third uses four Seagate IronWolf 110 SATA SSDs for the read cache and two Micron 9300 Series SSDs for a write cache.
The XN8012R will only build write-cache pools in RAID 1, so you can't use a single NVMe SSD for a write cache. That rules out using two NVMe SSDs for both the read and write cache pools. The internal PCIe 3.0 slots only accept network and Thunderbolt cards, not storage devices, so you can't add a drive like the Memblaze PBlaze5 C916 with a PCIe 3.0 x8 interface.
We used a Thecus N8880U-10G as the comparison product in today's charts. The N8880U-10G was our former system for comparing NAS-focused storage products, so it's a fitting baseline for the new system we'll use to test products going forward.
Sequential Read Performance
The sequential read test shows strong, predictable performance from the QSAN system. At 4 OIO, the system outperforms the older Thecus N8880U-10G by around 100 MB/s. Both systems, in all configurations, peak right around 1,200 MB/s.
Sequential Write Performance
The dots on the chart show the IO rate for each second. Dots below the median line are outliers where performance dropped. The QSAN configurations show fewer outliers than the older Thecus system, and they also write sequential data around 200 MB/s faster than the Thecus in the preconditioning and steady-state phases of the test.
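The idea behind the chart can be sketched in a few lines: take the per-second throughput samples, compute the median, and flag samples that fall well below it. The sample values and the 90%-of-median cutoff below are illustrative assumptions, not the actual data or methodology behind the review's charts.

```python
import statistics

def flag_outliers(samples_mb_s, threshold=0.9):
    """Return (outliers, median) for per-second throughput samples.

    A sample counts as an outlier when it falls below `threshold`
    times the median -- an arbitrary cutoff chosen for illustration.
    """
    med = statistics.median(samples_mb_s)
    outliers = [s for s in samples_mb_s if s < threshold * med]
    return outliers, med

# Hypothetical per-second MB/s samples with two performance dips.
samples = [1210, 1195, 1205, 640, 1198, 1202, 880, 1200]
outliers, med = flag_outliers(samples)
print(med, outliers)  # → 1199 [640, 880]
```

A system with a cleaner write path, like the QSAN here, simply produces a shorter outlier list for the same workload.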
Sequential Mixed Workloads
With several workers operating on the NAS at the same time, the load shifts to mixed workloads as data comes and goes simultaneously. The QSAN's powerful processor lets the system take advantage of Ethernet's full-duplex nature. The combined throughput surpasses the 1,200 MB/s limit, which really only applies per direction; Ethernet can push data both ways at once, and that raises the throughput ceiling.
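The arithmetic behind that ceiling is straightforward: a 10GbE link carries a nominal 10 Gb/s in each direction, which works out to roughly 1,250 MB/s per direction and twice that in aggregate. This back-of-the-envelope sketch uses the nominal line rate only and ignores protocol overhead, so real-world figures land a bit lower, in line with the ~1,200 MB/s observed here.

```python
# Nominal throughput ceiling for a 10GbE link (back-of-the-envelope,
# ignoring Ethernet/TCP/SMB overhead).
LINK_GBPS = 10  # line rate per direction

def ceiling_mb_s(gbps: float) -> float:
    """Convert a nominal line rate in Gb/s to MB/s (1 MB = 10^6 bytes)."""
    return gbps * 1000 / 8

per_direction = ceiling_mb_s(LINK_GBPS)  # one direction only
full_duplex = 2 * per_direction          # ingress + egress combined

print(per_direction, full_duplex)  # → 1250.0 2500.0
```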
We hoped to see some extra performance from the two cache configurations with the XN8012R, but the SSD cache doesn't improve sequential performance.
Random Read Performance
The two cache configurations immediately increase random read performance in the XN8012R. Even without the flash drives, the system performs very well compared to systems running only EXT4 or BTRFS file systems. The Thecus, configured with EXT4, rides the floor of the chart in this test and fails to scale as the workload increases, the opposite of the QSAN system.
Recall that the read-cache configuration uses an NVMe SSD, while the read/write cache configuration moves the NVMe drives to the write cache and draws its read cache from four Seagate IronWolf 110 SATA SSDs. That difference in read-cache technology is why we see such a large gap in the random read test, and it carries through several other tests as well.
Random Write Performance
We expected more from the read/write cache configuration in this series of charts. The Micron 9300 Series is a wicked-fast NVMe SSD, but the XN8012R doesn't take full advantage of that power when writing random data.
Random Mixed Workloads
With the random mixed workload test, we see more separation between the read and write cache configuration and the base configuration with just the twelve HDDs. The read-only cache with the NVMe SSD walks over the other configurations in many of the mixes.
Last updated: Sep 24, 2019 at 12:29 am CDT