The Bottom Line
Pros
- Up to 2.8 million random read IOPS
- Low queue depth performance
- Power efficiency
Cons
- None
Should you buy it?
Introduction and Drive Details
The Haishen5 H5100 series is DapuStor's high-performance PCIe flagship, and one we are already familiar with. Our first encounter with the China-based storage company's flagship performance offering came via the 7.68TB U.2 model we reviewed back in February 2024. That drive proved to be the top performer in its class at low queue depth random read transactions, or performance where it matters most, most of the time. At the time we crowned it "The Random Read Champion", even though it could not achieve its quoted 2.8 million random read IOPS on the top end.
That was almost a year ago, and a lot has changed since then. In the span of less than a year, Artificial Intelligence and Machine Learning have exploded, shifting where storage performance matters most as GPU-directed storage displaces conventional CPU-directed storage. Before the AI revolution took hold, low queue depth performance was the most important performance metric, but that has changed dramatically. With ML and AI we now commonly see queue depths in the thousands, making top-end performance just as important as low-end performance.
With this new reality setting in, DapuStor has augmented its Haishen5 H5100 Series SSDs with firmware that enhances high queue depth performance. The new firmware delivers improved performance across all queue depths, stretching into the thousands. This means we should now have no issue attaining the drive's quoted 2,800K random read IOPS, a full 400K IOPS more than was previously attainable.
DapuStor touts its PCIe Gen5 eSSD as ideally suited to meet the demands of datacenters across different industries, including IT, internet, finance, operators, smart manufacturing, AI and ML, as well as the energy industry.
The Haishen5 H5100 Series supports key features including NVMe 2.0, NVMe MI 1.1, OCP 2.5, TCG OPAL 2.0 security standards, NVMe Sanitize, Secure Boot, hot-swapping, online updates, out-of-band updates, multi-namespace support, end-to-end data protection, power loss protection, full-path data protection, T10 DIF/DIX, WRR, Flash RAID 2.0, Latency Monitor, DSM, SMART, telemetry, device power management, atomic write, over-temperature protection, universal clock (RefClk), multiple sector formats (VSS), multi-stream, NAND dynamic offset tuning, FDP, SGL, CMB, MDTS, and more.
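Many of those features can be verified from the host side. For Linux users curious to confirm what their own sample reports, here is a minimal sketch using nvme-cli; the device path and the specific capability bits we check are illustrative assumptions on our part, not DapuStor-provided tooling:

```python
# Hypothetical sketch: confirming a few advertised NVMe features on a Linux
# host with nvme-cli installed. /dev/nvme0 is an assumed device path.
import json
import subprocess

def nvme(*args: str) -> dict:
    """Run an nvme-cli command and return its parsed JSON output."""
    out = subprocess.run(
        ["nvme", *args, "--output-format=json"],
        check=True, capture_output=True, text=True,
    ).stdout
    return json.loads(out)

ctrl = nvme("id-ctrl", "/dev/nvme0")

# ONCS bit 2 indicates Dataset Management (DSM) support per the NVMe spec.
print("DSM supported:", bool(ctrl["oncs"] & (1 << 2)))
# SANICAP is nonzero when at least one sanitize operation is supported.
print("Sanitize supported:", ctrl["sanicap"] != 0)
# NN is the maximum number of namespaces the controller supports.
print("Max namespaces:", ctrl["nn"])
```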
The Haishen5 H5100 variant we chose for this review is the 3.84TB E1.S model. We chose it because the E1.S form factor is ideally suited for AI servers and is quickly becoming the favored form factor for GPU-directed storage. The E1.S model is best suited for read-intensive applications, as it has reduced write performance in comparison with the U.2 and E3.S variants due to constraints inherent to the E1.S form factor.
Specs/Comparison Products
Item | Details |
---|---|
Model | DapuStor H5100 3.84TB |
MSRP | N/A |
Model Number | DPHV5504T0 |
Interface | PCIe Gen5 x4 |
Form Factor | E1.S |
Sequential BW | Up to 14,000 MB/s |
Random IOPS | Up to 2,800K IOPS |
Warranty | 5-Year Limited |
DapuStor Haishen5 H5100 3.84TB NVMe PCIe Gen5 x4 E1.S SSD
DapuStor currently offers its Haishen5 series at capacity points ranging from 3.2TB to 30.72TB across three form factors: 2.5-inch U.2, E1.S, and E3.S. The drive we have in hand is the E1.S model, controlled by Marvell's 16-channel Bravera SC5 and arrayed with 162-layer Kioxia BiCS6 eTLC flash, running DapuStor's customized firmware. These SSDs are compatible with major operating systems such as RHEL, SLES, CentOS, Ubuntu, Windows Server, and VMware ESXi.
Test System Specs & Enterprise Testing Methodology
Enterprise SSD Test System
Item | Details |
---|---|
Motherboard | ASUS Pro WS W790E-SAGE SE |
CPU | Intel Xeon w7-2495X |
GPU | GIGABYTE GeForce GTX 1650 |
Cooler | Alphacool Eissturm Hurricane Copper 45 |
RAM | Micron DDR5-4800 RDIMM |
Power Supply | be quiet! Dark Power Pro 12 1200W |
Case | PrimoChill's Praxis Wetbench |
OS | Ubuntu 24.04.1 LTS |
Prior to the AI revolution, datacenter SSDs' normal operating range would typically never exceed QD32. With AI data pipeline storage being directed by GPUs, high queue depth performance has become paramount. Queue depths in the thousands are now commonplace, which is why we've changed our test platform, methodology, and operating system. Our charted upper queue depth range has been revised from QD256 to QD4096 for random data, and up to QD1024 for sequential testing.
Testing Methodology
TweakTown strictly adheres to industry-accepted enterprise solid state storage testing procedures. Each test we perform repeats the same sequence of the following steps (a scripted sketch of this flow follows the list):
- Secure Erase SSD
- Write the entire capacity of the SSD twice (2 loops) with 128KB sequential write data, then seamlessly transition to the next step (sequential testing skips step 3)
- Precondition SSD by filling the drive twice with 4K or 8K random writes
- Run the test-specific workload with a 30-second ramp-up, for 5 minutes at each measured queue depth, and record the average result
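Our in-house tooling is custom, but the flow maps cleanly onto fio. Here is a rough sketch of the final step as a scripted queue depth sweep, assuming fio is installed and that /dev/nvme0n1 is a scratch drive holding no data you care about; the parameters shown are illustrative, not our exact production configuration:

```python
# Minimal sketch of a queue depth sweep: 30-second ramp, 5-minute measurement
# at each queue depth, average IOPS taken from fio's JSON output.
# WARNING: this issues raw I/O to the device; only point it at a scratch drive.
import json
import subprocess

DEVICE = "/dev/nvme0n1"  # assumption: drive under test, no data you care about

def run_fio(qd: int) -> float:
    """Run a 4K random read pass at the given queue depth, return average IOPS."""
    result = subprocess.run(
        ["fio",
         "--name=qd_sweep",
         f"--filename={DEVICE}",
         "--rw=randread", "--bs=4k",
         "--ioengine=libaio", "--direct=1",
         f"--iodepth={qd}",
         "--ramp_time=30", "--runtime=300", "--time_based",
         "--output-format=json"],
        check=True, capture_output=True, text=True,
    )
    job = json.loads(result.stdout)["jobs"][0]
    return job["read"]["iops"]

# Sweep from QD1 up to QD4096, doubling each step, as in our revised charts.
for qd in (2 ** n for n in range(13)):
    print(f"QD{qd}: {run_fio(qd):,.0f} IOPS")
```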
Benchmarks - Sequential
128K Sequential Write/Read
We precondition the drive using 100 percent sequential 128K writes at QD256, one thread, for two drive fills, recording performance data every second. We plot this data to observe the test subject's descent into steady-state and to verify steady-state is in effect as we seamlessly transition into testing at queue depth. Steady-state is achieved after one drive fill. Average steady-state 128K sequential write performance at QD256 is approximately 4,800 MB/s.
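If you'd like to reproduce that steady-state check yourself, fio can emit the same kind of per-second bandwidth data via its --write_bw_log option. Below is a rough sketch of reading such a log back, assuming fio's standard CSV log format; the log file name is hypothetical:

```python
# Rough sketch: read an fio bandwidth log (produced with --write_bw_log=seq
# --log_avg_msec=1000, one sample per second) and flag when throughput has
# settled into steady-state.
import csv

samples = []  # (elapsed_ms, bandwidth in MB/s)
with open("seq_bw.1.log", newline="") as f:   # hypothetical log file name
    for row in csv.reader(f):
        time_ms, value_kib_s = int(row[0]), int(row[1])  # fio logs bw in KiB/s
        samples.append((time_ms, value_kib_s * 1024 / 1e6))

# Crude steady-state check: last minute's average within 2% of the prior minute.
last, prior = samples[-60:], samples[-120:-60]

def avg(xs):
    return sum(bw for _, bw in xs) / len(xs)

drift = abs(avg(last) - avg(prior)) / avg(prior)
print(f"steady-state: {drift < 0.02} (drift {drift:.1%}, avg {avg(last):.0f} MB/s)")
```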
DapuStor specs its H5100 3.84TB E1.S SSD as capable of delivering up to 4,800 MB/s of 128K sequential write throughput. We are getting up to 5,000 MB/s, so the factory spec seems to be spot on, if a touch conservative. Due to its form factor, write throughput is roughly 1,400 MB/s less than that of the 7.68TB U.2 H5100 we previously reviewed.
Like its predecessor, our test subject fails to achieve the drive's quoted up-to-14,000 MB/s 128K sequential read throughput. However, our updated sample gets much closer, at 13,100 MB/s versus the 12,500 MB/s we got previously.
Benchmarks - Random
4K Random Write/Read
We precondition the drive using 100 percent random 4K writes at QD256 for two drive fills, recording performance data every second. We plot this data to observe the test subject's descent into steady-state and to verify steady-state is in effect as we seamlessly transition into testing at queue depth. Steady-state is achieved after one drive fill. Average steady-state 4K random write performance at QD256 is approximately 205K IOPS.
The 3.84TB E1.S drive is rated at up to 200K for 4K random write IOPS, and we are getting up to 210K, so again, pretty much spot on. As the chart demonstrates, our test subject is very much intended for read-intensive applications.
Performance here is exactly what the drive is designed to deliver. Our 3.84TB test subject's performance curve is tremendous, easily eclipsing that of its 7.68TB predecessor. The drive delivers the third-best overall performance curve we've attained to date from any flash-based SSD. Impressive. The move to faster flash is paying dividends.
4K 7030
The red line representing our test subject is hiding the orange line that represents its 7.68TB predecessor. Here they perform identically. Overall, we will take what our 3.84TB test subject has to offer over what Samsung's PM1743 can give us. Additionally, at queue depths of up to 8 our DapuStor contender is outperforming everything except the 7A46 and the PS1030.
4K 5050
As we add more programming into the mix, our read-intensive specialist naturally takes a performance hit, especially at queue depths of 32 or more. However, at very low queue depths our 3.84TB test subject is again the third-highest performer of the bunch.
8K Random Write/Read
We precondition the drive using 100 percent random 8K writes at QD256 for two drive fills, recording performance data every second. We plot this data to observe the test subject's descent into steady-state and to verify steady-state is in effect as we seamlessly transition into testing at queue depth. Steady-state is achieved after one drive fill. Average steady-state 8K random write performance at QD256 is approximately 105K IOPS.
We expect 8K random to track much the same as 4K random, just at a lower IOPS rate, because each transaction moves twice as much data. Here we are getting exactly half the IOPS we got at 4K. There is little variation across all measured queue depths, demonstrating excellent consistency.
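That halving is exactly what the bandwidth math predicts: steady-state random write is effectively bandwidth-bound, so IOPS multiplied by block size should stay roughly constant. A quick sanity check using our measured steady-state averages:

```python
# Back-of-the-envelope: steady-state random write is bandwidth-bound, so
# IOPS x block size should come out roughly equal for 4K and 8K.
KIB = 1024

bw_4k = 205_000 * 4 * KIB / 1e6   # ~205K IOPS at 4K -> MB/s
bw_8k = 105_000 * 8 * KIB / 1e6   # ~105K IOPS at 8K -> MB/s

print(f"4K: {bw_4k:.0f} MB/s vs 8K: {bw_8k:.0f} MB/s")  # ~840 vs ~860 MB/s
```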
Now this is what a superior performance curve looks like. As we see it, only the PS1030 is delivering a better overall curve here. Additionally, at QD128 our test subject is churning out more IOPS than any flash-based SSD we've encountered to date. Magnificent.
8K 7030
8K 7030 is representative of a common database workload. Here again, we find our test subject capable of delivering stellar performance at lower queue depths and once again outperforming Samsung's PM1743 across the board.
8K 5050
Third best at QD1 and fourth best at QD2-4, but by QD32 our write-intensive test taxes our test subject into last place.
Final Thoughts
DapuStor's firmware enhancements have kicked the H5100 Series up a notch. Its overall read performance is even better than when we crowned its predecessor the King of Reads. Even in its performance-constrained E1.S form factor, our test subject showed itself to be a superior choice for read-intensive applications, especially for low queue depth transactions.
Based on its read prowess, power efficiency, and form factor, we award DapuStor's Haishen5 H5100 3.84TB E1.S SSD our highest award: Editor's Choice.