Layers and Layers of Cache
We've spoken extensively over the last several months about the latency issues Micron has with the 128Gb flash that came to market on its 20nm process. Samsung wasn't immune to increased latency at 128Gb either; like Micron's, its second-generation 19nm flash has latency spikes that reach into the 1ms range. To help alleviate latency when writing data to the drive, Samsung implemented two new technologies.
On the drive, Samsung implemented TurboWrite, a designated area on each flash die that operates in SLC mode. By writing only one bit per cell within a predefined space limit, Samsung increased performance on the flash itself, which in turn makes for a low-cost, non-volatile cache built into the SSD.
The size of the TurboWrite area varies with drive capacity (charted above). Data is written to the pseudo-SLC layer first and later pushed to the TLC portion of the flash. Extremely large writes will fill the pseudo-SLC layer, and at that point performance drops to TLC levels. In our full disk span tests, we observed both SLC and TLC performance, but the SLC area is so large that most end users will never (or at least rarely) experience TLC-like performance.
Since each cell in SLC mode stores only one of its three bits, the 12GB SLC-like area actually consumes 36GB of TLC flash, using the 1TB model as an example. For the last two-plus years we've talked about buying more SSD capacity than you need because drives get slower as you increase the amount of data on the flash. With the 840 EVO, the gap between good performance and the worst-case scenario is large, so you really do need to purchase a larger drive or work at keeping the volume of data on the drive down.
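The TurboWrite behavior described above can be sketched in a few lines. This is a hypothetical illustration, not Samsung's firmware: the throughput figures are made up, and only the 12GB cache size and 3:1 capacity cost come from the text.

```python
# Illustrative sketch of TurboWrite-style pseudo-SLC caching.
# Cache size and the 3-bits-per-cell ratio come from the article;
# the MB/s figures are invented for demonstration.

SLC_CACHE_GB = 12        # pseudo-SLC area on the 1TB model
BITS_PER_TLC_CELL = 3    # TLC stores 3 bits per cell; SLC mode stores 1

# Running cells in SLC mode sacrifices two of the three bits, so the
# 12GB cache consumes three times that much raw TLC flash.
raw_tlc_consumed_gb = SLC_CACHE_GB * BITS_PER_TLC_CELL

SLC_WRITE_MBPS = 500     # illustrative burst speed while the cache has room
TLC_WRITE_MBPS = 270     # illustrative steady-state speed once it is full

def write_speed(total_written_gb: float) -> int:
    """Effective write speed for the next chunk of a sustained transfer."""
    # Writes land in the pseudo-SLC area until it fills; very large
    # transfers spill over and proceed at TLC speed.
    return SLC_WRITE_MBPS if total_written_gb < SLC_CACHE_GB else TLC_WRITE_MBPS

print(raw_tlc_consumed_gb)   # 36
print(write_speed(4))        # 500 (still inside the cache)
print(write_speed(20))       # 270 (cache exhausted)
```

The takeaway matches the full-span results: a burst smaller than the cache never sees TLC speed, which is why most users rarely hit the slow path.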
To put it bluntly, RAPID Mode changes everything. That doesn't mean the software is best used by everyone or in every environment, but a majority of consumers will benefit from the technology. RAPID essentially improves the user experience by increasing the perceived performance of the SSD. With the 840 EVO, performance virtually doubles for a period of time. That period is determined by how fast your DRAM is: 1333MHz DRAM will sustain the boost longer than 2400MHz DRAM, because the limiting factor is the cache's capacity and not its speed, and slower DRAM simply takes longer to fill it. When flushing data from the cache to the SSD, the SSD is the bottleneck, so regardless of how fast your DRAM is, the flush takes place at the speed of the drive.
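The two behaviors described here, writes absorbed at DRAM speed until the cache fills and a flush bottlenecked by the drive, can be modeled with a toy write-back cache. The class and all numbers below are hypothetical; this is not Samsung's implementation, only a sketch of the caching pattern.

```python
# Toy write-back cache in the spirit of RAPID Mode. Class name, sizes,
# and speeds are illustrative assumptions, not Samsung's code.

class WriteBackCache:
    def __init__(self, capacity_mb: int, drive_write_mbps: float):
        self.capacity_mb = capacity_mb          # RAPID caps this at 1GB
        self.used_mb = 0.0
        self.drive_write_mbps = drive_write_mbps

    def write(self, size_mb: float) -> str:
        """Absorb a write into DRAM if room remains, else hit the drive."""
        if self.used_mb + size_mb <= self.capacity_mb:
            self.used_mb += size_mb
            return "cached at DRAM speed"       # the perceived boost
        return "written at drive speed"         # cache full: SSD is the path

    def flush_time_s(self) -> float:
        """Flushing is bottlenecked by the SSD, not by DRAM speed."""
        return self.used_mb / self.drive_write_mbps

cache = WriteBackCache(capacity_mb=1024, drive_write_mbps=500)
print(cache.write(800))      # cached at DRAM speed
print(cache.write(400))      # written at drive speed (would exceed 1GB)
print(cache.flush_time_s())  # 1.6 seconds to drain 800MB at 500MB/s
```

Note that `flush_time_s` depends only on the drive's write speed, which is the point made above: faster DRAM changes how quickly the boost window fills, not how quickly it drains.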
First included in Samsung's Magician 4.2 software, RAPID uses up to 25% of your DRAM, but is limited to just 1GB of capacity. I think nearly everyone reading TweakTown has 4GB or more of system memory, so the 1GB cap applies almost universally. Sadly, end users cannot change the amount of DRAM used by RAPID. Those with only 4GB of system RAM may not want to dedicate 25% to RAPID, while those with 16GB or 32GB (hurray for two years of low-cost DDR3!) might benefit from more resources dedicated to the technology.
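The allocation rule as described works out like this; the function name is mine, but the math follows the 25%-with-a-1GB-cap rule stated above.

```python
# RAPID's allocation rule as described in the text:
# 25% of system DRAM, capped at 1GB. Function name is hypothetical.

def rapid_cache_mb(system_ram_gb: float) -> float:
    """Return the DRAM cache size in MB under the 25%/1GB rule."""
    return min(0.25 * system_ram_gb * 1024, 1024)

print(rapid_cache_mb(4))    # 1024.0 -> the 1GB cap already applies at 4GB
print(rapid_cache_mb(32))   # 1024.0 -> still 1GB, despite abundant DRAM
```

This makes the complaint concrete: a 32GB system surrenders only about 3% of its memory yet gets no larger cache than a 4GB system giving up a full quarter.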
There is a dark side to using DRAM black magic to cache reads and writes to and from the SSD: host power failure. Although RAPID doesn't permanently store data in volatile DRAM, data does reside there for a short period. The part that worries me is written data, and how long it sits in DRAM before being flushed to the SSD. Any system power loss with data en route to the TLC portion of the NAND could mean data loss, and for some users that just isn't acceptable.