
Seagate 600 Pro 200GB 2-Drive SSD RAID Report

By: Jon Coulter | RAID in Storage | Posted: Mar 31, 2014 2:05 pm

Iometer - Disk Response

 

Version and / or Patch Used: 1.1.0

 

We use Iometer to measure disk response times. Disk response times are measured at the industry-accepted standard of 4K QD1 for both write and read. Each test is run twice consecutively for 30 seconds per run, with a 5-second ramp-up before each run. The drive/array is partitioned and attached as a secondary device for this testing.
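For readers who want a feel for what this measurement looks like in practice, below is a minimal sketch of a 4K QD1 write-response test in Python. This is not our Iometer configuration; the file path is a placeholder, the run length and ramp-up simply mirror the settings described above, and Iometer itself handles raw device access far more rigorously.

import os
import time

TEST_FILE = r"E:\qd1_test.bin"   # placeholder path on the test drive/array
BLOCK_SIZE = 4096                # 4K transfers
DURATION = 30                    # seconds of timed I/O, matching our run length
RAMP_UP = 5                      # seconds discarded before timing begins

def qd1_write_response(path):
    """Issue 4K writes one at a time (queue depth 1) and return the average latency in ms."""
    block = os.urandom(BLOCK_SIZE)
    latencies = []
    with open(path, "wb", buffering=0) as f:
        start = time.perf_counter()
        while time.perf_counter() - start < RAMP_UP + DURATION:
            t0 = time.perf_counter()
            f.write(block)
            os.fsync(f.fileno())              # force each write out to the device
            t1 = time.perf_counter()
            if t0 - start >= RAMP_UP:         # ignore I/Os issued during the ramp-up window
                latencies.append((t1 - t0) * 1000.0)
    return sum(latencies) / len(latencies)

if __name__ == "__main__":
    print(f"Average 4K QD1 write response: {qd1_write_response(TEST_FILE):.3f} ms")

A read-response version would open an existing test file and time 4K reads at random offsets instead of writes.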

 

Write Response

 

[Image: seagate_600_pro_200gb_2_drive_ssd_raid_report_27]

 

Read Response

 

[Image: seagate_600_pro_200gb_2_drive_ssd_raid_report_28]

 

Average Disk Response

 

[Image: seagate_600_pro_200gb_2_drive_ssd_raid_report_29]

 

Write response times benefit most from RAID 0 because of write caching, while read response times show a slight latency increase for an array versus a single drive. Our write response times are excellent, second only to our EVO array. Keep in mind, though, that our test operates within the EVO array's emulated-SLC "TurboWrite" layer, so an EVO array will only deliver this kind of write response as long as the data being transferred fits within that layer.

 

 

DiskBench - Directory Copy

 

Version and / or Patch Used: 2.6.2.0

 

We use DiskBench to time a 28.6GB block (9,882 files in 1,247 folders) of mostly incompressible random data as it's transferred from our OS array to our test drive/array. We then read from a 6GB zip file that's part of our 28.6GB data block to determine the test drive/array's read transfer rate. The system is restarted prior to the read test to clear any cached data, ensuring an accurate test result. This is a pure transfer test; no workload is running simultaneously.
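DiskBench handles the timing for us, but the underlying calculation is simply bytes moved divided by elapsed time. A rough stand-in for the write-transfer portion might look like the sketch below; the source and destination paths are placeholders, and a plain shutil copy will not reproduce DiskBench's exact figures.

import os
import shutil
import time

SRC = r"C:\TestData"   # placeholder for the 28.6GB data block on the OS array
DST = r"E:\TestData"   # placeholder destination on the test drive/array (must not exist yet)

def dir_size(path):
    """Total size in bytes of every file under path."""
    return sum(os.path.getsize(os.path.join(root, name))
               for root, _, files in os.walk(path)
               for name in files)

start = time.perf_counter()
shutil.copytree(SRC, DST)                 # copy the whole directory tree
elapsed = time.perf_counter() - start

mb_copied = dir_size(DST) / 1_000_000     # decimal MB (an assumption; switch to MiB if preferred)
print(f"Copied {mb_copied:,.0f} MB in {elapsed:.1f} s -> {mb_copied / elapsed:.0f} MB/s")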

 

Write Transfer Rate

 

[Image: seagate_600_pro_200gb_2_drive_ssd_raid_report_30]

 

485 MB/s is a good result. We expected to see a little better performance, but then again, this is just a transfer test with no workload involved, so it doesn't exactly fall into the 600 Pro's wheelhouse.
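As a quick sanity check on that figure (assuming decimal megabytes): 28.6GB ÷ 485 MB/s ≈ 59 seconds, so the full 28.6GB directory copy completes in roughly a minute on this array.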

 

Read Transfer Rate

 

[Image: seagate_600_pro_200gb_2_drive_ssd_raid_report_31]

 

669 MB/s is one of the slower transfer rates we've achieved when running this test. It's still good, but it's nothing to write home about.

 

[Image: seagate_600_pro_200gb_2_drive_ssd_raid_report_32]

 

This is a perfect example of a capacity-based performance advantage. Our 600 series array has faster overall transfer rates due to its larger capacity, despite being the slower of the two arrays in a workload setting.

 

 

Futuremark PCMark 8 Extended - Consistency Test

 

Heavy Usage Model:

 

We consider PCMark 8's consistency test our heavy usage model test. This is the usage model most enthusiasts, gamers, and professionals fall into. If you do a lot of gaming, audio/video processing, rendering, or have workloads of this nature, this test will be most relevant to you.

 

PCMark 8 has built-in, command-line-executed storage testing. The PCMark 8 Consistency Test measures the performance consistency and degradation tendency of a storage system.

 

The storage test workloads are repeated. Between each repetition, the storage system is bombarded with a write workload that degrades drive performance. In the first part of the test, the cycle continues until a steady, degraded level of performance has been reached (Steady State).

 

In the second part, the system's recovery is tested by allowing it to idle and measuring performance at long intervals (TRIM).

 

The test reports the performance level at the start, the degraded steady-state, and the recovered state, as well as the number of iterations required to reach the degraded state and the recovered state.

 

We feel Futuremark's Consistency Test is the best test ever devised to show the true performance of solid state storage in a heavy usage scenario. This test takes on average 13 to 17 hours to complete and writes somewhere between 450GB and 7000GB of test data depending on the drive(s) being tested. If you want to know what an SSD's performance is going to look like after a few months or years of heavy usage, this test will show you.

 

Here's a breakdown of Futuremark's Consistency Test:

 

Precondition phase:

1. Write the drive sequentially through to the reported capacity with random data.

2. Write the drive through a second time (to take care of over-provisioning).

 

Degradation phase:

1. Run writes of random size between 8*512 and 2048*512 bytes on random offsets for 10 minutes.

2. Run performance test (one pass only).

3. Repeat steps 1 and 2 eight times, increasing the duration of the random writes by 5 minutes on each pass.

 

Steady state phase:

1. Run writes of random size between 8*512 and 2048*512 bytes on random offsets for 50 minutes.

2. Run performance test (one pass only).

3. Repeat steps 1 and 2 five times.

 

Recovery phase:

1. Idle for 5 minutes.

2. Run performance test (one pass only).

3. Repeat steps 1 and 2 five times.
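To make the structure of those phases easier to follow, here is a rough sketch of the loop in Python. This is not Futuremark's actual code: write_random_data() and performance_test() are hypothetical stubs standing in for raw device writes and the storage trace playback, and the capacity value is only an example. Note that 8*512 and 2048*512 bytes work out to 4KB and 1MB transfers.

import random
import time

SECTOR = 512
MIN_IO = 8 * SECTOR            # 4,096 bytes (4KB)
MAX_IO = 2048 * SECTOR         # 1,048,576 bytes (1MB)
CAPACITY = 200 * 10**9         # example reported capacity used to bound the offsets

def write_random_data(offset, size):
    """Stub standing in for a raw random write of `size` bytes at `offset`."""
    pass

def performance_test():
    """Stub standing in for one pass of the PCMark 8 storage trace playback."""
    pass

def random_write_pass(minutes):
    """Bombard the drive with writes of random size at random offsets for `minutes`."""
    end = time.time() + minutes * 60
    while time.time() < end:
        size = random.randint(MIN_IO // SECTOR, MAX_IO // SECTOR) * SECTOR
        offset = random.randrange(0, CAPACITY - size, SECTOR)
        write_random_data(offset, size)

# Degradation phase: 8 passes, starting at 10 minutes and growing by 5 minutes each pass.
for i in range(8):
    random_write_pass(10 + 5 * i)
    performance_test()

# Steady state phase: 5 passes of 50 minutes each.
for _ in range(5):
    random_write_pass(50)
    performance_test()

# Recovery phase: idle for 5 minutes (giving TRIM a chance to work), then re-test, 5 times.
for _ in range(5):
    time.sleep(5 * 60)
    performance_test()

Counting the performance passes, 8 + 5 + 5 = 18, which is where the 18 trace iterations referenced in the Disk Busy Time and Total Access Time sections come from.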

 

Storage Bandwidth

 

PCMark 8's Consistency test provides a ton of data output that we can use to judge a drive's performance. Final calculated storage bandwidth results are given two ways: Worst Bandwidth and Best Bandwidth. Worst Bandwidth (Steady State) is the lowest bandwidth measured once the drive reaches a steady state during the test. Best Bandwidth (TRIM) is the best bandwidth achieved during the recovery phase.

 

[Image: seagate_600_pro_200gb_2_drive_ssd_raid_report_33]

 

We consider steady state bandwidth (the orange bar) the result that carries the most weight in ranking a drive's performance. The reason we weight steady state performance (Worst Bandwidth) more heavily than TRIM (Best Bandwidth) is that when you are running a heavy-duty workload, TRIM will not be occurring while that workload is executing. TRIM performance (the blue bar) is our second most important consideration when ranking a drive's performance.

 

Trace-based consistency testing is where SSDs like our 600 Pro excel and SSDs like the 840 Pro crumble. As you can see, our 600 Pro array's enterprise pedigree allows it to outperform the rest of the arrays on our chart in a steady state. Notice how little performance drop-off the 600 Pro array shows in a steady state. This is an example of how over-provisioning benefits performance. Our non-over-provisioned Q Series Pro array performs well, but as you can see, without the benefit of over-provisioning, its performance drops off much more significantly in a steady state.

 

Disk Busy Time

 

Disk Busy Time is the amount of time the disk spends actively working. We measure the total time the disk is busy while replaying all 18 trace iterations.

 

[Image: seagate_600_pro_200gb_2_drive_ssd_raid_report_34]

 

[Image: seagate_600_pro_200gb_2_drive_ssd_raid_report_35]

 

Our 600 Pro array is able to spend less time working than our Q Series Pro array while running the same tasks. Over-provisioning gives our 600 Pro the advantage once again. Compare the disk busy time of a single 600 Pro to our two-drive 600 Pro array: the single drive is busy for more than 2.5 times as long as the array. This better-than-linear scaling shows the benefit of write caching. Our Disk Busy Time by category chart looks out of whack due to the poor performance of our 840 Pro array.

 

Total Access Time

 

Access time is the delay, or latency, between a request to the storage system and that request completing or the requested data being returned. In short, it's how long it takes to get data back from the disk. We measure the total time the disk is being accessed while replaying all 18 trace iterations.

 

[Image: seagate_600_pro_200gb_2_drive_ssd_raid_report_36]

 

[Image: seagate_600_pro_200gb_2_drive_ssd_raid_report_37]

 

Coming as no surprise, our 600 Pro array easily wins this test, too. We're starting to see why a drive designed for an enterprise workload provides superior performance in a steady state. Clearly, the 840 Pro was not designed to excel when bombarded with heavy duty random workloads.

 

Data Written

 

We measure the total amount of random data that the drives are capable of writing during the degradation phases of the consistency test. The total combined time that degradation data is written to the drive is 220 minutes. This can be very telling. The better the drive can process a continuous stream of random data, the more data will be written.
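That 220-minute figure follows from the degradation-phase schedule laid out earlier: eight passes running 10, 15, 20, 25, 30, 35, 40, and 45 minutes, which sum to exactly 220 minutes (10 + 15 + 20 + 25 + 30 + 35 + 40 + 45 = 220).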

 

[Image: seagate_600_pro_200gb_2_drive_ssd_raid_report_38]

 

In my opinion, nothing I've ever seen shows the benefit of over-provisioning more clearly than this chart does. In the same amount of time, our 600 Pro array was able to write over three times more random data than our Q Series Pro array, despite their nearly identical steady state storage bandwidth. Even a single 600 Pro can write almost twice the amount of data as the Q Series Pro array.

 

The 600 Pro is over-provisioned by 27 percent, so a large part of its NAND array is dedicated to on-the-fly garbage collection, resulting in steady state random write performance that's vastly superior to drives with little or no over-provisioning. Our 840 Pro array is again gasping for air, unable to write even one-tenth of the random data in a steady state that our 600 Pro array can.
