public inbox for gentoo-user@lists.gentoo.org
* [gentoo-user] Comparing RAID5/6 rebuild times, SATA vs SAS vs SSD
From: Tanstaafl @ 2013-09-20 11:20 UTC
  To: gentoo-user

Hi all,

Being that one of the big reasons I stopped using RAID5/6 was the 
rebuild times - they can be DAYS for a large array - I am very curious if 
anyone has done, or knows of anyone who has done, any tests comparing 
rebuild times when using slow SATA, faster SAS and the fastest SSD drives.

Of course, this question is moot if using ZFS RAID, but not every 
situation or circumstance will allow it...

Thanks



* Re: [gentoo-user] Comparing RAID5/6 rebuild times, SATA vs SAS vs SSD
From: Paul Hartman @ 2013-09-20 22:43 UTC
  To: Gentoo User

On Fri, Sep 20, 2013 at 6:20 AM, Tanstaafl <tanstaafl@libertytrek.org> wrote:
> Hi all,
>
> Being that one of the big reasons I stopped using RAID5/6 was the rebuild
> times - they can be DAYS for a large array - I am very curious if anyone has
> done, or knows of anyone who has done, any tests comparing rebuild times when
> using slow SATA, faster SAS and the fastest SSD drives.
>
> Of course, this question is moot if using ZFS RAID, but not every situation
> or circumstance will allow it...

I don't have an all-out comparison, but here is at least a data point for
you with somewhat cheap and recent hardware. I have a new (2-month-old)
home RAID6 made out of:

6 Western Digital Red 3TB SATA drives
LSI 9200-8e SAS JBOD controller
Sans Digital TR8X+B SAS/SATA enclosure w/ SFF-8088 cables

I created a standard Linux software RAID6 using mdadm, resulting in
11TB of usable space (4 data drives, 2 parity).
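
For reference, creating it was nothing exotic, just a plain mdadm run
roughly like this (the device names below are only examples, not my
actual layout):

  # create a 6-member RAID6 md array (member names are examples)
  mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[b-g]
  # then a filesystem on top as usual, e.g.
  mkfs.ext4 /dev/md0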

A couple weeks ago one of the drives died. I hot-swap replaced it with
a new one (with no down-time) and the rebuild took exactly 10 hours.
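
The swap itself was just the usual mdadm procedure, roughly (device
names again only examples):

  # mark the dead disk failed and drop it from the array
  mdadm /dev/md0 --fail /dev/sdc --remove /dev/sdc
  # after hot-swapping in the replacement, add it back
  mdadm /dev/md0 --add /dev/sdc
  # watch rebuild progress and the estimated finish time
  cat /proc/mdstat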

Under normal operation, the speed of the array for contiguous
read/writes is about 600MB/sec, which is faster than my SSD (single
drive, not RAIDed).
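
If you want to reproduce that kind of number, a crude sequential read
test against the array device is enough, e.g.:

  # read 4 GB sequentially from the md device, bypassing the page cache
  dd if=/dev/md0 of=/dev/null bs=1M count=4096 iflag=direct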

FWIW



* Re: [gentoo-user] Comparing RAID5/6 rebuild times, SATA vs SAS vs SSD
From: Tanstaafl @ 2013-09-21 16:56 UTC
  To: gentoo-user

On 2013-09-20 6:43 PM, Paul Hartman <paul.hartman+gentoo@gmail.com> wrote:
> A couple weeks ago one of the drives died. I hot-swap replaced it with
> a new one (with no down-time) and the rebuild took exactly 10 hours.
>
> Under normal operation, the speed of the array for contiguous
> read/writes is about 600MB/sec, which is faster than my SSD (single
> drive, not RAIDed).

Thanks...

But... RAID read/writes under normal operating conditions have nothing 
whatsoever to do with REBUILD speeds/times.

Again, the reason I'm interested in this is that if the rebuild times are 
'blindingly fast' (as compared to the times for SATA or even fast SAS 
drives - i.e., 1 hour vs your 10 hours), then maybe a RAID6 with SSDs is 
back in the realm of doable, since you don't lose 50% of available 
storage with RAID6...



* Re: [gentoo-user] Comparing RAID5/6 rebuild times, SATA vs SAS vs SSD
From: Paul Hartman @ 2013-09-23  5:05 UTC
  To: Gentoo User

On Sat, Sep 21, 2013 at 11:56 AM, Tanstaafl <tanstaafl@libertytrek.org> wrote:
> On 2013-09-20 6:43 PM, Paul Hartman <paul.hartman+gentoo@gmail.com> wrote:
>>
>> A couple weeks ago one of the drives died. I hot-swap replaced it with
>> a new one (with no down-time) and the rebuild took exactly 10 hours.
>>
>> Under normal operation, the speed of the array for contiguous
>> read/writes is about 600MB/sec, which is faster than my SSD (single
>> drive, not RAIDed).
>
>
> Thanks...
>
> But... RAID read/writes under normal operating conditions have nothing
> whatsoever to do with REBUILD speeds/times.

Of course, I just added that as additional info.

Doing the numbers, my actual rebuild speed was roughly 83MB/sec average.
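
That is just one member's ~3TB capacity divided by the wall-clock
rebuild time:

  # ~3,000,000 MB written to the replacement disk over 10 hours, in MB/sec
  echo $(( 3000000 / (10 * 3600) ))    # -> 83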

> Again, the reason I'm interested in this is that if the rebuild times are
> 'blindingly fast' (as compared to the times for SATA or even fast SAS drives
> - i.e., 1 hour vs your 10 hours), then maybe a RAID6 with SSDs is back in the
> realm of doable, since you don't lose 50% of available storage with RAID6...

Mathematically, a 256GB drive will take 1/12th as long as a 3TB drive
with all other factors being equal. Using the speed of my rebuild
above, that would require less than an hour to rebuild a 256GB drive.
I can only imagine SSDs or higher-end HDDs would be even faster. So I
think your goal of a 1-hour rebuild is a definite possibility,
depending on your capacity needs and CPU/controller capabilities.
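
A quick back-of-the-envelope check, reusing my ~83MB/sec figure:

  # 256 GB resynced at ~83 MB/sec, expressed in minutes
  echo $(( 256000 / 83 / 60 ))    # -> ~51 minutes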

You may need to tweak the RAID speed limits and cache settings, disable
NCQ, increase read-ahead, etc. to get the maximum speed, depending on
your particular hardware. There are dozens of pages online explaining
how to speed up RAID syncs like that. Many people report seeing a 5x
speed increase after making those adjustments, compared to the Linux
default values.
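
For Linux md the usual knobs look something like the following; the
values and device names are only examples, and the right numbers depend
on your RAM, controller and disks:

  # raise the kernel's resync speed floor/ceiling (values in KB/sec)
  echo  50000 > /proc/sys/dev/raid/speed_limit_min
  echo 500000 > /proc/sys/dev/raid/speed_limit_max
  # enlarge the RAID5/6 stripe cache (in pages per member disk; costs RAM)
  echo 8192 > /sys/block/md0/md/stripe_cache_size
  # increase read-ahead on the array device (in 512-byte sectors)
  blockdev --setra 65536 /dev/md0
  # effectively disable NCQ on a member disk by forcing its queue depth to 1
  echo 1 > /sys/block/sdb/device/queue_depth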

I searched for, but could not find, definitive references about SSD
RAID build times. I found a lot of tweaker/overclocker type of sites
bragging about their 3000MB/sec SSD RAID read speeds but no mention of
replacing a failed drive.

With SSDs, writes are considered the enemy, so using a pair of brand-new
SSDs in a mirrored RAID is considered bad practice: both drives will
see the same number of writes, so they will both reach their wear
limit at around the same time. In that case it's safer to replace one
of the working drives early, perhaps rotating a few spare working
drives in and out every so often, to keep the two sides of the mirror
at different ages or from different manufacturing batches.

Using SSDs in RAID5/6 also causes extra writes for the parity, of
course, but not as many as with mirroring, and you get the speed
benefits from striping. One good thing with SSDs is that when they
fail, it tends to be on a write, so the chance of one failing to read
while rebuilding a RAID5/6 should be very small -- for HDDs that is
the biggest fear during a rebuild.

Enterprise SSDs can have 10x as many rated write/erase cycles as
consumer SSDs (for nearly 10x the price), but even the cheapest SSD
with a 3000-write-cycle lifetime should last you a hundred years if you
write less than a few dozen GB a day to it. In a RAID5/6 you're
spreading those writes out, so it should last even longer, even with
the parity overhead.
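
To put rough numbers on that (a hypothetical drive, ignoring write
amplification, just to show the scale):

  # 256 GB drive, 3000 P/E cycles, ~20 GB written per day -> lifetime in years
  echo $(( 256 * 3000 / 20 / 365 ))    # -> ~105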

If your RAID setup does not support passing the TRIM command to your
SSDs, it could cut your speed and lifetime down significantly.
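
You can check whether discard actually makes it through the md layer
to the drives with something like this (device and mount point are
only examples):

  # non-zero DISC-GRAN/DISC-MAX on the md device means TRIM is passed through
  lsblk --discard /dev/md0
  # manually trim free space on a mounted filesystem
  fstrim -v /mnt/array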

In the end it depends on your particular requirements and use case...
as always. :)

Here is a calculator that lets you plug in different drive sizes and
rebuild speeds to see how long it will take, along with some other
info:
https://www.memset.com/tools/raid-calculator/

Good luck!


