From: Wol <antlists@youngman.org.uk>
To: gentoo-user@lists.gentoo.org
Subject: Re: [gentoo-user] NAS and replacing with larger drives
Date: Wed, 21 Dec 2022 20:03:36 +0000 [thread overview]
Message-ID: <49c8e857-0ed9-0dfd-341b-af955635337c@youngman.org.uk> (raw)
In-Reply-To: <Y6KljYAZUxBmQf3i@tp>
On 21/12/2022 06:19, Frank Steinmetzger wrote:
> Am Wed, Dec 21, 2022 at 05:53:03AM +0000 schrieb Wols Lists:
>
>> On 21/12/2022 02:47, Dale wrote:
>>> I think if I can hold out a little while, something really nice is going
>>> to come along. It seems there is a good bit of interest in having a
>>> Raspberry Pi NAS that gives really good performance. I'm talking a NAS
>>> that is about the same speed as an internal drive. Plus the ability to
>>> use RAID and such. I'd like to have a 6 bay with 6 drives setup in
>>> pairs for redundancy. I can't recall what number RAID that is.
>>> Basically, if one drive fails, another copy still exists. Of course,
>>> two independent NASs would be better in my opinion. Still, any of this
>>> is progress.
>>
>> That's called either Raid-10 (linux), or Raid-1+0 (elsewhere). Note that 1+0
>> is often called 10, but linux-10 is slightly different.
>
> In layman’s terms, a stripe of mirrors. Raid-1 is the mirror, Raid-0 a (JBOD)
> pool. So mirror + pool = mirrorpool, hence the 1+0 → 10.
Except that Linux raid-10 is not a stripe of mirrors. Instead, each block
is saved to two different drives. (Or 3, or more, so long as you have
more drives than copies.)
Linux will happily give you a 2-copy mirror across 3 drives - 3x6TB
drives will give you 9TB useful storage ...
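A quick sanity check of that arithmetic (just the capacity formula in Python; the array itself would be created with something like "mdadm --create --level=10 --layout=n2 --raid-devices=3 ..." for a 2-copy "near" layout):

```python
# Usable capacity of a Linux raid-10 with n drives and k copies of each
# block: total raw space divided by the number of copies. Unlike classic
# 1+0, n does not need to be a multiple of k.
def raid10_usable_tb(drive_tb, n_drives, copies):
    return drive_tb * n_drives / copies

# The example above: three 6 TB drives, 2 copies -> 9 TB.
print(raid10_usable_tb(6, 3, 2))  # 9.0
```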
>
>> I'd personally be inclined to go for raid-6. That's 4 data drives, 2 parity
>> (so you could have an "any two" drive failure and still recover).
>> A two-copy 10 or 1+0 is vulnerable to a two-drive failure. A three-copy is
>> vulnerable to a three-drive failure.
>
> At first, I had only two drives in my 4-bay NAS, which were of course set up
> as a mirror. After a year, when it became full, I bought the second pair of
> drives and had long deliberations by then, what to choose. I went for raid-6
> (or RaidZ2 in ZFS parlance). With only four disks, it has the same net
> capacity as a pair of mirrors, but at the advantage that *any* two drives
> may fail, not just two particular ones. A raid of mirrors has performance
> benefits over a parity raid, but who cares for a simple Gbit storage device.
>
> With increasing number of disks, a mirror setup is at a disadvantage with
> storage efficiency – it’s always 50 % or less, if you mirror over more than
> two disks. But with only four disks, that was irrelevant in my case. On the
> plus-side, each mirror can have a different physical disk size, so you can
> more easily mix’n’match what you got lying around, or do upgrades in smaller
> increments.
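To put numbers on that efficiency point (a rough sketch assuming equal-size drives, which as you note mirrors don't strictly require):

```python
# Net capacity with n equal drives of s TB each.
def raid6_net(n, s):
    # raid-6: two drives' worth of parity, the rest is data (needs n >= 4)
    return (n - 2) * s

def mirror_pool_net(n, s):
    # pool of 2-way mirror pairs: 50 % efficiency
    return (n // 2) * s

for n in (4, 6, 8):
    print(n, raid6_net(n, 6), mirror_pool_net(n, 6))
# At 4 drives the two come out equal; from 6 drives up, raid-6 pulls ahead.
```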
>
> If I wanted to increase my capacity, I’d have to replace *all* drives with
> bigger ones. With a mirror, only the drives in one of the mirrors need
> replacing. And the rebuild process would be quicker and less painful, as
> each drive will only be read once to rebuild its partner, and there is no
> parity calculation involved. In a RAID, each drive is replaced one by one,
> and each replacement requires a full read of all drives’ payload.
If you've got a spare SATA connection or whatever, each replacement does
not need a full read of all drives: "mdadm /dev/mdX --add /dev/sdx"
followed by "mdadm /dev/mdX --replace /dev/sdy". That'll stream sdy on
to sdx, and only hammer the other drives if sdy complains ...
> With older
> drives, this is cause for some concern whether the disks may survive that.
> That’s why, with increasing disk capacities, raid-5 is said to be obsolete.
> Because if another drive fails during rebuild, you are officially screwed.
>
> Fun, innit?
>
They've always said that. Just make sure you don't have multiple drives
from the same batch, then they're less likely statistically to fail at
the same time. I'm running raid-5 over 3TB partitions ...
Cheers,
Wol