From: Rich Freeman <rich0@gentoo.org>
To: gentoo-user@lists.gentoo.org
Subject: Re: [gentoo-user] which linux RAID setup to choose?
Date: Sun, 3 May 2020 20:46:09 -0400
Message-ID: <CAGfcS_k1bC=HQRy-BgVm2-Aqfb=5tHKixVmvRzr3qLzGmw27Yg@mail.gmail.com>
In-Reply-To: <a245aab8-52d2-18f5-0859-ab3fda06341c@konstantinhansen.de>
On Sun, May 3, 2020 at 6:50 PM hitachi303
<gentoo-user@konstantinhansen.de> wrote:
>
> The only person I know who is running a really huge raid (I guess 2000+
> drives) is comfortable with some spare drives. His raid did fail and can
> fail. Data will be lost. Everything important has to be stored at a
> secondary location. But they are using the raid to store data for some
> days or weeks when a server is calculating stuff. If the raid fails they
> have to restart the program for the calculation.
So, if you have thousands of drives, you really shouldn't be using a
conventional RAID solution. Now, if you're just using RAID to refer
to any technology that stores data redundantly, that is one thing.
However, if you wanted to stick 2000 drives into a single host using
something like mdadm/zfs, or heaven forbid a bazillion LSI HBAs with
some kind of hacked-up solution for PCIe port replication plus SATA
bus multipliers/etc, you're probably doing it wrong. (Really, even
with mdadm/zfs you'd probably still need some kind of terribly
non-optimal solution for attaching all those drives to a single host.)
At that scale you really should be using a distributed filesystem. Or
you could use some application-level solution that accomplishes the
same thing on top of a bunch of more modest hosts running zfs/etc (the
Backblaze approach, at least in the past).
The most mainstream FOSS solution at this scale is Ceph. It achieves
redundancy at the host level. That is, if you have it set up to
tolerate two failures, then you can take two random hosts in the
cluster and smash their motherboards with a hammer in the middle of
operation, and the cluster will keep on working and quickly restore
its redundancy. Each host can have multiple drives, and losing any or
all of the drives within a single host counts as a single failure.
You can even do clever stuff like tell it which hosts are attached to
which circuit breakers and then you could lose all the hosts on a
single power circuit at once and it would be fine.
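To make that concrete, here is a rough sketch of how you'd express a
power-circuit failure domain in Ceph's CRUSH hierarchy. The bucket,
host, and pool names are made up for illustration; "pdu" is one of
the stock CRUSH bucket types, which maps nicely onto a power circuit:

  # tolerate two simultaneous failures: keep three copies
  ceph osd pool set mypool size 3

  # group hosts by power circuit using the stock 'pdu' bucket type
  ceph osd crush add-bucket circuit-a pdu
  ceph osd crush add-bucket circuit-b pdu
  ceph osd crush add-bucket circuit-c pdu
  ceph osd crush move circuit-a root=default
  ceph osd crush move circuit-b root=default
  ceph osd crush move circuit-c root=default
  ceph osd crush move host1 pdu=circuit-a
  ceph osd crush move host2 pdu=circuit-b
  ceph osd crush move host3 pdu=circuit-c

  # replicate across circuits rather than just hosts
  ceph osd crush rule create-replicated by-circuit default pdu
  ceph osd pool set mypool crush_rule by-circuit

With a rule like that, the three replicas of any object land on hosts
fed by different breakers, so losing one circuit costs you at most one
copy of anything.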
This also has the benefit of covering you when one of your flaky
drives causes weird bus issues that affect other drives, or one host
crashes, and so on. The redundancy is entirely at the host level, so
you're protected against a much larger number of failure modes.
This sort of solution also performs much faster, since data requests
aren't CPU/NIC/HBA-limited on any particular host. The software is
obviously more complex, but the hardware can be simpler since if you
want to expand storage you just buy more servers and plug them into
the LAN, versus trying to figure out how to cram an extra dozen hard
drives into a single host with all kinds of port multiplier games.
You can also do maintenance and just reboot an entire host while the
cluster stays online, as long as you aren't messing with them all at
once.
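A rough sketch of what such a maintenance reboot looks like (the host
name is made up, and your workflow may differ, but these are standard
Ceph admin commands):

  # tell Ceph not to rebalance while the host is briefly down
  ceph osd set noout
  ssh host1 systemctl reboot
  # once the host's OSDs rejoin (check with: ceph -s), re-enable
  ceph osd unset noout

The noout flag just suppresses the automatic re-replication that would
otherwise kick in when the host's OSDs are marked down, so a short
planned outage doesn't trigger a pile of unnecessary data movement.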
I've gone in this general direction because I was tired of having to
deal with massive cases, being limited to motherboards with 6 SATA
ports, adding LSI HBAs that require an x8 slot and often conflict
with using an NVMe drive, and so on.
--
Rich