public inbox for gentoo-user@lists.gentoo.org
From: Wol's lists <antlists@youngman.org.uk>
To: gentoo-user@lists.gentoo.org, Dale <rdalek1967@gmail.com>
Subject: Re: [gentoo-user] Hard drive storage questions
Date: Sun, 11 Nov 2018 21:41:46 +0000	[thread overview]
Message-ID: <e47f75c3-6edf-9967-66e9-595cbb117dff@youngman.org.uk> (raw)
In-Reply-To: <cb577279-9732-8d3b-1df1-e09d890ebe01@gmail.com>

On 11/11/2018 00:45, Dale wrote:
> This is a lot to think on.  Money wise, and maybe even expansion wise, I
> may go with the PCI SATA cards and add drives inside my case.  I have
> plenty of power supply since it pulls at most 200 watts and I think my
> P/S is like 700 or 800 watts.  I can also add an external SATA card or
> another USB drive to do backups with as well.  At some point though, I
> may have to build one of those little tiny systems that is basically
> nothing but SATA drive controllers and ethernet enabled.  Have that
> sitting in a closet somewhere running some small OS.  I can always just
> move the drives from my system to it if needed.

https://raid.wiki.kernel.org/index.php/What_is_RAID_and_why_should_you_want_it%3F

(disclaimer - I wrote it :-)

You've got a bunch of questions to ask yourself. Is this an amateur 
setup (it sounds like one, in that it appears to be a home server), or 
is it a professional, "money no object" setup?

Either way, if you spend good money on good disks (WD Red, Seagate 
IronWolf, etc.) then most of your investment will be good to re-purpose. 
My current 3TB drives are Barracudas - not a good idea for a 
fault-tolerant system - which is why their replacements are IronWolves.

Then, as that web page makes clear, do you want your raid/volume 
management to be separate from your filesystem - mdraid/lvm under ext4 - 
or do you want a filesystem that is hardware-aware, like zfs or xfs, or 
something like btrfs, which tries to be the latter but is better used 
as the former?
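
Purely as a sketch of the "separate layers" route (the device names are 
invented, and you'd want to check the mdadm/lvm man pages rather than 
trust commands I've typed from memory), it looks roughly like this:

   # RAID-6 across four hypothetical drives, LVM on top, ext4 on top of that
   mdadm --create /dev/md0 --level=6 --raid-devices=4 \
       /dev/sda /dev/sdb /dev/sdc /dev/sdd
   pvcreate /dev/md0
   vgcreate vg_storage /dev/md0
   lvcreate -n lv_data -l 100%FREE vg_storage
   mkfs.ext4 /dev/vg_storage/lv_data

The attraction of doing it that way is that each layer can be managed 
pretty much independently of the others; zfs and btrfs roll all of that 
into the filesystem itself.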

One thing to seriously watch out for: many filesystems are aware of the 
underlying layer even when you don't expect it. I'm not sure which 
filesystem it was, but I remember an email discussion where the 
filesystem detected it was running over mdraid and balanced itself for 
the underlying disks. The filesystem developer didn't realise that 
mdraid can add and remove disks, so the underlying structure can change, 
and the recommendation was "once you've set up the raid, if you want to 
grow your space, move it to a new raid".
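
For what it's worth, growing an md array itself is straightforward 
enough - something along these lines (hypothetical device names again, 
carrying on from the sketch above), and a reshape like this is exactly 
the sort of change that caught that filesystem out:

   # add a fifth disk and reshape the array to use it
   mdadm --add /dev/md0 /dev/sde
   mdadm --grow /dev/md0 --raid-devices=5
   # once the reshape finishes, grow whatever sits on top,
   # e.g. for the lvm/ext4 stack sketched earlier:
   pvresize /dev/md0
   lvextend -l +100%FREE /dev/vg_storage/lv_data
   resize2fs /dev/vg_storage/lv_data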

At the end of the day, there is no perfect answer, and you need to ask 
yourself what you are trying to achieve, and what you can afford.

Cheers,
Wol


Thread overview: 55+ messages
2018-11-09  1:16 [gentoo-user] Hard drive storage questions Dale
2018-11-09  1:31 ` Jack
2018-11-09  1:43   ` Dale
2018-11-09  2:04     ` Andrew Lowe
2018-11-09  2:07     ` Bill Kenworthy
2018-11-09  8:39       ` Neil Bothwick
2018-11-09  2:29 ` Rich Freeman
2018-11-09  8:17   ` Bill Kenworthy
2018-11-09 13:25     ` Rich Freeman
2018-11-09  9:02   ` J. Roeleveld
2018-11-11  0:45   ` Dale
2018-11-11 21:41     ` Wol's lists [this message]
2018-11-11 22:17       ` Dale
2018-11-09  9:24 ` Wols Lists
  -- strict thread matches above, loose matches on Subject: below --
2015-04-28  8:39 Dale
2015-04-28 14:49 ` Francisco Ares
2015-04-28 15:01 ` Alan McKinnon
2015-04-28 15:24   ` Neil Bothwick
2015-04-28 17:38     ` Rich Freeman
2015-04-28 18:11       ` Neil Bothwick
2015-04-28 18:31         ` Rich Freeman
2015-04-28 18:41           ` Neil Bothwick
2015-04-29  6:13     ` Alan McKinnon
2015-04-29  7:52       ` Neil Bothwick
2015-05-04  7:39         ` Dale
2015-05-04  7:46           ` Neil Bothwick
2015-05-04  8:13             ` Mick
2015-05-04  8:26               ` Dale
2015-05-04  8:23             ` Dale
2015-05-04 10:31               ` Neil Bothwick
2015-05-04 10:40                 ` Dale
2015-05-04 11:26                   ` Neil Bothwick
2015-05-09 10:56                     ` Dale
2015-05-09 12:59                       ` Rich Freeman
2015-05-09 14:46                         ` Todd Goodman
2015-05-09 18:16                           ` Rich Freeman
2015-05-04 11:35                 ` Rich Freeman
2015-05-04 18:42                   ` Nuno Magalhães
2015-05-05  6:41                     ` Alan McKinnon
2015-05-05 10:56                     ` Rich Freeman
2015-05-05 11:33                       ` Neil Bothwick
2015-05-05 12:05                         ` Mick
2015-05-05 12:21                           ` Neil Bothwick
2015-05-05 12:39                             ` Mick
2015-05-05 12:53                             ` Rich Freeman
2015-05-05 21:50                               ` Neil Bothwick
2015-05-05 22:21                                 ` Bill Kenworthy
2015-05-05 22:33                                   ` Bill Kenworthy
2015-05-04 10:57               ` Alan Mackenzie
2015-04-28 15:02 ` Rich Freeman
2015-05-04  7:23 ` Dale
2015-05-05  3:01   ` Walter Dnes
2015-04-27  7:41 Dale
2015-04-28 18:25 ` Daniel Frey
2015-04-28 21:23   ` Dale
