public inbox for gentoo-user@lists.gentoo.org
From: Frank Steinmetzger <Warp_7@gmx.de>
To: gentoo-user@lists.gentoo.org
Subject: Re: [gentoo-user] [OT] Using an odd number of drives in ZFS RaidZ
Date: Thu, 1 Jul 2021 01:31:44 +0200	[thread overview]
Message-ID: <YNz+4CQLCT11CncA@kern> (raw)
In-Reply-To: <d3457344-bf49-2c61-a536-c44116727214@youngman.org.uk>


On Wed, Jun 30, 2021 at 09:00:29PM +0100, antlists wrote:

> > I reached 80 % usage (which is the recommended maximum for ZFS) and am
> > now evaluating my options for the coming years.
> > 1) Reduce use of space by re-encoding. My payload is mainly movies, among
> >     which are 3 TB of DVDs which can be shrunk by at least ⅔ by re-encoding.
> >     → this takes time and computing effort, but is a long-term goal anyway.
> > 2) Replace all drives with bigger ones. There are three counter arguments:
> >     • 1000 € for four 10 TB drives (the biggest size available w/o helium)
> >     • they are only available with 7200 rpm (more power, noise and heat)
> >     • I am left with four perfectly fine 6 TB drives
> > 3) Go for 4+2 RaidZ2. This requires a bigger case (with new PSU due to
> >     different form factor) and a SATA expansion card b/c the Mobo only has
> >     six connectors (I need at least one more for the system drive), costing
> >     250 € plus drives.
> > 4) Convert to RaidZ1. Gain space of one drive at the cost of resilience. I
> >     can live with the latter; the server only runs occasionally and not for
> >     very long at a time. *** This option brings me to my question above,
> >     because it is easy to achieve and costs no €€€.
> >
> 5) Dunno if this is possible but ... replace one 6TB by a 12TB (any reason
> you don't like Helium?)

It is technically impossible to keep the helium in forever. It will
eventually diffuse through the case, because helium atoms are smaller than
those of any other element. AFAIK the drive will still work after that, but
with reduced performance. But I’m not sure. And who knows how it will behave
in a RAID once that happens. Big-capacity drives take longer to rebuild,
which increases the probability of a failure during the rebuild. I think
that’s why companies tend to stick to smaller drives (2 TB or so).
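
To put a rough number on that rebuild risk: a back-of-the-envelope sketch,
assuming the often-quoted unrecoverable-read-error rate of one per 10^14
bits (an assumption; many current drives are specified at 10^15) and that a
rebuild has to read the full capacity of every surviving drive:

```python
# Back-of-the-envelope URE risk during a RAID rebuild.
# Assumptions (not measured values): URE rate of 1e-14 per bit read,
# independent bit errors, and a rebuild that reads every surviving
# drive end to end.

def ure_probability(tb_read: float, ure_rate: float = 1e-14) -> float:
    """Probability of hitting at least one unrecoverable read error
    while reading tb_read terabytes (decimal TB)."""
    bits = tb_read * 1e12 * 8  # TB -> bits
    return 1 - (1 - ure_rate) ** bits

# RaidZ1 of 4x6 TB: a rebuild reads the 3 surviving drives (18 TB).
print(f"{ure_probability(3 * 6):.0%}")
# Same layout with 10 TB drives: 30 TB read, noticeably worse odds.
print(f"{ure_probability(3 * 10):.0%}")
```

With the 10^-14 assumption the chance of at least one URE is already well
above one half for an 18 TB read, which is the usual argument against large
drives behind single parity.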

I also prefer slow spinning disks (power consumption and noise).
There is exactly one HDD model with 10 TB, no helium and 5400 rpm (naturally
non-SMR). It’s a “WD Red Desktop Mainstream Kit”, which in itself sounds
like an oxymoron.

> and raid-0 two of the remaining 6's together. Dunno anything about what
> the raidZ's are but I presume this would give you 12TB of mirrored
> storage. It would also only use 3 slots,

The approach in itself sounds interesting – if I already had the drives. But
I don’t, and so I’d have to pay 350 € to get 2 TB more effective storage,
while still losing one level of redundancy. :-/
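
For comparison, the usable-capacity arithmetic behind options 3 and 4 above
is simple: a raidz vdev gives up one drive's worth of space per parity
level. A minimal sketch (ignoring metadata and padding overhead, so real
numbers come out slightly lower):

```python
# Usable capacity of a raidz vdev: (drives - parity) * drive size.
# Overhead from metadata, padding and the 80 % fill guideline is ignored.

def raidz_usable_tb(n_drives: int, parity: int, drive_tb: float) -> float:
    return (n_drives - parity) * drive_tb

print(raidz_usable_tb(4, 2, 6))  # current 4x6 TB RaidZ2 -> 12 TB
print(raidz_usable_tb(4, 1, 6))  # option 4: RaidZ1 -> 18 TB, one drive gained
print(raidz_usable_tb(6, 2, 6))  # option 3: 4+2 RaidZ2 -> 24 TB
```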

> so you could use the 4th for eg your videos, and back them up on external
> storage ie the drive you've just removed :-)

Unfortunately, I can come up with many reasons against this approach.
- I don’t like the idea of using two different file systems for one single
  purpose, because sooner or later one will fill up and the data “spills
  over” to the other FS. Just the idea that this might happen bugs me, and
  I’d always have to think of it when I copy something over, starting with
  the decision on which FS to use in the first place. ;-)
- I don’t want to have to deal with making backups to external media. I’d
  have to hook them up regularly and maintain and run a backup tool.
- I don’t want to rely on external storage in general. That’s why I bought
  the NAS. :o)

I used external drives exclusively until I had the NAS. Over time, the
biggest drive of the day became too small and I bought a bigger one as my
wallet allowed (sounds like the story Dale told some time ago). The first
had 500 G, then 1 T, then 3 T. Each one has its own power adapter, usually
incompatible with anything else I have. It needs a reliable USB connection,
yadda yadda. The files will inevitably become cluttered and dispersed and I
need to keep tabs on what was where. No thanks. :)

I do have a hot swap bay in my PC for bare drives¹. But SATA connectors
aren’t made for many physical connect-disconnect cycles.

¹ https://en.sharkoon.com/product//12640

> (The raid-0, I'd probably stripe rather than linear for performance.)

While doing some research over the last few days, I read that ZFS
distributes writes across all vdevs of a pool depending on their individual
fill state. So one doesn’t really have control over linear vs. striped
anyway.
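
ZFS’s actual allocator is more involved than this, but the fill-based bias
can be illustrated with a toy model that simply weights each vdev by its
free space (names here are made up for illustration, not ZFS internals):

```python
# Toy model (NOT ZFS's real allocator): new writes are biased toward
# vdevs with more free space, so fill levels converge over time.

def vdev_write_weights(free_bytes: list) -> list:
    """Relative share of new writes per vdev, proportional to free space."""
    total = sum(free_bytes)
    return [f / total for f in free_bytes]

# A pool with one nearly full vdev (1 TB free) and one empty one (9 TB free):
print(vdev_write_weights([1e12, 9e12]))  # most writes land on the emptier vdev
```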


Dang, I wanted to go to bed 1½ hours ago. Instead I composed mails. :)

-- 
Grüße | Greetings | Qapla’
Please do not share anything from, with or about me on any social network.

There is so much sand in Northern Africa that if it were spread out over the
globe it would completely cover the Sahara Desert.



Thread overview: 14+ messages
2021-06-29 13:56 [gentoo-user] [OT] Using an odd number of drives in ZFS RaidZ Frank Steinmetzger
2021-06-30 20:00 ` antlists
2021-06-30 23:31   ` Frank Steinmetzger [this message]
2021-06-30 20:45 ` Neil Bothwick
2021-06-30 23:31   ` Frank Steinmetzger
2021-07-01  1:29     ` William Kenworthy
2021-07-02 15:09       ` J. Roeleveld
2021-07-01 15:07     ` antlists
2021-07-01 17:21       ` Frank Steinmetzger
2021-07-01 13:47 ` Robert David
2021-07-01 15:01   ` antlists
2021-07-01 17:35     ` Frank Steinmetzger
2021-07-04 10:56     ` Robert David
2021-07-02 15:13   ` J. Roeleveld
