* Re: [gentoo-user] [OT] Using an odd number of drives in ZFS RaidZ
From: Neil Bothwick @ 2021-06-30 20:45 UTC
To: gentoo-user
On Tue, 29 Jun 2021 15:56:49 +0200, Frank Steinmetzger wrote:
> I reached 80 % usage (which is the recommended maximum for ZFS) and am
> now evaluating my options for the coming years.
>
> 1) Reduce use of space by re-encoding. My payload is mainly movies,
>    among which are 3 TB of DVDs which can be shrunk by at least ⅔ by
>    re-encoding. → This takes time and computing effort, but is a
>    long-term goal anyway.
> 2) Replace all drives with bigger ones. There are three
>    counter-arguments:
>    • 1000 € for four 10 TB drives (the biggest size available w/o
>      helium)
>    • they are only available with 7200 rpm (more power, noise and heat)
>    • I am left with four perfectly fine 6 TB drives
> 3) Go for 4+2 RaidZ2. This requires a bigger case (with a new PSU due
>    to the different form factor) and a SATA expansion card b/c the mobo
>    only has six connectors (I need at least one more for the system
>    drive), costing 250 € plus drives.
> 4) Convert to RaidZ1. Gain the space of one drive at the cost of
>    resilience. I can live with the latter; the server only runs
>    occasionally and not for very long at a time. *** This option brings
>    me to my question above, because it is easy to achieve and costs
>    no €€€.
5) (or 3a) Add an eSATA card and expand the RAID with external drives.
That way you can stick with 6TB drives.
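[Editorial note: for comparison, the usable space implied by the layouts above can be sketched with the naive RaidZ formula, usable ≈ (drives − parity) × drive size. This is a back-of-envelope sketch only: it assumes the current pool is a 4-drive RaidZ2 of 6 TB disks (as option 4 implies) and ignores padding, metadata overhead, and the ~80 % fill guideline.]

```python
def raidz_usable_tb(drives: int, parity: int, size_tb: int = 6) -> int:
    """Naive usable capacity of one RaidZ vdev: data drives x drive size.

    Ignores allocation padding and metadata overhead, so real figures
    come out somewhat lower.
    """
    return (drives - parity) * size_tb

# Current layout (as option 4 implies): four 6 TB drives in RaidZ2
print(raidz_usable_tb(4, 2))  # 12 TB usable
# Option 3: 4+2 RaidZ2 with six 6 TB drives
print(raidz_usable_tb(6, 2))  # 24 TB usable
# Option 4: convert the four drives to RaidZ1
print(raidz_usable_tb(4, 1))  # 18 TB usable, one drive's worth gained
```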
--
Neil Bothwick
Bagpipe for free: Stuff cat under arm. Pull legs, chew tail.