* [gentoo-user] [OT] Using an odd number of drives in ZFS RaidZ
@ 2021-06-29 13:56 Frank Steinmetzger
2021-06-30 20:00 ` antlists
` (2 more replies)
0 siblings, 3 replies; 14+ messages in thread
From: Frank Steinmetzger @ 2021-06-29 13:56 UTC (permalink / raw
To: gentoo-user
[-- Attachment #1: Type: text/plain, Size: 2835 bytes --]
Hello fellows
This is not really a Gentoo question, but at least my NAS (which this mail
is about) is running Gentoo. :)
There are some people amongst this esteemed group that know their stuff
about storage and servers and things, so I thought I might try my luck here.
I’ve already looked on the Webs, but my question is a wee bit specific and I
wasn’t able to find the exact answer (yet). And I’m a bit hesitant to ask
this newbie-ish question in a ZFS expert forum. ;-)
Prologue:
Due to how records are distributed across blocks in a parity-based ZFS vdev,
it is recommended to use 2^n data disks. Technically, it is perfectly fine
to deviate from it, but for performance reasons (mostly space efficiency) it
is not the recommended way. That’s because the (default) maximum record size
of 128 k itself is a power of 2 and thus can be distributed evenly on all
drives. At least that’s my understanding. Is that correct?
So here’s the question:
If I had three data drives, (c|w)ould I get around that problem by setting a
record size that is divisible by 3, like 96 k, or even 3 M?
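As a back-of-the-envelope check, the overhead can be modelled in a few lines. This is a simplified sketch of the documented RAID-Z allocation rule (data sectors, plus parity sectors per stripe row, padded to a multiple of parity+1), not the actual ZFS code. One caveat for the question itself: recordsize only accepts powers of two, so 96 k would be rejected outright; the model below includes it only to show what it would buy.

```python
# Simplified model of RAID-Z space efficiency (assumes 4 KiB sectors,
# full-size records, no compression). Not the actual ZFS allocator.

def raidz_efficiency(record_kib, ndisks, parity, sector_kib=4):
    data = -(-record_kib // sector_kib)     # ceil(): data sectors needed
    rows = -(-data // (ndisks - parity))    # stripe rows used
    total = data + rows * parity            # plus parity sectors
    total += (-total) % (parity + 1)        # plus padding sectors
    return data / total                     # usable fraction

# 4 drives: RAID-Z1 = 3 data + 1 parity, RAID-Z2 = 2 data + 2 parity
for rs in (128, 96, 1024):
    print(f"{rs:5d} KiB  z1={raidz_efficiency(rs, 4, 1):.3f}"
          f"  z2={raidz_efficiency(rs, 4, 2):.3f}")
```

Under these assumptions a hypothetical 96 KiB record on 3 data disks would hit the ideal 0.750, 128 KiB lands at about 0.727, and a 1 MiB record reaches about 0.749; so a larger power-of-two recordsize achieves nearly the same effect.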
Here’s the background of my question:
Said NAS is based on a Mini-ITX case which has only four drive slots (which
is the most common configuration for a case of this form factor). I started
with two 6 TB drives, running in a mirror configuration. One year later
space was running out and I filled the remaining slots. To maximise
reliability, I went with RaidZ2.
I reached 80 % usage (which is the recommended maximum for ZFS) and am
now evaluating my options for the coming years.
1) Reduce use of space by re-encoding. My payload is mainly movies, among
which are 3 TB of DVDs which can be shrunk by at least ⅔ by re-encoding.
→ this takes time and computing effort, but is a long-term goal anyway.
2) Replace all drives with bigger ones. There are three counter arguments:
• 1000 € for four 10 TB drives (the biggest size available w/o helium)
• they are only available with 7200 rpm (more power, noise and heat)
• I am left with four perfectly fine 6 TB drives
3) Go for 4+2 RaidZ2. This requires a bigger case (with new PSU due to
different form factor) and a SATA expansion card b/c the Mobo only has
six connectors (I need at least one more for the system drive), costing
250 € plus drives.
4) Convert to RaidZ1. Gain space of one drive at the cost of resilience. I
can live with the latter; the server only runs occasionally and not for
very long at a time. *** This option brings me to my question above,
because it is easy to achieve and costs no €€€.
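(Worth noting for option 4: ZFS cannot reshape a RaidZ2 vdev into RaidZ1 in place, so "easy to achieve" still means backing everything up, destroying the pool and recreating it. A rough sketch; all pool, dataset and device names here are invented:)

```shell
# Hypothetical names throughout; double-check before running anything.
# 1. Replicate everything to temporary backup space first.
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs receive -F backup/tank

# 2. Destroy the RaidZ2 pool and recreate it as RaidZ1.
zpool destroy tank
zpool create -o ashift=12 -O recordsize=1M tank raidz1 \
    /dev/disk/by-id/disk-a /dev/disk/by-id/disk-b \
    /dev/disk/by-id/disk-c /dev/disk/by-id/disk-d

# 3. Replicate the data back.
zfs send -R backup/tank@migrate | zfs receive -F tank
```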
--
Grüße | Greetings | Qapla’
Please do not share anything from, with or about me on any social network.
In this sentance are definately three error’s!
[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [gentoo-user] [OT] Using an odd number of drives in ZFS RaidZ
2021-06-29 13:56 [gentoo-user] [OT] Using an odd number of drives in ZFS RaidZ Frank Steinmetzger
@ 2021-06-30 20:00 ` antlists
2021-06-30 23:31 ` Frank Steinmetzger
2021-06-30 20:45 ` Neil Bothwick
2021-07-01 13:47 ` Robert David
2 siblings, 1 reply; 14+ messages in thread
From: antlists @ 2021-06-30 20:00 UTC (permalink / raw
To: gentoo-user
On 29/06/2021 14:56, Frank Steinmetzger wrote:
> Hello fellows
>
> This is not really a Gentoo question, but at least my NAS (which this mail
> is about) is running Gentoo. :)
>
> There are some people amongst this esteemed group that know their stuff
> about storage and servers and things, so I thought I might try my luck here.
> I’ve already looked on the Webs, but my question is a wee bit specific and I
> wasn’t able to find the exact answer (yet). And I’m a bit hesitant to ask
> this newbie-ish question in a ZFS expert forum. ;-)
>
> Prologue:
> Due to how records are distributed across blocks in a parity-based ZFS vdev,
> it is recommended to use 2^n data disks. Technically, it is perfectly fine
> to deviate from it, but for performance reasons (mostly space efficiency) it
> is not the recommended way. That’s because the (default) maximum record size
> of 128 k itself is a power of 2 and thus can be distributed evenly on all
> drives. At least that’s my understanding. Is that correct?
>
> So here’s the question:
> If I had three data drives, (c|w)ould I get around that problem by setting a
> record size that is divisible by 3, like 96 k, or even 3 M?
>
>
>
> Here’s the background of my question:
> Said NAS is based on a Mini-ITX case which has only four drive slots (which
> is the most common configuration for a case of this formfactor). I started
> with two 6 TB drives, running in a mirror configuration. One year later
> space was running out and I filled the remaining slots. To maximise
> reliability, I went with RaidZ2.
>
> I reached 80 % usage (which is the recommended maximum for ZFS) and am
> now evaluating my options for the coming years.
> 1) Reduce use of space by re-encoding. My payload is mainly movies, among
> which are 3 TB of DVDs which can be shrunk by at least ⅔ by re-encoding.
> → this takes time and computing effort, but is a long-term goal anyway.
> 2) Replace all drives with bigger ones. There are three counter arguments:
> • 1000 € for four 10 TB drives (the biggest size available w/o helium)
> • they are only available with 7200 rpm (more power, noise and heat)
> • I am left with four perfectly fine 6 TB drives
> 3) Go for 4+2 RaidZ2. This requires a bigger case (with new PSU due to
> different form factor) and a SATA expansion card b/c the Mobo only has
> six connectors (I need at least one more for the system drive), costing
> 250 € plus drives.
> 4) Convert to RaidZ1. Gain space of one drive at the cost of resilience. I
> can live with the latter; the server only runs occasionally and not for
> very long at a time. *** This option brings me to my question above,
> because it is easy to achieve and costs no €€€.
>
5) Dunno if this is possible, but ... replace one 6TB with a 12TB (any
reason you don't like helium?) and raid-0 two of the remaining 6s
together. I don't know the raidZ details, but I presume this would give
you 12TB of mirrored storage. It would also use only 3 slots, so you
could use the 4th for e.g. your videos, and back them up on external
storage, i.e. the drive you've just removed :-)
(For the raid-0, I'd probably stripe rather than go linear, for performance.)
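(ZFS itself cannot nest a stripe inside a mirror vdev, so this layout would need e.g. mdadm underneath. A sketch with invented device names:)

```shell
# Hypothetical device names. Stripe the two remaining 6 TB drives,
# then mirror the resulting 12 TB array against the new 12 TB drive.
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/md0 /dev/sdd
mkfs.ext4 /dev/md1            # or any filesystem of choice
```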
Cheers,
Wol
* Re: [gentoo-user] [OT] Using an odd number of drives in ZFS RaidZ
2021-06-29 13:56 [gentoo-user] [OT] Using an odd number of drives in ZFS RaidZ Frank Steinmetzger
2021-06-30 20:00 ` antlists
@ 2021-06-30 20:45 ` Neil Bothwick
2021-06-30 23:31 ` Frank Steinmetzger
2021-07-01 13:47 ` Robert David
2 siblings, 1 reply; 14+ messages in thread
From: Neil Bothwick @ 2021-06-30 20:45 UTC (permalink / raw
To: gentoo-user
[-- Attachment #1: Type: text/plain, Size: 1460 bytes --]
On Tue, 29 Jun 2021 15:56:49 +0200, Frank Steinmetzger wrote:
> I reached 80 % usage (which is the recommended maximum for ZFS) and am
> now evaluating my options for the coming years.
> 1) Reduce use of space by re-encoding. My payload is mainly movies,
> among which are 3 TB of DVDs which can be shrunk by at least ⅔ by
> re-encoding. → this takes time and computing effort, but is a long-term
> goal anyway. 2) Replace all drives with bigger ones. There are three
> counter arguments:
> • 1000 € for four 10 TB drives (the biggest size available w/o
> helium)
> • they are only available with 7200 rpm (more power, noise and heat)
> • I am left with four perfectly fine 6 TB drives
> 3) Go for 4+2 RaidZ2. This requires a bigger case (with new PSU due to
> different form factor) and a SATA expansion card b/c the Mobo only
> has six connectors (I need at least one more for the system drive),
> costing 250 € plus drives.
> 4) Convert to RaidZ1. Gain space of one drive at the cost of
> resilience. I can live with the latter; the server only runs
> occasionally and not for very long at a time. *** This option brings me
> to my question above, because it is easy to achieve and costs no €€€.
5) (or 3a) Add an eSATA card and expand the RAID with external drives.
That way you can stick with 6TB drives.
--
Neil Bothwick
Bagpipe for free: Stuff cat under arm. Pull legs, chew tail.
[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 833 bytes --]
* Re: [gentoo-user] [OT] Using an odd number of drives in ZFS RaidZ
2021-06-30 20:00 ` antlists
@ 2021-06-30 23:31 ` Frank Steinmetzger
0 siblings, 0 replies; 14+ messages in thread
From: Frank Steinmetzger @ 2021-06-30 23:31 UTC (permalink / raw
To: gentoo-user
[-- Attachment #1: Type: text/plain, Size: 4680 bytes --]
On Wed, Jun 30, 2021 at 09:00:29PM +0100, antlists wrote:
> > I reached 80 % usage (which is the recommended maximum for ZFS) and am
> > now evaluating my options for the coming years.
> > 1) Reduce use of space by re-encoding. My payload is mainly movies, among
> > which are 3 TB of DVDs which can be shrunk by at least ⅔ by re-encoding.
> > → this takes time and computing effort, but is a long-term goal anyway.
> > 2) Replace all drives with bigger ones. There are three counter arguments:
> > • 1000 € for four 10 TB drives (the biggest size available w/o helium)
> > • they are only available with 7200 rpm (more power, noise and heat)
> > • I am left with four perfectly fine 6 TB drives
> > 3) Go for 4+2 RaidZ2. This requires a bigger case (with new PSU due to
> > different form factor) and a SATA expansion card b/c the Mobo only has
> > six connectors (I need at least one more for the system drive), costing
> > 250 € plus drives.
> > 4) Convert to RaidZ1. Gain space of one drive at the cost of resilience. I
> > can live with the latter; the server only runs occasionally and not for
> > very long at a time. *** This option brings me to my question above,
> > because it is easy to achieve and costs no €€€.
> >
> 5) Dunno if this is possible but ... replace one 6TB by a 12TB (any reason
> you don't like Helium?)
It is technically impossible to keep it in: helium atoms are smaller than
those of any other element, so the gas eventually diffuses through the
casing. AFAIK the drive will still work after that, but with reduced
performance. But I’m not sure. And who knows how it will behave in a RAID
once that happens. Big-capacity drives take longer to rebuild, which
increases the probability of another failure during a RAID rebuild. I think
that’s why companies tend to stick to smaller drives (2 TB or so).
I also prefer slow spinning disks (power consumption and noise).
There is exactly one HDD model with 10 TB, no helium and 5400 rpm (naturally
non-SMR). It’s a “WD Red Desktop Mainstream Kit”, which in itself sounds
like an oxymoron.
> and raid-0 two of the remaining 6's together. Dunno anything about what
> the raidZ's are but I presume this would give you 12TB of mirrored
> storage. It would also only use 3 slots,
The approach in itself sounds interesting – if I already had the drives. But
I don’t, and so I’d have to pay 350 € to get 2 TB more effective storage,
while still losing one level of redundancy. :-/
> so you could use the 4th for eg your videos, and back them up on external
> storage ie the drive you've just removed :-)
Unfortunately, I can come up with many reasons against this approach.
- I don’t like the idea of using two different file systems for one single
purpose, because sooner or later one will fill up and the data “spills
over” to the other FS. Just the idea that this might happen bugs me and
I’d always have to think of it when I copy something over, starting with
the decision on which FS to use in the first place. ;-)
- I don’t want to have to deal with making backups to external media. I’d
have to hook them up regularly and maintain and run a backup tool.
- I don’t want to rely on external storage in general. That’s why I bought
the NAS. :o)
I used external drives exclusively until I had the NAS. Over time, the
biggest drive of the day became too small and I bought a bigger one as my
wallet allowed (sounds like the story Dale told some time ago). The first
had 500 G, then 1 T, then 3 T. Each one has its own power adapter, usually
incompatible with anything else I have. It needs a reliable USB connection,
yadda yadda. The files will inevitably become cluttered and dispersed and I
need to keep tabs on what was where. No thanks. :)
I do have a hot swap bay in my PC for bare drives¹. But SATA connectors
aren’t made for many physical connect-disconnect cycles.
¹ https://en.sharkoon.com/product//12640
> (The raid-0, I'd probably stripe rather than linear for performance.)
When I did some research over the last days I read that ZFS distributes
writes across all vdevs of a pool depending on their individual fill state.
So one doesn’t really have control over linear vs. striped anyway.
Dang, I wanted to go to bed 1½ hours ago. Instead I composed mails. :)
--
Grüße | Greetings | Qapla’
Please do not share anything from, with or about me on any social network.
There is so much sand in Northern Africa that if it were spread out over the
globe it would completely cover the Sahara Desert.
[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]
* Re: [gentoo-user] [OT] Using an odd number of drives in ZFS RaidZ
2021-06-30 20:45 ` Neil Bothwick
@ 2021-06-30 23:31 ` Frank Steinmetzger
2021-07-01 1:29 ` William Kenworthy
2021-07-01 15:07 ` antlists
0 siblings, 2 replies; 14+ messages in thread
From: Frank Steinmetzger @ 2021-06-30 23:31 UTC (permalink / raw
To: gentoo-user
[-- Attachment #1: Type: text/plain, Size: 2802 bytes --]
On Wed, Jun 30, 2021 at 09:45:13PM +0100, Neil Bothwick wrote:
> On Tue, 29 Jun 2021 15:56:49 +0200, Frank Steinmetzger wrote:
>
> > I reached 80 % usage (which is the recommended maximum for ZFS) and am
> > now evaluating my options for the coming years.
> > 1) Reduce use of space by re-encoding. My payload is mainly movies,
> > among which are 3 TB of DVDs which can be shrunk by at least ⅔ by
> > re-encoding. → this takes time and computing effort, but is a long-term
> > goal anyway. 2) Replace all drives with bigger ones. There are three
> > counter arguments:
> > • 1000 € for four 10 TB drives (the biggest size available w/o
> > helium)
> > • they are only available with 7200 rpm (more power, noise and heat)
> > • I am left with four perfectly fine 6 TB drives
> > 3) Go for 4+2 RaidZ2. This requires a bigger case (with new PSU due to
> > different form factor) and a SATA expansion card b/c the Mobo only
> > has six connectors (I need at least one more for the system drive),
> > costing 250 € plus drives.
> > 4) Convert to RaidZ1. Gain space of one drive at the cost of
> > resilience. I can live with the latter; the server only runs
> > occasionally and not for very long at a time. *** This option brings me
> > to my question above, because it is easy to achieve and costs no €€€.
>
> 5) (or 3a) Add an eSATA card and expand the RAID with external drives.
> That way you can stick with 6TB drives.
Antlist made a similar suggestion using external USB, and I gave a more
detailed answer in reply to his mail.
Your proposal, though different regarding filesystem setup, has the same
drawbacks: I am dependent on an external case with its own power supply.
Having everything in one case is very convenient when I want to take the
data on a visit to someone – and it keeps my flat cleaner. :D
I actually looked at external enclosures that I could simply hook up to a
host computer, which then does all the work of speaking to the individual
disks. The problems with that:
- The host needs ECC RAM. NUC-Class devices don’t support that. Even most
consumer boards don’t (at least officially).
- USB is not suitable for RAID because it lacks protocol features in case of
errors.
- It’s also a costly endeavour.
I found exactly one case that can hold 6 disks – it cost almost 700 € and
only speaks firewire, which none of my hosts do. My NAS hardware was
actually cheaper than that, including server-grade Mobo, 16 GIG of ECC RAM,
a Gold-rated PSU, and an i3 with custom cooler.
Tata.
--
Grüße | Greetings | Qapla’
Please do not share anything from, with or about me on any social network.
“If wishes were horses we’d all be eating steak.” – Jayne, Firefly
[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]
* Re: [gentoo-user] [OT] Using an odd number of drives in ZFS RaidZ
2021-06-30 23:31 ` Frank Steinmetzger
@ 2021-07-01 1:29 ` William Kenworthy
2021-07-02 15:09 ` J. Roeleveld
2021-07-01 15:07 ` antlists
1 sibling, 1 reply; 14+ messages in thread
From: William Kenworthy @ 2021-07-01 1:29 UTC (permalink / raw
To: gentoo-user
On 1/7/21 7:31 am, Frank Steinmetzger wrote:
> On Wed, Jun 30, 2021 at 09:45:13PM +0100, Neil Bothwick wrote:
>> On Tue, 29 Jun 2021 15:56:49 +0200, Frank Steinmetzger wrote:
>>
>>> I reached 80 % usage (which is the recommended maximum for ZFS) and am
>>> now evaluating my options for the coming years.
>>> ...
Are you welded to ZFS? Is BTRFS or another alternative viable as it
might handle the different drive sizes more elegantly? (e.g., btrfs raid
handles different sized disks quite well)
BillK
* Re: [gentoo-user] [OT] Using an odd number of drives in ZFS RaidZ
2021-06-29 13:56 [gentoo-user] [OT] Using an odd number of drives in ZFS RaidZ Frank Steinmetzger
2021-06-30 20:00 ` antlists
2021-06-30 20:45 ` Neil Bothwick
@ 2021-07-01 13:47 ` Robert David
2021-07-01 15:01 ` antlists
2021-07-02 15:13 ` J. Roeleveld
2 siblings, 2 replies; 14+ messages in thread
From: Robert David @ 2021-07-01 13:47 UTC (permalink / raw
To: gentoo-user
Hi Frank,
On Tuesday, June 29, 2021 3:56:49 PM CEST Frank Steinmetzger wrote:
> Hello fellows
>
> This is not really a Gentoo question, but at least my NAS (which this mail
> is about) is running Gentoo. :)
>
> There are some people amongst this esteemed group that know their stuff
> about storage and servers and things, so I thought I might try my luck here.
> I’ve already looked on the Webs, but my question is a wee bit specific and
> I wasn’t able to find the exact answer (yet). And I’m a bit hesitant to ask
> this newbie-ish question in a ZFS expert forum. ;-)
>
> Prologue:
> Due to how records are distributed across blocks in a parity-based ZFS vdev,
> it is recommended to use 2^n data disks. Technically, it is perfectly fine
> to deviate from it, but for performance reasons (mostly space efficiency)
> it is not the recommended way. That’s because the (default) maximum record
> size of 128 k itself is a power of 2 and thus can be distributed evenly on
> all drives. At least that’s my understanding. Is that correct?
>
> So here’s the question:
> If I had three data drives, (c|w)ould I get around that problem by setting a
> record size that is divisible by 3, like 96 k, or even 3 M?
I would not bother with this. 128k is a good default for general usage,
and even with 3 data disks the actual space loss is too small to be worth
thinking about (assuming you have 4k-sector disks).
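To put a number on that loss, a quick calculation under the same assumption of 4k-sector disks:

```python
# A 128 KiB record on a 4-disk RAID-Z1 (3 data + 1 parity), 4 KiB sectors.
data = 128 // 4                  # 32 data sectors
parity = -(-data // 3)           # ceil(32/3) = 11 parity sectors
total = data + parity
total += (-total) % 2            # pad to a multiple of parity+1
ideal = data * 4 / 3             # sector count at a perfect 3+1 ratio
print(total, (total - ideal) / ideal)  # 44 sectors, ~3 % over the ideal
```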
>
>
>
> Here’s the background of my question:
> Said NAS is based on a Mini-ITX case which has only four drive slots (which
> is the most common configuration for a case of this formfactor). I started
> with two 6 TB drives, running in a mirror configuration. One year later
> space was running out and I filled the remaining slots. To maximise
> reliability, I went with RaidZ2.
>
> I reached 80 % usage (which is the recommended maximum for ZFS) and am
> now evaluating my options for the coming years.
> 1) Reduce use of space by re-encoding. My payload is mainly movies, among
> which are 3 TB of DVDs which can be shrunk by at least ⅔ by re-encoding.
> → this takes time and computing effort, but is a long-term goal anyway.
In such cases I always ask myself whether I really need the data. Quite
often, after honest consideration, I find I can remove half of it without
any pain. It is like cleaning my home: there are many superfluous things
around and no space left for the really valuable ones; with data on disk
it is the same.
> 2) Replace all drives with bigger ones. There are three counter arguments:
> • 1000 € for four 10 TB drives (the biggest size available w/o helium)
> • they are only available with 7200 rpm (more power, noise and heat)
> • I am left with four perfectly fine 6 TB drives
> 3) Go for 4+2 RaidZ2. This requires a bigger case (with new PSU due to
> different form factor) and a SATA expansion card b/c the Mobo only has
> six connectors (I need at least one more for the system drive), costing
> 250 € plus drives.
> 4) Convert to RaidZ1. Gain space of one drive at the cost of resilience. I
> can live with the latter; the server only runs occasionally and not for
> very long at a time. *** This option brings me to my question above,
> because it is easy to achieve and costs no €€€.
I have long since migrated all of my data arrays off RAIDZ to MIRROR or
RAID10. You will eventually find that RAIDZ is slow and not very flexible.
The only thing you gain is extra space in constrained arrays. With RAID10
it is much easier to grow the pool: just resilver onto new, bigger disks,
remove the old ones and expand. Resilvering is an order of magnitude
faster, and recovery in case of failure is much easier, too.
If you really need the additional space, consider adding a second JBOD
with more disks.
Robert.
* Re: [gentoo-user] [OT] Using an odd number of drives in ZFS RaidZ
2021-07-01 13:47 ` Robert David
@ 2021-07-01 15:01 ` antlists
2021-07-01 17:35 ` Frank Steinmetzger
2021-07-04 10:56 ` Robert David
2021-07-02 15:13 ` J. Roeleveld
1 sibling, 2 replies; 14+ messages in thread
From: antlists @ 2021-07-01 15:01 UTC (permalink / raw
To: gentoo-user
On 01/07/2021 14:47, Robert David wrote:
> Hi Frank,
>
>
> In any of my data arrays I have long time migrated off the RAIDZ to the
> MIRROR or RAID10. You will find finally that the RAIDZ is slow and not
> very flexible. Only think you gain is the extra space in constrained
> array spaces. For RAID10 it is much easier to raise the size, just
> resilvering to new bigger disks, removing old and expanding. The
> resilvering speed is magnitude faster.
> And anyway much easier to recover
> in cases of failure.
>
ARE YOU SURE???
The standard mirror does not cope with corruption very well. Lose a disk
and resilvering is fast. Corrupt the data, and you'll be tearing your
hair out wondering why things go wrong randomly, with no automated way to
recover your data other than a restore from backup, even once you've
realised what's happened.
> If you really need the additional space, consider adding second jbod
> with another disks.
That'd be my approach - migrate a load of stuff off onto another disk
elsewhere, but that's not what the OP wants to do.
>
> Robert.
>
Cheers,
Wol
* Re: [gentoo-user] [OT] Using an odd number of drives in ZFS RaidZ
2021-06-30 23:31 ` Frank Steinmetzger
2021-07-01 1:29 ` William Kenworthy
@ 2021-07-01 15:07 ` antlists
2021-07-01 17:21 ` Frank Steinmetzger
1 sibling, 1 reply; 14+ messages in thread
From: antlists @ 2021-07-01 15:07 UTC (permalink / raw
To: gentoo-user
On 01/07/2021 00:31, Frank Steinmetzger wrote:
> Antlist made a similar suggestion using external USB, and I gave a more
> detailed answer in reply to his mail.
I've got this ...
https://www.amazon.co.uk/gp/product/B072J52TR1/ref=ppx_yo_dt_b_asin_title_o04_s00?ie=UTF8&psc=1
It's eSATA not USB.
It's worked fine for me, but no, I haven't used it much - I shall
probably be using it a bit very soon as I finish building my new system...
Cheers,
Wol
* Re: [gentoo-user] [OT] Using an odd number of drives in ZFS RaidZ
2021-07-01 15:07 ` antlists
@ 2021-07-01 17:21 ` Frank Steinmetzger
0 siblings, 0 replies; 14+ messages in thread
From: Frank Steinmetzger @ 2021-07-01 17:21 UTC (permalink / raw
To: gentoo-user
[-- Attachment #1: Type: text/plain, Size: 1271 bytes --]
On Thu, Jul 01, 2021 at 04:07:39PM +0100, antlists wrote:
> On 01/07/2021 00:31, Frank Steinmetzger wrote:
> > Antlist made a similar suggestion using external USB, and I gave a more
> > detailed answer in reply to his mail.
>
> I've got this ...
>
> https://www.amazon.co.uk/gp/product/B072J52TR1/ref=ppx_yo_dt_b_asin_title_o04_s00?ie=UTF8&psc=1
>
> It's eSATA not USB.
Which is very similar in function to the one I have, only mine is for
installation in a 5¼″ case slot. When I bought it years ago I was thinking
about a desktop dock like yours, because that can also be used with my
laptop. But I found it neater to install it cleanly into my PC case. :-)
> It's worked fine for me, but no I haven;t used it much - I shall probably be
> using it a bit very soon as I finish building my new system...
When I was reading up on the differences between SAS and SATA (because some
enclosures and cases use a SAS backplane), one of the items was multipath
support: SAS can address several drives, SATA only one. So eSATA is limited
to a single drive per cable (unless the controller and enclosure support a
port multiplier).
--
Grüße | Greetings | Qapla’
Please do not share anything from, with or about me on any social network.
Arrogance is the art of being proud of one’s own stupidity.
[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]
* Re: [gentoo-user] [OT] Using an odd number of drives in ZFS RaidZ
2021-07-01 15:01 ` antlists
@ 2021-07-01 17:35 ` Frank Steinmetzger
2021-07-04 10:56 ` Robert David
1 sibling, 0 replies; 14+ messages in thread
From: Frank Steinmetzger @ 2021-07-01 17:35 UTC (permalink / raw
To: gentoo-user
[-- Attachment #1: Type: text/plain, Size: 1559 bytes --]
On Thu, Jul 01, 2021 at 04:01:25PM +0100, antlists wrote:
> On 01/07/2021 14:47, Robert David wrote:
> > Hi Frank,
> >
>
> >
> > In any of my data arrays I have long time migrated off the RAIDZ to the
> > MIRROR or RAID10. You will find finally that the RAIDZ is slow and not
> > very flexible.
Flexibility indeed. This bites me in the butt now. But performance is
sufficient for me, because everything can saturate gigabit ethernet and
there are no VMs involved.
A scrub currently takes 10½ hours. Considering each drive is filled with
6 TB * 80 % = 4.8 TB, that’s an average of 130 MB/s/device which seems not
so bad for 5400 rpm drives.
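The arithmetic behind that estimate:

```python
# Each drive holds 80 % of 6 TB and the scrub takes 10.5 hours.
rate = (6e12 * 0.80) / (10.5 * 3600)   # bytes per second per drive
print(round(rate / 1e6))               # → 127 (MB/s per drive)
```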
When I installed drives #3 and 4, I thought long and hard about whether to
use Raid-10 or Z2. The increased resilience won the argument (any 2 drives
over 2 particular drives).
> > If you really need the additional space, consider adding second jbod
> > with another disks.
>
> That'd be my approach - migrate a load of stuff off onto another disk
> elsewhere, but that's not what the OP wants to do.
Yeah… I know that for some people, carrying around TBs of movies and TV
series is overkill, but I like having them, and I like having them all in
this neat little box:
https://www.inter-tech.de/products/ipc/storage-cases/sc-4100
:)
--
Grüße | Greetings | Qapla’
Please do not share anything from, with or about me on any social network.
“If I could explain it to the average person, I wouldn't have been worth
the Nobel Prize.” – Richard Feynman
[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]
* Re: [gentoo-user] [OT] Using an odd number of drives in ZFS RaidZ
2021-07-01 1:29 ` William Kenworthy
@ 2021-07-02 15:09 ` J. Roeleveld
0 siblings, 0 replies; 14+ messages in thread
From: J. Roeleveld @ 2021-07-02 15:09 UTC (permalink / raw
To: gentoo-user
On Thursday, July 1, 2021 3:29:03 AM CEST William Kenworthy wrote:
> On 1/7/21 7:31 am, Frank Steinmetzger wrote:
> > On Wed, Jun 30, 2021 at 09:45:13PM +0100, Neil Bothwick wrote:
> >> On Tue, 29 Jun 2021 15:56:49 +0200, Frank Steinmetzger wrote:
> >>> I reached 80 % usage (which is the recommended maximum for ZFS) and am
> >>> now evaluating my options for the coming years.
> >>> ...
>
> Are you welded to ZFS? Is BTRFS or another alternative viable as it
> might handle the different drive sizes more elegantly? (e.g., btrfs raid
> handles different sized disks quite well)
Last I checked, BTRFS doesn't have any RAID5/6 equivalent.
And mirrored has too much storage-loss.
--
Joost
* Re: [gentoo-user] [OT] Using an odd number of drives in ZFS RaidZ
2021-07-01 13:47 ` Robert David
2021-07-01 15:01 ` antlists
@ 2021-07-02 15:13 ` J. Roeleveld
1 sibling, 0 replies; 14+ messages in thread
From: J. Roeleveld @ 2021-07-02 15:13 UTC (permalink / raw
To: gentoo-user
On Thursday, July 1, 2021 3:47:08 PM CEST Robert David wrote:
> In any of my data arrays I have long time migrated off the RAIDZ to the
> MIRROR or RAID10. You will find finally that the RAIDZ is slow and not
> very flexible. Only think you gain is the extra space in constrained
> array spaces. For RAID10 it is much easier to raise the size, just
> resilvering to new bigger disks, removing old and expanding. The
> resilvering speed is magnitude faster. And anyway much easier to recover
> in cases of failure.
Multiple RAIDZ2 vdevs with fast enough I/O can easily saturate multiple
10Gbit links. I actually have 2 pools in my system: one uses
triple-mirrored vdevs, the other 6-disk RAIDZ2 sets.
Both are easily capable of saturating the 10Gbit link I use.
--
Joost
* Re: [gentoo-user] [OT] Using an odd number of drives in ZFS RaidZ
2021-07-01 15:01 ` antlists
2021-07-01 17:35 ` Frank Steinmetzger
@ 2021-07-04 10:56 ` Robert David
1 sibling, 0 replies; 14+ messages in thread
From: Robert David @ 2021-07-04 10:56 UTC (permalink / raw
To: gentoo-user; +Cc: antlists
On Thursday, July 1, 2021 5:01:25 PM CEST antlists wrote:
> On 01/07/2021 14:47, Robert David wrote:
> > Hi Frank,
> >
> >
> >
> > In any of my data arrays I have long time migrated off the RAIDZ to the
> > MIRROR or RAID10. You will find finally that the RAIDZ is slow and not
> > very flexible. Only think you gain is the extra space in constrained
> > array spaces. For RAID10 it is much easier to raise the size, just
> > resilvering to new bigger disks, removing old and expanding. The
> > resilvering speed is magnitude faster.
> >
> > And anyway much easier to recover
> > in cases of failure.
>
> ARE YOU SURE???
>
> The standard mirror does not cope with corruption very well. Lose a disk
> and resilvering is fast. Corrupt the data, and you'll be tearing your
> hair out why things go wrong randomly, with no automated way, even once
> you've realised what's happened, to recover your data other than a
> restore from backup.
Yes, I'm sure. What I meant by "easier" is that the pool is much easier
to handle and recover.
What you describe for MIRROR applies just the same to RAIDZ1, only
multiplied by the number of disks. If one disk fails and you resilver,
all the remaining disks are spinning to populate the spare; if any of
them fails, you are screwed. RAIDZ2 is better in this regard, and with
4 disks it wins on resiliency (with 10 disks that may no longer be true),
but you lose the flexibility.
Resilvering under RAIDZ is also much slower, which means a longer time
with an unprotected pool. You always need to decide what workload you are
serving and how precious the data is.
For data like movies, RAIDZ1 is enough, I think.
It is also good to check the SMART data from time to time for the number
of error corrections (some are OK, a sharply rising number is not).
Solaris has FMA for this, to kick in a spare automatically. In a home
environment it is fine to check occasionally and order a new disk before
the old one dies completely. Which reminds me, I need to buy a new disk
for my home NAS :) (because of the rising corrections).
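(For reference, a quick way to eyeball those counters with smartmontools; the device name is an example:)

```shell
# Show the SMART attributes that typically announce a dying disk.
smartctl -A /dev/sda | grep -Ei 'reallocated|pending|uncorrect'
```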
And finally: always make backups of the data you care about. I have a
Raspberry Pi with a two-disk USB JBOD attached that serves as a backup
station. It is not fast enough to be a real NAS, but for send/receive of
incremental snapshots it is enough. I automatically sync the datasets I
cannot afford to lose (photos, documents, etc.) there; often these are
also the ones that are not that big.
Ideally, put such a backup station in some remote location (or at least a
different room).
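(The incremental replication described above boils down to something like this; pool, dataset and snapshot names are invented:)

```shell
# Take a new snapshot and send only the delta since the previous one.
zfs snapshot tank/photos@2021-07-04
zfs send -i tank/photos@2021-06-27 tank/photos@2021-07-04 \
    | ssh pi@backuppi zfs receive backup/photos
```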
Robert.
>
> > If you really need the additional space, consider adding second jbod
> > with another disks.
>
> That'd be my approach - migrate a load of stuff off onto another disk
> elsewhere, but that's not what the OP wants to do.
>
> > Robert.
>
> Cheers,
> Wol
end of thread, other threads:[~2021-07-04 10:56 UTC | newest]
Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-06-29 13:56 [gentoo-user] [OT] Using an odd number of drives in ZFS RaidZ Frank Steinmetzger
2021-06-30 20:00 ` antlists
2021-06-30 23:31 ` Frank Steinmetzger
2021-06-30 20:45 ` Neil Bothwick
2021-06-30 23:31 ` Frank Steinmetzger
2021-07-01 1:29 ` William Kenworthy
2021-07-02 15:09 ` J. Roeleveld
2021-07-01 15:07 ` antlists
2021-07-01 17:21 ` Frank Steinmetzger
2021-07-01 13:47 ` Robert David
2021-07-01 15:01 ` antlists
2021-07-01 17:35 ` Frank Steinmetzger
2021-07-04 10:56 ` Robert David
2021-07-02 15:13 ` J. Roeleveld
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox