* [gentoo-user] ZFS
@ 2013-09-17 7:20 Grant
2013-09-17 7:36 ` Marc Stürmer
` (6 more replies)
0 siblings, 7 replies; 72+ messages in thread
From: Grant @ 2013-09-17 7:20 UTC (permalink / raw
To: Gentoo mailing list
I'm convinced I need 3-disk RAID1 so I can lose 2 drives and keep
running. I'd also like to stripe for performance, resulting in
RAID10. It sounds like most hardware controllers do not support
6-disk RAID10 so ZFS looks very interesting.
Can I operate ZFS RAID without a hardware RAID controller?
From a RAID perspective only, is ZFS a better choice than conventional
software RAID?
ZFS seems to have many excellent features and I'd like to ease into
them slowly (like an old man into a nice warm bath). Does ZFS allow
you to set up additional features later (e.g. snapshots, encryption,
deduplication, compression) or is some forethought required when first
making the filesystem?
It looks like there are comprehensive ZFS Gentoo docs
(http://wiki.gentoo.org/wiki/ZFS) but can anyone tell me from the real
world about how much extra difficulty/complexity is added to
installation and ongoing administration when choosing ZFS over ext4?
Performance doesn't seem to be one of ZFS's strong points. Is it
considered suitable for a high-performance server?
http://www.phoronix.com/scan.php?page=news_item&px=MTM1NTA
Besides performance, are there any drawbacks to ZFS compared to ext4?
- Grant
^ permalink raw reply [flat|nested] 72+ messages in thread
* Re: [gentoo-user] ZFS
2013-09-17 7:20 [gentoo-user] ZFS Grant
@ 2013-09-17 7:36 ` Marc Stürmer
2013-09-17 8:05 ` Pandu Poluan
` (5 subsequent siblings)
6 siblings, 0 replies; 72+ messages in thread
From: Marc Stürmer @ 2013-09-17 7:36 UTC (permalink / raw
To: gentoo-user
On 17.09.2013 09:20, Grant wrote:
> Performance doesn't seem to be one of ZFS's strong points. Is it
> considered suitable for a high-performance server?
A high performance server for what?
But you've already given yourself the answer: if high performance is
what you are aiming for, it depends on your performance needs, and ZFS
on Linux is probably not going to meet them - yet. It is still evolving.
Of course, benchmarks are static; real-world usage is another cup of coffee.
> Besides performance, are there any drawbacks to ZFS compared to ext4?
Well, it is only available as a kernel module at the moment. Some people
dislike that.
^ permalink raw reply [flat|nested] 72+ messages in thread
* Re: [gentoo-user] ZFS
2013-09-17 7:20 [gentoo-user] ZFS Grant
2013-09-17 7:36 ` Marc Stürmer
@ 2013-09-17 8:05 ` Pandu Poluan
2013-09-17 8:22 ` Alan McKinnon
` (3 more replies)
2013-09-17 9:52 ` Joerg Schilling
` (4 subsequent siblings)
6 siblings, 4 replies; 72+ messages in thread
From: Pandu Poluan @ 2013-09-17 8:05 UTC (permalink / raw
To: gentoo-user
On Tue, Sep 17, 2013 at 2:20 PM, Grant <emailgrant@gmail.com> wrote:
> I'm convinced I need 3-disk RAID1 so I can lose 2 drives and keep
> running. I'd also like to stripe for performance, resulting in
> RAID10. It sounds like most hardware controllers do not support
> 6-disk RAID10 so ZFS looks very interesting.
>
> Can I operate ZFS RAID without a hardware RAID controller?
>
Yes. In fact, that's ZFS' preferred mode of operation (i.e., it
handles all redundancy by itself).
> From a RAID perspective only, is ZFS a better choice than conventional
> software RAID?
>
Yes.
ZFS checksums all blocks during writes, and verifies those checksums
during reads.
It is possible for bits to be flipped at the same time on two hard
disks. In such a case, a RAID controller will never see the bitflips,
but ZFS will.
> ZFS seems to have many excellent features and I'd like to ease into
> them slowly (like an old man into a nice warm bath). Does ZFS allow
> you to set up additional features later (e.g. snapshots, encryption,
> deduplication, compression) or is some forethought required when first
> making the filesystem?
>
Snapshot support is built in from the beginning. All you have to do is
create one when you want it.
Deduplication can be turned on and off at will -- but be warned: you
need a HUGE amount of RAM.
Compression can be turned on and off at will. Previously-compressed
data won't become uncompressed unless you rewrite it.
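These are all ordinary dataset properties, so no forethought is needed at pool-creation time. A minimal sketch ('tank/data' is a hypothetical pool/dataset name):

```shell
# Snapshots need no setup; just create one on demand
zfs snapshot tank/data@before-upgrade

# Compression and deduplication are per-dataset properties you can flip at will
zfs set compression=on tank/data
zfs set dedup=on tank/data     # mind the RAM cost noted above
zfs set dedup=off tank/data

# Property changes only affect blocks written after the change
zfs get compression,dedup tank/data
```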
> It looks like there are comprehensive ZFS Gentoo docs
> (http://wiki.gentoo.org/wiki/ZFS) but can anyone tell me from the real
> world about how much extra difficulty/complexity is added to
> installation and ongoing administration when choosing ZFS over ext4?
>
Very, very minimal. So minimal, in fact, that if you don't plan to use
ZFS as a root filesystem, it's laughably simple. You don't even have
to edit /etc/fstab.
> Performance doesn't seem to be one of ZFS's strong points. Is it
> considered suitable for a high-performance server?
>
> http://www.phoronix.com/scan.php?page=news_item&px=MTM1NTA
>
Several points:
1. The added steps of checksumming (and verifying the checksums)
*will* give a performance penalty.
2. When comparing performance of 1 (one) drive, of course ZFS will
lose. But when you build a ZFS pool out of 3 pairs of mirrored drives,
throughput will increase significantly as ZFS has the ability to do
'load-balancing' among mirror-pairs (or, in ZFS parlance, "mirrored
vdevs")
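In ZFS terms, the 6-disk RAID10 the original post asks about is simply a pool of three mirrored vdevs, no controller required. A sketch (device names are placeholders for your actual disks):

```shell
# Stripe across three 2-way mirrors: the ZFS equivalent of 6-disk RAID10
zpool create tank \
    mirror /dev/sda /dev/sdb \
    mirror /dev/sdc /dev/sdd \
    mirror /dev/sde /dev/sdf

zpool status tank    # shows the three mirror vdevs
```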
Go directly to this post:
http://phoronix.com/forums/showthread.php?79922-Benchmarks-Of-The-New-ZFS-On-Linux-EXT4-Wins&p=326838#post326838
Notice how ZFS won against ext4 in 8 scenarios out of 9. (The only
scenario where ZFS lost is in the single-client RAID-1 scenario)
> Besides performance, are there any drawbacks to ZFS compared to ext4?
>
1. You need a huge amount of RAM to let ZFS do its magic. But RAM is
cheap nowadays. Data... possibly priceless.
2. Be careful when using ZFS on a server on which processes rapidly
spawn and terminate. ZFS doesn't like memory fragmentation.
For point #2, I can give you a real-life example:
My mail server, for some reason, chokes if too many TLS errors happen.
So, I placed "Perdition" in front to capture all POP3 connections and
'un-TLS' them. Perdition spawns a new process for *every* connection.
My mail server has 2000 users; I regularly see more than 100 Perdition
child processes, many very ephemeral (i.e., existing for less than 5
seconds). The RAM is undoubtedly *extremely* fragmented. ZFS cries
murder when it cannot allocate a contiguous SLAB of memory to increase
its ARC cache.
OTOH, on another very busy server (a mail-archiving server running
MailArchiva, handling 2000+ emails per hour), ZFS runs flawlessly. No
incident _at_all_. Undoubtedly because MailArchiva uses one single huge
(Java-based) process to handle all transactions, so there is no RAM
fragmentation there.
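On ZFS on Linux you can watch the ARC directly; a sketch, assuming the usual /proc interface exposed by the SPL/ZFS kernel modules:

```shell
# Current ARC size (size), target (c) and ceiling (c_max), in bytes
awk '$1 == "size" || $1 == "c" || $1 == "c_max" {print $1, $3}' \
    /proc/spl/kstat/zfs/arcstats
```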
Rgds,
--
FdS Pandu E Poluan
~ IT Optimizer ~
• LOPSA Member #15248
• Blog : http://pepoluan.tumblr.com
• Linked-In : http://id.linkedin.com/in/pepoluan
^ permalink raw reply [flat|nested] 72+ messages in thread
* Re: [gentoo-user] ZFS
2013-09-17 8:05 ` Pandu Poluan
@ 2013-09-17 8:22 ` Alan McKinnon
2013-09-17 9:44 ` Grant
2013-09-17 9:42 ` Grant
` (2 subsequent siblings)
3 siblings, 1 reply; 72+ messages in thread
From: Alan McKinnon @ 2013-09-17 8:22 UTC (permalink / raw
To: gentoo-user
On 17/09/2013 10:05, Pandu Poluan wrote:
> On Tue, Sep 17, 2013 at 2:20 PM, Grant <emailgrant@gmail.com> wrote:
>> I'm convinced I need 3-disk RAID1 so I can lose 2 drives and keep
>> running. I'd also like to stripe for performance, resulting in
>> RAID10. It sounds like most hardware controllers do not support
>> 6-disk RAID10 so ZFS looks very interesting.
>>
>> Can I operate ZFS RAID without a hardware RAID controller?
>>
>
> Yes. In fact, that's ZFS' preferred mode of operation (i.e., it
> handles all redundancy by itself).
I would take it a step further and say that a hardware RAID controller
actively interferes with ZFS and gets in the way - so much so that one
should not use one at all.
Running the controller in JBOD mode is not just a good idea; I'd say it's
a requirement.
--
Alan McKinnon
alan.mckinnon@gmail.com
^ permalink raw reply [flat|nested] 72+ messages in thread
* Re: [gentoo-user] ZFS
2013-09-17 8:05 ` Pandu Poluan
2013-09-17 8:22 ` Alan McKinnon
@ 2013-09-17 9:42 ` Grant
2013-09-17 10:11 ` Tanstaafl
2013-09-17 16:32 ` covici
3 siblings, 0 replies; 72+ messages in thread
From: Grant @ 2013-09-17 9:42 UTC (permalink / raw
To: Gentoo mailing list
>> It looks like there are comprehensive ZFS Gentoo docs
>> (http://wiki.gentoo.org/wiki/ZFS) but can anyone tell me from the real
>> world about how much extra difficulty/complexity is added to
>> installation and ongoing administration when choosing ZFS over ext4?
>
> Very very minimal. So minimal, in fact, that if you don't plan to use
> ZFS as a root filesystem, it's laughably simple. You don't even have
> to edit /etc/fstab
I do plan to use it as the root filesystem but it sounds like I
shouldn't worry about extra headaches.
>> Performance doesn't seem to be one of ZFS's strong points. Is it
>> considered suitable for a high-performance server?
>>
>> http://www.phoronix.com/scan.php?page=news_item&px=MTM1NTA
>
> Go directly to this post:
> http://phoronix.com/forums/showthread.php?79922-Benchmarks-Of-The-New-ZFS-On-Linux-EXT4-Wins&p=326838#post326838
>
> Notice how ZFS won against ext4 in 8 scenarios out of 9. (The only
> scenario where ZFS lost is in the single-client RAID-1 scenario)
Very encouraging. I'll let that assuage my performance concerns.
>> Besides performance, are there any drawbacks to ZFS compared to ext4?
>
> 1. You need a huge amount of RAM to let ZFS do its magic. But RAM is
> cheap nowadays. Data... possibly priceless.
Is this a requirement for deduplication, or for ZFS in general?
How can you determine how much RAM you'll need?
> 2. Be careful when using ZFS on a server on which processes rapidly
> spawn and terminate. ZFS doesn't like memory fragmentation.
I don't think I have that sort of scenario on my server. Is there a
way to check for memory fragmentation to be sure?
> For point #2, I can give you a real-life example:
>
> My mail server, for some reasons, choke if too many TLS errors happen.
> So, I placed "Perdition" in to capture all POP3 connections and
> 'un-TLS' them. Perdition spawns a new process for *every* connection.
> My mail server has 2000 users, I regularly see more than 100 Perdition
> child processes. Many very ephemeral (i.e., existing for less than 5
> seconds). The RAM is undoubtedly *extremely* fragmented. ZFS cries
> murder when it cannot allocate a contiguous SLAB of memory to increase
> its ARC Cache.
Did you have to switch to a different filesystem on that server?
- Grant
^ permalink raw reply [flat|nested] 72+ messages in thread
* Re: [gentoo-user] ZFS
2013-09-17 8:22 ` Alan McKinnon
@ 2013-09-17 9:44 ` Grant
0 siblings, 0 replies; 72+ messages in thread
From: Grant @ 2013-09-17 9:44 UTC (permalink / raw
To: Gentoo mailing list
>>> I'm convinced I need 3-disk RAID1 so I can lose 2 drives and keep
>>> running. I'd also like to stripe for performance, resulting in
>>> RAID10. It sounds like most hardware controllers do not support
>>> 6-disk RAID10 so ZFS looks very interesting.
>>>
>>> Can I operate ZFS RAID without a hardware RAID controller?
>>
>> Yes. In fact, that's ZFS' preferred mode of operation (i.e., it
>> handles all redundancy by itself).
>
> I would take it a step further and say that a hardware RAID controller
> actively interferes with ZFS and gets in the way. It gets in the way so
> much that one should not do it at all.
>
> Running the controller in JBOD mode is not a good idea, I'd say it's a
> requirement.
If I go with ZFS I won't have a RAID controller installed at all. One
less point of hardware failure too.
- Grant
^ permalink raw reply [flat|nested] 72+ messages in thread
* Re: [gentoo-user] ZFS
2013-09-17 7:20 [gentoo-user] ZFS Grant
2013-09-17 7:36 ` Marc Stürmer
2013-09-17 8:05 ` Pandu Poluan
@ 2013-09-17 9:52 ` Joerg Schilling
2013-09-17 13:22 ` Grant
2013-09-17 10:19 ` Tanstaafl
` (3 subsequent siblings)
6 siblings, 1 reply; 72+ messages in thread
From: Joerg Schilling @ 2013-09-17 9:52 UTC (permalink / raw
To: gentoo-user
Grant <emailgrant@gmail.com> wrote:
> Performance doesn't seem to be one of ZFS's strong points. Is it
> considered suitable for a high-performance server?
ZFS is one of the fastest filesystems I am aware of (if not the fastest).
You need a sufficient amount of RAM to make the ARC useful.
The only problem I am aware of with ZFS is that if you ask it to
guarantee consistency for a specific file at a specific time, you force
it to become slow.
Jörg
--
EMail:joerg@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
js@cs.tu-berlin.de (uni)
joerg.schilling@fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/
URL: http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily
^ permalink raw reply [flat|nested] 72+ messages in thread
* Re: [gentoo-user] ZFS
2013-09-17 8:05 ` Pandu Poluan
2013-09-17 8:22 ` Alan McKinnon
2013-09-17 9:42 ` Grant
@ 2013-09-17 10:11 ` Tanstaafl
2013-09-17 16:32 ` covici
3 siblings, 0 replies; 72+ messages in thread
From: Tanstaafl @ 2013-09-17 10:11 UTC (permalink / raw
To: gentoo-user
On 2013-09-17 4:05 AM, Pandu Poluan <pandu@poluan.info> wrote:
> 2. When comparing performance of 1 (one) drive, of course ZFS will
> lose. But when you build a ZFS pool out of 3 pairs of mirrored drives,
> throughput will increase significantly as ZFS has the ability to do
> 'load-balancing' among mirror-pairs (or, in ZFS parlance, "mirrored
> vdevs")
Hmmm...
If conventional wisdom is to run a hardware RAID card in JBOD mode, how
can you also set it up with mirrored pairs at the same time?
So, for best performance & reliability, which is it? JBOD mode? Or
mirrored vdevs?
^ permalink raw reply [flat|nested] 72+ messages in thread
* Re: [gentoo-user] ZFS
2013-09-17 7:20 [gentoo-user] ZFS Grant
` (2 preceding siblings ...)
2013-09-17 9:52 ` Joerg Schilling
@ 2013-09-17 10:19 ` Tanstaafl
2013-09-17 13:21 ` Grant
2013-09-17 18:00 ` Volker Armin Hemmann
` (2 subsequent siblings)
6 siblings, 1 reply; 72+ messages in thread
From: Tanstaafl @ 2013-09-17 10:19 UTC (permalink / raw
To: gentoo-user
On 2013-09-17 3:20 AM, Grant <emailgrant@gmail.com> wrote:
> It sounds like most hardware controllers do not support
> 6-disk RAID10 so ZFS looks very interesting.
?? RAID 10 simply requires an even number of drives with a minimum of 4.
So, you certainly can have a 6 disk RAID10 - I've got a system with one
right now in fact.
> Can I operate ZFS RAID without a hardware RAID controller?
Yes.
^ permalink raw reply [flat|nested] 72+ messages in thread
* Re: [gentoo-user] ZFS
2013-09-17 10:19 ` Tanstaafl
@ 2013-09-17 13:21 ` Grant
2013-09-17 15:18 ` Michael Orlitzky
0 siblings, 1 reply; 72+ messages in thread
From: Grant @ 2013-09-17 13:21 UTC (permalink / raw
To: Gentoo mailing list
>> It sounds like most hardware controllers do not support
>> 6-disk RAID10 so ZFS looks very interesting.
>
> ?? RAID 10 simply requires an even number of drives with a minimum of 4.
OK, there seems to be some disagreement on this. Michael?
- Grant
^ permalink raw reply [flat|nested] 72+ messages in thread
* Re: [gentoo-user] ZFS
2013-09-17 9:52 ` Joerg Schilling
@ 2013-09-17 13:22 ` Grant
2013-09-17 13:30 ` Joerg Schilling
2013-09-17 16:39 ` Alan McKinnon
0 siblings, 2 replies; 72+ messages in thread
From: Grant @ 2013-09-17 13:22 UTC (permalink / raw
To: Gentoo mailing list
>> Performance doesn't seem to be one of ZFS's strong points. Is it
>> considered suitable for a high-performance server?
>
> ZFS is one of the fastest FS I am aware of (if not the fastest).
> You need a sufficient amount of RAM to make the ARC useful.
How much RAM is that?
- Grant
^ permalink raw reply [flat|nested] 72+ messages in thread
* Re: [gentoo-user] ZFS
2013-09-17 13:22 ` Grant
@ 2013-09-17 13:30 ` Joerg Schilling
2013-09-17 16:39 ` Alan McKinnon
1 sibling, 0 replies; 72+ messages in thread
From: Joerg Schilling @ 2013-09-17 13:30 UTC (permalink / raw
To: gentoo-user
Grant <emailgrant@gmail.com> wrote:
> >> Performance doesn't seem to be one of ZFS's strong points. Is it
> >> considered suitable for a high-performance server?
> >
> > ZFS is one of the fastest FS I am aware of (if not the fastest).
> > You need a sufficient amount of RAM to make the ARC useful.
>
> How much RAM is that?
How much do you have?
File servers usually have at least 20 GB, but 64+ GB is common...
Jörg
--
EMail:joerg@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
js@cs.tu-berlin.de (uni)
joerg.schilling@fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/
URL: http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily
^ permalink raw reply [flat|nested] 72+ messages in thread
* Re: [gentoo-user] ZFS
2013-09-17 13:21 ` Grant
@ 2013-09-17 15:18 ` Michael Orlitzky
2013-09-17 15:40 ` Tanstaafl
0 siblings, 1 reply; 72+ messages in thread
From: Michael Orlitzky @ 2013-09-17 15:18 UTC (permalink / raw
To: gentoo-user
On 09/17/2013 09:21 AM, Grant wrote:
>>> It sounds like most hardware controllers do not support
>>> 6-disk RAID10 so ZFS looks very interesting.
>>
>> ?? RAID 10 simply requires an even number of drives with a minimum of 4.
>
> OK, there seems to be some disagreement on this. Michael?
>
Any controller that claims RAID10 on a server with 6 drive bays should
be able to put all six drives in an array. But you'll get a three-way
stripe (better performance) instead of a three-way mirror (better fault
tolerance).
So,
A B C
A B C
and not,
A B
A B
A B
The former gives you more space but slightly less fault tolerance than
four drives with a hot spare.
^ permalink raw reply [flat|nested] 72+ messages in thread
* Re: [gentoo-user] ZFS
2013-09-17 15:18 ` Michael Orlitzky
@ 2013-09-17 15:40 ` Tanstaafl
2013-09-17 16:34 ` Michael Orlitzky
0 siblings, 1 reply; 72+ messages in thread
From: Tanstaafl @ 2013-09-17 15:40 UTC (permalink / raw
To: gentoo-user
On 2013-09-17 11:18 AM, Michael Orlitzky <michael@orlitzky.com> wrote:
> Any controller that claims RAID10 on a server with 6 drive bays should
> be able to put all six drives in an array. But you'll get a three-way
> stripe (better performance) instead of a three-way mirror (better fault
> tolerance).
>
> So,
>
> A B C
> A B C
>
> and not,
>
> A B
> A B
> A B
>
> The former gives you more space but slightly less fault tolerance than
> four drives with a hot spare.
Sorry, don't understand what you're saying.
Are you talking about the difference between RAID1+0 and RAID0+1?
If not, then please point to *authoritative* docs on what you mean.
Googling on just RAID10 doesn't confuse the issues like you seem to be
doing (probably my ignorance though)...
^ permalink raw reply [flat|nested] 72+ messages in thread
* Re: [gentoo-user] ZFS
2013-09-17 8:05 ` Pandu Poluan
` (2 preceding siblings ...)
2013-09-17 10:11 ` Tanstaafl
@ 2013-09-17 16:32 ` covici
2013-09-19 22:41 ` Douglas J Hunley
2013-09-19 22:46 ` Douglas J Hunley
3 siblings, 2 replies; 72+ messages in thread
From: covici @ 2013-09-17 16:32 UTC (permalink / raw
To: gentoo-user
Pandu Poluan <pandu@poluan.info> wrote:
> On Tue, Sep 17, 2013 at 2:20 PM, Grant <emailgrant@gmail.com> wrote:
> > I'm convinced I need 3-disk RAID1 so I can lose 2 drives and keep
> > running. I'd also like to stripe for performance, resulting in
> > RAID10. It sounds like most hardware controllers do not support
> > 6-disk RAID10 so ZFS looks very interesting.
> >
> > Can I operate ZFS RAID without a hardware RAID controller?
> >
>
> Yes. In fact, that's ZFS' preferred mode of operation (i.e., it
> handles all redundancy by itself).
>
> > From a RAID perspective only, is ZFS a better choice than conventional
> > software RAID?
> >
>
> Yes.
>
> ZFS checksummed all blocks during writes, and verifies those checksums
> during read.
>
> It is possible to have 2 bits flipped at the same time among 2 hard
> disks. In such case, the RAID controller will never see the bitflips.
> But ZFS will see it.
>
> > ZFS seems to have many excellent features and I'd like to ease into
> > them slowly (like an old man into a nice warm bath). Does ZFS allow
> > you to set up additional features later (e.g. snapshots, encryption,
> > deduplication, compression) or is some forethought required when first
> > making the filesystem?
> >
>
> Snapshots is built-in from the beginning. All you have to do is create
> one when you want it.
>
> Deduplication can be turned on and off at will -- but be warned: You
> need HUGE amount of RAM.
>
> Compression can be turned on and off at will. Previously-compressed
> data won't become uncompressed unless you modify them.
>
> > It looks like there are comprehensive ZFS Gentoo docs
> > (http://wiki.gentoo.org/wiki/ZFS) but can anyone tell me from the real
> > world about how much extra difficulty/complexity is added to
> > installation and ongoing administration when choosing ZFS over ext4?
> >
>
> Very very minimal. So minimal, in fact, that if you don't plan to use
> ZFS as a root filesystem, it's laughably simple. You don't even have
> to edit /etc/fstab
>
> > Performance doesn't seem to be one of ZFS's strong points. Is it
> > considered suitable for a high-performance server?
> >
> > http://www.phoronix.com/scan.php?page=news_item&px=MTM1NTA
> >
>
> Several points:
>
> 1. The added steps of checksumming (and verifying the checksums)
> *will* give a performance penalty.
>
> 2. When comparing performance of 1 (one) drive, of course ZFS will
> lose. But when you build a ZFS pool out of 3 pairs of mirrored drives,
> throughput will increase significantly as ZFS has the ability to do
> 'load-balancing' among mirror-pairs (or, in ZFS parlance, "mirrored
> vdevs")
>
> Go directly to this post:
> http://phoronix.com/forums/showthread.php?79922-Benchmarks-Of-The-New-ZFS-On-Linux-EXT4-Wins&p=326838#post326838
>
> Notice how ZFS won against ext4 in 8 scenarios out of 9. (The only
> scenario where ZFS lost is in the single-client RAID-1 scenario)
>
> > Besides performance, are there any drawbacks to ZFS compared to ext4?
> >
>
> 1. You need a huge amount of RAM to let ZFS do its magic. But RAM is
> cheap nowadays. Data... possibly priceless.
>
> 2. Be careful when using ZFS on a server on which processes rapidly
> spawn and terminate. ZFS doesn't like memory fragmentation.
>
> For point #2, I can give you a real-life example:
>
> My mail server, for some reasons, choke if too many TLS errors happen.
> So, I placed "Perdition" in to capture all POP3 connections and
> 'un-TLS' them. Perdition spawns a new process for *every* connection.
> My mail server has 2000 users, I regularly see more than 100 Perdition
> child processes. Many very ephemeral (i.e., existing for less than 5
> seconds). The RAM is undoubtedly *extremely* fragmented. ZFS cries
> murder when it cannot allocate a contiguous SLAB of memory to increase
> its ARC Cache.
>
> OTOH, on another very busy server (mail archiving server using
> MailArchiva, handling 2000+ emails per hour), ZFS run flawlessly. No
> incident _at_all_. Undoubtedly because MailArchiva use one single huge
> process (Java-based) to handle all transactions, so no RAM
> fragmentation here.
So do I need that overlay at all, or just emerge zfs and its module?
Also, I now have LVM volumes, including root, but not boot; how do I
convert, and do I have to do anything to my initramfs?
--
Your life is like a penny. You're going to lose it. The question is:
How do
you spend it?
John Covici
covici@ccs.covici.com
^ permalink raw reply [flat|nested] 72+ messages in thread
* Re: [gentoo-user] ZFS
2013-09-17 15:40 ` Tanstaafl
@ 2013-09-17 16:34 ` Michael Orlitzky
2013-09-17 17:00 ` Tanstaafl
2013-09-18 4:02 ` Grant
0 siblings, 2 replies; 72+ messages in thread
From: Michael Orlitzky @ 2013-09-17 16:34 UTC (permalink / raw
To: gentoo-user
On 09/17/2013 11:40 AM, Tanstaafl wrote:
> On 2013-09-17 11:18 AM, Michael Orlitzky <michael@orlitzky.com> wrote:
>> Any controller that claims RAID10 on a server with 6 drive bays should
>> be able to put all six drives in an array. But you'll get a three-way
>> stripe (better performance) instead of a three-way mirror (better fault
>> tolerance).
>>
>> So,
>>
>> A B C
>> A B C
>>
>> and not,
>>
>> A B
>> A B
>> A B
>>
>> The former gives you more space but slightly less fault tolerance than
>> four drives with a hot spare.
>
> Sorry, don't understand what you're saying.
>
> Are you talking about the difference between RAID1+0 and RAID0+1?
Nope. Both of my examples above are stripes of mirrors, i.e. 1 + 0.
> If not, then please point to *authoritative* docs on what you mean.
http://www.snia.org/tech_activities/standards/curr_standards/ddf
> Googling on just RAID10 doesn't confuse the issues like you seem to be
> doing (probably my ignorance though)...
>
It's not my fault; the standard confuses the issue =)
Controllers that can do multi-mirroring are next to nonexistent, so they
produce few Google results. You can generally assume that RAID10 with 6
drives is going to give you,
A B C
A B C
so you don't get much more fault tolerance by throwing more drives at
it. The controller in Grant's server can do this, I'm sure.
For maximum fault tolerance, what you really want is,
A B
A B
A B
but, like I said, it's hard to find in hardware. The standard I linked
to calls both of these "RAID10", thus the confusion.
I forget why I even brought it up. I think it was in order to argue that
4 drives w/ a spare is more tolerant than 6 drives in RAID10. To make that
argument, we need to be clear about what "RAID10" means.
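With ZFS you don't have to guess which layout "RAID10" means; both can be requested explicitly (device names hypothetical):

```shell
# Three 2-way mirrors, striped: the common "A B C / A B C" layout
zpool create tank mirror sda sdb mirror sdc sdd mirror sde sdf

# Two 3-way mirrors, striped: the fault-tolerant "A B / A B / A B" layout
zpool create tank mirror sda sdb sdc mirror sdd sde sdf
```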
^ permalink raw reply [flat|nested] 72+ messages in thread
* Re: [gentoo-user] ZFS
2013-09-17 13:22 ` Grant
2013-09-17 13:30 ` Joerg Schilling
@ 2013-09-17 16:39 ` Alan McKinnon
2013-09-18 4:06 ` Grant
1 sibling, 1 reply; 72+ messages in thread
From: Alan McKinnon @ 2013-09-17 16:39 UTC (permalink / raw
To: gentoo-user
On 17/09/2013 15:22, Grant wrote:
>>> Performance doesn't seem to be one of ZFS's strong points. Is it
>>> considered suitable for a high-performance server?
>>
>> ZFS is one of the fastest FS I am aware of (if not the fastest).
>> You need a sufficient amount of RAM to make the ARC useful.
>
> How much RAM is that?
>
> - Grant
>
1G of RAM per 1TB of data is the recommendation.
For de-duped data, it is considerably more, something on the order of 6G
of RAM per 1TB of data.
The first guideline is actually not too onerous. It *seems* like a huge
amount of RAM, but
a) Most modern motherboards can handle that with ease
b) RAM is comparatively cheap
c) It's a once-off purchase
d) RAM is very reliable so once-off really does mean once-off
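Those guidelines are easy to turn into numbers; a sketch (the 1 GB/TB and 6 GB/TB figures are the rules of thumb from this thread, not hard limits):

```shell
# Rule of thumb from this thread: ~1 GB RAM per TB of data,
# ~6 GB per TB when deduplication is enabled.
recommended_arc_ram_gb() {
    local pool_tb=$1 dedup=${2:-no}
    if [ "$dedup" = yes ]; then
        echo $(( pool_tb * 6 ))
    else
        echo $(( pool_tb * 1 ))
    fi
}

# A 6-disk pool of mirrored pairs, assuming ~3 TB usable
recommended_arc_ram_gb 3        # -> 3
recommended_arc_ram_gb 3 yes    # -> 18
```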
--
Alan McKinnon
alan.mckinnon@gmail.com
^ permalink raw reply [flat|nested] 72+ messages in thread
* Re: [gentoo-user] ZFS
2013-09-17 16:34 ` Michael Orlitzky
@ 2013-09-17 17:00 ` Tanstaafl
2013-09-17 17:07 ` Michael Orlitzky
2013-09-18 4:02 ` Grant
1 sibling, 1 reply; 72+ messages in thread
From: Tanstaafl @ 2013-09-17 17:00 UTC (permalink / raw
To: gentoo-user
On 2013-09-17 12:34 PM, Michael Orlitzky <michael@orlitzky.com> wrote:
> For maximum fault tolerance, what you really want is,
>
> A B
> A B
> A B
>
> but, like I said, it's hard to find in hardware. The standard I linked
> to calls both of these "RAID10", thus the confusion.
Ok, I see where my confusion came in... when you first referred to this,
you said that the *latter* was the more common version, but I guess you
meant the former (since you're now saying the latter is 'hard to find in
hardware')...
> I forget why I even brought it up. I think it was in order to argue that
> 4 drives w/ spare is more tolerant that 6 drives in RAID10.
But not 6-drive RAID w/ a hot spare... ;) Anyone who can't afford to add
a single additional drive for the peace of mind has no business buying
the RAID card to begin with...
^ permalink raw reply [flat|nested] 72+ messages in thread
* Re: [gentoo-user] ZFS
2013-09-17 17:00 ` Tanstaafl
@ 2013-09-17 17:07 ` Michael Orlitzky
2013-09-17 17:34 ` Tanstaafl
0 siblings, 1 reply; 72+ messages in thread
From: Michael Orlitzky @ 2013-09-17 17:07 UTC (permalink / raw
To: gentoo-user
On 09/17/2013 01:00 PM, Tanstaafl wrote:
>
> But not 6-drive RAID w/ hot spare... ;) Anyone who can't afford to add a
> single additional drive for the piece of mind has no business buying the
> RAID card to begin with...
Most of our servers only come with 6 drive bays -- that's why I have
this speech already rehearsed!
^ permalink raw reply [flat|nested] 72+ messages in thread
* Re: [gentoo-user] ZFS
2013-09-17 17:07 ` Michael Orlitzky
@ 2013-09-17 17:34 ` Tanstaafl
2013-09-17 17:54 ` Stefan G. Weichinger
0 siblings, 1 reply; 72+ messages in thread
From: Tanstaafl @ 2013-09-17 17:34 UTC (permalink / raw
To: gentoo-user
On 2013-09-17 1:07 PM, Michael Orlitzky <michael@orlitzky.com> wrote:
> On 09/17/2013 01:00 PM, Tanstaafl wrote:
>>
>> But not 6-drive RAID w/ hot spare... ;) Anyone who can't afford to add a
>> single additional drive for the piece of mind has no business buying the
>> RAID card to begin with...
>
> Most of our servers only come with 6 drive bays -- that's why I have
> this speech already rehearsed!
Ahh...
^ permalink raw reply [flat|nested] 72+ messages in thread
* Re: [gentoo-user] ZFS
2013-09-17 17:34 ` Tanstaafl
@ 2013-09-17 17:54 ` Stefan G. Weichinger
2013-09-18 4:11 ` Grant
2013-09-19 22:46 ` Douglas J Hunley
0 siblings, 2 replies; 72+ messages in thread
From: Stefan G. Weichinger @ 2013-09-17 17:54 UTC (permalink / raw
To: gentoo-user
On 17.09.2013 19:34, Tanstaafl wrote:
> On 2013-09-17 1:07 PM, Michael Orlitzky <michael@orlitzky.com> wrote:
>> On 09/17/2013 01:00 PM, Tanstaafl wrote:
>>>
>>> But not 6-drive RAID w/ hot spare... ;) Anyone who can't afford to add a
>>> single additional drive for the piece of mind has no business buying the
>>> RAID card to begin with...
>>
>> Most of our servers only come with 6 drive bays -- that's why I have
>> this speech already rehearsed!
>
> Ahh...
>
So what would be the recommended setup with ZFS and 6 drives?
I have to set up a server w/ 8x 1TB in about 2 weeks and am considering
ZFS as well, at least for data. The root fs would then go onto 2x 1TB
HDDs with conventional partitioning and something like ext4.
6x 1TB would be available for data ... on one hand for a file-server
part ... on the other hand for VMs based on KVM.
The server has 64 gigs of RAM so that won't be a problem here.
I still wonder if the virtual disks for the VMs will run fine on ZFS ...
no way to test it until I am there and set the box up.
S
^ permalink raw reply [flat|nested] 72+ messages in thread
* Re: [gentoo-user] ZFS
2013-09-17 7:20 [gentoo-user] ZFS Grant
` (3 preceding siblings ...)
2013-09-17 10:19 ` Tanstaafl
@ 2013-09-17 18:00 ` Volker Armin Hemmann
2013-09-17 18:11 ` covici
` (3 more replies)
2013-09-18 13:53 ` Stefan G. Weichinger
2013-09-21 12:53 ` thegeezer
6 siblings, 4 replies; 72+ messages in thread
From: Volker Armin Hemmann @ 2013-09-17 18:00 UTC (permalink / raw
To: gentoo-user
On 17.09.2013 09:20, Grant wrote:
> I'm convinced I need 3-disk RAID1 so I can lose 2 drives and keep
> running. I'd also like to stripe for performance, resulting in
> RAID10. It sounds like most hardware controllers do not support
> 6-disk RAID10 so ZFS looks very interesting.
>
> Can I operate ZFS RAID without a hardware RAID controller?
>
> From a RAID perspective only, is ZFS a better choice than conventional
> software RAID?
>
> ZFS seems to have many excellent features and I'd like to ease into
> them slowly (like an old man into a nice warm bath). Does ZFS allow
> you to set up additional features later (e.g. snapshots, encryption,
> deduplication, compression) or is some forethought required when first
> making the filesystem?
>
> It looks like there are comprehensive ZFS Gentoo docs
> (http://wiki.gentoo.org/wiki/ZFS) but can anyone tell me from the real
> world about how much extra difficulty/complexity is added to
> installation and ongoing administration when choosing ZFS over ext4?
>
> Performance doesn't seem to be one of ZFS's strong points. Is it
> considered suitable for a high-performance server?
>
> http://www.phoronix.com/scan.php?page=news_item&px=MTM1NTA
>
> Besides performance, are there any drawbacks to ZFS compared to ext4?
>
do yourself three favours:
use ECC RAM. Lots of it. 16GB of DDR3-1600 ECC RAM costs you less than
170€, and it is worth it. ZFS showed me just how many silent corruptions
can happen on a 'stable' system - errors never seen nor detected, thanks
to using 'standard' RAM.
turn off readahead. ZFS' own readahead and the kernel's clash - badly.
Turn off the kernel's readahead for a visible performance boon.
use noop as the I/O scheduler.
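A rough sketch of how those three tips translate into commands. None of this is from the original mail: /dev/sda is a placeholder for each pool member, and the device-touching commands (shown commented out) need root on a real system.

```shell
# ECC: check whether EDAC reports a memory controller (kernel EDAC support needed):
# ls /sys/devices/system/edac/mc/

# Readahead: blockdev's value is in 512-byte sectors, so 8 means 4 KiB:
ra_sectors=8
echo $((ra_sectors * 512))    # 4096 bytes
# blockdev --setra $ra_sectors /dev/sda

# Scheduler: select the noop elevator for each pool member:
# echo noop > /sys/block/sda/queue/scheduler
# cat /sys/block/sda/queue/scheduler
```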
* Re: [gentoo-user] ZFS
2013-09-17 18:00 ` Volker Armin Hemmann
@ 2013-09-17 18:11 ` covici
2013-09-17 19:30 ` Volker Armin Hemmann
2013-09-17 18:11 ` Tanstaafl
` (2 subsequent siblings)
3 siblings, 1 reply; 72+ messages in thread
From: covici @ 2013-09-17 18:11 UTC (permalink / raw
To: gentoo-user
Volker Armin Hemmann <volkerarmin@googlemail.com> wrote:
> Am 17.09.2013 09:20, schrieb Grant:
> > I'm convinced I need 3-disk RAID1 so I can lose 2 drives and keep
> > running. I'd also like to stripe for performance, resulting in
> > RAID10. It sounds like most hardware controllers do not support
> > 6-disk RAID10 so ZFS looks very interesting.
> >
> > Can I operate ZFS RAID without a hardware RAID controller?
> >
> > From a RAID perspective only, is ZFS a better choice than conventional
> > software RAID?
> >
> > ZFS seems to have many excellent features and I'd like to ease into
> > them slowly (like an old man into a nice warm bath). Does ZFS allow
> > you to set up additional features later (e.g. snapshots, encryption,
> > deduplication, compression) or is some forethought required when first
> > making the filesystem?
> >
> > It looks like there are comprehensive ZFS Gentoo docs
> > (http://wiki.gentoo.org/wiki/ZFS) but can anyone tell me from the real
> > world about how much extra difficulty/complexity is added to
> > installation and ongoing administration when choosing ZFS over ext4?
> >
> > Performance doesn't seem to be one of ZFS's strong points. Is it
> > considered suitable for a high-performance server?
> >
> > http://www.phoronix.com/scan.php?page=news_item&px=MTM1NTA
> >
> > Besides performance, are there any drawbacks to ZFS compared to ext4?
> >
> do yourself three favours:
>
> use ECC ram. Lots of it. 16GB DDR3 1600 ECC ram cost you less than 170€.
> And it is worth it. ZFS showed me just how many silent corruptions can
> happen on a 'stable' system. Errors never seen neither detected thanks
> to using 'standard' ram.
>
> turn off readahead. ZFS' own readahead and the kernel's clash - badly.
> Turn off kernel's readahead for a visible performance boon.
>
> use noop as io-scheduler.
How do you turn off readahead?
--
Your life is like a penny. You're going to lose it. The question is:
How do
you spend it?
John Covici
covici@ccs.covici.com
* Re: [gentoo-user] ZFS
2013-09-17 18:00 ` Volker Armin Hemmann
2013-09-17 18:11 ` covici
@ 2013-09-17 18:11 ` Tanstaafl
2013-09-17 19:30 ` Volker Armin Hemmann
2013-09-18 4:22 ` Bruce Hill
2013-09-18 4:12 ` [gentoo-user] ZFS Grant
2013-09-18 9:56 ` Joerg Schilling
3 siblings, 2 replies; 72+ messages in thread
From: Tanstaafl @ 2013-09-17 18:11 UTC (permalink / raw
To: gentoo-user
On 2013-09-17 2:00 PM, Volker Armin Hemmann <volkerarmin@googlemail.com>
wrote:
> use ECC ram. Lots of it. 16GB DDR3 1600 ECC ram cost you less than 170€.
> And it is worth it. ZFS showed me just how many silent corruptions can
> happen on a 'stable' system. Errors never seen neither detected thanks
> to using 'standard' ram.
>
> turn off readahead. ZFS' own readahead and the kernel's clash - badly.
> Turn off kernel's readahead for a visible performance boon.
>
> use noop as io-scheduler.
Is there a good place to read about these kinds of tuning parameters?
* Re: [gentoo-user] ZFS
2013-09-17 18:11 ` covici
@ 2013-09-17 19:30 ` Volker Armin Hemmann
2013-09-18 4:20 ` Grant
0 siblings, 1 reply; 72+ messages in thread
From: Volker Armin Hemmann @ 2013-09-17 19:30 UTC (permalink / raw
To: gentoo-user
Am 17.09.2013 20:11, schrieb covici@ccs.covici.com:
> Volker Armin Hemmann <volkerarmin@googlemail.com> wrote:
>
>> Am 17.09.2013 09:20, schrieb Grant:
>>> I'm convinced I need 3-disk RAID1 so I can lose 2 drives and keep
>>> running. I'd also like to stripe for performance, resulting in
>>> RAID10. It sounds like most hardware controllers do not support
>>> 6-disk RAID10 so ZFS looks very interesting.
>>>
>>> Can I operate ZFS RAID without a hardware RAID controller?
>>>
>>> From a RAID perspective only, is ZFS a better choice than conventional
>>> software RAID?
>>>
>>> ZFS seems to have many excellent features and I'd like to ease into
>>> them slowly (like an old man into a nice warm bath). Does ZFS allow
>>> you to set up additional features later (e.g. snapshots, encryption,
>>> deduplication, compression) or is some forethought required when first
>>> making the filesystem?
>>>
>>> It looks like there are comprehensive ZFS Gentoo docs
>>> (http://wiki.gentoo.org/wiki/ZFS) but can anyone tell me from the real
>>> world about how much extra difficulty/complexity is added to
>>> installation and ongoing administration when choosing ZFS over ext4?
>>>
>>> Performance doesn't seem to be one of ZFS's strong points. Is it
>>> considered suitable for a high-performance server?
>>>
>>> http://www.phoronix.com/scan.php?page=news_item&px=MTM1NTA
>>>
>>> Besides performance, are there any drawbacks to ZFS compared to ext4?
>>>
>> do yourself three favours:
>>
>> use ECC ram. Lots of it. 16GB DDR3 1600 ECC ram cost you less than 170€.
>> And it is worth it. ZFS showed me just how many silent corruptions can
>> happen on a 'stable' system. Errors never seen neither detected thanks
>> to using 'standard' ram.
>>
>> turn off readahead. ZFS' own readahead and the kernel's clash - badly.
>> Turn off kernel's readahead for a visible performance boon.
>>
>> use noop as io-scheduler.
> How do you turnoff read ahead?
>
Set it with blockdev to 8 (for example). That doesn't turn it off; it
just makes it non-obtrusive.
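In command form, that suggestion looks roughly like this (/dev/sdb is a placeholder device, so the actual blockdev calls are shown commented out; they need root):

```shell
# readahead is set in 512-byte sectors, so a value of 8 means 4 KiB:
ra_bytes=$((8 * 512))
echo "$ra_bytes"    # 4096

# Applying it on a real disk:
# blockdev --setra 8 /dev/sdb
# blockdev --getra /dev/sdb
```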
* Re: [gentoo-user] ZFS
2013-09-17 18:11 ` Tanstaafl
@ 2013-09-17 19:30 ` Volker Armin Hemmann
2013-09-18 4:22 ` Bruce Hill
1 sibling, 0 replies; 72+ messages in thread
From: Volker Armin Hemmann @ 2013-09-17 19:30 UTC (permalink / raw
To: gentoo-user
Am 17.09.2013 20:11, schrieb Tanstaafl:
> On 2013-09-17 2:00 PM, Volker Armin Hemmann
> <volkerarmin@googlemail.com> wrote:
>> use ECC ram. Lots of it. 16GB DDR3 1600 ECC ram cost you less than 170€.
>> And it is worth it. ZFS showed me just how many silent corruptions can
>> happen on a 'stable' system. Errors never seen neither detected thanks
>> to using 'standard' ram.
>>
>> turn off readahead. ZFS' own readahead and the kernel's clash - badly.
>> Turn off kernel's readahead for a visible performance boon.
>>
>> use noop as io-scheduler.
>
> Is there a good place to read about these kinds of tuning parameters?
>
>
zfsonlinux?
google?
* Re: [gentoo-user] ZFS
2013-09-17 16:34 ` Michael Orlitzky
2013-09-17 17:00 ` Tanstaafl
@ 2013-09-18 4:02 ` Grant
1 sibling, 0 replies; 72+ messages in thread
From: Grant @ 2013-09-18 4:02 UTC (permalink / raw
To: Gentoo mailing list
>>> Any controller that claims RAID10 on a server with 6 drive bays should
>>> be able to put all six drives in an array. But you'll get a three-way
>>> stripe (better performance) instead of a three-way mirror (better fault
>>> tolerance).
>
> I forget why I even brought it up. I think it was in order to argue that
> 4 drives w/ spare is more tolerant that 6 drives in RAID10. To make that
> argument, we need to be clear about what "RAID10" means.
I'm extremely glad you did. Otherwise I would have booted my new
hardware RAID server and been very disappointed.
- Grant
* Re: [gentoo-user] ZFS
2013-09-17 16:39 ` Alan McKinnon
@ 2013-09-18 4:06 ` Grant
0 siblings, 0 replies; 72+ messages in thread
From: Grant @ 2013-09-18 4:06 UTC (permalink / raw
To: Gentoo mailing list
>>>> Performance doesn't seem to be one of ZFS's strong points. Is it
>>>> considered suitable for a high-performance server?
>>>
>>> ZFS is one of the fastest FS I am aware of (if not the fastest).
>>> You need a sufficient amount of RAM to make the ARC useful.
>>
>> How much RAM is that?
>
> 1G of RAM per 1TB of data is the recommendation.
>
> For de-duped data, it is considerably more, something on the order of 6G
> of RAM per 1TB of data.
Well, my entire server uses only about 50GB so I guess I'm OK with the
host's minimum of 16GB RAM.
- Grant
* Re: [gentoo-user] ZFS
2013-09-17 17:54 ` Stefan G. Weichinger
@ 2013-09-18 4:11 ` Grant
2013-09-18 7:26 ` Stefan G. Weichinger
2013-09-19 22:46 ` Douglas J Hunley
1 sibling, 1 reply; 72+ messages in thread
From: Grant @ 2013-09-18 4:11 UTC (permalink / raw
To: Gentoo mailing list
> I have to set up a server w/ 8x 1TB in about 2 weeks and consider ZFS as
> well, at least for data. So root-fs would go onto 2x 1TB hdds with
> conventional partitioning and something like ext4.
Is a layout like this with the data on ZFS and the root-fs on ext4 a
better choice than ZFS all around?
- Grant
* Re: [gentoo-user] ZFS
2013-09-17 18:00 ` Volker Armin Hemmann
2013-09-17 18:11 ` covici
2013-09-17 18:11 ` Tanstaafl
@ 2013-09-18 4:12 ` Grant
2013-09-18 9:56 ` Joerg Schilling
3 siblings, 0 replies; 72+ messages in thread
From: Grant @ 2013-09-18 4:12 UTC (permalink / raw
To: Gentoo mailing list
>> Besides performance, are there any drawbacks to ZFS compared to ext4?
>>
> do yourself three favours:
>
> use ECC ram. Lots of it. 16GB DDR3 1600 ECC ram cost you less than 170€.
> And it is worth it. ZFS showed me just how many silent corruptions can
> happen on a 'stable' system. Errors never seen neither detected thanks
> to using 'standard' ram.
>
> turn off readahead. ZFS' own readahead and the kernel's clash - badly.
> Turn off kernel's readahead for a visible performance boon.
>
> use noop as io-scheduler.
Thank you, I'm taking notes. Please feel free to toss out any more tips.
- Grant
* Re: [gentoo-user] ZFS
2013-09-17 19:30 ` Volker Armin Hemmann
@ 2013-09-18 4:20 ` Grant
2013-09-20 18:20 ` Grant
0 siblings, 1 reply; 72+ messages in thread
From: Grant @ 2013-09-18 4:20 UTC (permalink / raw
To: Gentoo mailing list
>>>> Besides performance, are there any drawbacks to ZFS compared to ext4?
How about hardened? Does ZFS have any problems interacting with
grsecurity or a hardened profile?
- Grant
* Re: [gentoo-user] ZFS
2013-09-17 18:11 ` Tanstaafl
2013-09-17 19:30 ` Volker Armin Hemmann
@ 2013-09-18 4:22 ` Bruce Hill
2013-09-18 8:03 ` Neil Bothwick
2013-09-18 12:55 ` [gentoo-user] ZFS James
1 sibling, 2 replies; 72+ messages in thread
From: Bruce Hill @ 2013-09-18 4:22 UTC (permalink / raw
To: gentoo-user
On Tue, Sep 17, 2013 at 02:11:33PM -0400, Tanstaafl wrote:
>
> Is there a good place to read about these kinds of tuning parameters?
Just wondering if anyone experienced running ZFS on Gentoo finds this wiki
article worthy of use: http://wiki.gentoo.org/wiki/ZFS
--
Happy Penguin Computers >')
126 Fenco Drive ( \
Tupelo, MS 38801 ^^
support@happypenguincomputers.com
662-269-2706 662-205-6424
http://happypenguincomputers.com/
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing in e-mail?
Don't top-post: http://en.wikipedia.org/wiki/Top_post#Top-posting
* Re: [gentoo-user] ZFS
2013-09-18 4:11 ` Grant
@ 2013-09-18 7:26 ` Stefan G. Weichinger
2013-09-18 15:17 ` Stefan G. Weichinger
0 siblings, 1 reply; 72+ messages in thread
From: Stefan G. Weichinger @ 2013-09-18 7:26 UTC (permalink / raw
To: gentoo-user
Am 18.09.2013 06:11, schrieb Grant:
>> I have to set up a server w/ 8x 1TB in about 2 weeks and consider ZFS as
>> well, at least for data. So root-fs would go onto 2x 1TB hdds with
>> conventional partitioning and something like ext4.
>
> Is a layout like this with the data on ZFS and the root-fs on ext4 a
> better choice than ZFS all around?
Not better ... I just suggested this to be conservative and cautious.
With a classic root-fs things would be split ... if the root-fs breaks,
or I need to use some live media to fix things, those would all be
non-ZFS-related operations.
In this specific case I am still unsure if I want to use zfs at all. And
I could suggest a test phase to the customer ... if it is not working as
intended I could easily roll the 6 disks back to an LVM-based software
RAID etc. (moving the data aside for the "conversion").
I am hesitating because I don't have zfs anywhere productive at
customers ... only for my own purposes in the basement where there is no
real performance issue.
And this customer wants reliability ... OK, that would be provided by
zfs, but I am not as used to administering it as I am "native Linux file
systems". It also leads to other topics ... for example, I can only back
up VMs via LVM-based snapshots (virt-backup.pl) when I use LVM.
rootfs on ZFS or "everything on ZFS" would have advantages, sure. No
partitioning at all, resizeable zfs-filesystems for everything,
checksums for everything ... you name it.
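A sketch of what "resizeable zfs-filesystems for everything" looks like in practice. The pool and dataset names are invented for illustration, and this only runs on a host with ZFS installed:

```shell
# Datasets instead of partitions; "resizing" is just changing a property.
zfs create tank/home
zfs create tank/vm
zfs set quota=200G tank/vm           # grow/shrink later with another 'zfs set'
zfs set compression=lz4 tank/home    # needs a pool version with lz4 support
zfs get quota,compression tank/home tank/vm
```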
In my case I have to decide by Sep 25th -> installation day ;-)
Stefan
* Re: [gentoo-user] ZFS
2013-09-18 4:22 ` Bruce Hill
@ 2013-09-18 8:03 ` Neil Bothwick
2013-09-18 12:55 ` [gentoo-user] ZFS James
1 sibling, 0 replies; 72+ messages in thread
From: Neil Bothwick @ 2013-09-18 8:03 UTC (permalink / raw
To: gentoo-user
On Tue, 17 Sep 2013 23:22:29 -0500, Bruce Hill wrote:
> Just wondering if anyone experienced running ZFS on Gentoo finds this
> wiki article worthy of use: http://wiki.gentoo.org/wiki/ZFS
Yes, it is useful. However, I have recently stopped using the option to
build ZFS into the kernel, as I ran into problems with vdevs reported as
corrupt on the system I was trying this on. They weren't corrupt and
mounted fine in System Rescue CD with modules, and the problem
disappeared when I switched to modules. So use caution and plenty of
testing if you want to go this route. I haven't had a chance to try and
find the exact cause yet.
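For reference, the module route on Gentoo looked roughly like this at the time. The package and service names are from memory, not from this thread, so verify them against the current tree:

```shell
# Build ZFS as out-of-tree modules rather than into the kernel:
emerge -av sys-fs/zfs sys-fs/zfs-kmod

# With OpenRC, load the module and import pools at boot
# (the service name may differ between versions):
echo 'modules="${modules} zfs"' >> /etc/conf.d/modules
rc-update add zfs boot
```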
--
Neil Bothwick
Am I ignorant or apathetic? I don't know and don't care!
* Re: [gentoo-user] ZFS
2013-09-17 18:00 ` Volker Armin Hemmann
` (2 preceding siblings ...)
2013-09-18 4:12 ` [gentoo-user] ZFS Grant
@ 2013-09-18 9:56 ` Joerg Schilling
2013-09-18 17:04 ` Volker Armin Hemmann
3 siblings, 1 reply; 72+ messages in thread
From: Joerg Schilling @ 2013-09-18 9:56 UTC (permalink / raw
To: gentoo-user
Volker Armin Hemmann <volkerarmin@googlemail.com> wrote:
> turn off readahead. ZFS' own readahead and the kernel's clash - badly.
> Turn off kernel's readahead for a visible performance boon.
You are probably not talking about ZFS readahead but about the ARC.
Jörg
--
EMail:joerg@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
js@cs.tu-berlin.de (uni)
joerg.schilling@fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/
URL: http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily
* [gentoo-user] Re: ZFS
2013-09-18 4:22 ` Bruce Hill
2013-09-18 8:03 ` Neil Bothwick
@ 2013-09-18 12:55 ` James
2013-09-19 4:49 ` Grant
1 sibling, 1 reply; 72+ messages in thread
From: James @ 2013-09-18 12:55 UTC (permalink / raw
To: gentoo-user
Bruce Hill <daddy <at> happypenguincomputers.com> writes:
> On Tue, Sep 17, 2013 at 02:11:33PM -0400, Tanstaafl wrote:
> >
> > Is there a good place to read about these kinds of tuning parameters?
>
> Just wondering if anyone experienced running ZFS on Gentoo finds this wiki
> article worthy of use: http://wiki.gentoo.org/wiki/ZFS
I think many folks are interested in upgrading to EXT4 with RAID from
an ordinary JBOD workstation (server); or better yet to ZFS on RAID. I wish
one of the brighter minds amongst us would put out a skeleton
(wiki) information page such as:
http://wiki.gentoo.org/wiki/ZFS+RAID
I know I have struggled with completing this sort of installation
several times in the last 6 months. I'm sure this (proposed) wiki page
would get lots of updates from the Gentoo user community. Surely,
I'm not qualified to do this, or it would have already been on the
Gentoo wiki....
Much of the older X + RAID documentation is deprecated, when one considers
the changes that accompany such an installation (GRUB2, UUIDs, fstab,
partitioning of drives, kernel options, just to name a few). We're
talking about quite a bit of deviation from the standard handbook
installation, fraught with hidden, fatal missteps.
Lord knows the Gentoo doc team would appreciate such a wiki installation
guide, as the handbook is undergoing modernization.
just a thought.
James
* Re: [gentoo-user] ZFS
2013-09-17 7:20 [gentoo-user] ZFS Grant
` (4 preceding siblings ...)
2013-09-17 18:00 ` Volker Armin Hemmann
@ 2013-09-18 13:53 ` Stefan G. Weichinger
2013-09-19 1:02 ` Dale
2013-09-21 12:53 ` thegeezer
6 siblings, 1 reply; 72+ messages in thread
From: Stefan G. Weichinger @ 2013-09-18 13:53 UTC (permalink / raw
To: gentoo-user
Interesting news related to ZFS:
http://open-zfs.org/wiki/Main_Page
* Re: [gentoo-user] ZFS
2013-09-18 7:26 ` Stefan G. Weichinger
@ 2013-09-18 15:17 ` Stefan G. Weichinger
0 siblings, 0 replies; 72+ messages in thread
From: Stefan G. Weichinger @ 2013-09-18 15:17 UTC (permalink / raw
To: gentoo-user
Am 18.09.2013 09:26, schrieb Stefan G. Weichinger:
> rootfs on ZFS or "everything on ZFS" would have advantages, sure. No
> partitioning at all, resizeable zfs-filesystems for everything,
> checksums for everything ... you name it.
>
> In my case I have to decide until Sep, 25th -> installation day ;-)
Playing around now with a Gentoo guest on a ZFS mirror ... with
raw format via virtio ... nice so far.
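One way such a setup can be wired together, sketched with invented zvol names and sizes (this is an illustration of the raw-virtio idea, not the exact commands used here):

```shell
# Carve a 20 GiB zvol out of the mirrored pool for the guest:
zfs create -V 20G tank/vm/gentoo-guest

# Hand it to QEMU/KVM as a raw virtio disk:
qemu-system-x86_64 -enable-kvm -m 2048 \
    -drive file=/dev/zvol/tank/vm/gentoo-guest,if=virtio,format=raw
```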
* Re: [gentoo-user] ZFS
2013-09-18 9:56 ` Joerg Schilling
@ 2013-09-18 17:04 ` Volker Armin Hemmann
2013-09-19 4:47 ` Grant
0 siblings, 1 reply; 72+ messages in thread
From: Volker Armin Hemmann @ 2013-09-18 17:04 UTC (permalink / raw
To: gentoo-user
Am 18.09.2013 11:56, schrieb Joerg Schilling:
> Volker Armin Hemmann <volkerarmin@googlemail.com> wrote:
>
>> turn off readahead. ZFS' own readahead and the kernel's clash - badly.
>> Turn off kernel's readahead for a visible performance boon.
> You are probably not talking about ZFS readahead but about the ARC.
>
> Jörg
>
which does prefetching. So yes.
* Re: [gentoo-user] ZFS
2013-09-18 13:53 ` Stefan G. Weichinger
@ 2013-09-19 1:02 ` Dale
2013-09-19 4:44 ` Grant
0 siblings, 1 reply; 72+ messages in thread
From: Dale @ 2013-09-19 1:02 UTC (permalink / raw
To: gentoo-user
Stefan G. Weichinger wrote:
> Interesting news related to ZFS:
>
> http://open-zfs.org/wiki/Main_Page
>
>
I wonder if this will be added to the kernel at some point in the
future? Maybe that's even their intention?
Dale
:-) :-)
--
I am only responsible for what I said ... Not for what you understood or how you interpreted my words!
* Re: [gentoo-user] ZFS
2013-09-19 1:02 ` Dale
@ 2013-09-19 4:44 ` Grant
2013-09-19 7:40 ` Dale
2013-09-19 9:04 ` Joerg Schilling
0 siblings, 2 replies; 72+ messages in thread
From: Grant @ 2013-09-19 4:44 UTC (permalink / raw
To: Gentoo mailing list
>> Interesting news related to ZFS:
>>
>> http://open-zfs.org/wiki/Main_Page
>
> I wonder if this will be added to the kernel at some point in the
> future? May even be their intention?
I think the CDDL license is what's keeping ZFS out of the kernel,
although some argue that it should be integrated anyway. OpenZFS
retains the same license.
- Grant
* Re: [gentoo-user] ZFS
2013-09-18 17:04 ` Volker Armin Hemmann
@ 2013-09-19 4:47 ` Grant
2013-09-20 15:11 ` Volker Armin Hemmann
0 siblings, 1 reply; 72+ messages in thread
From: Grant @ 2013-09-19 4:47 UTC (permalink / raw
To: Gentoo mailing list
>>> turn off readahead. ZFS' own readahead and the kernel's clash - badly.
>>> Turn off kernel's readahead for a visible performance boon.
>> You are probably not talking about ZFS readahead but about the ARC.
>
> which does prefetching. So yes.
I'm taking notes on this so I want to clarify, when using ZFS,
readahead in the kernel should be disabled by using blockdev to set it
to 8?
- Grant
* Re: [gentoo-user] Re: ZFS
2013-09-18 12:55 ` [gentoo-user] ZFS James
@ 2013-09-19 4:49 ` Grant
2013-09-19 7:43 ` Pandu Poluan
2013-09-19 7:44 ` Hinnerk van Bruinehsen
0 siblings, 2 replies; 72+ messages in thread
From: Grant @ 2013-09-19 4:49 UTC (permalink / raw
To: Gentoo mailing list
> I think many folks are interested in upgrading to EXT4 with RAID from
> an ordinary JBOD workstation(server); or better yet to ZFS on RAID. I wish
> one of the brighter minds amongst us would put out a skeleton
> (wiki) information page as such:
>
> http://wiki.gentoo.org/wiki/ZFS+RAID
>
> I know I have struggled with completing this sort of installation
> several time in the last 6 months. I'm sure this (proposed) wiki page
> would get lots of updates from the Gentoo user community. Surely,
> I'm not qualified to do this, or it would have already been on the
> gentoo wiki....
>
> Much of the older X + RAID pages are deprecated, when one considers
> the changes that accompany such an installation ( Grub2, UUID, fstab,
> partitioning of drives, Kernel options, just to name a few). We're
> talking about quite a bit of deviation from the standard handbook
> installation, fraught with hidden, fatal mis-steps.
Any important points or key concepts a ZFS newbie should remember when
installing with it for the first time?
- Grant
* Re: [gentoo-user] ZFS
2013-09-19 4:44 ` Grant
@ 2013-09-19 7:40 ` Dale
2013-09-19 7:45 ` Pandu Poluan
2013-09-19 9:07 ` Joerg Schilling
2013-09-19 9:04 ` Joerg Schilling
1 sibling, 2 replies; 72+ messages in thread
From: Dale @ 2013-09-19 7:40 UTC (permalink / raw
To: gentoo-user
Grant wrote:
>>> Interesting news related to ZFS:
>>>
>>> http://open-zfs.org/wiki/Main_Page
>> I wonder if this will be added to the kernel at some point in the
>> future? May even be their intention?
> I think the CDDL license is what's keeping ZFS out of the kernel,
> although some argue that it should be integrated anyway. OpenZFS
> retains the same license.
>
> - Grant
>
> .
>
Then I wonder why it seems to have forked? <scratches head >
Dale
:-) :-)
--
I am only responsible for what I said ... Not for what you understood or how you interpreted my words!
* Re: [gentoo-user] Re: ZFS
2013-09-19 4:49 ` Grant
@ 2013-09-19 7:43 ` Pandu Poluan
2013-09-19 7:44 ` Hinnerk van Bruinehsen
1 sibling, 0 replies; 72+ messages in thread
From: Pandu Poluan @ 2013-09-19 7:43 UTC (permalink / raw
To: gentoo-user
On Thu, Sep 19, 2013 at 11:49 AM, Grant <emailgrant@gmail.com> wrote:
>> I think many folks are interested in upgrading to EXT4 with RAID from
>> an ordinary JBOD workstation(server); or better yet to ZFS on RAID. I wish
>> one of the brighter minds amongst us would put out a skeleton
>> (wiki) information page as such:
>>
>> http://wiki.gentoo.org/wiki/ZFS+RAID
>>
>> I know I have struggled with completing this sort of installation
>> several time in the last 6 months. I'm sure this (proposed) wiki page
>> would get lots of updates from the Gentoo user community. Surely,
>> I'm not qualified to do this, or it would have already been on the
>> gentoo wiki....
>>
>> Much of the older X + RAID pages are deprecated, when one considers
>> the changes that accompany such an installation ( Grub2, UUID, fstab,
>> partitioning of drives, Kernel options, just to name a few). We're
>> talking about quite a bit of deviation from the standard handbook
>> installation, fraught with hidden, fatal mis-steps.
>
> Any important points or key concepts a ZFS newbie should remember when
> installing with it for the first time?
>
> - Grant
>
Plan carefully how you are going to create the vdevs before you add
them to a pool.
Once a vdev has been created and added to a pool, you can't ever
un-add or replace it.
(You can always replace a component of a vdev -- e.g., if one physical
drive fails -- but you can't remove a vdev in its entirety.)
Rgds,
--
FdS Pandu E Poluan
~ IT Optimizer ~
• LOPSA Member #15248
• Blog : http://pepoluan.tumblr.com
• Linked-In : http://id.linkedin.com/in/pepoluan
* Re: [gentoo-user] Re: ZFS
2013-09-19 4:49 ` Grant
2013-09-19 7:43 ` Pandu Poluan
@ 2013-09-19 7:44 ` Hinnerk van Bruinehsen
2013-09-19 7:47 ` Pandu Poluan
2013-09-19 10:37 ` Tanstaafl
1 sibling, 2 replies; 72+ messages in thread
From: Hinnerk van Bruinehsen @ 2013-09-19 7:44 UTC (permalink / raw
To: gentoo-user
On Wed, Sep 18, 2013 at 09:49:40PM -0700, Grant wrote:
> > I think many folks are interested in upgrading to EXT4 with RAID from
> > an ordinary JBOD workstation(server); or better yet to ZFS on RAID. I wish
> > one of the brighter minds amongst us would put out a skeleton
> > (wiki) information page as such:
> >
> > http://wiki.gentoo.org/wiki/ZFS+RAID
> >
> > I know I have struggled with completing this sort of installation
> > several time in the last 6 months. I'm sure this (proposed) wiki page
> > would get lots of updates from the Gentoo user community. Surely,
> > I'm not qualified to do this, or it would have already been on the
> > gentoo wiki....
> >
> > Much of the older X + RAID pages are deprecated, when one considers
> > the changes that accompany such an installation ( Grub2, UUID, fstab,
> > partitioning of drives, Kernel options, just to name a few). We're
> > talking about quite a bit of deviation from the standard handbook
> > installation, fraught with hidden, fatal mis-steps.
>
> Any important points or key concepts a ZFS newbie should remember when
> installing with it for the first time?
>
> - Grant
You should definitely determine the right value for ashift on pool creation
(it controls the alignment on the medium). AFAIK it's an option you can only
set on filesystem creation, so getting it wrong means starting over from
scratch.
According to the illumos wiki it's possible to run a mixed pool (if you have
drives requiring different alignments [1]).
If in doubt: ask ryao (IIRC, given the right information he can tell you which
are the right options for you, if you can't deduce it yourself).
Choosing the wrong alignment can cause severe performance loss (that's not
a ZFS-specific issue - the same thing happened when 4K-sector drives appeared
and tools like fdisk weren't aware of them).
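A quick way to check what alignment a drive wants before creating the pool. /dev/sda is a placeholder, so the probing commands are shown commented out:

```shell
# Read a drive's physical sector size (run on a real disk):
# blockdev --getpbsz /dev/sda
# cat /sys/block/sda/queue/physical_block_size

# ashift is log2 of the sector size, a plain power of two:
echo $((1 << 9))     # 512  -> ashift=9  (old 512-byte-sector disks)
echo $((1 << 12))    # 4096 -> ashift=12 (4K "Advanced Format" disks)
```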
WKR
Hinnerk
[1] http://wiki.illumos.org/display/illumos/ZFS+and+Advanced+Format+disks
[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 490 bytes --]
* Re: [gentoo-user] ZFS
2013-09-19 7:40 ` Dale
@ 2013-09-19 7:45 ` Pandu Poluan
2013-09-19 9:07 ` Joerg Schilling
1 sibling, 0 replies; 72+ messages in thread
From: Pandu Poluan @ 2013-09-19 7:45 UTC (permalink / raw
To: gentoo-user
On Thu, Sep 19, 2013 at 2:40 PM, Dale <rdalek1967@gmail.com> wrote:
> Grant wrote:
>>>> Interesting news related to ZFS:
>>>>
>>>> http://open-zfs.org/wiki/Main_Page
>>> I wonder if this will be added to the kernel at some point in the
>>> future? May even be their intention?
>> I think the CDDL license is what's keeping ZFS out of the kernel,
>> although some argue that it should be integrated anyway. OpenZFS
>> retains the same license.
>>
>> - Grant
>>
>> .
>>
>
> Then I wonder why it seems to have forked? <scratches head >
>
At the moment, only to 'decouple' ZFS development from Illumos development.
Changing a license requires the approval of all rightsholders, and that
takes time.
At least, with a decoupling, ZFS can quickly improve to fulfill the
needs of its users, no longer depending on Illumos' dev cycle.
Rgds,
--
FdS Pandu E Poluan
~ IT Optimizer ~
• LOPSA Member #15248
• Blog : http://pepoluan.tumblr.com
• Linked-In : http://id.linkedin.com/in/pepoluan
* Re: [gentoo-user] Re: ZFS
2013-09-19 7:44 ` Hinnerk van Bruinehsen
@ 2013-09-19 7:47 ` Pandu Poluan
2013-09-19 8:04 ` Stefan G. Weichinger
2013-09-19 10:37 ` Tanstaafl
1 sibling, 1 reply; 72+ messages in thread
From: Pandu Poluan @ 2013-09-19 7:47 UTC (permalink / raw
To: gentoo-user
On Thu, Sep 19, 2013 at 2:44 PM, Hinnerk van Bruinehsen
<h.v.bruinehsen@fu-berlin.de> wrote:
> On Wed, Sep 18, 2013 at 09:49:40PM -0700, Grant wrote:
>> > I think many folks are interested in upgrading to EXT4 with RAID from
>> > an ordinary JBOD workstation(server); or better yet to ZFS on RAID. I wish
>> > one of the brighter minds amongst us would put out a skeleton
>> > (wiki) information page as such:
>> >
>> > http://wiki.gentoo.org/wiki/ZFS+RAID
>> >
>> > I know I have struggled with completing this sort of installation
>> > several time in the last 6 months. I'm sure this (proposed) wiki page
>> > would get lots of updates from the Gentoo user community. Surely,
>> > I'm not qualified to do this, or it would have already been on the
>> > gentoo wiki....
>> >
>> > Much of the older X + RAID pages are deprecated, when one considers
>> > the changes that accompany such an installation ( Grub2, UUID, fstab,
>> > partitioning of drives, Kernel options, just to name a few). We're
>> > talking about quite a bit of deviation from the standard handbook
>> > installation, fraught with hidden, fatal mis-steps.
>>
>> Any important points or key concepts a ZFS newbie should remember when
>> installing with it for the first time?
>>
>> - Grant
>
>
> You should definitely determine the right value for ashift on pool creation
> (it controls the alignment on the medium). It's an option that you afaik can only set
> on filesystem creation and therefore needs a restart from scratch if you get it
> wrong.
> According to the illumos wiki it's possible to run a mixed pool (if you have
> drives requiring different alignments[1])
> If in doubt: ask ryao (iirc given the right information he can tell you which
> are the right options for you if you can't deduce it yourself).
> Choosing the wrong alignment can cause severe performance loss (that's not
> a ZFS issue but happened when 4k sector drives appeared and tools like fdisk
> weren't aware of this).
>
> WKR
> Hinnerk
>
Especially with SSDs. One must find out the block size used by one's SSDs.
With spinning disks, setting ashift=12 is enough, since no spinning
disks have sectors larger than 2^12 bytes.
With SSDs, one might have to set ashift=13 or even ashift=14.
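As a sketch, the sector-size-to-ashift mapping and its use at pool creation. The helper function, pool name, and devices are invented for illustration; the zpool/zdb lines are commented out since ashift cannot be changed on an existing pool:

```shell
# Helper: smallest ashift such that 2^ashift covers a given sector size.
ashift_for() {
    local bytes=$1 a=0
    while [ $((1 << a)) -lt "$bytes" ]; do
        a=$((a + 1))
    done
    echo "$a"
}

ashift_for 4096    # prints 12 (typical 4K spinning disk)
ashift_for 8192    # prints 13 (some SSDs)

# At pool creation time:
# zpool create -o ashift=13 ssdpool mirror sda sdb
# zdb -C ssdpool | grep ashift    # verify what the pool actually got
```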
Rgds,
--
FdS Pandu E Poluan
~ IT Optimizer ~
• LOPSA Member #15248
• Blog : http://pepoluan.tumblr.com
• Linked-In : http://id.linkedin.com/in/pepoluan
^ permalink raw reply [flat|nested] 72+ messages in thread
* Re: [gentoo-user] Re: ZFS
2013-09-19 7:47 ` Pandu Poluan
@ 2013-09-19 8:04 ` Stefan G. Weichinger
2013-09-19 13:04 ` Grant
0 siblings, 1 reply; 72+ messages in thread
From: Stefan G. Weichinger @ 2013-09-19 8:04 UTC (permalink / raw
To: gentoo-user
On 19.09.2013 09:47, Pandu Poluan wrote:
> Especially with SSDs. One must find out the blocksize used by his/her SSDs.
>
> With spinning disks, setting ashift=12 is enough since no spinning
> disks have sectors larger than 2^12 bytes.
>
> With SSDs, one might have to set ashift=13 or even ashift=14.
May I suggest that we should somehow collect all these small but
important issues for reference? Wiki?
Stefan
* Re: [gentoo-user] ZFS
2013-09-19 4:44 ` Grant
2013-09-19 7:40 ` Dale
@ 2013-09-19 9:04 ` Joerg Schilling
1 sibling, 0 replies; 72+ messages in thread
From: Joerg Schilling @ 2013-09-19 9:04 UTC (permalink / raw
To: gentoo-user
Grant <emailgrant@gmail.com> wrote:
> >> Interesting news related to ZFS:
> >>
> >> http://open-zfs.org/wiki/Main_Page
> >
> > I wonder if this will be added to the kernel at some point in the
> > future? May even be their intention?
>
> I think the CDDL license is what's keeping ZFS out of the kernel,
> although some argue that it should be integrated anyway. OpenZFS
> retains the same license.
As long as there are people that claim ZFS was derived from the Linux kernel
(i.e. is a derived work from GPL code and thus needs to be put under GPL),
there seems to be a problem.
I am not sure whether it is possible to educate these people...
Jörg
--
EMail:joerg@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
js@cs.tu-berlin.de (uni)
joerg.schilling@fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/
URL: http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily
* Re: [gentoo-user] ZFS
2013-09-19 7:40 ` Dale
2013-09-19 7:45 ` Pandu Poluan
@ 2013-09-19 9:07 ` Joerg Schilling
2013-09-19 11:22 ` Dale
1 sibling, 1 reply; 72+ messages in thread
From: Joerg Schilling @ 2013-09-19 9:07 UTC (permalink / raw
To: gentoo-user
Dale <rdalek1967@gmail.com> wrote:
> Grant wrote:
> >>> Interesting news related to ZFS:
> >>>
> >>> http://open-zfs.org/wiki/Main_Page
> >> I wonder if this will be added to the kernel at some point in the
> >> future? May even be their intention?
> > I think the CDDL license is what's keeping ZFS out of the kernel,
> > although some argue that it should be integrated anyway. OpenZFS
> > retains the same license.
> >
> > - Grant
> >
> > .
> >
>
> Then I wonder why it seems to have forked? <scratches head >
Why do you believe it has forked?
This project does not even have a source code repository, and the fact that
they refer to illumos for sources makes me wonder whether it is open to
contributions.
Jörg
--
EMail:joerg@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
js@cs.tu-berlin.de (uni)
joerg.schilling@fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/
URL: http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily
* Re: [gentoo-user] Re: ZFS
2013-09-19 7:44 ` Hinnerk van Bruinehsen
2013-09-19 7:47 ` Pandu Poluan
@ 2013-09-19 10:37 ` Tanstaafl
2013-09-19 12:29 ` Grant
2013-09-19 12:54 ` Pandu Poluan
1 sibling, 2 replies; 72+ messages in thread
From: Tanstaafl @ 2013-09-19 10:37 UTC (permalink / raw
To: gentoo-user
On 2013-09-19 3:44 AM, Hinnerk van Bruinehsen <h.v.bruinehsen@fu-
> You should definitely determine the right value for ashift on pool creation
> (it controls the alignment on the medium). It's an option that you afaik can only set
> on filesystem creation and therefore needs a restart from scratch if you get it
> wrong.
> According to the illumos wiki it's possible to run a mixed pool (if you have
> drives requiring different alignments[1])
> If in doubt: ask ryao (iirc given the right information he can tell you which
> are the right options for you if you can't deduce it yourself).
> Choosing the wrong alignment can cause severe performance loss (that's not
> a ZFS issue but happened when 4k sector drives appeared and tools like fdisk
> weren't aware of this).
Yikes...
Ok, shouldn't there be a tool or tools to help with this? Ie, boot up on
a bootable tools disk on the system with all drives connected, then let
it 'analyze' your system, maybe ask you some questions (ie, how you will
be configuring the drives/RAID, etc), then spit out an optimized config
for you?
It is starting to sound like you need to be a dang engineer just to use
ZFS...
* Re: [gentoo-user] ZFS
2013-09-19 9:07 ` Joerg Schilling
@ 2013-09-19 11:22 ` Dale
2013-09-19 11:27 ` Joerg Schilling
0 siblings, 1 reply; 72+ messages in thread
From: Dale @ 2013-09-19 11:22 UTC (permalink / raw
To: gentoo-user
Joerg Schilling wrote:
> Dale <rdalek1967@gmail.com> wrote:
>
>> Grant wrote:
>>>>> Interesting news related to ZFS:
>>>>>
>>>>> http://open-zfs.org/wiki/Main_Page
>>>> I wonder if this will be added to the kernel at some point in the
>>>> future? May even be their intention?
>>> I think the CDDL license is what's keeping ZFS out of the kernel,
>>> although some argue that it should be integrated anyway. OpenZFS
>>> retains the same license.
>>>
>>> - Grant
>>>
>>> .
>>>
>> Then I wonder why it seems to have forked? <scratches head >
> Why do you believe it has forked?
> This project does not even have a source code repository and the fact that
> they refer to illumos for sources makes me wonder whether it is open for
> contributing.
>
> Jörg
>
Well, it seemed to me that it either changed its name or forked or
something. I was hoping that whatever the reason for this, it would
eventually be in the kernel like ext* and others. It seems that is not
the case. That's why I was asking questions.
Dale
:-) :-)
--
I am only responsible for what I said ... Not for what you understood or how you interpreted my words!
* Re: [gentoo-user] ZFS
2013-09-19 11:22 ` Dale
@ 2013-09-19 11:27 ` Joerg Schilling
2013-09-22 1:19 ` Dale
0 siblings, 1 reply; 72+ messages in thread
From: Joerg Schilling @ 2013-09-19 11:27 UTC (permalink / raw
To: gentoo-user
Dale <rdalek1967@gmail.com> wrote:
> > Why do you believe it has forked?
> > This project does not even have a source code repository and the fact that
> > they refer to illumos for sources makes me wonder whether it is open for
> > contributing.
> >
> > Jörg
> >
>
> Well, it seemed to me that it either changed its name or forked or
> something. I was hoping that whatever the reason for this, it would
> eventually be in the kernel like ext* and others. It seems that is not
> the case. That's why I was asking questions.
It is in the Kernel...
It may not be in the Linux kernel ;-)
It seems that they just came out of their caves and created a web page.
Note that until recently, they used secret mailing lists.
Jörg
--
EMail:joerg@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
js@cs.tu-berlin.de (uni)
joerg.schilling@fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/
URL: http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily
* Re: [gentoo-user] Re: ZFS
2013-09-19 10:37 ` Tanstaafl
@ 2013-09-19 12:29 ` Grant
2013-09-19 12:54 ` Pandu Poluan
1 sibling, 0 replies; 72+ messages in thread
From: Grant @ 2013-09-19 12:29 UTC (permalink / raw
To: Gentoo mailing list
>> You should definitely determine the right value for ashift on pool
>> creation
>> (it controls the alignment on the medium). It's an option that you afaik
>> can only set
>> on filesystem creation and therefore needs a restart from scratch if you
>> get it
>> wrong.
>> According to the illumos wiki it's possible to run a mixed pool (if you
>> have
>> drives requiring different alignments[1])
>> If in doubt: ask ryao (iirc given the right information he can tell you
>> which
>> are the right options for you if you can't deduce it yourself).
>> Choosing the wrong alignment can cause severe performance loss (that's not
>> a ZFS issue but happened when 4k sector drives appeared and tools like
>> fdisk
>> weren't aware of this).
>
> Yikes...
>
> Ok, shouldn't there be a tool or tools to help with this? Ie, boot up on a
> bootable tools disk on the system with all drives connected, then let it
> 'analyze' your system, maybe ask you some questions (ie, how you will be
> configuring the drives/RAID, etc), then spit out an optimized config for
> you?
I'm also interested to know the procedure for getting this right.
> It is starting to sound like you need to be a dang engineer just to use
> ZFS...
I thought the SSD issue was completely separate from ZFS and
applicable to any other filesystem as well. Someone please correct me
if I'm wrong.
- Grant
* Re: [gentoo-user] Re: ZFS
2013-09-19 10:37 ` Tanstaafl
2013-09-19 12:29 ` Grant
@ 2013-09-19 12:54 ` Pandu Poluan
2013-09-19 13:01 ` Grant
1 sibling, 1 reply; 72+ messages in thread
From: Pandu Poluan @ 2013-09-19 12:54 UTC (permalink / raw
To: gentoo-user
On Thu, Sep 19, 2013 at 5:37 PM, Tanstaafl <tanstaafl@libertytrek.org> wrote:
> On 2013-09-19 3:44 AM, Hinnerk van Bruinehsen <h.v.bruinehsen@fu-
>
>> You should definitely determine the right value for ashift on pool
>> creation
>> (it controls the alignment on the medium). It's an option that you afaik
>> can only set
>> on filesystem creation and therefore needs a restart from scratch if you
>> get it
>> wrong.
>> According to the illumos wiki it's possible to run a mixed pool (if you
>> have
>> drives requiring different alignments[1])
>> If in doubt: ask ryao (iirc given the right information he can tell you
>> which
>> are the right options for you if you can't deduce it yourself).
>> Choosing the wrong alignment can cause severe performance loss (that's not
>> a ZFS issue but happened when 4k sector drives appeared and tools like
>> fdisk
>> weren't aware of this).
>
>
> Yikes...
>
> Ok, shouldn't there be a tool or tools to help with this? Ie, boot up on a
> bootable tools disk on the system with all drives connected, then let it
> 'analyze' your system, maybe ask you some questions (ie, how you will be
> configuring the drives/RAID, etc), then spit out an optimized config for
> you?
>
> It is starting to sound like you need to be a dang engineer just to use
> ZFS...
>
Just do ashift=12 and you're good to go. No need to analyze further.
The reason I said that is because in the future, *all* drives will have 4
KiB sectors. Currently, many drives still have 512 B sectors. But when
one day your drive dies and you need to replace it, will you be able
to find a drive with 512 B sectors?
Unlikely.
That's why, even if your drives are currently of the 'classic' 512 B
ones, go with ashift=12 anyway.
For SSDs, the situation is murkier. Many SSDs 'lie' about their actual
sector size, reporting to the OS that their sector size is 512 B (or 4
KiB). No tool can pierce this veil of smokescreen. The only way is to
do research on the Internet.
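For what it's worth, the sizes a drive reports can at least be inspected, keeping in mind the caveat above that SSDs may misreport them; the device name here is a placeholder:

```shell
# Logical vs. reported physical sector size (may be a lie on SSDs):
lsblk -o NAME,LOG-SEC,PHY-SEC /dev/sda
cat /sys/block/sda/queue/physical_block_size
smartctl -i /dev/sda | grep -i 'sector size'
```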
IIRC, a ZFS developer has embedded -- or planned to embed -- a small
database into the ZFS utilities to conclusively determine what
settings will be optimal. I forgot who exactly. Maybe @ryao can pipe
in (hello Richard! If you're watching this thread, feel free to add
more info).
Rgds,
--
FdS Pandu E Poluan
~ IT Optimizer ~
• LOPSA Member #15248
• Blog : http://pepoluan.tumblr.com
• Linked-In : http://id.linkedin.com/in/pepoluan
* Re: [gentoo-user] Re: ZFS
2013-09-19 12:54 ` Pandu Poluan
@ 2013-09-19 13:01 ` Grant
2013-09-19 13:12 ` Pandu Poluan
0 siblings, 1 reply; 72+ messages in thread
From: Grant @ 2013-09-19 13:01 UTC (permalink / raw
To: Gentoo mailing list
>>> You should definitely determine the right value for ashift on pool
>>> creation
>>> (it controls the alignment on the medium). It's an option that you afaik
>>> can only set
>>> on filesystem creation and therefore needs a restart from scratch if you
>>> get it
>>> wrong.
>>> According to the illumos wiki it's possible to run a mixed pool (if you
>>> have
>>> drives requiring different alignments[1])
>>> If in doubt: ask ryao (iirc given the right information he can tell you
>>> which
>>> are the right options for you if you can't deduce it yourself).
>>> Choosing the wrong alignment can cause severe performance loss (that's not
>>> a ZFS issue but happened when 4k sector drives appeared and tools like
>>> fdisk
>>> weren't aware of this).
>>
>> Yikes...
>>
>> Ok, shouldn't there be a tool or tools to help with this? Ie, boot up on a
>> bootable tools disk on the system with all drives connected, then let it
>> 'analyze' your system, maybe ask you some questions (ie, how you will be
>> configuring the drives/RAID, etc), then spit out an optimized config for
>> you?
>>
>> It is starting to sound like you need to be a dang engineer just to use
>> ZFS...
>>
>
> Just do ashift=12 and you're good to go. No need to analyze further.
>
> The reason I said that is because in the future, *all* drives will have 4
> KiB sectors. Currently, many drives still have 512 B sectors. But when
> one day your drive dies and you need to replace it, will you be able
> to find a drive with 512 B sectors?
>
> Unlikely.
>
> That's why, even if your drives are currently of the 'classic' 512 B
> ones, go with ashift=12 anyway.
>
> For SSDs, the situation is murkier. Many SSDs 'lie' about their actual
> sector size, reporting to the OS that their sector size is 512 B (or 4
> KiB). No tool can pierce this veil of smokescreen. The only way is to
> do research on the Internet.
OK, so figure out what SSD you're using and Google to find the correct ashift?
- Grant
* Re: [gentoo-user] Re: ZFS
2013-09-19 8:04 ` Stefan G. Weichinger
@ 2013-09-19 13:04 ` Grant
0 siblings, 0 replies; 72+ messages in thread
From: Grant @ 2013-09-19 13:04 UTC (permalink / raw
To: Gentoo mailing list
>> Especially with SSDs. One must find out the blocksize used by his/her SSDs.
>>
>> With spinning disks, setting ashift=12 is enough since no spinning
>> disks have sectors larger than 2^12 bytes.
>>
>> With SSDs, one might have to set ashift=13 or even ashift=14.
>
> May I suggest that we should somehow collect all these small but
> important issues for reference? Wiki?
This could be useful:
http://www.funtoo.org/wiki/ZFS_Install_Guide
- Grant
* Re: [gentoo-user] Re: ZFS
2013-09-19 13:01 ` Grant
@ 2013-09-19 13:12 ` Pandu Poluan
0 siblings, 0 replies; 72+ messages in thread
From: Pandu Poluan @ 2013-09-19 13:12 UTC (permalink / raw
To: gentoo-user
On Thu, Sep 19, 2013 at 8:01 PM, Grant <emailgrant@gmail.com> wrote:
>>>> You should definitely determine the right value for ashift on pool
>>>> creation
>>>> (it controls the alignment on the medium). It's an option that you afaik
>>>> can only set
>>>> on filesystem creation and therefore needs a restart from scratch if you
>>>> get it
>>>> wrong.
>>>> According to the illumos wiki it's possible to run a mixed pool (if you
>>>> have
>>>> drives requiring different alignments[1])
>>>> If in doubt: ask ryao (iirc given the right information he can tell you
>>>> which
>>>> are the right options for you if you can't deduce it yourself).
>>>> Choosing the wrong alignment can cause severe performance loss (that's not
>>>> a ZFS issue but happened when 4k sector drives appeared and tools like
>>>> fdisk
>>>> weren't aware of this).
>>>
>>> Yikes...
>>>
>>> Ok, shouldn't there be a tool or tools to help with this? Ie, boot up on a
>>> bootable tools disk on the system with all drives connected, then let it
>>> 'analyze' your system, maybe ask you some questions (ie, how you will be
>>> configuring the drives/RAID, etc), then spit out an optimized config for
>>> you?
>>>
>>> It is starting to sound like you need to be a dang engineer just to use
>>> ZFS...
>>>
>>
>> Just do ashift=12 and you're good to go. No need to analyze further.
>>
>> The reason I said that is because in the future, *all* drives will have 4
>> KiB sectors. Currently, many drives still have 512 B sectors. But when
>> one day your drive dies and you need to replace it, will you be able
>> to find a drive with 512 B sectors?
>>
>> Unlikely.
>>
>> That's why, even if your drives are currently of the 'classic' 512 B
>> ones, go with ashift=12 anyway.
>>
>> For SSDs, the situation is murkier. Many SSDs 'lie' about their actual
>> sector size, reporting to the OS that their sector size is 512 B (or 4
>> KiB). No tool can pierce this veil of smokescreen. The only way is to
>> do research on the Internet.
>
> OK, so figure out what SSD you're using and Google to find the correct ashift?
>
> - Grant
>
Kind of like that, yes. Find out exactly the size of the SSD's
"internal sectors" (for lack of a better term), and take the log2 of it.
But don't go higher than ashift=14.
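Put differently, ashift is just the base-2 logarithm of the internal sector size, capped at 14 per the advice above. A tiny illustration (the sector sizes are examples, not measurements of any particular drive):

```shell
# ashift for a given sector size is log2(size), capped at 14.
ashift_for() {
  s=$1
  a=0
  while [ "$s" -gt 1 ]; do s=$((s / 2)); a=$((a + 1)); done
  if [ "$a" -gt 14 ]; then a=14; fi
  echo "$a"
}

for size in 512 4096 8192 16384; do
  echo "$size B -> ashift=$(ashift_for "$size")"
done
# 512 B -> ashift=9, 4096 B -> ashift=12, 8192 B -> ashift=13, 16384 B -> ashift=14
```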
Rgds,
--
FdS Pandu E Poluan
~ IT Optimizer ~
• LOPSA Member #15248
• Blog : http://pepoluan.tumblr.com
• Linked-In : http://id.linkedin.com/in/pepoluan
* Re: [gentoo-user] ZFS
2013-09-17 16:32 ` covici
@ 2013-09-19 22:41 ` Douglas J Hunley
2013-09-20 23:12 ` Hinnerk van Bruinehsen
2013-09-19 22:46 ` Douglas J Hunley
1 sibling, 1 reply; 72+ messages in thread
From: Douglas J Hunley @ 2013-09-19 22:41 UTC (permalink / raw
To: gentoo-user
On Tue, Sep 17, 2013 at 12:32 PM, <covici@ccs.covici.com> wrote:
> So do I need that overlay at all, or just emerge zfs and its module?
You do *not* need the overlay. Everything you need is in portage nowadays.
--
Douglas J Hunley (doug.hunley@gmail.com)
Twitter: @hunleyd Web:
douglasjhunley.com
G+: http://goo.gl/sajR3
* Re: [gentoo-user] ZFS
2013-09-17 17:54 ` Stefan G. Weichinger
2013-09-18 4:11 ` Grant
@ 2013-09-19 22:46 ` Douglas J Hunley
2013-09-20 9:17 ` Joerg Schilling
1 sibling, 1 reply; 72+ messages in thread
From: Douglas J Hunley @ 2013-09-19 22:46 UTC (permalink / raw
To: gentoo-user
On Tue, Sep 17, 2013 at 1:54 PM, Stefan G. Weichinger <lists@xunil.at>wrote:
> I have to set up a server w/ 8x 1TB in about 2 weeks and consider ZFS as
> well, at least for data. So root-fs would go onto 2x 1TB hdds with
> conventional partitioning and something like ext4.
>
> 6x 1TB would be available for data ... on one hand for a file-server
> part ... on the other hand for VMs based on KVM.
>
1TB drives are right on the border of switching from RAIDZ to RAIDZ2.
You'll see people argue for both sides at this size, but the 'saner
default' would be to use RAIDZ2. You're going to lose storage space, but
gain an extra parity drive (think RAID6). Consumer grade hard drives are
/going/ to fail during a resilver (Murphy's Law) and that extra parity
drive is going to save your bacon.
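A six-drive RAIDZ2 of the kind described can be sketched as follows (the pool name and device paths are hypothetical):

```shell
# Six drives, any two of which may fail without data loss (RAID6-like).
# Usable capacity is roughly four of the six drives.
zpool create -o ashift=12 tank raidz2 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
```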
I create
--
Douglas J Hunley (doug.hunley@gmail.com)
Twitter: @hunleyd Web:
douglasjhunley.com
G+: http://goo.gl/sajR3
* Re: [gentoo-user] ZFS
2013-09-17 16:32 ` covici
2013-09-19 22:41 ` Douglas J Hunley
@ 2013-09-19 22:46 ` Douglas J Hunley
1 sibling, 0 replies; 72+ messages in thread
From: Douglas J Hunley @ 2013-09-19 22:46 UTC (permalink / raw
To: gentoo-user
On Tue, Sep 17, 2013 at 12:32 PM, <covici@ccs.covici.com> wrote:
> So do I need that overlay at all, or just emerge zfs and its module?
You do *not* need the overlay. Everything you need is in portage nowadays.
--
Douglas J Hunley (doug.hunley@gmail.com)
Twitter: @hunleyd Web:
douglasjhunley.com
G+: http://goo.gl/sajR3
* Re: [gentoo-user] ZFS
2013-09-19 22:46 ` Douglas J Hunley
@ 2013-09-20 9:17 ` Joerg Schilling
2013-09-20 11:17 ` Tanstaafl
0 siblings, 1 reply; 72+ messages in thread
From: Joerg Schilling @ 2013-09-20 9:17 UTC (permalink / raw
To: gentoo-user
Douglas J Hunley <doug.hunley@gmail.com> wrote:
> 1TB drives are right on the border of switching from RAIDZ to RAIDZ2.
> You'll see people argue for both sides at this size, but the 'saner
> default' would be to use RAIDZ2. You're going to lose storage space, but
> gain an extra parity drive (think RAID6). Consumer grade hard drives are
> /going/ to fail during a resilver (Murphy's Law) and that extra parity
> drive is going to save your bacon.
The main advantage of RAIDZ2 is that you can remove one disk and the RAID is
still operative. Now you put in a bigger disk... repeat until you have
replaced all disks, and you have grown your storage.
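The replace-and-grow procedure described above can be sketched like this (pool and device names are hypothetical; depending on the ZFS version, you may need the autoexpand property or a manual `zpool online -e`):

```shell
# Let the pool grow once every member has been replaced by a larger disk:
zpool set autoexpand=on tank

# Replace members one at a time, waiting for each resilver to finish:
zpool replace tank sdb sdh
zpool status tank          # repeat for the remaining disks once resilvered

# If autoexpand was off, force expansion per device afterwards:
zpool online -e tank sdh
```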
Jörg
--
EMail:joerg@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
js@cs.tu-berlin.de (uni)
joerg.schilling@fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/
URL: http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily
* Re: [gentoo-user] ZFS
2013-09-20 9:17 ` Joerg Schilling
@ 2013-09-20 11:17 ` Tanstaafl
0 siblings, 0 replies; 72+ messages in thread
From: Tanstaafl @ 2013-09-20 11:17 UTC (permalink / raw
To: gentoo-user
On 2013-09-20 5:17 AM, Joerg Schilling
<Joerg.Schilling@fokus.fraunhofer.de> wrote:
> Douglas J Hunley <doug.hunley@gmail.com> wrote:
>
>> 1TB drives are right on the border of switching from RAIDZ to RAIDZ2.
>> You'll see people argue for both sides at this size, but the 'saner
>> default' would be to use RAIDZ2. You're going to lose storage space, but
>> gain an extra parity drive (think RAID6). Consumer grade hard drives are
>> /going/ to fail during a resilver (Murphy's Law) and that extra parity
>> drive is going to save your bacon.
>
> The main advantage of RAIDZ2 is that you can remove one disk and the RAID is
> still operative. Now you put in a bigger disk... repeat until you have
> replaced all disks, and you have grown your storage.
Interesting, thanks... :)
* Re: [gentoo-user] ZFS
2013-09-19 4:47 ` Grant
@ 2013-09-20 15:11 ` Volker Armin Hemmann
0 siblings, 0 replies; 72+ messages in thread
From: Volker Armin Hemmann @ 2013-09-20 15:11 UTC (permalink / raw
To: gentoo-user
On 19.09.2013 06:47, Grant wrote:
>>>> turn off readahead. ZFS' own readahead and the kernel's clash - badly.
>>>> Turn off kernel's readahead for a visible performance boon.
>>> You are probably not talking about ZFS readahead but about the ARC.
>> which does prefetching. So yes.
> I'm taking notes on this so I want to clarify, when using ZFS,
> readahead in the kernel should be disabled by using blockdev to set it
> to 8?
>
> - Grant
>
> .
>
You can't turn it off (afaik), but 8 is a good value because it is just
one 4 KiB block.
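The setting under discussion can be inspected and changed with blockdev; the value is counted in 512-byte sectors, so 8 sectors is one 4 KiB block (the device name is a placeholder):

```shell
blockdev --getra /dev/sda    # current readahead, in 512 B sectors
blockdev --setra 8 /dev/sda  # minimal readahead: one 4 KiB block
```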
* Re: [gentoo-user] ZFS
2013-09-18 4:20 ` Grant
@ 2013-09-20 18:20 ` Grant
2013-09-20 23:07 ` Hinnerk van Bruinehsen
0 siblings, 1 reply; 72+ messages in thread
From: Grant @ 2013-09-20 18:20 UTC (permalink / raw
To: Gentoo mailing list
> How about hardened? Does ZFS have any problems interacting with
> grsecurity or a hardened profile?
Has anyone tried hardened and ZFS together?
- Grant
* Re: [gentoo-user] ZFS
2013-09-20 18:20 ` Grant
@ 2013-09-20 23:07 ` Hinnerk van Bruinehsen
2013-09-21 4:34 ` Grant
0 siblings, 1 reply; 72+ messages in thread
From: Hinnerk van Bruinehsen @ 2013-09-20 23:07 UTC (permalink / raw
To: gentoo-user
On Fri, Sep 20, 2013 at 11:20:53AM -0700, Grant wrote:
> > How about hardened? Does ZFS have any problems interacting with
> > grsecurity or a hardened profile?
>
> Has anyone tried hardened and ZFS together?
>
Hi,
I did - I had some problems, but I'm not sure if they were caused by the
combination of ZFS and hardened. There were some issues updating kernel and ZFS
(most likely due to ZFS on root and me using ~arch hardened-sources and the
live ebuild for zfs).
There are some hardened options that are known not to work (constify was
one of them but that should be patched now). I think another one was HIDESYM.
There is a more or less regularly updated blog post by prometheanfire
(an installation guide for zfs+hardened+luks [1]).
So you could ask him or ryao (he seems to support hardened+zfs at least to
a certain degree).
WKR
Hinnerk
[1] https://mthode.org/posts/2013/Sep/gentoo-hardened-zfs-rootfs-with-dm-cryptluks-062/
* Re: [gentoo-user] ZFS
2013-09-19 22:41 ` Douglas J Hunley
@ 2013-09-20 23:12 ` Hinnerk van Bruinehsen
0 siblings, 0 replies; 72+ messages in thread
From: Hinnerk van Bruinehsen @ 2013-09-20 23:12 UTC (permalink / raw
To: gentoo-user
On Thu, Sep 19, 2013 at 06:41:47PM -0400, Douglas J Hunley wrote:
>
> On Tue, Sep 17, 2013 at 12:32 PM, <covici@ccs.covici.com> wrote:
>
> So do I need that overlay at all, or just emerge zfs and its module?
>
>
> You do *not* need the overlay. Everything you need is in portage nowadays
>
Afaik the overlay even comes with a warning from ryao not to use it unless
being told by him to do so (since it's very experimental and includes patches
that were not reviewed). Unless you want to do heavy testing (best while
communicating with ryao) you should use the ebuilds from portage.
WKR
Hinnerk
* Re: [gentoo-user] ZFS
2013-09-20 23:07 ` Hinnerk van Bruinehsen
@ 2013-09-21 4:34 ` Grant
0 siblings, 0 replies; 72+ messages in thread
From: Grant @ 2013-09-21 4:34 UTC (permalink / raw
To: Gentoo mailing list
>> > How about hardened? Does ZFS have any problems interacting with
>> > grsecurity or a hardened profile?
>>
>> Has anyone tried hardened and ZFS together?
>
> I did - I had some problems, but I'm not sure if they were caused by the
> combination of ZFS and hardened. There were some issues updating kernel and ZFS
> (most likely due to ZFS on root and me using ~arch hardened-sources and the
> live ebuild for zfs).
> There are some hardened options that are known to be not working (constify was
> one of them but that should be patched now). I think another one was HIDESYM.
>
> There is a more or less regularly updated blog post by prometheanfire
> (an installation guide for zfs+hardened+luks [1]).
> So you could ask him or ryao (he seems to support hardened+zfs at least to
> a certain degree).
> [1] https://mthode.org/posts/2013/Sep/gentoo-hardened-zfs-rootfs-with-dm-cryptluks-062/
Thanks for the link. It doesn't look too bad.
- Grant
* Re: [gentoo-user] ZFS
2013-09-17 7:20 [gentoo-user] ZFS Grant
` (5 preceding siblings ...)
2013-09-18 13:53 ` Stefan G. Weichinger
@ 2013-09-21 12:53 ` thegeezer
2013-09-21 16:49 ` Pandu Poluan
6 siblings, 1 reply; 72+ messages in thread
From: thegeezer @ 2013-09-21 12:53 UTC (permalink / raw
To: gentoo-user
On 09/17/2013 08:20 AM, Grant wrote:
> I'm convinced I need 3-disk RAID1 so I can lose 2 drives and keep
> running. I'd also like to stripe for performance, resulting in
> RAID10. It sounds like most hardware controllers do not support
> 6-disk RAID10 so ZFS looks very interesting.
>
> Can I operate ZFS RAID without a hardware RAID controller?
>
> From a RAID perspective only, is ZFS a better choice than conventional
> software RAID?
>
> ZFS seems to have many excellent features and I'd like to ease into
> them slowly (like an old man into a nice warm bath). Does ZFS allow
> you to set up additional features later (e.g. snapshots, encryption,
> deduplication, compression) or is some forethought required when first
> making the filesystem?
>
> It looks like there are comprehensive ZFS Gentoo docs
> (http://wiki.gentoo.org/wiki/ZFS) but can anyone tell me from the real
> world about how much extra difficulty/complexity is added to
> installation and ongoing administration when choosing ZFS over ext4?
>
> Performance doesn't seem to be one of ZFS's strong points. Is it
> considered suitable for a high-performance server?
>
> http://www.phoronix.com/scan.php?page=news_item&px=MTM1NTA
>
> Besides performance, are there any drawbacks to ZFS compared to ext4?
>
> - Grant
>
Howdy,
I've been reading this thread and am pretty intrigued; ZFS is much more
than I thought it was.
I was wondering, though: does ZFS work as a multiple-client, single-storage
cluster filesystem such as GFS/OCFS/VMFS/OrangeFS?
I was also wondering if anyone could share their experience with ZFS on
iscsi - especially considering the readahead /proc changes required on
same system ?
thanks!
* Re: [gentoo-user] ZFS
2013-09-21 12:53 ` thegeezer
@ 2013-09-21 16:49 ` Pandu Poluan
0 siblings, 0 replies; 72+ messages in thread
From: Pandu Poluan @ 2013-09-21 16:49 UTC (permalink / raw
To: gentoo-user
On Sep 21, 2013 7:54 PM, "thegeezer" <thegeezer@thegeezer.net> wrote:
>
> On 09/17/2013 08:20 AM, Grant wrote:
> > I'm convinced I need 3-disk RAID1 so I can lose 2 drives and keep
> > running. I'd also like to stripe for performance, resulting in
> > RAID10. It sounds like most hardware controllers do not support
> > 6-disk RAID10 so ZFS looks very interesting.
> >
> > Can I operate ZFS RAID without a hardware RAID controller?
> >
> > From a RAID perspective only, is ZFS a better choice than conventional
> > software RAID?
> >
> > ZFS seems to have many excellent features and I'd like to ease into
> > them slowly (like an old man into a nice warm bath). Does ZFS allow
> > you to set up additional features later (e.g. snapshots, encryption,
> > deduplication, compression) or is some forethought required when first
> > making the filesystem?
> >
> > It looks like there are comprehensive ZFS Gentoo docs
> > (http://wiki.gentoo.org/wiki/ZFS) but can anyone tell me from the real
> > world about how much extra difficulty/complexity is added to
> > installation and ongoing administration when choosing ZFS over ext4?
> >
> > Performance doesn't seem to be one of ZFS's strong points. Is it
> > considered suitable for a high-performance server?
> >
> > http://www.phoronix.com/scan.php?page=news_item&px=MTM1NTA
> >
> > Besides performance, are there any drawbacks to ZFS compared to ext4?
> >
> > - Grant
> >
> Howdy,
> I've been reading this thread and am pretty intrigued; ZFS is much more
> than I thought it was.
> I was wondering, though: does ZFS work as a multiple-client, single-storage
> cluster filesystem such as GFS/OCFS/VMFS/OrangeFS?
Well... not really.
Of course you could run ZFS over DRBD, or run any of those filesystems on
top of a zvol...
But I'll say, ZFS is not (yet?) a clustered filesystem.
> I was also wondering if anyone could share their experience with ZFS on
> iscsi - especially considering the readahead /proc changes required on
> same system ?
> thanks!
>
Although I have no experience with ZFS over iSCSI, I don't think that's a
problem.
As long as ZFS can 'see' the block device when it comes time to mount the
pool and all 'child' datasets (or zvols), all should be well.
In this case, however, you would want the iSCSI target to not perform
readahead. Let ZFS 'instruct' the iSCSI target on which sectors to read.
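On the initiator side, the readahead tuning mentioned above can be sketched like this (`/dev/sdx` and the pool name `tank` stand in for whatever the iSCSI session actually exposes; run as root):

```shell
# Disable kernel readahead on the iSCSI-backed block device.
# blockdev --setra takes the value in 512-byte sectors.
blockdev --setra 0 /dev/sdx

# The same knob is also exposed via sysfs, in kilobytes:
echo 0 > /sys/block/sdx/queue/read_ahead_kb

# Then import the pool that lives on that device; ZFS issues its own
# prefetch, so the kernel-level readahead is redundant here.
zpool import tank
```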
Rgds,
--
^ permalink raw reply [flat|nested] 72+ messages in thread
* Re: [gentoo-user] ZFS
2013-09-19 11:27 ` Joerg Schilling
@ 2013-09-22 1:19 ` Dale
0 siblings, 0 replies; 72+ messages in thread
From: Dale @ 2013-09-22 1:19 UTC (permalink / raw
To: gentoo-user
Joerg Schilling wrote:
> Dale <rdalek1967@gmail.com> wrote:
>
>>> Why do you believe it has forked?
>>> This project does not even have a source code repository, and the fact that
>>> they refer to illumos for sources makes me wonder whether it is open for
>>> contributions.
>>>
>>> Jörg
>>>
>> Well, it seemed to me that it either changed its name or forked or
>> something. I was hoping that, whatever the reason for this, it would
>> eventually be in the kernel like ext* and the others. It seems that is
>> not the case. That's why I was asking questions.
> It is in the Kernel...
>
> It may not be in the Linux kernel ;-)
>
> It seems that they just came out of their caves and created a web page.
> Note that until recently, they used secret mailing lists.
>
> Jörg
>
Well, I only use the Linux kernel. When I mention the kernel, I'm only
concerned with the Linux one which I use.
Dale
:-) :-)
--
I am only responsible for what I said ... Not for what you understood or how you interpreted my words!
^ permalink raw reply [flat|nested] 72+ messages in thread
end of thread, other threads:[~2013-09-22 1:19 UTC | newest]
Thread overview: 72+ messages
2013-09-17 7:20 [gentoo-user] ZFS Grant
2013-09-17 7:36 ` Marc Stürmer
2013-09-17 8:05 ` Pandu Poluan
2013-09-17 8:22 ` Alan McKinnon
2013-09-17 9:44 ` Grant
2013-09-17 9:42 ` Grant
2013-09-17 10:11 ` Tanstaafl
2013-09-17 16:32 ` covici
2013-09-19 22:41 ` Douglas J Hunley
2013-09-20 23:12 ` Hinnerk van Bruinehsen
2013-09-19 22:46 ` Douglas J Hunley
2013-09-17 9:52 ` Joerg Schilling
2013-09-17 13:22 ` Grant
2013-09-17 13:30 ` Joerg Schilling
2013-09-17 16:39 ` Alan McKinnon
2013-09-18 4:06 ` Grant
2013-09-17 10:19 ` Tanstaafl
2013-09-17 13:21 ` Grant
2013-09-17 15:18 ` Michael Orlitzky
2013-09-17 15:40 ` Tanstaafl
2013-09-17 16:34 ` Michael Orlitzky
2013-09-17 17:00 ` Tanstaafl
2013-09-17 17:07 ` Michael Orlitzky
2013-09-17 17:34 ` Tanstaafl
2013-09-17 17:54 ` Stefan G. Weichinger
2013-09-18 4:11 ` Grant
2013-09-18 7:26 ` Stefan G. Weichinger
2013-09-18 15:17 ` Stefan G. Weichinger
2013-09-19 22:46 ` Douglas J Hunley
2013-09-20 9:17 ` Joerg Schilling
2013-09-20 11:17 ` Tanstaafl
2013-09-18 4:02 ` Grant
2013-09-17 18:00 ` Volker Armin Hemmann
2013-09-17 18:11 ` covici
2013-09-17 19:30 ` Volker Armin Hemmann
2013-09-18 4:20 ` Grant
2013-09-20 18:20 ` Grant
2013-09-20 23:07 ` Hinnerk van Bruinehsen
2013-09-21 4:34 ` Grant
2013-09-17 18:11 ` Tanstaafl
2013-09-17 19:30 ` Volker Armin Hemmann
2013-09-18 4:22 ` Bruce Hill
2013-09-18 8:03 ` Neil Bothwick
2013-09-18 12:55 ` [gentoo-user] ZFS James
2013-09-19 4:49 ` Grant
2013-09-19 7:43 ` Pandu Poluan
2013-09-19 7:44 ` Hinnerk van Bruinehsen
2013-09-19 7:47 ` Pandu Poluan
2013-09-19 8:04 ` Stefan G. Weichinger
2013-09-19 13:04 ` Grant
2013-09-19 10:37 ` Tanstaafl
2013-09-19 12:29 ` Grant
2013-09-19 12:54 ` Pandu Poluan
2013-09-19 13:01 ` Grant
2013-09-19 13:12 ` Pandu Poluan
2013-09-18 4:12 ` [gentoo-user] ZFS Grant
2013-09-18 9:56 ` Joerg Schilling
2013-09-18 17:04 ` Volker Armin Hemmann
2013-09-19 4:47 ` Grant
2013-09-20 15:11 ` Volker Armin Hemmann
2013-09-18 13:53 ` Stefan G. Weichinger
2013-09-19 1:02 ` Dale
2013-09-19 4:44 ` Grant
2013-09-19 7:40 ` Dale
2013-09-19 7:45 ` Pandu Poluan
2013-09-19 9:07 ` Joerg Schilling
2013-09-19 11:22 ` Dale
2013-09-19 11:27 ` Joerg Schilling
2013-09-22 1:19 ` Dale
2013-09-19 9:04 ` Joerg Schilling
2013-09-21 12:53 ` thegeezer
2013-09-21 16:49 ` Pandu Poluan