public inbox for gentoo-user@lists.gentoo.org
 help / color / mirror / Atom feed
* [gentoo-user] dmraid, mdraid, lvm, btrfs, what?
@ 2011-11-29  4:10 Michael Mol
  2011-11-29  7:07 ` Florian Philipp
  2011-11-29 14:10 ` [gentoo-user] " Mark Knecht
  0 siblings, 2 replies; 13+ messages in thread
From: Michael Mol @ 2011-11-29  4:10 UTC (permalink / raw)
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 604 bytes --]

I've got four 750GB drives in addition to the installed system drive.

I'd like to aggregate them and split them into a few volumes. My first
inclination would be to raid them and drop lvm on top.  I know lvm well
enough, but I don't remember md that well.

Since I don't recall md well, and this isn't urgent, I figure I can look at
the options.

The obvious ones appear to be mdraid, dmraid and btrfs. I'm not sure I'm
interested in btrfs until it's got a fsck that will repair errors, but I'm
looking forward to it once it's ready.

Any options I missed? What are the advantages and disadvantages?

ZZ

[-- Attachment #2: Type: text/html, Size: 688 bytes --]

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [gentoo-user] dmraid, mdraid, lvm, btrfs, what?
  2011-11-29  4:10 [gentoo-user] dmraid, mdraid, lvm, btrfs, what? Michael Mol
@ 2011-11-29  7:07 ` Florian Philipp
  2011-11-29 13:44   ` Michael Mol
  2011-11-29 17:35   ` [gentoo-user] " Jack Byer
  2011-11-29 14:10 ` [gentoo-user] " Mark Knecht
  1 sibling, 2 replies; 13+ messages in thread
From: Florian Philipp @ 2011-11-29  7:07 UTC (permalink / raw)
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 1656 bytes --]

On 29.11.2011 05:10, Michael Mol wrote:
> I've got four 750GB drives in addition to the installed system drive.
> 
> I'd like to aggregate them and split them into a few volumes. My first
> inclination would be to raid them and drop lvm on top.  I know lvm well
> enough, but I don't remember md that well.
> 
> Since I don't recall md well, and this isn't urgent, I figure I can look
> at the options.
> 
> The obvious ones appear to be mdraid, dmraid and btrfs. I'm not sure I'm
> interested in btrfs until it's got a fsck that will repair errors, but
> I'm looking forward to it once it's ready.
> 
> Any options I missed? What are the advantages and disadvantages?
> 
> ZZ
> 

Sounds good so far. Of course, you only need mdraid OR dmraid (md
recommended). What kind of RAID level do you want to use, 10 or 5? You
can also split it: Use a smaller RAID 10 for performance-critical
partitions like /usr and the more space-efficient RAID 5 for bulk like
videos. You can handle this with one LVM volume group consisting of two
physical volumes. Then you can decide on a per-logical-volume basis
where it should allocate space and also migrate LVs between the two PVs.
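
A rough sketch of that split layout with mdadm and LVM (only an
illustration; device names, sizes and the partitioning are examples):

# one small and one large partition per disk, e.g. sdb1/sdb2 ... sde1/sde2
mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]1  # fast RAID 10
mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/sd[b-e]2   # roomy RAID 5
pvcreate /dev/md0 /dev/md1
vgcreate pool /dev/md0 /dev/md1
lvcreate -L 20G -n usr pool /dev/md0       # place this LV on the RAID 10 PV
lvcreate -L 500G -n videos pool /dev/md1   # and this one on the RAID 5 PV
pvmove -n usr /dev/md0 /dev/md1            # LVs can be migrated between PVs later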

Another thing you can think of is whether you want encryption. I've done
this for my laptop. The usual setup would be md->lvm->crypt. I've done
it crypt->lvm (an LVM physical volume on top of an encrypted partition).
This way, I only need to enter the password once. You can enforce a
specific order between lvm, md and dmcrypt by putting stuff like this in
/etc/rc.conf:
rc_dmcrypt_before="lvm"
rc_dmcrypt_after="udev"
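
And a minimal sketch of the crypt->lvm variant itself (the md device and
volume names are only examples; adjust to your own layout):

cryptsetup luksFormat /dev/md0           # one passphrase for the whole pool
cryptsetup luksOpen /dev/md0 cryptpool   # asked for only once at boot
pvcreate /dev/mapper/cryptpool           # LVM PV on top of the crypt device
vgcreate vg0 /dev/mapper/cryptpool
lvcreate -L 50G -n home vg0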

Regards,
Florian Philipp


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 262 bytes --]

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [gentoo-user] dmraid, mdraid, lvm, btrfs, what?
  2011-11-29  7:07 ` Florian Philipp
@ 2011-11-29 13:44   ` Michael Mol
  2011-11-29 18:20     ` Florian Philipp
  2011-11-29 17:35   ` [gentoo-user] " Jack Byer
  1 sibling, 1 reply; 13+ messages in thread
From: Michael Mol @ 2011-11-29 13:44 UTC (permalink / raw)
  To: gentoo-user

On Tue, Nov 29, 2011 at 2:07 AM, Florian Philipp <lists@binarywings.net> wrote:
> On 29.11.2011 05:10, Michael Mol wrote:
>> I've got four 750GB drives in addition to the installed system drive.
>>
>> I'd like to aggregate them and split them into a few volumes. My first
>> inclination would be to raid them and drop lvm on top.  I know lvm well
>> enough, but I don't remember md that well.
>>
>> Since I don't recall md well, and this isn't urgent, I figure I can look
>> at the options.
>>
>> The obvious ones appear to be mdraid, dmraid and btrfs. I'm not sure I'm
>> interested in btrfs until it's got a fsck that will repair errors, but
>> I'm looking forward to it once it's ready.
>>
>> Any options I missed? What are the advantages and disadvantages?
>>
>> ZZ
>>
>
> Sounds good so far. Of course, you only need mdraid OR dmraid (md
> recommended).

dmraid looks rather new on the block. Or, at least, I've been more
aware of md than dm over the years. What's its purpose, as compared to
mdraid? Why is mdraid recommended over it?

> What kind of RAID level do you want to use, 10 or 5? You
> can also split it: Use a smaller RAID 10 for performance-critical
> partitions like /usr and the more space-efficient RAID 5 for bulk like
> videos. You can handle this with one LVM volume group consisting of two
> physical volumes. Then you can decide on a per-logical-volume basis
> where it should allocate space and also migrate LVs between the two PVs.

Since I've got four disks for the pool, I was thinking raid10 with lvm
on top, and a single lvm pv above that.

> Another thing you can think of is whether you want encryption. I've done
> this for my laptop. The usual setup would be md->lvm->crypt. I've done
> it crypt->lvm (an LVM physical volume on top of an encrypted partition).
> This way, I only need to enter the password once. You can enforce a
> specific order between lvm, md and dmcrypt by putting stuff like this in
> /etc/rc.conf:
> rc_dmcrypt_before="lvm"
> rc_dmcrypt_after="udev"

Really not interested in encryption for this box. No need.

-- 
:wq



^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [gentoo-user] dmraid, mdraid, lvm, btrfs, what?
  2011-11-29  4:10 [gentoo-user] dmraid, mdraid, lvm, btrfs, what? Michael Mol
  2011-11-29  7:07 ` Florian Philipp
@ 2011-11-29 14:10 ` Mark Knecht
  2011-11-29 16:53   ` Michael Mol
  1 sibling, 1 reply; 13+ messages in thread
From: Mark Knecht @ 2011-11-29 14:10 UTC (permalink / raw)
  To: gentoo-user

On Mon, Nov 28, 2011 at 8:10 PM, Michael Mol <mikemol@gmail.com> wrote:
> I've got four 750GB drives in addition to the installed system drive.
>
> I'd like to aggregate them and split them into a few volumes. My first
> inclination would be to raid them and drop lvm on top.  I know lvm well
> enough, but I don't remember md that well.
>
> Since I don't recall md well, and this isn't urgent, I figure I can look at
> the options.
>
> The obvious ones appear to be mdraid, dmraid and btrfs. I'm not sure I'm
> interested in btrfs until it's got a fsck that will repair errors, but I'm
> looking forward to it once it's ready.
>
> Any options I missed? What are the advantages and disadvantages?
>
> ZZ

Hi Michael,
   Welcome to the world of whatever sort of multi-disk environment
you choose. It's a HUGE topic and a conversation I look forward to
having as you dig through it.

   My main compute system here at home has six 500GB WD RE3 drives.
Five are in use with one as a cold spare.  I'm using md. It's pretty
mature and you have good access to the main developer through the
email list. I don't know much about dm. If this is your first time
putting RAID on a box (it was for me) then I think md is a good
choice. On the other hand you're more system software savvy than I am
so go with what you think is best for you.

1) First lesson - not all hard drives make good RAID hard drives. I
started with six 1TB WD Green drives and found they made _terrible_
RAID units so I took them out and bought _real_ RAID drives. They were
only half as large for the same price but they have worked perfectly
for nearly 2 years.

2) Second lesson - prepare to build a few RAID configurations and
TEST, TEST, TEST __BEFORE__ (BEFORE!!!) you make _ANY_ decision about
what sort of RAID you really want. There are a LOT of parameter
choices that affect performance, reliability, capacity and I think to
some extent your ability to change RAID types later on. To name a few:
The obvious RAID type (0,1,2,3,4,5,6,10, etc.) but also chunk size,
metadata type, physical layout for certain RAID types, etc. I strongly
suggest building 5-10 different configurations and testing them with
bonnie++ to gauge speed (a rough sketch of such a test loop follows
after point 4). I didn't do enough of this before I built this system
and I've been dealing with the effects ever since.

3) Third lesson - think deeply about what happens when 1 drive goes
bad and you are in the process of fixing the system. Do you have a
spare drive ready? Is it in the box? Hot or cold? What happens if a
second drive in the system fails while you're rebuilding the RAID?
It's from the same manufacturing lot so it probably suffers from the
same weaknesses. My decision for the most part was (for data or system
drives) 3-drive RAID1 or 5-drive RAID6. For backup I went with 5-drive
RAID5. It all makes me feel good, but it's too complicated.

4) Lastly - as they say all the time on the mdadm list: RAID is not a backup.
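
The test loop I have in mind for point 2 is roughly this (only a sketch;
devices, filesystem and bonnie++ options are examples, and let the initial
resync shown in /proc/mdstat finish before trusting any numbers):

for chunk in 64 128 256 512; do
    mdadm --create /dev/md0 --run --level=10 --raid-devices=4 \
          --chunk=$chunk /dev/sd[b-e]1
    mkfs.ext4 /dev/md0
    mount /dev/md0 /mnt/test && chown nobody /mnt/test
    bonnie++ -d /mnt/test -u nobody    # bonnie++ refuses to run as root without -u
    umount /mnt/test
    mdadm --stop /dev/md0
    mdadm --zero-superblock /dev/sd[b-e]1
done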

   Personally I like your idea of one big RAID with lvm on top but I
haven't done it myself. I think it's what I would look at today if I
was starting from scratch, but I'm not sure. It would take some study.

Hope this helps even a little,
Mark



^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [gentoo-user] dmraid, mdraid, lvm, btrfs, what?
  2011-11-29 14:10 ` [gentoo-user] " Mark Knecht
@ 2011-11-29 16:53   ` Michael Mol
  2011-11-29 19:02     ` Jarry
  0 siblings, 1 reply; 13+ messages in thread
From: Michael Mol @ 2011-11-29 16:53 UTC (permalink / raw)
  To: gentoo-user

On Tue, Nov 29, 2011 at 9:10 AM, Mark Knecht <markknecht@gmail.com> wrote:
> On Mon, Nov 28, 2011 at 8:10 PM, Michael Mol <mikemol@gmail.com> wrote:
>
> Hi Michael,
>   Welcome to the world of whatever sort of multi-disk environment
> you choose. It's a HUGE topic and a conversation I look forward to
> having as you dig through it.
>
>   My main compute system here at home has six 500GB WD RE3 drives.
> Five are in use with one as a cold spare.  I'm using md. It's pretty
> mature and you have good access to the main developer through the
> email list. I don't know much about dm. If this is your first time
> putting RAID on a box (it was for me) then I think md is a good
> choice. On the other hand you're more system software savvy than I am
> so go with what you think is best for you.

Last time I set up RAID was three or four years ago. Two volumes: one
RAID5 of three 1.5TB drives (Seagate econo drives, but they worked
well enough for me), and one RAID0 of three 1TB drives (WD Caviar Black).

The RAID0 was for some video munging scratch space. The RAID5, I
mounted as /home. Those volumes lasted a couple of years, before I
rebuilt all of them as two LVM volume groups, using the same drive sets.

>
> 1) First lesson - not all hard drives make good RAID hard drives. I
> started with six 1TB WD Green drives and found they made _terrible_
> RAID units so I took them out and bought _real_ RAID drives. They were
> only half as large for the same price but they have worked perfectly
> for nearly 2 years.

What makes a good RAID unit, and what makes a terrible RAID unit?
Unless we're talking rapid failure, I'd think anything striped would
be faster than the bare drive alone.

>
> 2) Second lesson - prepare to build a few RAID configurations and
> TEST, TEST, TEST __BEFORE__ (BEFORE!!!) you make _ANY_ decision about
> what sort of RAID you really want. There are a LOT of parameter
> choices that affect performance, reliability, capacity and I think to
> some extent your ability to change RAID types later on. To name a few:
> The obvious RAID type (0,1,2,3,4,5,6,10, etc.) but also chunk size,
> metadata type, physical layout for certain RAID types, etc. I strongly
> suggest building 5-10 different configurations and testing them with
> bonnie++ to gauge speed. I didn't do enough of this before I built
> this system and I've been dealing with the effects ever since.

I'm familiar with the different RAID types and how they operate. I'm
also familiar with some of the impacts of chunk size and what it can
mean for caching and sector alignment (for SSDs and 2TB+ drives, at
least).

The purpose of this array (or set of arrays) is volume aggregation
with a touch of redundancy. Speed is a tertiary concern, and if it
becomes a real issue, I'll adapt; I've got 730GB left free on the
system's primary disk which I can throw into the mix any which way
(use it raw as I currently am, or stripe a logical volume into it...).

> 3) Third lesson - think deeply about what happens when 1 drive goes
> bad and you are in the process of fixing the system. Do you have a
> spare drive ready?

Don't plan to, but I don't plan on storing vital or
operations-dependent data in the volume without backup. These are
going to be volumes of convenience.

> Is it in the box? Hot or cold? What happens if a
> second drive in the system fails while you're rebuilding the RAID?

Drop the failed drives, rebuild with the remaining drives, copy back a backup.
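
(Mechanically, with md, that would be roughly: stop the degraded array,
recreate it with whatever disks survived or their replacements, and restore.
Device names and the backup path below are only placeholders.)

mdadm --stop /dev/md0
mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]1
mkfs.ext4 /dev/md0 && mount /dev/md0 /mnt/pool
rsync -a /backup/pool/ /mnt/pool/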

> It's from the same manufacturing lot so it probably suffers from the
> same weaknesses. My decision for the most part was (for data or system
> drives) 3-drive RAID1 or 5-drive RAID6. For backup I went with 5-drive
> RAID5. It all makes me feel good, but it's too complicated.
>
> 4) Lastly - as they say all the time on the mdadm list: RAID is not a backup.

Absolutely. I've had discussions of RAID and disk storage many times
with some rather apt and experienced friends, but dmraid and btrfs are
relatively new on the block, and the gentoo-user list is a new,
mostly-untapped resource of expertise. I wanted to pick up any
additional knowledge or references I hadn't heard before. :)

>   Personally I like your idea of one big RAID with lvm on top but I
> haven't done it myself. I think it's what I would look at today if I
> was starting from scratch, but I'm not sure. It would take some study.

It's probably the simplest way forward. I notice there are some
network-syncing block devices in the kernel (acting as RAID1 over a
network) I'd like to play with, but I haven't done anything with OCFS2
(or whatever other multi-operator filesystems are in the 3.0.6 kernel)
before.

>
> Hope this helps even a little,
> Mark

Certainly does. Also, your email has a permanent URL through at least
a couple mailing list archivers, so it'll be a good thing to link to
in the future. :)


-- 
:wq



^ permalink raw reply	[flat|nested] 13+ messages in thread

* [gentoo-user] Re: dmraid, mdraid, lvm, btrfs, what?
  2011-11-29  7:07 ` Florian Philipp
  2011-11-29 13:44   ` Michael Mol
@ 2011-11-29 17:35   ` Jack Byer
  1 sibling, 0 replies; 13+ messages in thread
From: Jack Byer @ 2011-11-29 17:35 UTC (permalink / raw)
  To: gentoo-user

Florian Philipp wrote:

> Another thing you can think of is whether you want encryption. I've done
> this for my laptop. The usual setup would be md->lvm->crypt. I've done
> it crypt->lvm (an LVM physical volume on top of an encrypted partition).
> This way, I only need to enter the password once. You can enforce a
> specific order between lvm, md and dmcrypt by putting stuff like this in
> /etc/rc.conf:
> rc_dmcrypt_before="lvm"
> rc_dmcrypt_after="udev"

I like to use whole-disk encryption, so I format each drive with LUKS and
then use dracut to build an initramfs that takes care of setting up
dmcrypt/lvm/md before OpenRC ever starts up.
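
(Roughly, per drive, with hypothetical device names; dracut then finds the
encrypted members via the kernel command line, though the exact parameter
names vary a bit between dracut versions:)

cryptsetup luksFormat /dev/sdb                          # repeat per member disk
dracut --force /boot/initramfs-$(uname -r).img $(uname -r)
# kernel command line then carries something like rd.luks.uuid=<LUKS UUID>
# (plus the matching rd.md/rd.lvm options) so the initramfs can assemble it all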




^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [gentoo-user] dmraid, mdraid, lvm, btrfs, what?
  2011-11-29 13:44   ` Michael Mol
@ 2011-11-29 18:20     ` Florian Philipp
  2011-11-29 18:39       ` Michael Mol
  0 siblings, 1 reply; 13+ messages in thread
From: Florian Philipp @ 2011-11-29 18:20 UTC (permalink / raw)
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 2951 bytes --]

On 29.11.2011 14:44, Michael Mol wrote:
> On Tue, Nov 29, 2011 at 2:07 AM, Florian Philipp <lists@binarywings.net> wrote:
>> On 29.11.2011 05:10, Michael Mol wrote:
>>> I've got four 750GB drives in addition to the installed system drive.
>>>
>>> I'd like to aggregate them and split them into a few volumes. My first
>>> inclination would be to raid them and drop lvm on top.  I know lvm well
>>> enough, but I don't remember md that well.
>>>
>>> Since I don't recall md well, and this isn't urgent, I figure I can look
>>> at the options.
>>>
>>> The obvious ones appear to be mdraid, dmraid and btrfs. I'm not sure I'm
>>> interested in btrfs until it's got a fsck that will repair errors, but
>>> I'm looking forward to it once it's ready.
>>>
>>> Any options I missed? What are the advantages and disadvantages?
>>>
>>> ZZ
>>>
>>
>> Sounds good so far. Of course, you only need mdraid OR dmraid (md
>> recommended).
> 
> dmraid looks rather new on the block. Or, at least, I've been more
> aware of md than dm over the years. What's its purpose, as compared to
> mdraid? Why is mdraid recommended over it?
>

dmraid being new? Not really. Anyway: Under the hood, md and dm use
exactly the same code in the kernel. They just provide different interfaces.
mdraid is a Linux-specific software RAID implemented on top of ordinary
single-disk controllers. It works like a charm and any Linux system
with any disk controller can work with it (if you ever change your
hardware).

dmraid provides a "fake-RAID": A software RAID with support of (or
rather, under control of) a cheap on-board RAID controller.
Performance-wise, it usually doesn't provide any kind of advantage
because the kernel driver still has to do all the heavy lifting
(therefore it uses the same code base as mdraid). Its most important
disadvantage is that it binds you to the vendor of the chipset who
determines the on-disk layout. Apparently, this has gotten better over the
last few years because of some pretty major consolidation in the chipset
market. It might be helpful if you consider dual-booting Windows on the
same RAID (both systems ought to use the same disk layout by means of
their respective drivers).
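
(If you want to poke at what is already on the disks, a quick read-only
check, assuming mdadm and dmraid are installed; the device name is just an
example:)

cat /proc/mdstat            # md arrays the kernel has assembled
mdadm --examine /dev/sdb1   # md superblock on a member partition, if any
dmraid -r                   # BIOS/fake-RAID metadata left behind by a chipset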


>> What kind of RAID level do you want to use, 10 or 5? You
>> can also split it: Use a smaller RAID 10 for performance-critical
>> partitions like /usr and the more space-efficient RAID 5 for bulk like
>> videos. You can handle this with one LVM volume group consisting of two
>> physical volumes. Then you can decide on a per-logical-volume basis
>> where it should allocate space and also migrate LVs between the two PVs.
> 
> Since I've got four disks for the pool, I was thinking raid10 with lvm
> on top, and a single lvm pv above that.
>

Yeah, that would also be my recommendation. But if storage efficiency is
more relevant, RAID-5 with 4 disks brings you 750GB more usable storage.
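
(Quick arithmetic, ignoring filesystem overhead: RAID 10 over four 750GB
disks gives 2 x 750GB = 1.5TB usable, RAID 5 gives (4 - 1) x 750GB = 2.25TB,
hence the 750GB difference.)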



[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 262 bytes --]

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [gentoo-user] dmraid, mdraid, lvm, btrfs, what?
  2011-11-29 18:20     ` Florian Philipp
@ 2011-11-29 18:39       ` Michael Mol
  2011-11-29 19:23         ` Florian Philipp
  0 siblings, 1 reply; 13+ messages in thread
From: Michael Mol @ 2011-11-29 18:39 UTC (permalink / raw)
  To: gentoo-user

On Tue, Nov 29, 2011 at 1:20 PM, Florian Philipp <lists@binarywings.net> wrote:
> On 29.11.2011 14:44, Michael Mol wrote:
>> On Tue, Nov 29, 2011 at 2:07 AM, Florian Philipp <lists@binarywings.net> wrote:
>>> On 29.11.2011 05:10, Michael Mol wrote:
>>>> I've got four 750GB drives in addition to the installed system drive.
>>>>
>>>> I'd like to aggregate them and split them into a few volumes. My first
>>>> inclination would be to raid them and drop lvm on top.  I know lvm well
>>>> enough, but I don't remember md that well.
>>>>
>>>> Since I don't recall md well, and this isn't urgent, I figure I can look
>>>> at the options.
>>>>
>>>> The obvious ones appear to be mdraid, dmraid and btrfs. I'm not sure I'm
>>>> interested in btrfs until it's got a fsck that will repair errors, but
>>>> I'm looking forward to it once it's ready.
>>>>
>>>> Any options I missed? What are the advantages and disadvantages?
>>>>
>>>> ZZ
>>>>
>>>
>>> Sounds good so far. Of course, you only need mdraid OR dmraid (md
>>> recommended).
>>
>> dmraid looks rather new on the block. Or, at least, I've been more
>> aware of md than dm over the years. What's its purpose, as compared to
>> mdraid? Why is mdraid recommended over it?
>>
>
> dmraid being new? Not really. Anyway: Under the hood, md and dm use
> exactly the same code in the kernel. They just provide different interfaces.
> mdraid is a Linux-specific software RAID implemented on top of ordinary
> single-disk controllers. It works like a charm and any Linux system
> with any disk controller can work with it (if you ever change your
> hardware).
>
> dmraid provides a "fake-RAID": A software RAID with support of (or
> rather, under control of) a cheap on-board RAID controller.
> Performance-wise, it usually doesn't provide any kind of advantage
> because the kernel driver still has to do all the heavy lifting
> (therefore it uses the same code base as mdraid). Its most important
> disadvantage is that it binds you to the vendor of the chipset who
> determines the on-disk layout. Apparently, this has gotten better over the
> last few years because of some pretty major consolidation in the chipset
> market. It might be helpful if you consider dual-booting Windows on the
> same RAID (both systems ought to use the same disk layout by means of
> their respective drivers).
>
>
>>> What kind of RAID level do you want to use, 10 or 5? You
>>> can also split it: Use a smaller RAID 10 for performance-critical
>>> partitions like /usr and the more space-efficient RAID 5 for bulk like
>>> videos. You can handle this with one LVM volume group consisting of two
>>> physical volumes. Then you can decide on a per-logical-volume basis
>>> where it should allocate space and also migrate LVs between the two PVs.
>>
>> Since I've got four disks for the pool, I was thinking raid10 with lvm
>> on top, and a single lvm pv above that.
>>
>
> Yeah, that would also be my recommendation. But if storage efficiency is
> more relevant, RAID-5 with 4 disks brings you 750GB more usable storage.
>
>

It looks like I'll want to try two different configurations. RAID5 and
RAID10. Not for different storage requirements, but I want to see
exactly what the performance drop is.

I wish lvm striping supported data redundancy. But, then, I wish btrfs
was ready...

-- 
:wq



^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [gentoo-user] dmraid, mdraid, lvm, btrfs, what?
  2011-11-29 16:53   ` Michael Mol
@ 2011-11-29 19:02     ` Jarry
  2011-11-29 19:34       ` Mark Knecht
  0 siblings, 1 reply; 13+ messages in thread
From: Jarry @ 2011-11-29 19:02 UTC (permalink / raw)
  To: gentoo-user

On 29-Nov-11 17:53, Michael Mol wrote:

>> 1) First lesson - not all hard drives make good RAID hard drives.
>
> What makes a good RAID unit, and what makes a terrible RAID unit?

Some hard drives are not suitable for RAID at all. There
are many reasons for that; one example is error recovery.
Check Wikipedia for more info:

http://en.wikipedia.org/wiki/Time-Limited_Error_Recovery

In the first place, I would not recommend those "eco" and
"green" versions for RAID at all. They have power-saving
mechanisms which tend to activate at the wrong time and cause
problems for RAID controllers (be it SW or HW). I'd say
it is worth paying a few bucks more for enterprise-class
24/7 (or special "raid-edition") drives.
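
(On drives that support it, newer smartmontools can query and sometimes set
that error-recovery timeout; only a sketch, and the 7-second value is just
the common convention for RAID use:)

smartctl -l scterc /dev/sda          # query SCT Error Recovery Control
smartctl -l scterc,70,70 /dev/sda    # set read/write timeout to 7.0 seconds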

Jarry
-- 
_______________________________________________________________
This mailbox accepts e-mails only from selected mailing-lists!
Everything else is considered to be spam and therefore deleted.



^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [gentoo-user] dmraid, mdraid, lvm, btrfs, what?
  2011-11-29 18:39       ` Michael Mol
@ 2011-11-29 19:23         ` Florian Philipp
  2011-11-29 19:27           ` Michael Mol
  0 siblings, 1 reply; 13+ messages in thread
From: Florian Philipp @ 2011-11-29 19:23 UTC (permalink / raw)
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 1938 bytes --]

On 29.11.2011 19:39, Michael Mol wrote:
> On Tue, Nov 29, 2011 at 1:20 PM, Florian Philipp <lists@binarywings.net> wrote:
>> On 29.11.2011 14:44, Michael Mol wrote:
>>> On Tue, Nov 29, 2011 at 2:07 AM, Florian Philipp <lists@binarywings.net> wrote:
>>>> On 29.11.2011 05:10, Michael Mol wrote:
>>>>> I've got four 750GB drives in addition to the installed system drive.
>>>>>
>>>>> I'd like to aggregate them and split them into a few volumes. My first
>>>>> inclination would be to raid them and drop lvm on top.  I know lvm well
>>>>> enough, but I don't remember md that well.
>>>>>
>>>>> Since I don't recall md well, and this isn't urgent, I figure I can look
>>>>> at the options.
>>>>>
[...]
>>>> What kind of RAID level do you want to use, 10 or 5? You
>>>> can also split it: Use a smaller RAID 10 for performance-critical
>>>> partitions like /usr and the more space-efficient RAID 5 for bulk like
>>>> videos. You can handle this with one LVM volume group consisting of two
>>>> physical volumes. Then you can decide on a per-logical-volume basis
>>>> where it should allocate space and also migrate LVs between the two PVs.
>>>
>>> Since I've got four disks for the pool, I was thinking raid10 with lvm
>>> on top, and a single lvm pv above that.
>>>
>>
>> Yeah, that would also be my recommendation. But if storage efficiency is
>> more relevant, RAID-5 with 4 disks brings you 750GB more usable storage.
>>
>>
> 
> It looks like I'll want to try two different configurations. RAID5 and
> RAID10. Not for different storage requirements, but I want to see
> exactly what the performance drop is.
> 
> I wish lvm striping supported data redundancy. But, then, I wish btrfs
> was ready...
> 

Just out of curiosity: What happens if you do `lvcreate --mirrors 1
--stripes 2 ...`? Does it create something similar to a RAID-10 or does
it simply fail?

Regards,
Florian Philipp


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 262 bytes --]

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [gentoo-user] dmraid, mdraid, lvm, btrfs, what?
  2011-11-29 19:23         ` Florian Philipp
@ 2011-11-29 19:27           ` Michael Mol
  0 siblings, 0 replies; 13+ messages in thread
From: Michael Mol @ 2011-11-29 19:27 UTC (permalink / raw)
  To: gentoo-user

On Tue, Nov 29, 2011 at 2:23 PM, Florian Philipp <lists@binarywings.net> wrote:
> On 29.11.2011 19:39, Michael Mol wrote:
>> On Tue, Nov 29, 2011 at 1:20 PM, Florian Philipp <lists@binarywings.net> wrote:
>>> On 29.11.2011 14:44, Michael Mol wrote:
>>>> On Tue, Nov 29, 2011 at 2:07 AM, Florian Philipp <lists@binarywings.net> wrote:
>>>>> On 29.11.2011 05:10, Michael Mol wrote:
>>>>>> I've got four 750GB drives in addition to the installed system drive.
>>>>>>
>>>>>> I'd like to aggregate them and split them into a few volumes. My first
>>>>>> inclination would be to raid them and drop lvm on top.  I know lvm well
>>>>>> enough, but I don't remember md that well.
>>>>>>
>>>>>> Since I don't recall md well, and this isn't urgent, I figure I can look
>>>>>> at the options.
>>>>>>
> [...]
>>>>> What kind of RAID level do you want to use, 10 or 5? You
>>>>> can also split it: Use a smaller RAID 10 for performance-critical
>>>>> partitions like /usr and the more space-efficient RAID 5 for bulk like
>>>>> videos. You can handle this with one LVM volume group consisting of two
>>>>> physical volumes. Then you can decide on a per-logical-volume basis
>>>>> where it should allocate space and also migrate LVs between the two PVs.
>>>>
>>>> Since I've got four disks for the pool, I was thinking raid10 with lvm
>>>> on top, and a single lvm pv above that.
>>>>
>>>
>>> Yeah, that would also be my recommendation. But if storage efficiency is
>>> more relevant, RAID-5 with 4 disks brings you 750GB more usable storage.
>>>
>>>
>>
>> It looks like I'll want to try two different configurations. RAID5 and
>> RAID10. Not for different storage requirements, but I want to see
>> exactly what the performance drop is.
>>
>> I wish lvm striping supported data redundancy. But, then, I wish btrfs
>> was ready...
>>
>
> Just out of curiosity: What happens if you do `lvcreate --mirrors 1
> --stripes 2 ...`? Does it create something similar to a RAID-10 or does
> it simply fail?

Hm. I don't know. Honestly, I didn't know about that functionality.
Perhaps it's time I catch up on the docs again.

-- 
:wq



^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [gentoo-user] dmraid, mdraid, lvm, btrfs, what?
  2011-11-29 19:02     ` Jarry
@ 2011-11-29 19:34       ` Mark Knecht
  2011-11-29 19:44         ` Michael Mol
  0 siblings, 1 reply; 13+ messages in thread
From: Mark Knecht @ 2011-11-29 19:34 UTC (permalink / raw)
  To: gentoo-user

On Tue, Nov 29, 2011 at 11:02 AM, Jarry <mr.jarry@gmail.com> wrote:
> On 29-Nov-11 17:53, Michael Mol wrote:
>
>>> 1) First lesson - not all hard drives make good RAID hard drives.
>>
>>
>> What makes a good RAID unit, and what makes a terrible RAID unit?
>
>
> Some hard drives are not suitable for RAID at all. There
> are many reasons for that; one example is error recovery.
> Check Wikipedia for more info:
>
> http://en.wikipedia.org/wiki/Time-Limited_Error_Recovery
>
> In the first place, I would not recommend those "eco" and
> "green" versions for RAID at all. They have power-saving
> mechanisms which tend to activate at the wrong time and cause
> problems for RAID controllers (be it SW or HW). I'd say
> it is worth paying a few bucks more for enterprise-class
> 24/7 (or special "raid-edition") drives.
>
> Jarry

This is a good representation of what happened on my first pass with
RAID. I bought a bunch of WD 1TB Green drives. They work fine, but
when I put them together in even a RAID1 they had very long wait times
in 'top' and the speed was horrible.

That's not to say all Green drives do this because they don't. It's
just hard to say what will work before you buy the drives _unless_ you
buy RAID edition drives.

In Michael's case he already has his drives, so they will either work or
they won't. That's one reason I suggested he put together a couple of
configurations. He's looking at RAID5 & RAID10, which to me makes
sense with 4 drives. We'll just have to wait and see how they work I
think.

- Mark



^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [gentoo-user] dmraid, mdraid, lvm, btrfs, what?
  2011-11-29 19:34       ` Mark Knecht
@ 2011-11-29 19:44         ` Michael Mol
  0 siblings, 0 replies; 13+ messages in thread
From: Michael Mol @ 2011-11-29 19:44 UTC (permalink / raw)
  To: gentoo-user

On Tue, Nov 29, 2011 at 2:34 PM, Mark Knecht <markknecht@gmail.com> wrote:
> On Tue, Nov 29, 2011 at 11:02 AM, Jarry <mr.jarry@gmail.com> wrote:
>> On 29-Nov-11 17:53, Michael Mol wrote:
>>>> 1) First lesson - not all hard drives make good RAID hard drives.
>>> What makes a good RAID unit, and what makes a terrible RAID unit?
>> Some hard drives are not suitable for RAID at all. There
>> are many reasons for that; one example is error recovery.
>> Check Wikipedia for more info:
>>
>> http://en.wikipedia.org/wiki/Time-Limited_Error_Recovery
>>
>> In the first place, I would not recommend those "eco" and
>> "green" versions for RAID at all. They have power-saving
>> mechanisms which tend to activate at the wrong time and cause
>> problems for RAID controllers (be it SW or HW). I'd say
>> it is worth paying a few bucks more for enterprise-class
>> 24/7 (or special "raid-edition") drives.
>>
>> Jarry
>
> This is a good representation of what happened on my first pass with
> RAID. I bought a bunch of WD 1TB Green drives. They work fine, but
> when I put them together in even a RAID1 they had very long wait times
> in 'top' and the speed was horrible.
>
> That's not to say all Green drives do this because they don't. It's
> just hard to say what will work before you buy the drives _unless_ you
> buy RAID edition drives.
>
> In Michael's case he already has his drives, so they will either work or
> they won't. That's one reason I suggested he put together a couple of
> configurations. He's looking at RAID5 & RAID10, which to me makes
> sense with 4 drives. We'll just have to wait and see how they work I
> think.

In this system, I have five Seagate Barracuda ES drives.

http://personal.rosettacode.org/smart.txt

Which reminds me, I need to fix the tz settings on that box.
-- 
:wq



^ permalink raw reply	[flat|nested] 13+ messages in thread

end of thread, other threads:[~2011-11-29 19:46 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2011-11-29  4:10 [gentoo-user] dmraid, mdraid, lvm, btrfs, what? Michael Mol
2011-11-29  7:07 ` Florian Philipp
2011-11-29 13:44   ` Michael Mol
2011-11-29 18:20     ` Florian Philipp
2011-11-29 18:39       ` Michael Mol
2011-11-29 19:23         ` Florian Philipp
2011-11-29 19:27           ` Michael Mol
2011-11-29 17:35   ` [gentoo-user] " Jack Byer
2011-11-29 14:10 ` [gentoo-user] " Mark Knecht
2011-11-29 16:53   ` Michael Mol
2011-11-29 19:02     ` Jarry
2011-11-29 19:34       ` Mark Knecht
2011-11-29 19:44         ` Michael Mol

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox