public inbox for gentoo-user@lists.gentoo.org
* Re: Re: [gentoo-user] raid/partition question
@ 2006-02-20 17:51 brettholcomb
  2006-02-20 18:30 ` Boyd Stephen Smith Jr.
  0 siblings, 1 reply; 3+ messages in thread
From: brettholcomb @ 2006-02-20 17:51 UTC
  To: gentoo-user

As an extension of this question since I'm working on setting up a system now.  

What is the better way to use LVM2 after the RAID is created?  I am also using EVMS.

1.  Make all the RAID free space one big LVM2 container, then create LVM2 volumes on top of that container.

or 

2.  Parcel out the RAID free space into LVM2 containers, one for each partition (/, /user, etc.).



> 
> From: "Richard Fish" <bigfish@asmallpond.org>
> Date: 2006/02/20 Mon AM 11:04:55 EST
> To: gentoo-user@lists.gentoo.org
> Subject: Re: [gentoo-user] raid/partition question
> 
> On 2/20/06, Nick Smith <nick.smith79@gmail.com> wrote:
> > i think im confusing myself here. can you partition a raid device aka
> > /dev/md0?
> 
> Yes.  You can either use mdadm to create a partitionable raid device,
> or use LVM/EVMS (which would be my recommendation) to create logical
> volumes on the array.
> 
> Just beware that /boot should either be its own partition (non-raid),
> or on a RAID-1 array (with no partitions).  Otherwise the boot loader
> will have trouble locating and loading the kernel.
> 
> -Richard
> 
> -- 
> gentoo-user@gentoo.org mailing list
> 
> 

-- 
gentoo-user@gentoo.org mailing list




* Re: [gentoo-user] raid/partition question
  2006-02-20 17:51 Re: [gentoo-user] raid/partition question brettholcomb
@ 2006-02-20 18:30 ` Boyd Stephen Smith Jr.
  0 siblings, 0 replies; 3+ messages in thread
From: Boyd Stephen Smith Jr. @ 2006-02-20 18:30 UTC
  To: gentoo-user

On Monday 20 February 2006 11:51, brettholcomb@bellsouth.net wrote about 
'Re: Re: [gentoo-user] raid/partition question':
> As an extension of this question since I'm working on setting up a
> system now.
>
> What is better to do with LVM2 after the RAID is created.  I am using
> EVMS also.
>
> 1.  Make all the RAID free space one big LVM2 container, then create
> LVM2 volumes on top of that container.
>
> or
>
> 2.  Parcel out the RAID free space into LVM2 containers, one for each
> partition (/, /user, etc.).

3. Neither.  See below.  First a discussion of the two options.

1. Is fine, but it forces you to choose a single raid level for all your 
data.  I like raid 0 for filesystems that are heavily used but can easily 
be reconstructed given time (/usr), and especially for filesystems that 
don't need to be reconstructed at all (/var/tmp); raid 5 or 6 for large 
filesystems that I don't want to lose (/home in particular); and raid 1 
for critical but small filesystems (/boot, maybe).
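As a sketch of how that mixing might look on a single pair of disks, the 
arrays can be built one partition pair at a time (device names are 
hypothetical; raid 5 or 6 would need at least three or four devices, so 
it is left out of this two-disk sketch):

```shell
# raid 1 for /boot: either disk can boot the machine.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
# raid 0 for easily-rebuilt data (/usr, /var/tmp): speed over safety.
mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sda2 /dev/sdb2
```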

2. Is a little silly, since LVM is designed so that you can treat multiple 
pvs as a single pool of data OR you can allocate from a certain pv -- 
whatever suits the task at hand.  So, it rarely makes sense to have 
multiple volume groups; you'd only do this when you want a fault-tolerant 
"air-gap" between two filesystems.
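That pooling looks roughly like this (array names are hypothetical):

```shell
# Two arrays become pvs, then members of a single vg.
pvcreate /dev/md1 /dev/md2
vgcreate vg0 /dev/md1 /dev/md2
```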

Failure of a single pv in a vg will require some damage control, maybe a 
little, maybe a lot, but having production encounter any problems just 
because development had a disk go bad is unacceptable in many 
environments.  So, you have a strong argument for separate vgs there.

3. My approach: While I don't use EVMS (the LVM tools are fine with me, at 
least for now) I have a software raid 0 and a hw raid 5 as separate pvs in 
a single vg.  I create and expand lvs on the pv that suits the data.  I 
also have a separate (not under lvm) hw raid 0 for swap and hw raid 6 for 
boot.  I may migrate my swap to LVM in the near future; during my initial 
setup, I feared it was unsafe.  Recent experience tells me that's (most 
likely) not the case.

For the uninitiated, you can specify the pv to place lv data on like so:
lvcreate -L <size> -n <name> <vg> <pv>
lvresize -L <size> <vg>/<lv> <pv>
The second command only affects where new extents are allocated; it will not 
move old extents.  Use pvmove for that.
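A concrete (hypothetical) example, placing a new lv on one pv and later 
relocating its extents:

```shell
# Create a 10G lv, allocating its extents from /dev/md2 only.
lvcreate -L 10G -n home vg0 /dev/md2
# Grow it by 5G; the new extents also come from /dev/md2.
lvresize -L +5G vg0/home /dev/md2
# Move the lv's existing extents off /dev/md2 onto /dev/md1.
pvmove -n home /dev/md2 /dev/md1
```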

-- 
Boyd Stephen Smith Jr.
bss03@volumehost.com
ICQ: 514984 YM/AIM: DaTwinkDaddy
-- 
gentoo-user@gentoo.org mailing list




* Re: Re: [gentoo-user] raid/partition question
@ 2006-02-20 18:45 brettholcomb
  0 siblings, 0 replies; 3+ messages in thread
From: brettholcomb @ 2006-02-20 18:45 UTC
  To: gentoo-user

Thank you very much.  I'll need to go back, reread this, and digest it some more.  I hadn't thought of doing multiple RAID types on the drives.  I have two drives and did RAID1 for /boot, and was going to RAID1 the rest.  However, I really want RAID0 for speed and capacity on some filesystems.  The swap comment is interesting, too.  I have two small partitions for swap, one on each drive, and I was going to parallel them per one of DRobbins' articles.
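(For reference, the parallel-swap effect that article describes comes from 
giving both swap partitions the same priority in /etc/fstab; the kernel 
then interleaves pages across them.  Device names here are illustrative:)

```
/dev/sda3   none   swap   sw,pri=1   0 0
/dev/sdb3   none   swap   sw,pri=1   0 0
```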



> 
> From: "Boyd Stephen Smith Jr." <bss03@volumehost.com>
> Date: 2006/02/20 Mon PM 01:30:59 EST
> To: gentoo-user@lists.gentoo.org
> Subject: Re: [gentoo-user] raid/partition question
> 
> On Monday 20 February 2006 11:51, brettholcomb@bellsouth.net wrote about 
> 'Re: Re: [gentoo-user] raid/partition question':
> > As an extension of this question since I'm working on setting up a
> > system now.
> >
> 
> 3. Neither.  See below.  First a discussion of the two options.
> 
> 1. Is fine, but it forces you to choose a single raid level for all your 
> data.  I like raid 0 for filesystems that are heavily used but can easily 
> be reconstructed given time (/usr), and especially for filesystems that 
> don't need to be reconstructed at all (/var/tmp); raid 5 or 6 for large 
> filesystems that I don't want to lose (/home in particular); and raid 1 
> for critical but small filesystems (/boot, maybe).
> 
> 2. Is a little silly, since LVM is designed so that you can treat multiple 
> pvs as a single pool of data OR you can allocate from a certain pv -- 
> whatever suits the task at hand.  So, it rarely makes sense to have 
> multiple volume groups; you'd only do this when you want a fault-tolerant 
> "air-gap" between two filesystems.
> 
> Failure of a single pv in a vg will require some damage control, maybe a 
> little, maybe a lot, but having production encounter any problems just 
> because development had a disk go bad is unacceptable in many 
> environments.  So, you have a strong argument for separate vgs there.
> 
> 3. My approach: While I don't use EVMS (the LVM tools are fine with me, at 
> least for now) I have a software raid 0 and a hw raid 5 as separate pvs in 
> a single vg.  I create and expand lvs on the pv that suits the data.  I 
> also have a separate (not under lvm) hw raid 0 for swap and hw raid 6 for 
> boot.  I may migrate my swap to LVM in the near future; during my initial 
> setup, I feared it was unsafe.  Recent experience tells me that's (most 
> likely) not the case.
> 
> For the uninitiated, you can specify the pv to place lv data on like so:
> lvcreate -L <size> -n <name> <vg> <pv>
> lvresize -L <size> <vg>/<lv> <pv>
> The second command only affects where new extents are allocated; it will not 
> move old extents.  Use pvmove for that.
> 
> -- 
> Boyd Stephen Smith Jr.
> bss03@volumehost.com
> ICQ: 514984 YM/AIM: DaTwinkDaddy
> -- 
> gentoo-user@gentoo.org mailing list
> 
> 

-- 
gentoo-user@gentoo.org mailing list



