From: Duncan <1i5t5.duncan@cox.net>
To: gentoo-amd64@lists.gentoo.org
Subject: [gentoo-amd64] Re: conversion sda to lvm2 questions
Date: Fri, 12 Oct 2007 10:56:22 +0000 (UTC) [thread overview]
Message-ID: <pan.2007.10.12.10.56.21@cox.net> (raw)
In-Reply-To: d257c3560710111422ne24dda4gf8bfd77dbd07f556@mail.gmail.com
Beso <givemesugarr@gmail.com> posted
d257c3560710111422ne24dda4gf8bfd77dbd07f556@mail.gmail.com, excerpted
below, on Thu, 11 Oct 2007 23:22:28 +0200:
> i'd like to switch my laptop system to lvm2, but before doing that i'd
> like some hints.
I run LVM on RAID, so let's see... =8^)
> first, i'd like to know if there's a way of moving an existing gentoo
> installation on an hda disk over to the sda stack (i needed to do that
> cause it was the only thing that fixed a problem with my ata disk
> going only at udma-33 after kernel 2.6.20) with the sata-pata piix
> controller, which is still experimental. i've taken a look around but
> haven't actually seen a document explaining if it is possible to do
> that and how to do that.
I don't know that specific controller, but in general, if it's the
correct libata SATA/PATA driver, the conversion from using the old IDE
drivers is pretty straightforward. On the kernel side, it's just a
matter of selecting the libata driver instead of the ide driver.
Configuring your mount devices, you use /dev/sdX or /dev/srX, depending.
(I'm not actually sure which one IDE hard drives get -- I run SATA drives
but still have my DVD burners on PATA, using the libata drivers, and they
get /dev/srX, not /dev/sdX.)  Basically, try both from grub and see which
works.  Once you figure out which devices it's using (srX or sdX), you'll
need to set up your fstab using the correct ones.
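For illustration only (a sketch, not your real config -- the partition
number, filesystem type and options here are made up, so adjust them to
your own layout), a root partition that the old IDE driver called
/dev/hda3 would change in grub's menu.lst and in /etc/fstab roughly like
this:

  # old IDE driver naming
  kernel /boot/vmlinuz root=/dev/hda3
  /dev/hda3   /   ext3   noatime   0 1

  # same entries once the libata driver is handling the disk
  kernel /boot/vmlinuz root=/dev/sda3
  /dev/sda3   /   ext3   noatime   0 1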
Keep in mind that it's possible your device numbering will change as
well, particularly if you already had some SATA or SCSI devices. That's
the most common problem people run into -- their device numbers changing
around on them, because the kernel will load the drivers and test the
drives in a different order than it did previously. Another variation on
the same problem is that devices such as thumb drives and MP3 players
typically load as SCSI devices, so once you switch to SCSI for your
normal drives, you may find the device numbering changing depending on
whether your MP3 player/thumb drive is plugged in or not at boot.  One
typical solution to this problem, if you find you have it, is to use the
other symlinks udev creates automatically.  Here, I use
/dev/disk/by-label/xxxxx for my USB drive mount devices in fstab, which
lets me keep them straight and access them consistently regardless of
whether I have one or more than one plugged in at once.
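As an example (the label and mountpoint are hypothetical -- you'd set the
label yourself with something like e2label, and mount wherever you
prefer), such an fstab entry looks like:

  /dev/disk/by-label/usbstick  /mnt/usb  ext3  noatime,noauto,user  0 0

The entry then matches that particular drive by its filesystem label, no
matter which sdX letter the kernel happens to hand it at plug-in time.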
Finally, there's one more possible complication. SCSI drives are
normally limited to 15 partitions, while the old IDE drivers allowed 63.
If you run >15 partitions, better do some reconfiguring... but you're
already on the right track, as that's right where LVM comes in! =8^)
> second, i'd like to know if there's a need for a raid-enabled
> motherboard and more than one disk to go on lvm. i only have a 100gb
> disk that i'd like to convert to lvm with no raid.
LVM and RAID are two different layers. While it's common to run LVM on
top of RAID, it's not necessary at all. It's perfectly fine to run LVM
on a single drive, and in fact, a lot of folks run LVM not because it
helps manage RAID, but because it makes managing their volumes (think
generic for partitions) easier, regardless of /what/ they are on. =8^)
[I reordered this and the next question]
> i'd like to use it on amd64. is there any problem? i have seen around
> some problems with lvm and amd64, some of them marked as solved, so i'd
> like to know if there could be problems with this arch. thanks for your
> help.
I've had absolutely no issues at all with LVM on amd64, here. Once
configured, it has "just worked".  Note that I'm running LVM2 and NOT
running LVM1 compatibility mode, as I set up LVM long after LVM2 was the
recommended deployment. It's possible/likely that the amd64 problems
were long ago, with LVM1 configurations.
> and last, does it make sense to switch to lvm? i currently run into
> some problems with my root partition getting full, and i always have
> to watch the free space on it, so if i don't switch to raid i'll try
> to duplicate the partition onto a larger one.
As I mentioned, a lot of folks swear by LVM for managing all their
volumes, as once you get the hang of it, it's simply easier. They'd
CERTAINLY say it's worth it.
There is one caveat.  Unlike RAID, which you can run your root filesystem
off of (and /boot as well, with RAID-1), LVM requires userspace
configuration.  Therefore, you can't directly boot off of LVM.  I don't
believe there's a way to put /boot on LVM at all because to my knowledge,
neither grub nor lilo grok lvm (tho it's possible I'm wrong, I've just
never seen anything saying it's possible, neither have I had anyone
correct this statement of belief when I've made it in the past).
The root filesystem CAN be on LVM, but it requires running an initrd/
initramfs, and proper configuration thereof, in order to do so.  Here, I
configure my own kernel directly (without using genkernel or whatever),
and while I understand the general idea, I've never taken the time to
figure out initrd/initramfs as I've never needed it, and strongly prefer
continuing to omit that extra level of complexity from my boot process if
at all possible. Thus, while I have my root filesystem (and my emergency
backup image of same) on RAID (which the kernel can handle on its own
entirely automatically in the simple case, or with a couple simple kernel
command line parameters in more complex situations), I deliberately chose
NOT to put my root filesystem on LVM, thereby making it possible to
continue to boot the main root filesystem directly, with no initramfs/
initrd necessary, as there would be if the root filesystem were on LVM.
So... at minimum, you'll need to have a traditional /boot partition, and
depending on whether you want to run LVM in an initramfs/initrd or would
prefer to avoid that, as I did, you may want your root filesystem on a
traditional partition as well.
FWIW, my root filesystems (the main/working copy, and the backup copy...
next time I redo it, I'll make TWO backup copies, so I'll always have one
to fall back to if the worst happens and the system goes down while I'm
updating my backup copy) are 10 GB each. That includes all of what's
traditionally on /, plus most of /usr (but not /usr/local, /usr/src, or
/usr/portage), and most of /var (but not /var/log, and I keep mail and
the like on its own partition as well).
The idea with what I put on / here was that I wanted to keep everything
that portage touched on one partition, so it was always in sync.  This was
based on past experience with a dying hard drive in which my main
partition went out. I was able to boot to my backup root, but /usr was
on a separate partition, as was /var. /var of course contains the
installed package database, so what happened is that what the package
database said I had installed only partly matched what was really on
disk, depending on whether it was installed to /usr or to /. *That*
*was* *a* *BIG* *mess* to clean up!!! As a result, I decided from then
on, everything that portage installed had to be on the same partition as
its package database, so everything stayed in sync. With it all in sync,
if I had to boot the backup, it might not be current, but at least the
database would match what was actually installed since it was all the
same backup, and it would be *far* easier to /make/ current, avoiding the
problems with orphaned libraries and the like that I had for months as a
result of getting out of sync.
Of course, as an additional bonus, since I keep root backup volumes as
well, with the entire operational system on root and therefore on the
backups, if I ever have to boot to the backup, I have a fully configured
and operational system there, just as complete and ready to run as was my
main system the day I created the backup off of it.
So FWIW, here, as I said, 10 gig root volumes, designed so that everything
portage installs, along with its package database, is on root, and
according to df /:
Filesystem      Size  Used Avail Use% Mounted on
/dev/md_d1p1    9.6G  1.8G  7.8G  19% /
So with / configured as detailed above, at 19% usage, 10 gigs is plenty
and to spare. I could actually do it in 5 gig and still be at <50%
usage, but I wanted to be SURE I never ran into a full root filesystem
issue. I'd recommend a similar strategy for others as well, and assuming
one implements it, 10 gig should be plenty and to spare for current and
future system expansion for quite some time. (While I only have KDE
installed, even if one were to have GNOME AND KDE 3.5.x AND the upcoming
KDE 4.x AND XFCE AND..., even then, I find it difficult to see how a 10
gig / wouldn't be more than enough.)
Swap: If you hibernate, aka suspend to disk, using the swap partition
for your suspend image, I /think/ it has to be on kernel-only configured
volumes, thus not on LVM. At least with the mainline kernel suspend
(which I use, suspend2 may well be different), the default image is half
a gig. However, by writing a value (number of bytes, why they couldn't
have made it KB or MB I don't know...) to /sys/power/image_size, you can
change this if desired. If you make it at least the size of your memory
(but not larger than the size of the single swap partition it's going
to), you won't lose all your cache at suspend, and things will be more
responsive after resume. The cost is a somewhat longer suspend and
resume cycle, as it's writing out and reading in more data. Still, I've
found it well worth it, here. You can of course create additional swap
space on the LVM if you want to later, but can't use it as suspend image,
and the additional layer of processing will make it slightly less
efficient (at least in theory, but given that the bottleneck is the speed
of the disk, in practice it's going to be very slight, if even measurable).
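If you want to try the bigger image, something like this should do it
(a sketch; the 1 GiB figure assumes a machine with 1 gig of RAM, so
substitute your own memory size in bytes):

  # allow the suspend image to grow to ~1 GiB instead of the default
  echo $((1024*1024*1024)) > /sys/power/image_size
  # then suspend to disk as usual with the mainline swsusp
  echo disk > /sys/power/state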
So... assuming you deploy based on the above, here's what you will want
directly on your hard drive as partitions:
/boot   128 meg, half a gig, whatever...
/       \
rtbk1    }  These three, 10 gig each, again,
rtbk2   /   as partitions directly on the hard drive.
swap    Size as desired.  Particularly if you suspend to disk,
        you'll want this on a real partition, not on LVM.
LVM2    Large partition, probably the rest of the disk.
        You then create "logical volumes" managed with LVM2
        on top of this, and then mkfs/format the created logical
        volumes with whatever filesystems you find most
        appropriate, in whatever size you need.
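Creating the LVM2 layer on that last big partition then goes roughly like
this (purely a sketch: /dev/sda6, the volume group name vg0, and the
logical volume names and sizes are all made-up examples):

  pvcreate /dev/sda6              # mark the partition as an LVM "physical volume"
  vgcreate vg0 /dev/sda6          # create a volume group on it
  lvcreate -L 20G -n home vg0     # carve out some logical volumes...
  lvcreate -L 10G -n portage vg0
  mkfs.ext3 /dev/vg0/home         # ...and put filesystems on them
  mkfs.ext3 /dev/vg0/portage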
If you keep some extra space in the "volume group", you can then expand
any of the volumes within as necessary. When you do so, LVM simply
allocates the necessary additional space at the end of what's already
used, marking it as in use and assigning it to the logical volume you
want to expand.
If you run out of space on the drive, you can then add another drive (or
a partition on another drive, or a RAID device, or whatever; as long as
it's a block device, LVM should be able to use it), then tell LVM about it
and
that you want it added to your existing volume group, and you'll have
additional room to continue to expand into, all without worrying about
further partitioning or whatever.
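In command terms, that's roughly (same hypothetical names as above, and
note that depending on your kernel and e2fsprogs versions you may need to
unmount an ext3 volume before resizing it):

  lvextend -L +5G /dev/vg0/home   # give the LV five more gig from the VG
  resize2fs /dev/vg0/home         # grow the ext3 filesystem to match

  pvcreate /dev/sdb1              # out of room in the VG?  prep a new device
  vgextend vg0 /dev/sdb1          # and add it to the existing volume group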
Correspondingly, if you want to reorganize your logical volumes (LVs)
under LVM, moving the data to other volumes and deleting the freed LVs,
that's fine too. Simply do that. The space freed up by the deleted LV
is then free to be used by other logical volumes as necessary. Just tell
LVM you want to do it, and it does it. All you worry about is the
available space to expand LVs into, and LVM takes care of where they are
actually located within the space you've told it to manage as that volume
group (VG).
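Again as a sketch (the volume name is hypothetical), once you've copied
its data elsewhere, dropping a no-longer-needed LV is just:

  umount /mnt/oldstuff
  lvremove /dev/vg0/oldstuff      # its extents go back to the VG's free pool
  vgdisplay vg0                   # check the free/used extents in the VG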
THE BIG CAVEAT, however, is simply that with LVM managing all of it, it's
all too easy to just continue adding drives to your existing volume
groups, forgetting how old your first drive is, and that it'll eventually
wear out and fail. This is what's so nice about putting LVM on RAID,
since the redundancy of RAID allows drives to fail and be replaced, while
the LVM continues to live on and adapt to your ever changing and normally
ever growing data needs.
As long as you catch it before the failure, however, you can simply
create a new VG (volume group) on your new drive (or partition, or set of
drives or partitions, or RAID device(s), or other block device(s)), size
it as necessary to hold your existing data, and copy or move stuff over.
As with the above, you'll have to worry about any non-LVM partitions/
volumes/whatever, but the bulk of your data including all your user and/
or server data will be in the LVM, with the flexibility it provides, so
even with the limited number of non-LVM partitions I recommended above,
it'll still be vastly easier to manage upgrading drives than it would be
if you were handling all that data in individual partitions.
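FWIW, the LVM tools also offer an alternative to the copy-to-a-new-VG
approach I just described: if the old and new devices can both be hooked
up and in the same volume group at once, pvmove can migrate the data for
you.  Roughly (hypothetical device names again):

  pvcreate /dev/sdb1              # prep the new device
  vgextend vg0 /dev/sdb1          # add it to the existing VG
  pvmove /dev/sda6                # shift everything off the old PV
  vgreduce vg0 /dev/sda6          # then drop the old PV from the VG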
So the big thing is: just because it's all on LVM now, and easy to expand
onto a new drive, doesn't mean you can forget about the age of your old
drive.  If you simply keep expanding, the old drive will eventually fail,
taking all the data on it, and your working system if you've not made
proper arrangements, with it.  Remember that (or put it on appropriate
RAID so you have its redundancy backing up the LVM) and LVM can certainly
be worth the trouble of learning how it works and the initial deployment,
yes, even on single disks.
--
Duncan - List replies preferred. No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master." Richard Stallman
--
gentoo-amd64@gentoo.org mailing list