On Wed, Mar 30, 2011 at 8:21 PM, Einux wrote:
> Thank you guys, you've been helpful :)
>
> On Wed, Mar 30, 2011 at 3:31 PM, Joost Roeleveld wrote:
>
>> On Wednesday 30 March 2011 07:28:40 Florian Philipp wrote:
>> > Am 30.03.2011 05:02, schrieb Einux:
>> > > Hi,
>> > >
>> > > I bought a new 1 TB hard drive which is exactly the same as my
>> > > previous hard drive, so I'm planning to set up a RAID-1 layout
>> > > (for redundancy). But here's the problem: I've already set up
>> > > LVM2 on the existing hard drive and I don't want to destroy the
>> > > existing LVM volume groups. I tried to google it, but I'm not
>> > > sure which keyword is the right one. Could you guys help me out?
>> > >
>> > > Thanks in advance :)
>> >
>> > 1. Create a degraded RAID1 with your new disk:
>> >    mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb
>> >
>> > 2. Partition the RAID device.
>> >
>> > 3. Add one of the partitions to your LVM volume group:
>> >    pvcreate /dev/sdb2
>> >    vgextend volume_group /dev/sdb2
>> >
>> > 4. Move everything from the old physical volume to the new PV:
>> >    pvmove /dev/sda3 /dev/sdb2
>> >
>> > 5. Remove the old, now-empty physical volume:
>> >    vgreduce volume_group /dev/sda3
>> >
>> > 6. Move everything else that is not on LVM to your new RAID. I guess
>> >    you need to go to single-user mode to do this safely.
>> >
>> > 7. Grow your RAID to also contain the old disk:
>> >    mdadm /dev/md0 -a /dev/sda
>> >
>> > No, I have not tested this, and you should double-check everything.
>> > No guarantees, etc.
>> >
>> > One warning, though: pvmove is known to cause problems from time to
>> > time: leaking memory, bogging systems down with infinite system load
>> > and so on. If it gives you trouble, you can abort it with
>> > `pvmove --abort` and try again later by calling `pvmove volume_group`
>> > (without a physical device specified) to resume it. It SHOULD
>> > survive system crashes.
>> > Trying another kernel version sometimes helps when pvmove gives
>> > you trouble.
>>
>> To avoid that with "large" moves, do the following:
>> # pvmove -i 600 /dev/sda3
>>
>> The "-i 600" means: only report every 10 minutes. It's the
>> "reporting" that causes the memory leak.
>>
>> Also, when you just want to "empty" one physical volume, it is not
>> necessary to specify the "target".
>> It's a good idea to mark the PVs on the existing drive
>> "non-allocatable". Then LVM won't try to move anything to that PV:
>> # pvchange -xn /dev/sda3
>>
>> The rest of the steps read correctly. It's how I did a similar
>> operation, but still double-check all the parameters and, when in
>> doubt, read the manual and/or ask on the list.
>>
>> --
>> Joost Roeleveld
>
> --
> Best Regards,
> Einux

I starred this in Gmail in case I ever need to do something like this. Thanks guys!
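Pulling Florian's steps and Joost's pvmove tips together, here is one possible sketch of the whole migration as a dry-run script. The device names (/dev/sdb1 as a partition on the new disk, /dev/sda3 as the old PV) and the volume group name vg0 are assumptions, as is building the array on a partition rather than the raw disk; substitute your own layout. In the spirit of the thread's warnings, DRY_RUN=echo makes the script only print each command for review instead of executing it.

```shell
#!/bin/sh
# Dry-run sketch of the RAID1/LVM migration discussed above.
# All device names and the VG name "vg0" are placeholders.
# Remove the echo only after double-checking every parameter.
DRY_RUN=echo

# 1. Build a degraded RAID1 with the second member marked "missing",
#    using a partition on the new disk.
$DRY_RUN mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1

# 2. Turn the RAID device into an LVM physical volume and add it
#    to the existing volume group.
$DRY_RUN pvcreate /dev/md0
$DRY_RUN vgextend vg0 /dev/md0

# 3. Mark the old PV non-allocatable so nothing new lands on it,
#    then drain it. "-i 600" reports only every 10 minutes (the
#    frequent reporting is what leaks memory), and omitting the
#    target lets LVM pick free space elsewhere in the VG.
$DRY_RUN pvchange -xn /dev/sda3
$DRY_RUN pvmove -i 600 /dev/sda3

# 4. Drop the now-empty PV from the group and, once any non-LVM data
#    on the old disk has been copied over, add it to the array so
#    the mirror can resync.
$DRY_RUN vgreduce vg0 /dev/sda3
$DRY_RUN mdadm /dev/md0 -a /dev/sda1
```

If pvmove misbehaves, the thread's recovery advice still applies: `pvmove --abort`, then a bare `pvmove vg0` later to resume.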