* [gentoo-user] How to get raid @ 2012-01-04 2:57 Jeff Cranmer 2012-01-04 4:21 ` Paul Hartman ` (2 more replies) 0 siblings, 3 replies; 30+ messages in thread From: Jeff Cranmer @ 2012-01-04 2:57 UTC (permalink / raw To: gentoo-user Hi all, I have recently built a new system, running Gentoo on a Sabertooth 990FX motherboard. The board has a raid controller on which I'm running a 120GB solid state drive for the OS (Raid 0) and a set of three 1.5TB drives which were previously running as a RAID5 array. I can see the sda 120GB drive and have installed the operating system on that. I can't see one device for the three disk RAID5 array, even though the RAID BIOS reports it as a healthy 3TB disk. Instead I see three separate devices, sdb, sdc and sdd What do I need to do to mount the 3TB RAID disk? I'm running genkernel, and compiled it with genkernel --dmraid all. It should already have data on it, if I can only get gentoo to recognise it. I can see the RAID controller when I use lspci 00:11.0 RAID bus controller: ATI Technologies Inc SB7x0,SB8x0,SB9x0 SATA Controller [RAID5 mode] (rev 40) One possible clue may be in dmesg, where I get the error device-mapper: table: 253:0: raid45: unknown target type Any assistance gratefully received. Thanks Jeff ^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [gentoo-user] How to get raid 2012-01-04 2:57 [gentoo-user] How to get raid Jeff Cranmer @ 2012-01-04 4:21 ` Paul Hartman 2012-01-05 0:37 ` Jeff Cranmer 2012-01-04 13:35 ` Alexander Puchmayr 2012-01-04 13:39 ` Volker Armin Hemmann 2 siblings, 1 reply; 30+ messages in thread From: Paul Hartman @ 2012-01-04 4:21 UTC (permalink / raw To: gentoo-user On 01/03/2012 08:57 PM, Jeff Cranmer wrote: > device-mapper: table: 253:0: raid45: unknown target type Maybe a dumb question, but is the raid45 module enabled in your kernel config? ^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [gentoo-user] How to get raid 2012-01-04 4:21 ` Paul Hartman @ 2012-01-05 0:37 ` Jeff Cranmer 0 siblings, 0 replies; 30+ messages in thread From: Jeff Cranmer @ 2012-01-05 0:37 UTC (permalink / raw To: gentoo-user On Tue, 2012-01-03 at 22:21 -0600, Paul Hartman wrote: > On 01/03/2012 08:57 PM, Jeff Cranmer wrote: > > device-mapper: table: 253:0: raid45: unknown target type > > Maybe a dumb question, but is the raid45 module enabled in your kernel > config? > genkernel --dmraid all Not sure how to check those details in genkernel. ^ permalink raw reply [flat|nested] 30+ messages in thread
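For anyone wanting to answer Paul's question on a genkernel-built kernel: the quickest check is to look at the config the running kernel was actually built with. A rough sketch (it assumes CONFIG_IKCONFIG_PROC is enabled, and that genkernel kept its config under /etc/kernels/ as it normally does):

    zgrep -i raid /proc/config.gz              # only works if the kernel exposes its config
    grep -i raid /etc/kernels/kernel-config-*  # configs genkernel saves after a build

As far as I know, the device-mapper "raid45" target was an out-of-tree patch rather than part of the mainline kernel, which would explain dmesg reporting it as an unknown target type regardless of the config.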
* Re: [gentoo-user] How to get raid 2012-01-04 2:57 [gentoo-user] How to get raid Jeff Cranmer 2012-01-04 4:21 ` Paul Hartman @ 2012-01-04 13:35 ` Alexander Puchmayr 2012-01-05 1:14 ` Jeff Cranmer 2012-01-04 13:39 ` Volker Armin Hemmann 2 siblings, 1 reply; 30+ messages in thread From: Alexander Puchmayr @ 2012-01-04 13:35 UTC (permalink / raw To: gentoo-user On Wednesday 04 January 2012 11:57:18 Jeff Cranmer wrote: > Hi all, > > I have recently built a new system, running Gentoo on a Sabertooth 990FX > motherboard. The board has a raid controller on which I'm running a > 120GB solid state drive for the OS (Raid 0) and a set of three 1.5TB > drives which were previously running as a RAID5 array. > > I can see the sda 120GB drive and have installed the operating system on > that. I can't see one device for the three disk RAID5 array, even > though the RAID BIOS reports it as a healthy 3TB disk. Instead I see > three separate devices, sdb, sdc and sdd > > What do I need to do to mount the 3TB RAID disk? I'm running genkernel, > and compiled it with genkernel --dmraid all. It should already have > data on it, if I can only get gentoo to recognise it. > > I can see the RAID controller when I use lspci > > 00:11.0 RAID bus controller: ATI Technologies Inc SB7x0,SB8x0,SB9x0 SATA > Controller [RAID5 mode] (rev 40) > > One possible clue may be in dmesg, where I get the error > device-mapper: table: 253:0: raid45: unknown target type > The first question is: What type of raid are you using?
a) Software-Raid created with mdadm & co
b) Hardware-Controller based raid.
While in the first case you see all individual disks with their partitions and a /dev/mdX entry that actually contains the raid filesystem, the second one shows only a /dev/sdX holding the final raid drive. Additionally, for the hardware based raid, you'll need a driver for the controller that supports the raid5. I think this is the configuration you're trying to run, since you mentioned that you created your raid in the RAID BIOS. I'm not sure (I've never tried this) whether there is a driver for Linux supporting raid modes on board-embedded HW raid controllers. Alex
^ permalink raw reply [flat|nested] 30+ messages in thread
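A quick way to check which of the two cases applies on the running system (a sketch; dmraid here is the sys-fs/dmraid userspace tool and is assumed to be installed):

    cat /proc/mdstat    # kernel/mdadm software arrays show up here
    dmraid -r           # lists any BIOS/fakeraid metadata found on the disks
    ls /dev/mapper/     # an activated fakeraid set would appear here as a device-mapper node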
* Re: [gentoo-user] How to get raid 2012-01-04 13:35 ` Alexander Puchmayr @ 2012-01-05 1:14 ` Jeff Cranmer 0 siblings, 0 replies; 30+ messages in thread From: Jeff Cranmer @ 2012-01-05 1:14 UTC (permalink / raw To: gentoo-user I was using a hardware-based 'fakeRAID'. It used to work on my old OpenSuse install, but that broke and I installed gentoo instead. I wasn't able to get that to work, and then the motherboard died, so I built a new system and reused the 3-drive RAID5 array. > > While in the first case you see all individual disks with their partitions and > a /dev/mdX entry that actually contains the raid failsystem, the second one > shows only a /dev/sdX holding the final raid drive. > > Additionally, for the hardware based raid, you'll need a driver for the > controller that supports the raid5. I think this is the configuration you're > trying to run, since you mentioned that you created your raid in the RAID > BIOS. > > I'm not sure (I've never tried this) whether there is a driver for Linux > supporting raid modes on board-embedded HW raid controllers. > > Alex > > > > > ^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [gentoo-user] How to get raid 2012-01-04 2:57 [gentoo-user] How to get raid Jeff Cranmer 2012-01-04 4:21 ` Paul Hartman 2012-01-04 13:35 ` Alexander Puchmayr @ 2012-01-04 13:39 ` Volker Armin Hemmann 2012-01-05 2:28 ` Jeff Cranmer 2 siblings, 1 reply; 30+ messages in thread From: Volker Armin Hemmann @ 2012-01-04 13:39 UTC (permalink / raw To: gentoo-user Am Dienstag, 3. Januar 2012, 21:57:18 schrieb Jeff Cranmer: > Hi all, > > I have recently built a new system, running Gentoo on a Sabertooth 990FX > motherboard. The board has a raid controller on which I'm running a > 120GB solid state drive for the OS (Raid 0) and a set of three 1.5TB > drives which were previously running as a RAID5 array. no, it does not have a raid controller. It is bios raid. AKA fake raid. You will have less trouble if you stop using it. google for mdadm. There are some very nice howto's. ^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [gentoo-user] How to get raid 2012-01-04 13:39 ` Volker Armin Hemmann @ 2012-01-05 2:28 ` Jeff Cranmer 2012-01-05 3:01 ` Volker Armin Hemmann 0 siblings, 1 reply; 30+ messages in thread From: Jeff Cranmer @ 2012-01-05 2:28 UTC (permalink / raw To: gentoo-user On Wed, 2012-01-04 at 14:39 +0100, Volker Armin Hemmann wrote: > Am Dienstag, 3. Januar 2012, 21:57:18 schrieb Jeff Cranmer: > > Hi all, > > > > I have recently built a new system, running Gentoo on a Sabertooth 990FX > > motherboard. The board has a raid controller on which I'm running a > > 120GB solid state drive for the OS (Raid 0) and a set of three 1.5TB > > drives which were previously running as a RAID5 array. > > no, it does not have a raid controller. It is bios raid. AKA fake raid. You > will have less trouble if you stop using it. > > google for mdadm. There are some very nice howto's. > Not sure I'd agree with you about the howtos being nice. They mostly deal with trying to boot from a RAID array (don't want that, as I have my OS on a non-RAID 120GB SSD). They're also contradictory, with some saying I need dmraid, and some saying not. Most seem to make no more than a passing nod towards genkernel. So, given that from the links that I've found, here's my starting set of questions. In /etc/genkernel.conf, which options do I need to enable. One guide suggested the following settings DMRAID="no" MDADM="yes" MDADM_CONFIG="/etc/mdadm.conf" MDADM_VER="3.1.4" If this is correct, does it matter that my mdadm version which I emerged is 3.1.5? The tarball in /var/cache/genkernel/src is mdadm-3.1.4.tar.bz2 Should I copy mdadm-3.1.5.tar.bz2 from /etc/portage/distfiles into there and rebuild genkernel. Do I need the dodmraid option compiled into genkernel, or is that only for fakeraid, or situations where I need to boot from a raid partition? Do I need the dodmraid option set true in the grub.conf file, or is 'domdadm' more appropriate? Jeff ^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [gentoo-user] How to get raid 2012-01-05 2:28 ` Jeff Cranmer @ 2012-01-05 3:01 ` Volker Armin Hemmann 2012-01-05 3:45 ` Jeff Cranmer 0 siblings, 1 reply; 30+ messages in thread From: Volker Armin Hemmann @ 2012-01-05 3:01 UTC (permalink / raw To: gentoo-user Am Mittwoch, 4. Januar 2012, 21:28:32 schrieb Jeff Cranmer: > On Wed, 2012-01-04 at 14:39 +0100, Volker Armin Hemmann wrote: > > Am Dienstag, 3. Januar 2012, 21:57:18 schrieb Jeff Cranmer: > > > Hi all, > > > > > > I have recently built a new system, running Gentoo on a Sabertooth 990FX > > > motherboard. The board has a raid controller on which I'm running a > > > 120GB solid state drive for the OS (Raid 0) and a set of three 1.5TB > > > drives which were previously running as a RAID5 array. > > > > no, it does not have a raid controller. It is bios raid. AKA fake raid. > > You > > will have less trouble if you stop using it. > > > > google for mdadm. There are some very nice howto's. > > Not sure I'd agree with you about the howtos being nice. They mostly > deal with trying to boot from a RAID array (don't want that, as I have > my OS on a non-RAID 120GB SSD). They're also contradictory, with some > saying I need dmraid, and some saying not. Most seem to make no more > than a passing nod towards genkernel. the short one: partition one disk with (c)fdisk. Use sfdisk to transfer the partition scheme to the other disks. Then run
mdadm --create /dev/md0 --level=<whatever you want> --raid-devices=<the number of devices> /dev/sdXY /dev/sdZY ...
mdadm --detail --scan >> /etc/mdadm.conf
done > > So, given that from the links that I've found, here's my starting set of > questions. > > In /etc/genkernel.conf, which options do I need to enable. > One guide suggested the following settings > DMRAID="no" > MDADM="yes" > MDADM_CONFIG="/etc/mdadm.conf" > MDADM_VER="3.1.4" there is a reason why I never ever touch genkernel. you should forget that crap. You don't need to copy around anything. If your root is not on some fancy setup, you don't need initramfs. Just make a nice kernel, put it in /boot. Done. grub.conf:
kernel /vmlinuz root=/dev/sda1 nmi_watchdog=0
and you are fine. Have the raids assembled by a) the kernel (in that case you have to tell mdadm that at creation time, man mdadm is your friend) or b) the mdadm init script. Don't use fakeraid. Set the bios to ahci and be done with this. The relevant part of my kernel config, for example:
<*> RAID support
[*]   Autodetect RAID arrays during kernel boot
< >   Linear (append) mode
< >   RAID-0 (striping) mode
<*>   RAID-1 (mirroring) mode
< >   RAID-10 (mirrored striping) mode
<*>   RAID-4/RAID-5/RAID-6 mode
[ ]     RAID-4/RAID-5/RAID-6 Multicore processing (EXPERIMENTAL)
< >   Multipath I/O support
< >   Faulty test module for MD
< >   Device mapper support
as you can see, no dm support in my kernel. Now look what I got...
cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md5 : active raid1 sdg2[2] sdf2[1]
      830278202 blocks super 1.2 [2/2] [UU]
md4 : active raid1 sdf1[1] sdg1[2]
      146479542 blocks super 1.2 [2/2] [UU]
md124 : active raid1 sdc1[2] sdd1[1] sdb1[0]
      64128 blocks [3/3] [UUU]
md1 : active raid5 sdc3[2] sdd3[1] sdb3[0]
      78123904 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
md2 : active raid5 sdc5[2] sdd5[1] sdb5[0]
      39069824 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
md127 : active raid5 sdc6[2] sdd6[1] sdb6[0]
      843813888 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
The numbers were once nicely 0-4 but some update fucked that up. No big deal - I mount by UUID. Something I strongly recommend. -- #163933
^ permalink raw reply [flat|nested] 30+ messages in thread
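Pulling those steps together with the device names used elsewhere in this thread -- a sketch only, and one that destroys whatever is currently on sdb, sdc and sdd:

    fdisk /dev/sdb                         # one partition, type fd (Linux raid autodetect)
    sfdisk -d /dev/sdb | sfdisk /dev/sdc   # copy the partition table to the other members
    sfdisk -d /dev/sdb | sfdisk /dev/sdd
    mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
    mdadm --detail --scan >> /etc/mdadm.conf   # record the array so it can be assembled later
    mkfs.xfs /dev/md0
    blkid /dev/md0                         # note the UUID if mounting by UUID in fstab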
* Re: [gentoo-user] How to get raid 2012-01-05 3:01 ` Volker Armin Hemmann @ 2012-01-05 3:45 ` Jeff Cranmer 2012-01-05 8:22 ` Hinnerk van Bruinehsen 2012-01-05 10:22 ` Volker Armin Hemmann 0 siblings, 2 replies; 30+ messages in thread From: Jeff Cranmer @ 2012-01-05 3:45 UTC (permalink / raw To: gentoo-user On Thu, 2012-01-05 at 04:01 +0100, Volker Armin Hemmann wrote: > the short one: > > partition one disk with (c)fdisk. Use sfdisk to transfer the partition scheme > to the other disks. > > run mdadm --create /dev/md0 level=whatever you want --raid- > devices=thenumberofdevices /dev/sdXY /dev/sdZY ... > > mdadm --detail --scan >> /etc/mdadm.conf > > done > > OK, but there is active data on the disks, so I don't want to partition them. They should already partitioned, and running fdisk will erase the data. If I run mdadm --create /dev/md0 level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd, will that erase data already on the disks? Prior to running this command, there is no /dev/md entry. Is this correct? Looking further by using fdisk, it appears that sdc has a linux partition on sdc1 starting at sector 34, and a GPT partition of size 0+ at /dev/sdc4, sector 0. Nothing else is on that disk (no sdc2 or sdc3). sdd and sdb report invalid partition table flags and do not appear to have active partitions. Does this make sense? Is it possible that I ordered the disks incorrectly when I installed them, and by simply swapping disks b and c at the raid I can get things to start making sense? Is there an order to a set of RAID5 disks? I thought any two of three RAID5 disks could be recovered, regardless of which one dies? > there is a reason why I never ever touch genkernel. > > you should forget that crap. You don't need to copy around anything. If your > root is not on some fancy setup, you don't need initramfs. > > Just make a nice kernel, put it in /boot. Done. > OK. The OS disk is non-RAID (120GB SSD), so I don't need any fancy options in my kernel. All the domdadm and dodmraid stuff is needed just when your OS disk is raided. Correct? Thanks Jeff ^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [gentoo-user] How to get raid 2012-01-05 3:45 ` Jeff Cranmer @ 2012-01-05 8:22 ` Hinnerk van Bruinehsen 0 siblings, 0 replies; 30+ messages in thread From: Hinnerk van Bruinehsen @ 2012-01-05 8:22 UTC (permalink / raw To: gentoo-user On 05.01.2012 04:45, Jeff Cranmer wrote: > On Thu, 2012-01-05 at 04:01 +0100, Volker Armin Hemmann wrote: > >> the short one: >> >> partition one disk with (c)fdisk. Use sfdisk to transfer the >> partition scheme to the other disks. >> >> run mdadm --create /dev/md0 level=whatever you want --raid- >> devices=thenumberofdevices /dev/sdXY /dev/sdZY ... >> >> mdadm --detail --scan >> /etc/mdadm.conf >> >> done >> >> > OK, but there is active data on the disks, so I don't want to > partition them. They should already partitioned, and running fdisk > will erase the data. > > If I run mdadm --create /dev/md0 level=5 --raid-devices=3 /dev/sdb > /dev/sdc /dev/sdd, will that erase data already on the disks? > > Prior to running this command, there is no /dev/md entry. Is this > correct? > > Looking further by using fdisk, it appears that sdc has a linux > partition on sdc1 starting at sector 34, and a GPT partition of > size 0+ at /dev/sdc4, sector 0. Nothing else is on that disk (no > sdc2 or sdc3). > > sdd and sdb report invalid partition table flags and do not appear > to have active partitions. Does this make sense? > > Is it possible that I ordered the disks incorrectly when I > installed them, and by simply swapping disks b and c at the raid I > can get things to start making sense? Is there an order to a set > of RAID5 disks? I thought any two of three RAID5 disks could be > recovered, regardless of which one dies? > >> there is a reason why I never ever touch genkernel. >> >> you should forget that crap. You don't need to copy around >> anything. If your root is not on some fancy setup, you don't need >> initramfs. >> >> Just make a nice kernel, put it in /boot. Done. >> > OK. The OS disk is non-RAID (120GB SSD), so I don't need any > fancy options in my kernel. All the domdadm and dodmraid stuff is > needed just when your OS disk is raided. Correct? > > Thanks > > Jeff If you used a hardware-based RAID before, you should do nothing with mdadm or fdisk until you have a working copy of your data. If I recall correctly, you said you used that RAID array on a different mobo before. Then the mobo died and you just want to reuse the array. Correct? If that's correct you may be in serious trouble, because afaik there is no real standard for how a hardware RAID is created. If the old RAID controller/firmware isn't available anymore you could try to find an identical one. There is even the possibility that your tries with the new controller/mobo have already damaged the array. That is - by the way - one very good reason to use a software-based solution like mdadm: you aren't restricted to specific hardware...
^ permalink raw reply [flat|nested] 30+ messages in thread
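For the "working copy of your data" step, a minimal sketch of imaging one member disk before experimenting any further (it assumes a spare disk with enough space mounted at /mnt/backup; ddrescue is the optional extra, from sys-fs/ddrescue):

    dd if=/dev/sdb of=/mnt/backup/sdb.img bs=4M conv=noerror,sync
    # or, friendlier to a disk that may be failing:
    ddrescue /dev/sdb /mnt/backup/sdb.img /mnt/backup/sdb.map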
* Re: [gentoo-user] How to get raid 2012-01-05 3:45 ` Jeff Cranmer 2012-01-05 8:22 ` Hinnerk van Bruinehsen @ 2012-01-05 10:22 ` Volker Armin Hemmann 2012-01-06 1:13 ` Jeff Cranmer 1 sibling, 1 reply; 30+ messages in thread From: Volker Armin Hemmann @ 2012-01-05 10:22 UTC (permalink / raw To: gentoo-user Am Mittwoch, 4. Januar 2012, 22:45:45 schrieb Jeff Cranmer: > On Thu, 2012-01-05 at 04:01 +0100, Volker Armin Hemmann wrote: > > the short one: > > > > partition one disk with (c)fdisk. Use sfdisk to transfer the partition > > scheme to the other disks. > > > > run mdadm --create /dev/md0 level=whatever you want --raid- > > devices=thenumberofdevices /dev/sdXY /dev/sdZY ... > > > > mdadm --detail --scan >> /etc/mdadm.conf > > > > done > > OK, but there is active data on the disks, so I don't want to partition > them. They should already partitioned, and running fdisk will erase the > data. first rule: always mount a scratch monkey In your case: always backup data. There is a way to preserve the data on one disk, create a raid5 with one disk missing, then copying the data onto the raid and add the disk. But that is high risk stuff. > > If I run mdadm --create /dev/md0 level=5 > --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd, will that erase data > already on the disks? > > Prior to running this command, there is no /dev/md entry. Is this > correct? yes. You might have to create the nodes with mknod - my memory is sketchy there. > Looking further by using fdisk, it appears that sdc has a linux > partition on sdc1 starting at sector 34, and a GPT partition of size 0+ > at /dev/sdc4, sector 0. Nothing else is on that disk (no sdc2 or sdc3). > > sdd and sdb report invalid partition table flags and do not appear to > have active partitions. Does this make sense? if you used fakeraid before, yes. But that means: without the original fakeraid everything on that disks is inaccessible... and you need to partition them. > > Is it possible that I ordered the disks incorrectly when I installed > them, and by simply swapping disks b and c at the raid I can get things > to start making sense? Is there an order to a set of RAID5 disks? I > thought any two of three RAID5 disks could be recovered, regardless of > which one dies? no. First, the order of the disks is irrelevant, but the most important thing: with Raid5 ONE disk out of an array might fail. No matter how many disks - two fail and everything is lost. > > > there is a reason why I never ever touch genkernel. > > > > you should forget that crap. You don't need to copy around anything. If > > your root is not on some fancy setup, you don't need initramfs. > > > > Just make a nice kernel, put it in /boot. Done. > > OK. The OS disk is non-RAID (120GB SSD), so I don't need any fancy > options in my kernel. All the domdadm and dodmraid stuff is needed just > when your OS disk is raided. Correct? yes -- #163933 ^ permalink raw reply [flat|nested] 30+ messages in thread
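If the /dev/md0 node really is missing (udev normally creates it on demand), creating it by hand would look roughly like:

    mknod /dev/md0 b 9 0    # block device, major 9 (md), minor 0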
* Re: [gentoo-user] How to get raid 2012-01-05 10:22 ` Volker Armin Hemmann @ 2012-01-06 1:13 ` Jeff Cranmer 2012-01-06 1:41 ` Volker Armin Hemmann 2012-01-06 1:42 ` Volker Armin Hemmann 0 siblings, 2 replies; 30+ messages in thread From: Jeff Cranmer @ 2012-01-06 1:13 UTC (permalink / raw To: gentoo-user On Thu, 2012-01-05 at 11:22 +0100, Volker Armin Hemmann wrote: > Am Mittwoch, 4. Januar 2012, 22:45:45 schrieb Jeff Cranmer: > > On Thu, 2012-01-05 at 04:01 +0100, Volker Armin Hemmann wrote: > > > the short one: > > > > > > partition one disk with (c)fdisk. Use sfdisk to transfer the partition > > > scheme to the other disks. > > > > > > run mdadm --create /dev/md0 level=whatever you want --raid- > > > devices=thenumberofdevices /dev/sdXY /dev/sdZY ... > > > > > > mdadm --detail --scan >> /etc/mdadm.conf > > > > > > done > > > > OK, but there is active data on the disks, so I don't want to partition > > them. They should already partitioned, and running fdisk will erase the > > data. > > first rule: > > always mount a scratch monkey > > In your case: always backup data. > No big deal. 99.9% of the data is backed up. I was just hoping to recover the last 0.1% (picky huh?<g>). Now that I know one of the main drawbacks of fakeraid, I think I'll move ahead with software RAID. OK, so I've partitioned the first disk as a single linux partition (/dev/sdb1, ID 83, Linux). How do I use sfdisk to transfer that partition scheme to the other disks? Is it not sufficient just to partition the other two disks in the same way as the first? Jeff ^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [gentoo-user] How to get raid 2012-01-06 1:13 ` Jeff Cranmer @ 2012-01-06 1:41 ` Volker Armin Hemmann 2012-01-06 1:42 ` Volker Armin Hemmann 1 sibling, 0 replies; 30+ messages in thread From: Volker Armin Hemmann @ 2012-01-06 1:41 UTC (permalink / raw To: gentoo-user Am Donnerstag, 5. Januar 2012, 20:13:04 schrieb Jeff Cranmer: > On Thu, 2012-01-05 at 11:22 +0100, Volker Armin Hemmann wrote: > > Am Mittwoch, 4. Januar 2012, 22:45:45 schrieb Jeff Cranmer: > > > On Thu, 2012-01-05 at 04:01 +0100, Volker Armin Hemmann wrote: > > > > the short one: > > > > > > > > partition one disk with (c)fdisk. Use sfdisk to transfer the partition > > > > scheme to the other disks. > > > > > > > > run mdadm --create /dev/md0 level=whatever you want --raid- > > > > devices=thenumberofdevices /dev/sdXY /dev/sdZY ... > > > > > > > > mdadm --detail --scan >> /etc/mdadm.conf > > > > > > > > done > > > > > > OK, but there is active data on the disks, so I don't want to partition > > > them. They should already partitioned, and running fdisk will erase the > > > data. > > > > first rule: > > > > always mount a scratch monkey > > > > In your case: always backup data. > > No big deal. > 99.9% of the data is backed up. I was just hoping to recover the last > 0.1% (picky huh?<g>). Now that I know one of the main drawbacks of > fakeraid, I think I'll move ahead with software RAID. > > OK, so I've partitioned the first disk as a single linux partition > (/dev/sdb1, ID 83, Linux). if you want to use kernel autodetection (nice but on the way out) you should change the type. > How do I use sfdisk to transfer that partition scheme to the other > disks? Is it not sufficient just to partition the other two disks in > the same way as the first? sfdisk -d /dev/sda | sfdisk /dev/sdb is safe. -- #163933 ^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [gentoo-user] How to get raid 2012-01-06 1:13 ` Jeff Cranmer 2012-01-06 1:41 ` Volker Armin Hemmann @ 2012-01-06 1:42 ` Volker Armin Hemmann 2012-01-06 4:44 ` Jeff Cranmer 1 sibling, 1 reply; 30+ messages in thread From: Volker Armin Hemmann @ 2012-01-06 1:42 UTC (permalink / raw To: gentoo-user Am Donnerstag, 5. Januar 2012, 20:13:04 schrieb Jeff Cranmer: > On Thu, 2012-01-05 at 11:22 +0100, Volker Armin Hemmann wrote: > > Am Mittwoch, 4. Januar 2012, 22:45:45 schrieb Jeff Cranmer: > > > On Thu, 2012-01-05 at 04:01 +0100, Volker Armin Hemmann wrote: > > > > the short one: > > > > > > > > partition one disk with (c)fdisk. Use sfdisk to transfer the partition > > > > scheme to the other disks. > > > > > > > > run mdadm --create /dev/md0 level=whatever you want --raid- > > > > devices=thenumberofdevices /dev/sdXY /dev/sdZY ... > > > > > > > > mdadm --detail --scan >> /etc/mdadm.conf > > > > > > > > done > > > > > > OK, but there is active data on the disks, so I don't want to partition > > > them. They should already partitioned, and running fdisk will erase the > > > data. > > > > first rule: > > > > always mount a scratch monkey > > > > In your case: always backup data. > > No big deal. > 99.9% of the data is backed up. I was just hoping to recover the last > 0.1% (picky huh?<g>). Now that I know one of the main drawbacks of > fakeraid, I think I'll move ahead with software RAID. > > OK, so I've partitioned the first disk as a single linux partition > (/dev/sdb1, ID 83, Linux). > How do I use sfdisk to transfer that partition scheme to the other > disks? Is it not sufficient just to partition the other two disks in > the same way as the first? > > Jeff in your case sfdisk -d /dev/sdb | sfdisk /dev/sdc of course ;) -- #163933 ^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [gentoo-user] How to get raid 2012-01-06 1:42 ` Volker Armin Hemmann @ 2012-01-06 4:44 ` Jeff Cranmer 2012-01-06 12:36 ` Volker Armin Hemmann 0 siblings, 1 reply; 30+ messages in thread From: Jeff Cranmer @ 2012-01-06 4:44 UTC (permalink / raw To: gentoo-user On Fri, 2012-01-06 at 02:42 +0100, Volker Armin Hemmann wrote: > in your case > > sfdisk -d /dev/sdb | sfdisk /dev/sdc > > of course ;) > One of the disks had a GPT partition table which I was eventually able to get rid of with gdisk (emerge -av gptfdisk). I'm close. I had a 2.7TiB RAID5 array, comprising three 1.5TB disks, under genkernel, created using the commands
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
mdadm --detail --scan >> /etc/mdadm.conf
I formatted this array as an xfs filesystem. After reboot, however, /dev/md0 is still there, but I get a 'can't read superblock' error. What am I missing?
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [gentoo-user] How to get raid 2012-01-06 4:44 ` Jeff Cranmer @ 2012-01-06 12:36 ` Volker Armin Hemmann 2012-01-06 23:04 ` Jeff Cranmer 0 siblings, 1 reply; 30+ messages in thread From: Volker Armin Hemmann @ 2012-01-06 12:36 UTC (permalink / raw To: gentoo-user Am Donnerstag, 5. Januar 2012, 23:44:10 schrieb Jeff Cranmer: > On Fri, 2012-01-06 at 02:42 +0100, Volker Armin Hemmann wrote: > > in your case > > > > sfdisk -d /dev/sdb | sfdisk /dev/sdc > > > > of course ;) > > One of the disks had a GPT partition table which I was eventually able > to get rid of with gdisk (emerge -av gptfdisk). > > I'm close. I had a 2.7TiB RAID5 array using genkernal, comprising three > 1.5TB disks, using the commands > mdadm --create /dev/md0 --level=5 > --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1 > > mdadm --detail --scan >> /etc/mdadm.conf > > I formatted this array as an xfs filesystem. > > After reboot, however, /dev/md0 is still there, but I get a 'can't read > superblock' error. > > What am I missing? have you set the type to linux raid autodetect? have you tried mdadm --assemble? -- #163933 ^ permalink raw reply [flat|nested] 30+ messages in thread
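Spelled out, the two suggestions look roughly like this (device names as used earlier in the thread):

    # in fdisk, 't' changes a partition's type; fd is Linux raid autodetect
    mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1
    # or let mdadm find the members from /etc/mdadm.conf and the superblocks:
    mdadm --assemble --scan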
* Re: [gentoo-user] How to get raid 2012-01-06 12:36 ` Volker Armin Hemmann @ 2012-01-06 23:04 ` Jeff Cranmer 2012-01-07 15:11 ` Jeff Cranmer 0 siblings, 1 reply; 30+ messages in thread From: Jeff Cranmer @ 2012-01-06 23:04 UTC (permalink / raw To: gentoo-user On Fri, 2012-01-06 at 13:36 +0100, Volker Armin Hemmann wrote: > Am Donnerstag, 5. Januar 2012, 23:44:10 schrieb Jeff Cranmer: > > On Fri, 2012-01-06 at 02:42 +0100, Volker Armin Hemmann wrote: > > > in your case > > > > > > sfdisk -d /dev/sdb | sfdisk /dev/sdc > > > > > > of course ;) > > > > One of the disks had a GPT partition table which I was eventually able > > to get rid of with gdisk (emerge -av gptfdisk). > > > > I'm close. I had a 2.7TiB RAID5 array using genkernal, comprising three > > 1.5TB disks, using the commands > > mdadm --create /dev/md0 --level=5 > > --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1 > > > > mdadm --detail --scan >> /etc/mdadm.conf > > > > I formatted this array as an xfs filesystem. > > > > After reboot, however, /dev/md0 is still there, but I get a 'can't read > > superblock' error. > > > > What am I missing? > > have you set the type to linux raid autodetect? > > have you tried mdadm --assemble? > mdadm --assemble /dev/md0 didn't make any difference. Where do I set the type? Thanks Jeff ^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [gentoo-user] How to get raid 2012-01-06 23:04 ` Jeff Cranmer @ 2012-01-07 15:11 ` Jeff Cranmer 2012-01-07 17:20 ` Jeff Cranmer 0 siblings, 1 reply; 30+ messages in thread From: Jeff Cranmer @ 2012-01-07 15:11 UTC (permalink / raw To: gentoo-user > > > > > > What am I missing? > > > > have you set the type to linux raid autodetect? > > > > have you tried mdadm --assemble? > > > mdadm --assemble /dev/md0 didn't make any difference. > Where do I set the type? > After assembling, results of cat /proc/mdstat:
Personalities : [linear] [raid0] [raid10] [raid6] [raid5] [raid4] [multipath] [faulty]
md0 : inactive sdb1[0](S) sdd1[3](S) sdc1[1](S)
      4395409608 blocks super 1.2
unused devices: <none>
Results of mdadm --detail /dev/md0:
mdadm: md device /dev/md0 does not appear to be active.
Results of /etc/init.d/mdadm status:
 * status: started
fstab line:
/dev/md0   /data   xfs   noatime   0 0
Is there a raid option I need to add to the fstab entry? Is there another service that needs to run, other than mdadm? Thanks Jeff
^ permalink raw reply [flat|nested] 30+ messages in thread
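With the array showing up inactive and every member flagged (S), one way to dig further is to inspect each member's md superblock and retry the assembly by hand -- a sketch, and --force deserves care since it can mask a genuinely stale member:

    mdadm --examine /dev/sdb1    # repeat for sdc1 and sdd1; compare array UUIDs and event counts
    mdadm --stop /dev/md0
    mdadm --assemble --run --force /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1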
* Re: [gentoo-user] How to get raid 2012-01-07 15:11 ` Jeff Cranmer @ 2012-01-07 17:20 ` Jeff Cranmer 2012-01-07 17:46 ` Volker Armin Hemmann 2012-01-08 18:31 ` Paul Hartman 0 siblings, 2 replies; 30+ messages in thread From: Jeff Cranmer @ 2012-01-07 17:20 UTC (permalink / raw To: gentoo-user On Sat, 2012-01-07 at 10:11 -0500, Jeff Cranmer wrote: > > > > > > > > What am I missing? > > > > > > have you set the type to linux raid autodetect? > > > > > > have you tried mdadm --assemble? > > > > > mdadm --assemble /dev/md0 didn't make any difference. > > Where do I set the type? > > > after assembling, > results of cat/proc/mdstat > personalities : [linear] [raid0] [raid10] [raid6] [raid5] [raid4] > [multipath] [faulty] > md0 : inactive sdb1[0](S) sdd1[3](S) sdc1[1](S) > 4395409608 blocks super 1.2 > > unused devices: <none> > > results of mdadm --detail /dev/md0 > mdadm: md device /dev/md0 does not appear to be active. > > results of /etc/init.d/mdadm status > * status: started > > fstab line > /dev/md0 /data xfs noatime 0 0 > > Is there a raid option I need to add to the fstab entry? > Is there another service that needs to run, other than mdam? > > Thanks > > Jeff > > I tried changing the type of each array element in fdisk to fd (linux raid autodetect. The array is still not being recognised at boot, with the same 'cannot read superblock' error. I also tried re-running mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1 I get the error mdadm: device /dev/sdb1 not suitable for any style of array. What is going on here? ^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [gentoo-user] How to get raid 2012-01-07 17:20 ` Jeff Cranmer @ 2012-01-07 17:46 ` Volker Armin Hemmann 2012-01-07 18:27 ` Jeff Cranmer 2012-01-08 18:31 ` Paul Hartman 1 sibling, 1 reply; 30+ messages in thread From: Volker Armin Hemmann @ 2012-01-07 17:46 UTC (permalink / raw To: gentoo-user Am Samstag, 7. Januar 2012, 12:20:08 schrieb Jeff Cranmer: > On Sat, 2012-01-07 at 10:11 -0500, Jeff Cranmer wrote: > > > > > What am I missing? > > > > > > > > have you set the type to linux raid autodetect? > > > > > > > > have you tried mdadm --assemble? > > > > > > mdadm --assemble /dev/md0 didn't make any difference. > > > Where do I set the type? > > > > after assembling, > > results of cat/proc/mdstat > > personalities : [linear] [raid0] [raid10] [raid6] [raid5] [raid4] > > [multipath] [faulty] > > md0 : inactive sdb1[0](S) sdd1[3](S) sdc1[1](S) > > > > 4395409608 blocks super 1.2 > > > > unused devices: <none> > > > > results of mdadm --detail /dev/md0 > > mdadm: md device /dev/md0 does not appear to be active. > > > > results of /etc/init.d/mdadm status > > > > * status: started > > > > fstab line > > /dev/md0 /data xfs noatime 0 0 > > > > Is there a raid option I need to add to the fstab entry? > > Is there another service that needs to run, other than mdam? > > > > Thanks > > > > Jeff > > I tried changing the type of each array element in fdisk to fd (linux > raid autodetect. > > The array is still not being recognised at boot, with the same 'cannot > read superblock' error. > > I also tried re-running mdadm --create /dev/md0 --level=5 > --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1 > I get the error > mdadm: device /dev/sdb1 not suitable for any style of array. > > What is going on here? I am thinking ;) -- #163933 ^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [gentoo-user] How to get raid 2012-01-07 17:46 ` Volker Armin Hemmann @ 2012-01-07 18:27 ` Jeff Cranmer 2012-01-07 18:50 ` Volker Armin Hemmann 0 siblings, 1 reply; 30+ messages in thread From: Jeff Cranmer @ 2012-01-07 18:27 UTC (permalink / raw To: gentoo-user > > > > I tried changing the type of each array element in fdisk to fd (linux > > raid autodetect. > > > > The array is still not being recognised at boot, with the same 'cannot > > read superblock' error. > > > > I also tried re-running mdadm --create /dev/md0 --level=5 > > --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1 > > I get the error > > mdadm: device /dev/sdb1 not suitable for any style of array. > > > > What is going on here? > > I am thinking ;) > > LOL! Me too. mdadm --detail /dev/md0 thinks that /dev/sdc1 is faulty. I'm not sure whether it's really faulty, or just that my setup for RAID is screwed up. How do I get rid of an existing /dev/md0? I'm thinking that I can try creating a RAID1 array using the two allegedly good disks and see if I can make that work. If that works, I'll get rid of it and try recreating the RAID1 with one good disk and the one that mdadm thinks is faulty. Hopefully that will show me whether I have a hardware problem or a software one. Jeff ^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [gentoo-user] How to get raid 2012-01-07 18:27 ` Jeff Cranmer @ 2012-01-07 18:50 ` Volker Armin Hemmann 2012-01-07 22:43 ` Jeff Cranmer 2012-01-09 20:45 ` Jeff Cranmer 0 siblings, 2 replies; 30+ messages in thread From: Volker Armin Hemmann @ 2012-01-07 18:50 UTC (permalink / raw To: gentoo-user Am Samstag, 7. Januar 2012, 13:27:04 schrieb Jeff Cranmer: > > > I tried changing the type of each array element in fdisk to fd (linux > > > raid autodetect. > > > > > > The array is still not being recognised at boot, with the same 'cannot > > > read superblock' error. > > > > > > I also tried re-running mdadm --create /dev/md0 --level=5 > > > --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1 > > > I get the error > > > mdadm: device /dev/sdb1 not suitable for any style of array. > > > > > > What is going on here? > > > > I am thinking ;) > > LOL! > > Me too. > > mdadm --detail /dev/md0 thinks that /dev/sdc1 is faulty. > I'm not sure whether it's really faulty, or just that my setup for RAID > is screwed up. > > How do I get rid of an existing /dev/md0? you stop it. Override the superblock with dd.. and lose all data on the disks. > > I'm thinking that I can try creating a RAID1 array using the two > allegedly good disks and see if I can make that work. yeah > > If that works, I'll get rid of it and try recreating the RAID1 with one > good disk and the one that mdadm thinks is faulty. > you don't have to. You can migrate a 2 disk raid1 to a 3 disk raid5. Howtos are availble via google. just saying - box in suspend to ram. I change the cable (and connector on mobo) on a disk with two raid 1 partitions on it. One came back after starting the box. The other? Nothing I tried worked. At the end I dd'ed the partition.. and did a complete 'faulty disk/replacement' resync.... argl. -- #163933 ^ permalink raw reply [flat|nested] 30+ messages in thread
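"Stop it and override the superblock" does not have to mean raw dd; mdadm can wipe its own metadata. A sketch (this abandons the array and any data reachable through it):

    mdadm --stop /dev/md0
    mdadm --zero-superblock /dev/sdb1 /dev/sdc1 /dev/sdd1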
* Re: [gentoo-user] How to get raid 2012-01-07 18:50 ` Volker Armin Hemmann @ 2012-01-07 22:43 ` Jeff Cranmer 0 siblings, 0 replies; 30+ messages in thread From: Jeff Cranmer @ 2012-01-07 22:43 UTC (permalink / raw To: gentoo-user > > > > How do I get rid of an existing /dev/md0? > > you stop it. Override the superblock with dd.. and lose all data on the disks. > > > > > > I'm thinking that I can try creating a RAID1 array using the two > > allegedly good disks and see if I can make that work. > > yeah > > > > > If that works, I'll get rid of it and try recreating the RAID1 with one > > good disk and the one that mdadm thinks is faulty. > > > > you don't have to. You can migrate a 2 disk raid1 to a 3 disk raid5. Howtos > are availble via google. > > > just saying - box in suspend to ram. I change the cable (and connector on > mobo) on a disk with two raid 1 partitions on it. > > One came back after starting the box. > > The other? Nothing I tried worked. At the end I dd'ed the partition.. and did > a complete 'faulty disk/replacement' resync.... > > argl. > > You're assuming I have more knowledge than I do. Can you explain the steps in more layman's terms? I've never used dd before. Jeff ^ permalink raw reply [flat|nested] 30+ messages in thread
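In layman's terms, dd just copies raw blocks from an input to an output; pointed at the start of a partition with /dev/zero as the input, it blanks out whatever metadata lives there. Illustration only -- this wipes the beginning of the named partition, taking the md superblock and any filesystem header with it:

    dd if=/dev/zero of=/dev/sdb1 bs=1M count=16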
* Re: [gentoo-user] How to get raid 2012-01-07 18:50 ` Volker Armin Hemmann 2012-01-07 22:43 ` Jeff Cranmer @ 2012-01-09 20:45 ` Jeff Cranmer 2012-01-10 6:14 ` Pandu Poluan 1 sibling, 1 reply; 30+ messages in thread From: Jeff Cranmer @ 2012-01-09 20:45 UTC (permalink / raw To: gentoo-user > > > > Me too. > > > > mdadm --detail /dev/md0 thinks that /dev/sdc1 is faulty. > > I'm not sure whether it's really faulty, or just that my setup for RAID > > is screwed up. > > > > How do I get rid of an existing /dev/md0? > > you stop it. Override the superblock with dd.. and lose all data on the disks. > > > > > > I'm thinking that I can try creating a RAID1 array using the two > > allegedly good disks and see if I can make that work. > > yeah > > > > > If that works, I'll get rid of it and try recreating the RAID1 with one > > good disk and the one that mdadm thinks is faulty. > > > > you don't have to. You can migrate a 2 disk raid1 to a 3 disk raid5. Howtos > are availble via google. > > > just saying - box in suspend to ram. I change the cable (and connector on > mobo) on a disk with two raid 1 partitions on it. > > One came back after starting the box. > > The other? Nothing I tried worked. At the end I dd'ed the partition.. and did > a complete 'faulty disk/replacement' resync.... > > argl. > > OK, so lesson learned. Just because it builds correctly in a RAID1 array, that doesn't mean that the drive isn't toast. I ran badblocks on the three drive components and, surprise, surprise, /dev/sdc came up faulty. I think I'll just build the two non-faulty drives as a RAID0 array until the hard drive prices come back down to pre-Thailand flood prices and backup regularly. Thanks for all the help. Jeff ^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [gentoo-user] How to get raid 2012-01-09 20:45 ` Jeff Cranmer @ 2012-01-10 6:14 ` Pandu Poluan 2012-01-10 11:56 ` Jeff Cranmer 0 siblings, 1 reply; 30+ messages in thread From: Pandu Poluan @ 2012-01-10 6:14 UTC (permalink / raw To: gentoo-user On Jan 10, 2012 8:48 AM, "Jeff Cranmer" <jeff@lotussevencars.com> wrote: > > > > > > > > Me too. > > > > > > mdadm --detail /dev/md0 thinks that /dev/sdc1 is faulty. > > > I'm not sure whether it's really faulty, or just that my setup for RAID > > > is screwed up. > > > > > > How do I get rid of an existing /dev/md0? > > > > you stop it. Override the superblock with dd.. and lose all data on the disks. > > > > > > > > > > I'm thinking that I can try creating a RAID1 array using the two > > > allegedly good disks and see if I can make that work. > > > > yeah > > > > > > > > If that works, I'll get rid of it and try recreating the RAID1 with one > > > good disk and the one that mdadm thinks is faulty. > > > > > > > you don't have to. You can migrate a 2 disk raid1 to a 3 disk raid5. Howtos > > are availble via google. > > > > > > just saying - box in suspend to ram. I change the cable (and connector on > > mobo) on a disk with two raid 1 partitions on it. > > > > One came back after starting the box. > > > > The other? Nothing I tried worked. At the end I dd'ed the partition.. and did > > a complete 'faulty disk/replacement' resync.... > > > > argl. > > > > > OK, so lesson learned. Just because it builds correctly in a RAID1 > array, that doesn't mean that the drive isn't toast. > > I ran badblocks on the three drive components and, surprise, > surprise, /dev/sdc came up faulty. I think I'll just build the two > non-faulty drives as a RAID0 array until the hard drive prices come back > down to pre-Thailand flood prices and backup regularly. > > Thanks for all the help. > > Jeff > > > RAID 0?!?! Please reconsider. With RAID 0, *any* single drive failure will result in *total* data loss. Rgds, ^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [gentoo-user] How to get raid 2012-01-10 6:14 ` Pandu Poluan @ 2012-01-10 11:56 ` Jeff Cranmer 0 siblings, 0 replies; 30+ messages in thread From: Jeff Cranmer @ 2012-01-10 11:56 UTC (permalink / raw To: gentoo-user This is true, however it's a temporary measure only, and I have backups. Once the prices drop again, I'll buy another 1.5TB disk and convert back to a RAID5. On Tue, 2012-01-10 at 13:14 +0700, Pandu Poluan wrote: > > On Jan 10, 2012 8:48 AM, "Jeff Cranmer" <jeff@lotussevencars.com> > wrote: > > > > > > > > > > > > Me too. > > > > > > > > mdadm --detail /dev/md0 thinks that /dev/sdc1 is faulty. > > > > I'm not sure whether it's really faulty, or just that my setup > for RAID > > > > is screwed up. > > > > > > > > How do I get rid of an existing /dev/md0? > > > > > > you stop it. Override the superblock with dd.. and lose all data > on the disks. > > > > > > > > > > > > > > I'm thinking that I can try creating a RAID1 array using the two > > > > allegedly good disks and see if I can make that work. > > > > > > yeah > > > > > > > > > > > If that works, I'll get rid of it and try recreating the RAID1 > with one > > > > good disk and the one that mdadm thinks is faulty. > > > > > > > > > > you don't have to. You can migrate a 2 disk raid1 to a 3 disk > raid5. Howtos > > > are availble via google. > > > > > > > > > just saying - box in suspend to ram. I change the cable (and > connector on > > > mobo) on a disk with two raid 1 partitions on it. > > > > > > One came back after starting the box. > > > > > > The other? Nothing I tried worked. At the end I dd'ed the > partition.. and did > > > a complete 'faulty disk/replacement' resync.... > > > > > > argl. > > > > > > > > OK, so lesson learned. Just because it builds correctly in a RAID1 > > array, that doesn't mean that the drive isn't toast. > > > > I ran badblocks on the three drive components and, surprise, > > surprise, /dev/sdc came up faulty. I think I'll just build the two > > non-faulty drives as a RAID0 array until the hard drive prices come > back > > down to pre-Thailand flood prices and backup regularly. > > > > Thanks for all the help. > > > > Jeff > > > > > > > > RAID 0?!?! > > Please reconsider. > > With RAID 0, *any* single drive failure will result in *total* data > loss. > > Rgds, > > ^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [gentoo-user] How to get raid 2012-01-07 17:20 ` Jeff Cranmer 2012-01-07 17:46 ` Volker Armin Hemmann @ 2012-01-08 18:31 ` Paul Hartman 2012-01-08 20:03 ` Jeff Cranmer 1 sibling, 1 reply; 30+ messages in thread From: Paul Hartman @ 2012-01-08 18:31 UTC (permalink / raw To: gentoo-user On 01/07/2012 11:20 AM, Jeff Cranmer wrote: > On Sat, 2012-01-07 at 10:11 -0500, Jeff Cranmer wrote: >>>>> >>>>> What am I missing? >>>> >>>> have you set the type to linux raid autodetect? >>>> >>>> have you tried mdadm --assemble? >>>> >>> mdadm --assemble /dev/md0 didn't make any difference. >>> Where do I set the type? >>> >> after assembling, >> results of cat/proc/mdstat >> personalities : [linear] [raid0] [raid10] [raid6] [raid5] [raid4] >> [multipath] [faulty] >> md0 : inactive sdb1[0](S) sdd1[3](S) sdc1[1](S) >> 4395409608 blocks super 1.2 >> >> unused devices: <none> >> >> results of mdadm --detail /dev/md0 >> mdadm: md device /dev/md0 does not appear to be active. >> >> results of /etc/init.d/mdadm status >> * status: started >> >> fstab line >> /dev/md0 /data xfs noatime 0 0 >> >> Is there a raid option I need to add to the fstab entry? >> Is there another service that needs to run, other than mdam? >> >> Thanks >> >> Jeff >> >> > I tried changing the type of each array element in fdisk to fd (linux > raid autodetect. > > The array is still not being recognised at boot, with the same 'cannot > read superblock' error. > > I also tried re-running mdadm --create /dev/md0 --level=5 > --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1 > I get the error > mdadm: device /dev/sdb1 not suitable for any style of array. > > What is going on here? (I didn't read this whole thread, sorry if I'm repeating someone else's advice) kernel autodetection only works on old superblock version 0.90, you're using 1.2. Not a big deal, we use mdadm to do it. Define your arrays in /etc/mdadm.conf and start /etc/init.d/mdadm in your boot runscripts with "rc-update add mdadm boot", it will bring up the array at boot time. In my mdadm.conf i have a line like this: ARRAY /dev/md1 metadata=1.01 name=black:1 UUID=8e653e72:9d5df6ba:bb66ea8b:02f1c317 (might be word-wrapped, should be all one line) That's all that was needed to bring it up automatically at boot time. Also AFAIR there was a "gotcha" about the hostname stored in the array's metadata must match your machine's hostname or else mdadm auto-assemble won't accept it (to protect you in case you're plugging disks from another machine for recovery, you don't want it to use them as your main drives), so in that case you must specify it explicitly or set the AUTO parameter in mdadm.conf to accept this condition. If you created the array from within a LiveCD or on another machine, the hostname might not match your system. See the mdadm manpage for more info. ^ permalink raw reply [flat|nested] 30+ messages in thread
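On a Gentoo/OpenRC box, that advice boils down to something like the following sketch (the HOMEHOST workaround depends on the mdadm version, so treat that line as an assumption and check man mdadm.conf):

    mdadm --detail --scan >> /etc/mdadm.conf   # writes ARRAY lines carrying the array UUIDs
    rc-update add mdadm boot                   # assemble arrays in the boot runlevel
    # if the array was created under a different hostname, either recreate it with
    #   mdadm --create ... --homehost=office-desktop
    # or relax the check in /etc/mdadm.conf, e.g.
    #   HOMEHOST <ignore>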
* Re: [gentoo-user] How to get raid 2012-01-08 18:31 ` Paul Hartman @ 2012-01-08 20:03 ` Jeff Cranmer 2012-01-08 21:02 ` Jeff Cranmer 0 siblings, 1 reply; 30+ messages in thread From: Jeff Cranmer @ 2012-01-08 20:03 UTC (permalink / raw To: gentoo-user On Sun, 2012-01-08 at 12:31 -0600, Paul Hartman wrote: > > > > What is going on here? > > (I didn't read this whole thread, sorry if I'm repeating someone else's > advice) > > kernel autodetection only works on old superblock version 0.90, you're > using 1.2. Not a big deal, we use mdadm to do it. > > Define your arrays in /etc/mdadm.conf and start /etc/init.d/mdadm in > your boot runscripts with "rc-update add mdadm boot", it will bring up > the array at boot time. > > In my mdadm.conf i have a line like this: > > ARRAY /dev/md1 metadata=1.01 name=black:1 > UUID=8e653e72:9d5df6ba:bb66ea8b:02f1c317 > > (might be word-wrapped, should be all one line) > > That's all that was needed to bring it up automatically at boot time. > > Also AFAIR there was a "gotcha" about the hostname stored in the array's > metadata must match your machine's hostname or else mdadm auto-assemble > won't accept it (to protect you in case you're plugging disks from > another machine for recovery, you don't want it to use them as your main > drives), so in that case you must specify it explicitly or set the AUTO > parameter in mdadm.conf to accept this condition. If you created the > array from within a LiveCD or on another machine, the hostname might not > match your system. > > See the mdadm manpage for more info. mdadm was added to the default level, not boot. My /etc/mdadm.conf file has two active lines:
DEVICE /dev/sd[bcd]1
ARRAY dev/md0 metadata=1.2 spares=1 name=office-desktop:0 devices=/dev/sdb1,dev/sdc1,/dev/sdd1
It looks like I'm having trouble with a faulty /dev/sdc1, so what I'd like to do is wipe out the existing array and try starting a RAID1 array just with sdb1 and sdd1. I got rid of the old array by using the commands
mdadm --manage --fail /dev/md0
mdadm --manage --stop /dev/md0
I then used
mdadm --verbose --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdd1
The result of this command was
mdadm: /dev/sdb1 appears to be part of a raid array:
    level=raid5 devices=3 ctime=Sat Jan 7 08:16:00 2012
mdadm: partition table exists on /dev/sdb1 but will be lost or
    meaningless after creating array
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device. If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
mdadm: /dev/sdd1 appears to be part of a raid array:
    level=raid5 devices=3 ctime=Sat Jan 7 08:16:00 2012
mdadm: size set to 1465136400K
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
The results of cat /proc/mdstat are
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md0 : active raid1 sdd1[1] sdb1[0]
      1465136400 blocks super 1.2 [2/2] [UU]
      [>....................]  resync =  2.1% (31838144/1465136400) finish=269.7min speed=88551K/sec
unused devices: <none>
The results of mdadm --detail /dev/md0 are
/dev/md0:
        Version : 1.2
  Creation Time : Sun Jan 8 14:47:43 2012
     Raid Level : raid1
     Array Size : 1465136400 (1397.26 GiB 1500.30 GB)
  Used Dev Size : 1465136400 (1397.26 GiB 1500.30 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent
    Update Time : Sun Jan 8 14:48:54 2012
          State : active, resyncing
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
 Rebuild Status : 2% complete
           Name : office-desktop:0  (local to host office-desktop)
           UUID : bfc16c6e:4e8cb910:96ff7ed2:6fec32bc
         Events : 1
    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       49        1      active sync   /dev/sdd1
When I try to mount this drive, however, I get
mount: /dev/md0: can't read superblock
What do I need to do to complete the process? Thanks Jeff
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [gentoo-user] How to get raid 2012-01-08 20:03 ` Jeff Cranmer @ 2012-01-08 21:02 ` Jeff Cranmer 2012-01-09 18:58 ` Jeff Cranmer 0 siblings, 1 reply; 30+ messages in thread From: Jeff Cranmer @ 2012-01-08 21:02 UTC (permalink / raw To: gentoo-user On Sun, 2012-01-08 at 15:03 -0500, Jeff Cranmer wrote: > On Sun, 2012-01-08 at 12:31 -0600, Paul Hartman wrote: > > > > > > What is going on here? > > > > (I didn't read this whole thread, sorry if I'm repeating someone else's > > advice) > > > > kernel autodetection only works on old superblock version 0.90, you're > > using 1.2. Not a big deal, we use mdadm to do it. > > > > Define your arrays in /etc/mdadm.conf and start /etc/init.d/mdadm in > > your boot runscripts with "rc-update add mdadm boot", it will bring up > > the array at boot time. > > > > In my mdadm.conf i have a line like this: > > > > ARRAY /dev/md1 metadata=1.01 name=black:1 > > UUID=8e653e72:9d5df6ba:bb66ea8b:02f1c317 > > > > (might be word-wrapped, should be all one line) > > > > That's all that was needed to bring it up automatically at boot time. > > > > Also AFAIR there was a "gotcha" about the hostname stored in the array's > > metadata must match your machine's hostname or else mdadm auto-assemble > > won't accept it (to protect you in case you're plugging disks from > > another machine for recovery, you don't want it to use them as your main > > drives), so in that case you must specify it explicitly or set the AUTO > > parameter in mdadm.conf to accept this condition. If you created the > > array from within a LiveCD or on another machine, the hostname might not > > match your system. > > > > See the mdadm manpage for more info. > > mdadm was added to the default level, not boot. > My /etc/mdadm.conf file has two active lines > DEVICE /dev/sd[bcd]1 > ARRAY dev/md0 metadata=1.2 spares=1 name=office-desktop:0 > devices=/dev/sdb1,dev/sdc1,/dev/sdd1 > > It looks like I'm having trouble with a faulty /dev/sdc1, so what I'd > like to do is wipe out the existing array and try starting a RAID1 array > just with sdb1 and sdd1. > > I got rid of the old array by using the commands > mdadm --manage --fail /dev/md0 > mdadm --manage --stop /dev/md0 > > I then used mdadm --verbose --create /dev/md0 --level=1 > --raid-devices=2 /dev/sdb1 /dev/sdd1 > > The result of this command was > dadm: /dev/sdb1 appears to be part of a raid array: > level=raid5 devices=3 ctime=Sat Jan 7 08:16:00 2012 > mdadm: partition table exists on /dev/sdb1 but will be lost or > meaningless after creating array > mdadm: Note: this array has metadata at the start and > may not be suitable as a boot device. If you plan to > store '/boot' on this device please ensure that > your boot-loader understands md/v1.x metadata, or use > --metadata=0.90 > mdadm: /dev/sdd1 appears to be part of a raid array: > level=raid5 devices=3 ctime=Sat Jan 7 08:16:00 2012 > mdadm: size set to 1465136400K > Continue creating array? y > mdadm: Defaulting to version 1.2 metadata > mdadm: array /dev/md0 started. > > The results of cat /proc/mdstat are > Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] > [raid4] [multipath] > md0 : active raid1 sdd1[1] sdb1[0] > 1465136400 blocks super 1.2 [2/2] [UU] > [>....................] resync = 2.1% (31838144/1465136400) > finish=269.7min speed=88551K/sec > > unused devices: <none> > > Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] > [raid4] [multipath] > md0 : active raid1 sdd1[1] sdb1[0] > 1465136400 blocks super 1.2 [2/2] [UU] > [>....................] 
resync = 2.1% (31838144/1465136400) > finish=269.7min speed=88551K/sec > > unused devices: <none> > > The results of mdadm --detail /dev/md0 are > /dev/md0: > Version : 1.2 > Creation Time : Sun Jan 8 14:47:43 2012 > Raid Level : raid1 > Array Size : 1465136400 (1397.26 GiB 1500.30 GB) > Used Dev Size : 1465136400 (1397.26 GiB 1500.30 GB) > Raid Devices : 2 > Total Devices : 2 > Persistence : Superblock is persistent > > Update Time : Sun Jan 8 14:48:54 2012 > State : active, resyncing > Active Devices : 2 > Working Devices : 2 > Failed Devices : 0 > Spare Devices : 0 > > Rebuild Status : 2% complete > > Name : office-desktop:0 (local to host office-desktop) > UUID : bfc16c6e:4e8cb910:96ff7ed2:6fec32bc > Events : 1 > > Number Major Minor RaidDevice State > 0 8 17 0 active sync /dev/sdb1 > 1 8 49 1 active sync /dev/sdd1 > > When I try to mount this drive, however, I get > mount: /dev/md0: can't read superblock > > What do I need to do to complete the process? > > Thanks > > Jeff > > > Success - I managed to get a raid1 device operating. I created the final filesystem by using mkfs.xfs -f /dev/md0, then waited for the rebuild to complete before rebooting the system. It appears to be created successfully. Now I'll try the same sequence with sdb and sdc to see if sdc is a good disk. If that works, I'll retry a raid5 array tomorrow night. Jeff ^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [gentoo-user] How to get raid 2012-01-08 21:02 ` Jeff Cranmer @ 2012-01-09 18:58 ` Jeff Cranmer 0 siblings, 0 replies; 30+ messages in thread From: Jeff Cranmer @ 2012-01-09 18:58 UTC (permalink / raw To: gentoo-user > > > > > > > Success - I managed to get a raid1 device operating. > I created the final filesystem by using mkfs.xfs -f /dev/md0, then > waited for the rebuild to complete before rebooting the system. > > It appears to be created successfully. Now I'll try the same sequence > with sdb and sdc to see if sdc is a good disk. If that works, I'll > retry a raid5 array tomorrow night. > Hmm - it seems to be a bug in RAID5 creation. I can successfully create a RAID1 array from either /dev/sdb1 and /dev/sdc1 or /dev/sdb1 and /dev/sdd1. If, however, I try to create a RAID5 array with all three elements, I get /dev/sdc reporting a failure. cat /proc/mdstat gives the following report.
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md0 : active raid5 sdd1[3](S) sdc1[1](F) sdb1[0]
      2930272256 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/1] [U__]
unused devices: <none>
Has anyone else experienced similar problems? Is there an extra diagnostic procedure which I can use to validate the sdc drive? Is there something extra I have to do when I go over the 2TB level which could explain this goofy behaviour?
^ permalink raw reply [flat|nested] 30+ messages in thread
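For validating the suspect drive, the usual candidates are the drive's own SMART self-tests plus a read-only surface scan -- a sketch, assuming sys-apps/smartmontools is installed:

    smartctl -a /dev/sdc       # SMART health, reallocated/pending sector counts
    smartctl -t long /dev/sdc  # start the extended self-test; check the result later with -a
    badblocks -sv /dev/sdc     # non-destructive, read-only scan of the whole disk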