From: Duncan <1i5t5.duncan@cox.net>
To: gentoo-amd64@lists.gentoo.org
Subject: [gentoo-amd64] Re: Can initrd and/or RAID be disabled at boot?
Date: Thu, 27 Jun 2013 18:53:08 +0000 (UTC) [thread overview]
Message-ID: <pan$1c1f0$15d94a4c$dd2793d$9fc594cf@cox.net> (raw)
In-Reply-To: CAK2H+efZ=XkUv3YTgptrcdP07mGRvLwf4y8UYJhL9K-9bNJ68w@mail.gmail.com
Mark Knecht posted on Tue, 25 Jun 2013 15:51:14 -0700 as excerpted:
> This is related to my thread from a few days ago about the
> disappointing speed of my RAID6 root partition. The goal here is to get
> the machine booting from an SSD so that I can free up my five hard
> drives to play with.
FWIW, this post covers a lot of ground, too much I think to really cover
in one post. Which is why I've delayed replying until now. I expect
I'll punt on some topics this first time thru, but we'll see how it
goes...
> SHORT SUMMATION: I've tried noninitrd and noraid in the kernel line of
> grub.conf but I keep booting from old RAID instead of the new SSD.
> What am I doing wrong?
>
> What I've done so far:
>
> 1) I've removed everything relatively non-essential from the HDD-based
> RAID6. It's still a lot of data (40GB) but my Windows VMs are moved to
> an external USB drive as is all the video content which is on a second
> USB drive so the remaining size is pretty manageable.
OK...
> 2) In looking around for ways to get / copied to the SSD I ran across
> this Arch Linux page called "Full System Backup with rsync":
>
> https://wiki.archlinux.org/index.php/Full_System_Backup_with_rsync
> Basically it boiled down to just a straight-forward rsync command, but
> what I liked about the description was that it can be done on a live
> system. The command in the page is
>
> rsync -aAXv /* /path/to/backup/folder
> --exclude={/dev/*,/proc/*,/sys/*,/tmp/*,/run/*,/mnt/*,/media/*,/lost+found}
>
> which I have modified to
>
> rsync -avx /* /path/to/backup/folder
> --exclude={/dev/*,/proc/*,/sys/*,/tmp/*,/run/*,/mnt/*,/media/*,/lost+found}
>
> because I don't use (AFAICT) any of the ACL stuff and the command simply
> wouldn't do anything.
For ACL you're probably correct. But you might be using extended
attributes (xattrs). Do you have any of the following USE flags turned
on: caps xattr filecaps ?
(Without going into an explanation of the specific distinction between
the USE flags above, particularly caps and filecaps.) What filesystem do
you use on / (and /usr if separate), and if appropriate, are the extended
attributes and security label kernel options enabled for it?
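If you want a quick check, the active USE string is easy enough to
inspect. A minimal sketch (the USE string below is made up for
illustration; on a real Gentoo box you'd substitute the output of
portageq envvar USE):

```shell
#!/bin/bash
# Hypothetical USE string; on a real system substitute:
#   USE=$(portageq envvar USE)
USE="acl bzip2 caps xattr -filecaps"

# Report which of the relevant flags are present (a leading "-" or a
# missing entry both count as not enabled here).
for flag in caps xattr filecaps; do
    case " $USE " in
        *" $flag "*) echo "$flag: enabled" ;;
        *)           echo "$flag: not enabled" ;;
    esac
done
```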
For example, here I have ext4, reiserfs and btrfs enabled and use or have
used them on my various root filesystems, as well as tmpfs with the
appropriate options since I have PORTAGE_TMPDIR pointed at tmpfs (and
also devtmpfs needs some of the options):
zgrep 'REISER\|EXT4\|TMPFS\|BTRFS' /proc/config.gz
CONFIG_DEVTMPFS=y
CONFIG_DEVTMPFS_MOUNT=y
CONFIG_EXT4_FS=y
CONFIG_EXT4_USE_FOR_EXT23=y
# CONFIG_EXT4_FS_POSIX_ACL is not set
CONFIG_EXT4_FS_SECURITY=y
# CONFIG_EXT4_DEBUG is not set
CONFIG_REISERFS_FS=y
# CONFIG_REISERFS_CHECK is not set
# CONFIG_REISERFS_PROC_INFO is not set
CONFIG_REISERFS_FS_XATTR=y
# CONFIG_REISERFS_FS_POSIX_ACL is not set
CONFIG_REISERFS_FS_SECURITY=y
CONFIG_BTRFS_FS=y
# CONFIG_BTRFS_FS_POSIX_ACL is not set
# CONFIG_BTRFS_FS_CHECK_INTEGRITY is not set
# CONFIG_BTRFS_FS_RUN_SANITY_TESTS is not set
# CONFIG_BTRFS_DEBUG is not set
CONFIG_TMPFS=y
CONFIG_TMPFS_POSIX_ACL=y
CONFIG_TMPFS_XATTR=y
tmpfs only has ACL on for devtmpfs (and I'm not sure I need that, but to
avoid both security issues and broken device functionality...). The
others don't have that on, but where appropriate, they have XATTR on, as
well as FS_SECURITY. (Again, this is really only surface coverage,
here. TBH I don't fully understand the depths myself, certainly not well
enough to be comfortable discussing it in depth, tho I'm reasonably sure
I have the options I want enabled, here.)
The deal is that file capabilities in one form or another can be used to
avoid having to SETUID root various executables that would otherwise need
it, which is a good thing since that reduces the security vulnerability
window that SETUID root otherwise opens, often necessarily.
And these file capabilities are implemented using xattrs. So if your
system is set up to use them (a good thing from a security perspective,
but somewhat complicated by the kernel config requirements in addition to
the USE flags), you'll probably want to use the -X option, tho you should
still be safe without -A (tho it shouldn't hurt).
However, the penalty for NOT using -X, provided you're not using xattrs
for anything else, should simply be that you'll need to become root to
run some commands that would otherwise be runnable without root (with the
corresponding open security window, should it be possible for a cracker
to get those commands running as root to do something unintended). So
the potential cost of getting it wrong is actually quite limited, unless
you happen to be the target of a cracker with both good timing and a
reasonable skill level, as well.
And of course if you have only part of the pieces above enabled, say the
appropriate filesystem options in the kernel but not the USE flags, or
the reverse, then you're not covered and the rsync options won't matter
in any case.
But the -AX options shouldn't do any harm in any case, so here I'd have
just left them on, making it -avxAX.
Meanwhile, while I always see people worried about copying a live
filesystem around, I've never had a problem here simply doing a
cp --archive, or the equivalent in mc (midnight commander, ncurses-based
commander-style dual-pane file manager).
What I do for root is use a root-bind script:
#!/bin/bash
# Behaves according to the name it was invoked by:
#   rootbind  - bind-mount / at /mnt/rootbind
#   rootbindu - umount the bind mount again
me=${0##*/}
case $me in
rootbind) mount --bind / /mnt/rootbind;;
rootbindu) umount /mnt/rootbind;;
*) echo "rootbind: bad call" >&2; exit 1;;
esac
(That allows the script to be called rootbind, with a symlink to it
called rootbindu, that does the corresponding umount.)
What a bind-mount does is mount an already mounted filesystem at a
different mountpoint. In particular, it does NOT do recursive mounts
(tho there's another mount option that copies the full mount tree, it's
just not what I want here), so what I'm using it for here is to get a
"clean" copy of the rootfs, WITHOUT other filesystems such as /dev and
/home mounted on top.
Then I can do a nice clean cp --archive of my rootfs to a (normally
freshly formatted, so cp and rsync would accomplish the same thing)
backup root, and that's what I've used for backup, for years.
And I test those backups too, and occasionally reboot to the backup and
do a clean mkfs and copy back from the backup to the normally working
copy too, just to take care of fragmentation and any possibility of
unresolved filesystem damage or bitrot that might have set in, as well as
ensuring that I can switch to the backups for operational use by actually
doing so. So I know the technique works for me.
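The copy step itself really is that plain. A minimal sketch on
throwaway directories (on the real system the source would be the
bind-mounted rootfs and the target the freshly formatted backup
partition):

```shell
#!/bin/bash
# cp --archive preserves ownership, permissions, timestamps and
# symlinks, which is what makes it usable for a rootfs copy.
src=$(mktemp -d); dst=$(mktemp -d)
echo 'secret' > "$src/shadow"
chmod 600 "$src/shadow"
ln -s shadow "$src/link"

# The trailing /. copies the *contents* of $src into $dst.
cp --archive "$src"/. "$dst"/
```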
Now if I was running some active database that was continuing to update
as I did my copy, yes, that would be a problem, and I'd want to do a
snapshot or otherwise "freeze" the live filesystem in order to get a
good self-consistent copy. But, for root anyway, unless I'm trying to do
an emerge --update in the background or something at the same time (and
why would I, both the copy and the update could be trying to access the
filesystem at once, slowing both down, and it needlessly complicates
things, so there's no purpose to doing so), a simple cp --archive of the
live filesystem, from the bind-mount so I get JUST the root filesystem,
no more no less, is sufficient.
For /home, there's a /bit/ more concern, say with the firefox sqlite
databases, if I'm browsing at the same time I'm trying to do the backup.
However, that's simple enough to avoid. Just don't do anything that's
going to be actively changing the filesystem at the same time I'm trying
to make an accurate backup of it.
Of course with your VMs that's a bit of a different story, rather like the
active database case. A snapshotting filesystem (like btrfs) or sub-
filesystem block-device layer (like lvm2) can be used here, taking the
snapshot and copying it while the activity continues on the live
filesystem, or, likely a simpler solution for those where it's possible,
just do the copy when the database/vms aren't active and in use.
But unless your vms/databases are files on your rootfs, that shouldn't be
a problem with the rootfs backup, in any case. And if they are and you
can't shut down the vms/databases for long enough to do a backup, I'd
personally question the strategy that put them on rootfs to begin
with, but whatever, THAT is when you'd need to worry about taking
an accurate copy of the live rootfs, but ideally, that's not a case you
need to worry about, and indeed, from what I've read it's not a problem
in your case at all. =:^)
> I ran this command the first time to get 98% of everything copied while
> in KDE, but before I moved forward I exited KDE, stopped X and ran it as
> root from the console. After the second run it didn't pick up any new
> file changes so I suspect it's pretty close to what I'd get dealing with
> a Live CD boot. (COMMENTS?)
As the above commentary should indicate, if anything I think you're being
overly cautious. In the vast majority of cases, a simple cp --archive,
or your equivalent rsync, should be fine. The caveat would be if you
were trying to backup the vms while they were in operation, but you've
taken care of that separately, so (with the possible caveat about file
capabilities and xattrs) I believe you're good to go.
> 3) I added a new boot options in grub.conf:
(Long) Note in passing: You should probably look into upgrading to grub2
at some point. Now may not be a good time, as you've got a lot on your
plate right now as it is, but at some point. Because while there's a bit
of a learning curve to getting up and running on grub2, it's a lot more
flexible than grub1, with a lot more troubleshooting possible if you're
not getting the boot you expect, and direct support of all sorts of fancy
stuff like mdadm, lvm2, btrfs, zfs, etc, as well as an advanced command-
line shell much like sh/bash itself, so it's very possible to browse your
whole filesystem directly from inside grub, as I said, making
troubleshooting **MUCH** easier. Plus its scripting (including if/then
conditionals and variable handling much like bash) and menu system make
all sorts of advanced boot configs possible.
And while I'm at it, I'd strongly recommend switching to gpt partitioning
from the old mbr style partitions, either before switching to grub2 or at
the same time. GPT is more reliable (a checksummed partition table with
two copies, one at the beginning and one at the end of the device,
unlike mbr's single copy, with no checking, that if it goes bad...) and
less complicated (no primary/extended/logical partition distinction, up
to 128 partitions handled by default, with the possibility of even more
if you set up a larger gpt). Plus, it allows partition labels much like
the filesystem labels people already use, only on the partitions
themselves, so they don't change with the filesystem. That in itself
makes things much easier, since with labels it's much easier to keep
track of what each partition is actually for.
The reason I recommend switching to gpt before switching to grub2, is
that gpt has a special BIOS-reserved partition type, that grub2 can make
use of to store its core (like grub1's stage-1.5 and 2), making the grub2
administration and updates less problematic than they might be
otherwise. (This of course assumes bios, not efi, but grub2 works with
efi as well, I'm just not familiar with its efi operation, and besides,
efi folks are likely to already be running grub2 or something else,
instead of legacy grub1, so it's likely a reasonably safe assumption.)
Actually, when I switched to gpt here, while still on grub1, I was
forward thinking enough to setup both a bios-reserved partition, and an
efi-reserved partition, even tho neither one was used at the time. They
were small (a couple MB for the BIOS partition, just under 128 MB for the
efi partition, so they both fit in 128 MB, an eighth of a gig). Then I
upgraded to grub2 and it found and used the gpt bios partition without
issue, instead of having to worry about fitting it in slack space before
the first partition or whatever. The efi-reserved partition is still
unused, but it's there in case I upgrade to efi on this machine (I doubt
I will as I have no reason to), or decide to fit the existing disk into a
new machine at some point, without full repartitioning.
(FWIW, I use gptfdisk, aka gdisk, as my gpt-partitioner analogous to
fdisk. However, gparted has supported gpt for a while, and even standard
fdisk, from the util-linux package, has (still experimental) gpt support
now. Tho the cfdisk variant (also from util-linux) doesn't have gpt
support yet, but cgdisk, from the gptfdisk package, does, and that's the
executable from the gptfdisk package I tend to use here. (I use gdisk -l
to spit out the partition list on the commandline, similar to cat-ing a
file. That's about it. I use cgdisk for actual gpt partition table
editing.))
It's just that reading your post, I'm translating to grub2 in my head,
and thinking how much simpler grub2 makes troubleshooting, when you can
effectively browse all hard drives in read-only mode directly from grub,
not only browsing around to know for sure that a particular partition is
the one you need, but paging thru various files in the kernel
Documentation dir, for instance, to get options to plug in on the kernel
commandline in grub, etc. It really does make troubleshooting early boot
problems MUCH easier, because grub2 simply gives you far more to work
with in terms of troubleshooting tools available to use at the grub
prompt.
The one caveat for gpt is for people multi-booting to other than Linux.
From what I've read, MS does support GPT, but with substantially less
flexibility (especially for XP, 7 is better) than Linux. I think it can
install to either, but switching from one to the other without
reinstalling is problematic, or something like that, whereas with Linux
it's simply ensuring the appropriate support is configured into (or
available as modules if you're running an initr*) the kernel. (I have
little idea how Apple or the BSDs work with GPT.)
But while you do MS, AFAIK it's all in VMs, so that shouldn't be a
problem for you, so gpt should be fine.
And of course grub2 should be fine as well, gpt or not, but based on my
experience, gpt makes the grub2 upgrade far easier, at least as long as
there's a bios-reserved partition setup in gpt already, as there was here
when I did my grub2 upgrade, since I'd already done the gpt upgrade
previously.
But as I said, now may not be the best time to think about that as you
have enough on your plate ATM. Maybe something for later, tho... Or
maybe consider doing gpt now, since you're repartitioning now, and grub2
later...
(grub1 menu entries:)
> title fastVM 3.8.13-gentoo using LABEL (SSD, initramfs in kernel)
> root (hd5,0)
> kernel (hd0,0)/boot/bzImage-3.8.13-gentoo root=LABEL=fastVM video=vesafb
> vga=0x307title
>
> fastVM 3.8.13-gentoo using LABEL (SSD, initramfs in kernel)
> root (hd5,0)
> kernel (hd0,0)/boot/bzImage-3.8.13-gentoo root=/dev/sda1 video=vesafb
> vga=0x307
I'll assume that "vga=0x307title" is a typo, and that "title" starts the
second menu entry...
... Making the difference between the two entries the root=LABEL=fastVM,
vs root=/dev/sda1
> I am relatively confident that (hd5,0) is the SSD. I have 6 drives in
> the system - the 5 HDDs and the SSD. The 5 hard drives all have multiple
> partitions which is what grub tells me using tab completion for the line
>
> root(hdX,
>
> Additionally the SSD has a single partition so tab completion on
> root(hd5 finishes with root(hd5,0). I used /dev/sda as that's how it's
> identified when I boot using RAID.
This is actually what triggered the long grub2 note above. "Relatively
confident", vs. knowing, because with grub2's mdadm support, you can
(read-only) browse all the filesystems in the raid, etc (lvm2, etc, if
you're using that...), as well. So you know what's what, because you can
actually browse it, direct from the grub2 boot prompt.
However, while my grub1 knowledge is getting a bit rusty now, I think
you're mixing up grub's root(hdX,Y) notation, which can be thought of as
sort of like a cd in bash, simply changing the location you're starting
from if you don't type in the full path, with the kernel's root=
commandline option.
Once the kernel loads (from hd0,0 in both entries), its root= line may
have an entirely DIFFERENT device ordering, depending on the order in
which it loaded its (sata chipset, etc) drivers and the order the devices
came back in the device probes it did as it loaded them.
That's actually why kernel devs and udev folks plus many distros tend to
recommend the LABEL= (or alternatively UUID=) option for the kernel's
root= commandline option, these days, instead of the old /dev/sdX style,
because in theory at least, the numbering of /dev/sdX devices can change
arbitrarily. In fact, on most home systems with a consistent set of
devices appearing at boot, the order seldom changes, and it's *OFTEN* the
same as the order as seen by grub, but that doesn't HAVE to be the case.
Of course the monkey wrench in all this is that as far as I'm aware
anyway, the LABEL= and UUID= variants of the root= kernel commandline
option *REQUIRE* an initr* with working udev or similar (I'm not sure if
busybox's mdev supports LABEL=/UUID= or not), which might well be a given
on binary-based distros that handle devices using kernel modules instead
of custom built-in kernel device support, and thus require an initr* to
handle the modules load anyway, but it's definitely *NOT* a given on a
distro like gentoo, which strongly encourages building from source and
where many, perhaps most, users use a custom-built kernel with the
drivers necessary to boot builtin, and thus may well not require an initr*
at all. For initr*-less boots, AFAIK root=/dev/* is the only usable
alternative, because the /dev/disk/by-*/ subdirs that LABEL= and UUID=
depends on are udev userspace created, and those won't be available for
rootfs mount in an initr*-less boot.
> Now, the kernel has the initrd built into it so if it cannot be
> turned off I guess I'll try building a new kernel without it. However I
> found a few web pages that also said RAID could be disabled using a
> 'noraid' option which I thought should stop the system from finding the
> existing RAID6 but no luck.
FWIW, the best reference for kernel commandline options is the kernel
documentation itself. Sometimes you need more, but that's always the
place to look first.
Specifically, $KERNELDIR/Documentation/kernel-parameters.txt , for the
big list in one place, with additional documentation often provided in
the various individual files documenting specific features.
kernel-parameters.txt lists noinitrd:
noinitrd [RAM] Tells the kernel not to load any configured
initial RAM disk.
So that should work. It doesn't say anything about it not working with
a built-in initramfs, either, so if it doesn't, there's a bug in that it
either should say something about it, or it should work.
FWIW, depending on what initramfs creation script you're using and its
content, you should be able to tell whether the initramfs activated or
not.
Here, I /just/ /recently/ started using dracut, since it seems multi-
device btrfs as root doesn't work reliably otherwise, and that's what I'm
using as my rootfs now (btrfs raid1 mode on dual SSDs, I could only get
it to mount the dual-device btrfs raid1 in degraded mode, seeing only one
of the two devices, without the btrfs device scan in the initramfs, tho a
google says some people have it working, <shrug>).
But even booting to my more traditional reiserfs rootfs backups on the
"spinning rust", where booting from the initramfs isn't mandatory, I can
tell whether the initramfs was loaded or not by the boot-time console
output. Among other things, if the initramfs is loaded and run, then
/proc and /run are already loaded when the openrc service that would
normally mount them gets run, because the initramfs mounted them. But
apparently the initramfs mounts at least /run with different
permissions, so openrc mentions that it's changing permissions on /run
when it runs after the initramfs; when the initramfs hasn't run, openrc
simply mounts /run with the permissions it wants in the first place.
But unfortunately, I've not actually tried the noinitrd kernel commandline
option, so I can't VERIFY that it works here, with my now builtin
initramfs. I'll have to reboot to try that, and will try to get back to
you on that. (Note to self. Test the root=LABEL with initramfs-less
boot too, while I'm at it.)
If you're using a dracut-created initr*, then there's several other
helpful kernel commandline options that it hooks. See the
dracut.cmdline manpage for the full list, but rd.break and its
rd.break=<brkpoint> variants allow dropping to the initr*'s builtin shell
(AFAIK dash by default for dracut, but bash is an option... which I've
enabled) at various points, say right before or right after the initr*'s
udev runs, right before mounting the real rootfs, or right before the
final switchroot and start of the init on the real rootfs. If you're
using some other initr* creator, obviously you'd check its documentation
for similar options.
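For instance, a hypothetical grub1 entry based on Mark's, with the
breakpoint added (rd.break=pre-mount drops to the initr* shell just
before the real rootfs is mounted):

```
kernel (hd0,0)/boot/bzImage-3.8.13-gentoo root=LABEL=fastVM rd.break=pre-mount
```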
I know rd.break works here, as I tested it while I was figuring out how
to work this new-to-me initramfs thing. And it's obvious that I'm in the
initr*'s bash, because its shell prompt isn't anything like my customized
shell prompt.
Meanwhile, I DO NOT see "noraid" listed in kernel-parameters.txt, altho
that doesn't mean it doesn't (or didn't at some point) exist. I DO see a
couple raid-options, md= and raid=, however, with references to
Documentation/md.txt.
Based on the md.txt file, it appears raid=noautodetect is the option
you're looking for. This also matches my now slightly rusty recollection
from when I ran mdraid before. noraid didn't look quite right, but
raid=noautodetect looks much closer to what I remember.
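So a test entry for the SSD might look something like this
(hypothetical, adapted from the entries quoted above):

```
title fastVM 3.8.13-gentoo (SSD, raid autodetect off)
root (hd5,0)
kernel (hd0,0)/boot/bzImage-3.8.13-gentoo root=/dev/sda1 raid=noautodetect noinitrd video=vesafb vga=0x307
```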
(If you're using dracut-based initr*, there's a similar option for it,
rd.auto, rd.auto=(0|1), that defaults off with current versions of dracut,
according to the dracut.cmdline manpage. That governs autoassembly of
raid, lvm, etc. But since it already defaults off, unless you're running
an old version where that defaulted on, or have it as part of your
builtin commandline as configured in your kernel or something, that
shouldn't be your problem.)
> Does anyone here have any ideas? fdisk info follows at the end.
> Ask for anything else you want to see.
>
> If I can get to booting off the SSD then for the next few days I
> could build different RAIDs and do some performance testing.
Hmm... This didn't turn out to be so hard to reply to after all. Maybe
because I kept my initr* remarks to dracut-based, which is all I know
anyway...
Some other remarks...
FWIW, if you're running an md-raid rootfs, at least with gpt and a
dedicated bios partition, installing grub2 is easier than installing or
updating grub1, as well. I remember the pain it was to install grub1 to
each of the four drives composing my raid, back when I had that setup, in
particular, the pain of trying to be sure I was installing to the
physical drive I /thought/ I was installing to, while at the same time
ensuring it was actually pointed at the /boot on the same drive, not at
the /boot on a different drive, so that if that drive was the only one I
had left, I could still boot from it. The problem was that because I was
on mdraid, grub1 was detecting that and I had to specify the physical
device one way to tell it where to install stage1, and a different way to
tell it where to put stage2 in /boot.
With grub2, things were so much easier that I had trouble believing I'd
actually installed it already. But I rebooted and it worked just fine,
so I had. Same thing when I switched to the pair of ssds with btrfs in
raid1 mode as my rootfs. I installed to the first one... and thought
surely there was another step I had missed, but there it was. After
reboot to test, I installed to the second one, and rebooted to it (using
the boot selector in the BIOS) to test. All fine. =:^)
Of course part of that, again, is due to using gpt with the reserved bios
partition for grub to put its stage2 core in, quite apart from what it
puts in /boot. I suppose I'd have had a similar problem as I did with
grub1, if I was still using mbr or didn't have a reserved bios partition
in my gpt layout, and grub had to stick the stage2 core either in slack
space before the first partition (if there was room to do so), or in
/boot itself, and hope the filesystem didn't move things around afterward
(which reiserfs did do a couple of times to me back with grub1, tho it
wasn't usually a problem).
I glossed over what to do with non-dracut-based initr*, as I've not used
anything other than dracut and direct no-initr*, and dracut's only very
recently. However, I'd be quite surprised if others didn't have
something similar to dracuts rd.break options, etc, and you certainly
should be able to tell whether the initr* is running or not, based on the
early-boot console output. Of course, whether you're /familiar/ enough
with that output or not to tell what's initr* and what's not, is an
entirely different question, but if you know well what one looks like,
the other should be enough different that you can tell, if you look
closely.
From the initrd, it should be possible to mount something other than
the old raid as rootfs, and by that time, you'll have the kernel
populated /dev tree to work with as well as possibly the udev populated
disk/by-* trees, so finding the right one to mount shouldn't be an issue
-- no worries about kernel device order not matching grub device order,
because you're past grub and using the kernel already, by that point.
That was definitely one of the things I tested on my dracut-based initr*,
that from within the initr* (using for instance rd.break=pre-mount to
drop to a shell before the mount), I could find and mount backup root
filesystems, should it be necessary.
From within the initrd, you should be able to mount using label, uuid or
device, any of the three, provided of course that udev has populated the
disk/by-label and by-uuid trees, and I could certainly mount with either
label or device (I didn't try uuid), using my dracut-based initramfs,
here.
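From that shell the checks are just ordinary mount commands, something
like the following (label and device hypothetical, matching the entries
quoted earlier):

```
ls /dev/disk/by-label/       # confirm udev populated the tree
mount LABEL=fastVM /sysroot  # mount by label...
umount /sysroot
mount /dev/sda1 /sysroot     # ...or by device node
```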
So really, you shouldn't need the noinitrd option for that. Tho as long
as your selected rootfs doesn't /require/ an initr* to boot (as my multi-
device btrfs rootfs seems to here), you should be able to boot to it with
an appropriate kernel commandline root= option, with or without the
initr*.
> c2RAID6 ~ # fdisk -l
>
> Disk /dev/sda: 128.0 GB [snip]
>
> Device Boot Start End Blocks Id System
> /dev/sda1 2048 250069679 125033816 83 Linux
>
> Disk /dev/sdb: 500.1 GB
>
> Device Boot Start End Blocks Id System
> /dev/sdb1 * 63 112454 56196 83 Linux
> /dev/sdb2 112455 8514449 4200997+ 82 Linux swap
> /dev/sdb3 8594775 976773167 484089196+ fd Linux raid
>
> Disk /dev/sdc: 500.1 GB
>
> Device Boot Start End Blocks Id System
> /dev/sdc1 * 63 112454 56196 83 Linux
> /dev/sdc2 112455 8514449 4200997+ 82 Linux swap
> /dev/sdc3 8594775 976773167 484089196+ fd Linux raid
>
> Disk /dev/sde: 500.1 GB
>
> Device Boot Start End Blocks Id System
> /dev/sde1 2048 8594774 4296363+ 83 Linux
> /dev/sde3 8594775 976773167 484089196+ fd Linux raid
>
> Disk /dev/sdf: 500.1 GB
>
> Device Boot Start End Blocks Id System
> /dev/sdf1 2048 8594774 4296363+ 83 Linux
> /dev/sdf3 8595456 976773167 484088856 fd Linux raid
>
> Disk /dev/sdd: 500.1 GB
>
> Device Boot Start End Blocks Id System
> /dev/sdd1 * 63 112454 56196 83 Linux
> /dev/sdd2 112455 8514449 4200997+ 82 Linux swap
> /dev/sdd3 8594775 976773167 484089196+ fd Linux raid
>
> Disk /dev/md3: 1487.1 GB
>
> c2RAID6 ~ #
--
Duncan - List replies preferred. No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master." Richard Stallman
Thread overview: 10+ messages
2013-06-25 22:51 [gentoo-amd64] Can initrd and/or RAID be disabled at boot? Mark Knecht
2013-06-26 22:53 ` Bob Sanders
2013-06-27 13:40 ` Mark Knecht
2013-06-27 18:53 ` Duncan [this message]
2013-06-27 20:52 ` [gentoo-amd64] " Mark Knecht
2013-06-28 0:14 ` Duncan
2013-06-27 21:43 ` Duncan
2013-07-01 21:10 ` [gentoo-amd64] " Paul Hartman
2013-07-02 17:06 ` Mark Knecht
2013-07-03 1:47 ` [gentoo-amd64] " Duncan