From: Duncan <1i5t5.duncan@cox.net>
To: gentoo-amd64@lists.gentoo.org
Subject: [gentoo-amd64] Re: Is my RAID performance bad possibly due to starting sector value?
Date: Sat, 22 Jun 2013 15:45:06 +0000 (UTC)
Message-ID: <pan$7c60a$34f695e$f0595300$b5612b1@cox.net>
In-Reply-To: CAGfcS_ntPz6sfirRbDmqWge4dznr29CvWNsS7wx9RarcHFybcw@mail.gmail.com
Rich Freeman posted on Sat, 22 Jun 2013 07:12:25 -0400 as excerpted:
> Multiple-level redundancy just seems to be past the point of diminishing
> returns to me. If I wanted to spend that kind of money I'd probably
> spend it differently.
My point was that for me, it wasn't multiple level redundancy. It was
simply device redundancy (raid), and fat-finger redundancy (backups), on
the same set of drives so I was protected from either scenario.
The fire/flood scenario would certainly get me if I didn't have offsite
backups, but just as you call multiple redundancy past your point of
diminishing returns, I call the fire/flood scenario past mine. If that
happens, I figure I'll have far more important things to worry about than
rebuilding my computer for a while. And chances are, when I do get around
to it, things will have progressed enough that much of the data won't be
worth so much any more anyway. Besides, the real /important/ data is in
my head. What's worth rebuilding will be nearly as easy to rebuild from
what's in my head as it would be to go through what's now historical
data and try to pick up the pieces, sorting through what's still worth
keeping around and what's not.
Though as I said, I do/did keep an additional level of backup on that
1 TB drive, but it's on-site too, and while not in the computer, it's
generally nearby enough that it'd be lost too in case of flood/fire.
It's more a convenience than a real backup, and I don't really keep it
up to date, but if it survived and what's in the computer itself didn't,
I do have old copies of much of my data, simply because it's still there
from the last time I used that drive as convenient temporary storage
while I switched things around.
> However, I do agree that mdadm should support more flexible arrays. For
> example, my boot partition is raid1 (since grub doesn't support anything
> else), and I have it set up across all 5 of my drives. However, the
> reality is that only two get used and the others are treated only as
> spares. So, that is just a waste of space, and it is actually more
> annoying from a config perspective because it would be really nice if my
> system could boot from an arbitrary drive.
Three points on that. First, obviously you're not on grub2 yet. It
handles all sorts of raid, lvm, and newer filesystems like btrfs (and
zfs for those so inclined) natively, through its modules.
Second, /boot is an interesting case. Here, originally (with grub1 and
the raid6s across 4 drives) I set up a 4-drive raid1. But I actually
installed grub to the boot sector of all four drives, and tested booting
each one to grub by itself (with the other drives off), so I knew it was
using its own grub, not pointing somewhere else.
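For reference, the grub1 procedure was roughly the following, run once
per raid1 member (the device and partition names are only placeholders
for your own):

  grub                            # start the grub legacy shell
  grub> device (hd0) /dev/sdb     # pretend this drive is the first BIOS disk
  grub> root (hd0,0)              # the partition holding /boot on that drive
  grub> setup (hd0)               # write stage1 to that drive's boot sector
  grub> quit

That way each drive's boot sector points at its own copy of grub's
stages rather than all of them pointing at one drive.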
But I was still worried about it as while I could boot from any of the
drives, they were a single raid1, which meant no fat-finger redundancy,
and doing a usable backup of /boot isn't so easy.
So I think it was when I switched from raid6 to raid1 for almost the
entire system that I also switched to dual dual-drive raid1s for /boot,
and of course tested booting to each one alone again, just to be
sure. That gave me fat-finger redundancy, as well as added convenience
since I run git kernels, and I was able to update just the one dual-drive
raid1 /boot with the git kernels, then update the backup with the
releases once they came out, which made for a nice division of stable
kernel vs pre-release there.
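If you wanted to reproduce that sort of layout, it'd look something like
this (just a sketch; the device names and md numbers are only examples):

  # working /boot, a 2-drive raid1
  mdadm --create /dev/md1 --level=1 --raid-devices=2 \
        --metadata=1.0 /dev/sda1 /dev/sdb1
  # backup /boot, another 2-drive raid1 on the other pair
  mdadm --create /dev/md2 --level=1 --raid-devices=2 \
        --metadata=1.0 /dev/sdc1 /dev/sdd1

The --metadata=1.0 (or 0.90) keeps the raid superblock at the end of the
partition, so a bootloader that doesn't understand mdraid still sees a
plain filesystem at the start of each member.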
That dual dual-drive raid1 setup proved very helpful when I upgraded to
grub2 as well, since I was able to play around with grub2 on the one
dual-drive raid1 /boot while the other one stayed safely bootable grub1.
I kept it that way until I had grub2 working the way I wanted on the
working /boot, and had again installed and tested it on both component
hard drives, booting to grub and on into the full raid1 system from each
drive by itself, with the others entirely shut off.
Only when I had both drives of the working /boot up and running grub2
did I mount the backup /boot as well, and copy over the now-working
config to it, before running grub2-install on those two drives.
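Again as a sketch only (on Gentoo at the time the commands carried the
grub2- prefix; adjust if yours are plain grub-install/grub-mkconfig, and
the device names are just examples):

  grub2-install /dev/sda
  grub2-install /dev/sdb
  grub2-mkconfig -o /boot/grub/grub.cfg

Run against each component drive, that puts an independent copy of the
grub2 core on every drive, so each one can bring up grub2 on its own.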
Of course somewhere along the way, IIRC at the same time as the raid6 to
raid1 conversion, I had also upgraded to GPT partitions from traditional
MBR. When I did, I had the foresight to create BOTH dedicated BIOS boot
partitions and EFI partitions on each of the four drives.
grub1 wasn't using them, but that was fine; they were small (tiny). That
made the upgrade to grub2 even easier, since grub2 could install its core
into the dedicated BIOS boot partitions. The EFI partitions remain
unused to this day, but as I said, they're tiny, and with GPT they're
specifically typed and labeled so they can't mix me up, either.
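For anyone setting up similarly, the sgdisk tool from sys-apps/gptfdisk
can create both in one go; just a sketch, with the sizes, partition
numbers and device name as placeholder examples:

  sgdisk -n 1:0:+2M  -t 1:ef02 -c 1:"bios-boot" /dev/sda
  sgdisk -n 2:0:+64M -t 2:ef00 -c 2:"efi-sys"   /dev/sda

Type ef02 is the BIOS boot partition grub2 embeds its core into, ef00 is
the EFI system partition, and -c sets the GPT partition label.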
(BTW, talking about data integrity, if you're not on GPT yet, do consider
it. It keeps a second partition table at the end of the drive as well as
the one at the beginning, and unlike MBR they're checksummed, so
corruption is detected. It also kills the primary/extended/logical
distinction, so no more worrying about that, and allows partition labels,
much like filesystem labels, which makes tracking and managing what's
what **FAR** easier. I GPT-partition everything now, including my USB
thumbdrives if I partition them at all!)
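If you want to see that in action, sgdisk can both print the table with
its labels and verify the two table copies against each other (the
device name is again just an example):

  sgdisk -p /dev/sda    # print the table, including partition names
  sgdisk -v /dev/sda    # verify; complains if main/backup headers disagree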
When that machine slowly died and I transferred to a new half-TB drive
thinking the problem was the aging 300-gigs (it wasn't; caps were dying
on the by-then 8-year-old mobo), and then transferred that into my new
machine without raid, I did the usual working/backup partition
arrangement, but got frustrated without the ability to have a backup
/boot, because with just one device, the boot sector could point just
one place: at the core grub2 in the dedicated BIOS boot partition, which
in turn pointed at the usual /boot. Now grub2's better in this regard
than grub1, since that core grub2 has an emergency mode that would give
me a limited ability to load a backup /boot, but it's an entirely manual
process in a comparatively limited grub2 emergency shell with no
additional modules available, and I didn't actually take advantage of it
to configure a backup /boot that the core could reach.
But when I switched to the SSDs, I again had multiple devices: the pair
of SSDs, which I set up with individual /boots, plus the original one
still on the spinning rust. Again I installed grub2 to each one, pointed at
its own separately configured /boot, so now I actually have three
separately configured and bootable /boots, one on each of the SSDs and a
third on the spinning rust half-TB.
(FWIW the four old 300-gigs are sitting on the shelf. I need to badblocks
or dd them to wipe them, and I have a friend who'll buy them off me.)
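Either of these would do it (a sketch only; the device name is a
placeholder, so triple-check it before running anything destructive):

  badblocks -wsv /dev/sdX        # destructive write-mode test, wipes as it goes
  dd if=/dev/zero of=/dev/sdX bs=4M   # or simply overwrite with zeros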
Third point. A /boot partition raid1 across all five drives, and three
are wasted? How? I believe if you check, all five will have a mirror of
the data (not just two, unless it's btrfs raid1 rather than mdadm raid1,
but btrfs is /entirely/ different in that regard). Either all but one are
wasted, or none are, depending on how you look at it.
Meanwhile, do look into installing grub on each drive, so you can boot
from any of them. I definitely know it's possible, as that's what I've
been doing, and testing, for quite some time.
> Oh, as far as raid on partitions goes - I do use this for a different
> purpose. If you have a collection of drives of different sizes it can
> reduce space waste. Suppose you have 3 500GB drives and 2 1TB drives.
> If you put them all directly in a raid5 you get 2TB of space. If you
> chop the 1TB drives into 2 500GB partitions then you can get two raid5s
> - one 2TB in space, and the other 500GB in space. That is 500GB more
> data for the same space. Oh, and I realize I wrote raid5. With mdadm
> you can set up a 2-drive raid5. It is functionally equivalent to a
> raid1 I think,
You'd better check. Unless I'm misinformed, which I could be as I've not
looked at this in a while and both mdadm and the kernel have changed
quite a bit since then, that'll be set up as a degraded raid5, which
means if you lose one...
But I do know raid10 can be set up like that, on fewer drives than it'd
normally take, with the mirrors in "far" mode I believe, and it just
arranges the stripes as it needs to. It's quite possible that they fixed
it so raid5 works similarly and can do the same thing now, in which case
that degraded thing I knew about is obsolete. But unless you know for
sure, please do check.
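Checking is easy enough; something like this (device names and md
numbers are placeholders, as usual):

  # a 2-device raid10 with the "far 2" layout
  mdadm --create /dev/md5 --level=10 --layout=f2 --raid-devices=2 \
        /dev/sdd2 /dev/sde2
  # and to see whether any existing array is complete or degraded
  mdadm --detail /dev/md5
  cat /proc/mdstat

If --detail reports the array state as clean/active with no failed or
missing devices, it's not degraded.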
> and I believe you can convert between them, but since I generally intend
> to expand arrays I prefer to just set them up as raid5 from the start.
> Since I stick lvm on top I don't care if the space is chopped up.
There's a lot of raid conversion ability in modern mdadm. I think most
levels can be converted between one another, given sufficient devices.
Again, a lot has changed in that regard since I set my originals up,
which I'd guess was somewhere around 2008.
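As a sketch of what that looks like (check the mdadm man page for your
version; the array and device names here are only examples), converting
a 2-drive raid1 to raid5 and then growing it onto a third drive would be
roughly:

  mdadm --grow /dev/md3 --level=5
  mdadm --add  /dev/md3 /dev/sdf2
  mdadm --grow /dev/md3 --raid-devices=3

The reshape runs in the background; /proc/mdstat shows its progress.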
--
Duncan - List replies preferred. No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master." Richard Stallman