From: Duncan <1i5t5.duncan@cox.net>
To: gentoo-amd64@lists.gentoo.org
Subject: [gentoo-amd64] Re: Is my RAID performance bad possibly due to starting sector value?
Date: Sat, 22 Jun 2013 15:45:06 +0000 (UTC)

Rich Freeman posted on Sat, 22 Jun 2013 07:12:25 -0400 as excerpted:

> Multiple-level redundancy just seems to be past the point of diminishing
> returns to me.  If I wanted to spend that kind of money I'd probably
> spend it differently.

My point was that for me, it wasn't multiple-level redundancy.  It was simply device redundancy (raid) and fat-finger redundancy (backups) on the same set of drives, so I was protected from either scenario.

The fire/flood scenario would certainly get me if I didn't have offsite backups, but just as you call multiple redundancy past your point of diminishing returns, I call the fire/flood scenario past mine.  If that happens, I figure I'll have far more important things to worry about than rebuilding my computer for awhile.  And chances are, when I do get around to it, things will have progressed enough that much of the data won't be worth so much any more anyway.

Besides, the real /important/ data is in my head.  What's worth rebuilding will be nearly as easy to rebuild from what's in my head as it would be to go thru what's now historical data, trying to pick up the pieces and sorting thru what's still worth keeping around and what's not.

Tho as I said, I do/did keep an additional level of backup on that 1 TB drive, but it's on-site too, and while not in the computer, it's generally nearby enough that it'd be lost too in case of flood/fire.
It's more a convenience than a real backup, and I don't really keep it up to date, but if it survived and what's in the computer itself didn't, I do have old copies of much of my data, simply because it's still there from the last time I used that drive as convenient temporary storage while I switched things around.

> However, I do agree that mdadm should support more flexible arrays.  For
> example, my boot partition is raid1 (since grub doesn't support anything
> else), and I have it set up across all 5 of my drives.  However, the
> reality is that only two get used and the others are treated only as
> spares.  So, that is just a waste of space, and it is actually more
> annoying from a config perspective because it would be really nice if my
> system could boot from an arbitrary drive.

Three points on that.

First, obviously you're not on grub2 yet.  It handles all sorts of raid, lvm, newer filesystems like btrfs (and zfs for those so inclined), and various other filesystems natively, thru its modules.

Second, /boot is an interesting case.  Here, originally (with grub1 and the raid6s across 4 drives) I set up a 4-drive raid1.  But I actually installed grub to the boot sector of all four drives, and tested each one booting just to grub by itself (the other drives off), so I knew it was using its own grub, not pointed somewhere else.

But I was still worried about it: while I could boot from any of the drives, they were a single raid1, which meant no fat-finger redundancy, and doing a usable backup of /boot isn't so easy.  So I think it was when I switched from raid6 to raid1 for almost the entire system that I switched to dual dual-drive raid1s for /boot as well, and of course tested booting to each one alone again, just to be sure.  That gave me fat-finger redundancy, as well as added convenience since I run git kernels: I could update just the one dual-drive raid1 /boot with the git kernels, then update the backup with the releases once they came out, which made for a nice division of stable kernel vs pre-release there.

That dual dual-drive raid1 setup proved very helpful when I upgraded to grub2 as well, since I was able to play around with it on the one dual-drive raid1 /boot while the other one stayed safely bootable grub1, until I had grub2 working the way I wanted on the working /boot, and had again installed and tested it on both component hard drives, booting to grub and to the full raid1 system from each drive by itself, with the others entirely shut off.  Only when I had both drives of the working /boot up and running grub2 did I mount the backup /boot as well, and copy over the now-working config to it, before running grub2-install on those two drives.

Of course somewhere along the way, IIRC at the same time as the raid6 to raid1 conversion, I had also upgraded to gpt partitions from traditional mbr.  When I did, I had the foresight to create BOTH dedicated BIOS boot partitions and EFI partitions on each of the four drives.  grub1 wasn't using them, but that was fine; they were tiny.  That made the upgrade to grub2 even easier, since grub2 could install its core into the dedicated BIOS boot partitions.  The EFI partitions remain unused to this day, but as I said, they're tiny, and with gpt they're specifically typed and labeled so they can't mix me up, either.

(BTW, talking about data integrity, if you're not on GPT yet, do consider it.  It keeps a second partition table at the end of the drive as well as the one at the beginning, and unlike mbr they're checksummed, so corruption is detected.  It also kills the primary/extended/logical distinction so no more worrying about that, and allows partition labels, much like filesystem labels, which makes tracking and managing what's what **FAR** easier.  I GPT-partition everything now, including my USB thumbdrives, if I partition them at all!)
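If anyone wants to reproduce that sort of layout, here's a rough sketch of the idea.  Device names, sizes and labels are only illustrative (not my actual setup), so adjust before running anything:

  # On each drive: GPT with a tiny BIOS boot partition (type ef02) for
  # grub2's core image, plus a small EFI system partition (type ef00)
  # kept around for later, both named so they can't be confused.
  sgdisk --new=1:0:+2M   --typecode=1:ef02 --change-name=1:biosboot /dev/sda
  sgdisk --new=2:0:+128M --typecode=2:ef00 --change-name=2:efi      /dev/sda

  # Install grub2 to every drive, so any single drive can boot on its own:
  for d in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do
      grub2-install "$d"
  done

With the BIOS boot partition typed ef02, grub2-install embeds its core image there on gpt drives, which is what makes the per-drive install work without relying on post-MBR gap space.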
When that machine slowly died and I transferred to a new half-TB drive, thinking the problem was the aging 300-gigs (it wasn't; caps were dying on the by-then 8-year-old mobo), and then transferred that into my new machine without raid, I did the usual working/backup partition arrangement, but got frustrated without the ability to have a backup /boot, because with just one device, the boot sector could point just one place: at the core grub2 in the dedicated BIOS boot partition, which in turn pointed at the usual /boot.  Now grub2 is better in this regard than grub1, since that core grub2 has a rescue mode that would give me limited ability to load a backup /boot, but that's an entirely manual process with a comparatively limited grub2 emergency shell and no additional modules available, and I didn't actually take advantage of it to configure a backup /boot it could reach.

But when I switched to the SSDs, I again had multiple devices, the pair of SSDs, which I set up with individual /boots, plus the original one still on the spinning rust.  Again I installed grub2 to each one, pointed at its own separately configured /boot, so now I actually have three separately configured and bootable /boots, one on each of the SSDs and a third on the spinning-rust half-TB.  (FWIW the four old 300-gigs are sitting on the shelf.  I need to badblocks or dd them to wipe them, and I have a friend that'll buy them off me.)

Third point.  /boot partition raid1 across all five drives and three are wasted?  How?  I believe if you check, all five will have a mirror of the data (not just two, unless it's btrfs raid1 rather than mdadm raid1, but btrfs is /entirely/ different in that regard).  Either they're all wasted but one, or none are wasted, depending on how you look at it.  Meanwhile, do look into installing grub on each drive, so you can boot from any of them.  I definitely know it's possible, as that's what I've been doing, tested, for quite some time.

> Oh, as far as raid on partitions goes - I do use this for a different
> purpose.  If you have a collection of drives of different sizes it can
> reduce space waste.  Suppose you have 3 500GB drives and 2 1TB drives.
> If you put them all directly in a raid5 you get 2TB of space.  If you
> chop the 1TB drives into 2 500GB partitions then you can get two raid5s
> - one 2TB in space, and the other 500GB in space.  That is 500GB more
> data for the same space.  Oh, and I realize I wrote raid5.  With mdadm
> you can set up a 2-drive raid5.  It is functionally equivalent to a
> raid1 I think,

You better check.  Unless I'm misinformed, which I could be as I've not looked at this in a while and both mdadm and the kernel have changed quite a bit since then, that'll be set up as a degraded raid5, which means if you lose one...

But I do know raid10 can be set up like that, on fewer drives than it'd normally take, with the mirrors in "far" mode I believe, and it just arranges the stripes as it needs to.  It's quite possible that they fixed it so raid5 works similarly and can do the same thing now, in which case that degraded thing I knew about is obsolete.
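If in doubt, something like the following should show what mdadm actually created, and sketches the two-device raid10 "far" variant I mean.  Device and array names are only examples, not anything from your box:

  # Show the level, layout and component state of an existing array:
  mdadm --detail /dev/md0
  cat /proc/mdstat

  # A two-device raid10 in "far 2" layout: mirrored like raid1, but laid
  # out so sequential reads can stripe across both drives:
  mdadm --create /dev/md1 --level=10 --layout=f2 --raid-devices=2 \
      /dev/sda2 /dev/sdb2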
But unless you know for sure, please do check.

> and I believe you can convert between them, but since I generally intend
> to expand arrays I prefer to just set them up as raid5 from the start.
> Since I stick lvm on top I don't care if the space is chopped up.

There's a lot of raid conversion ability in modern mdadm.  I think most levels can be converted between, given sufficient devices.  Again, a lot has changed in that regard since I set my originals up, I'd guess somewhere around 2008.

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman