From: Rich Freeman
To: gentoo-amd64@lists.gentoo.org
Date: Fri, 21 Jun 2013 11:13:51 -0400
Subject: Re: [gentoo-amd64] Re: Is my RAID performance bad possibly due to starting sector value?

On Fri, Jun 21, 2013 at 10:27 AM, Duncan <1i5t5.duncan@cox.net> wrote:
> Rich Freeman posted on Fri, 21 Jun 2013 06:28:35 -0400 as excerpted:
>
>> That is what is keeping me away.  I won't touch it until I can use it
>> with raid5, and the first commit containing that hit the kernel weeks
>> ago I think (and it has known gaps).  Until it is stable I'm sticking
>> with my current setup.
>
> Question:  Would you use it for raid1 yet, as I'm doing?  What about as
> a single-device filesystem?  Do you consider my estimates of reliability
> in those cases (almost but not quite stable for single-device, somewhere
> in the middle for raid1/raid0/raid10, say a year behind single-device,
> and raid5/6/50/60 about a year behind that) reasonably accurate?

If I wanted to use raid1 I might consider using btrfs now.  I think it
is still a bit risky, but the established use cases have gotten a fair
bit of testing by now.  I'd be more confident using it with a single
device.

> Because if you're waiting until btrfs raid5 is fully stable, that's
> likely to be some wait yet -- I'd say a year, likely more, given that
> everything btrfs has seemed to take longer than people expected.

That's my thought as well.  Right now I'm not running out of space, so
I'm hoping I can wait until the next time I need to migrate my data
(from 1TB to 5+TB drives, for example).  In that scenario I don't need
to have 10 drives mounted at once to migrate the data - I can copy the
existing data onto 1-2 new drives, remove the old ones, and expand the
new array as drives free up.  Migrating today would mean finding
somewhere to dump all the data offline and then converting the drives,
since there is no in-place way to turn multiple ext3/4 logical volumes
on top of mdadm into a single btrfs on bare metal.
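To make that concrete, here is roughly the staged migration I have in
mind, written up as a rough sketch only - the device names, mount
points, and volume names are made up, and I haven't tested this as a
procedure:

#!/usr/bin/env python3
# Rough sketch of a staged migration (NOT a tested procedure): build a
# small btrfs raid1 on two new drives, copy one old ext3/4 logical
# volume over at a time, then grow the btrfs pool with whatever drives
# that frees up.  /dev/sdX, /dev/sdY, /dev/sdZ and the mount points are
# placeholders.
import subprocess

def run(cmd):
    print("+ " + " ".join(cmd))
    subprocess.check_call(cmd)

# 1. Two-device btrfs with raid1 data and metadata on the first new pair.
run(["mkfs.btrfs", "-d", "raid1", "-m", "raid1", "/dev/sdX", "/dev/sdY"])
run(["mount", "/dev/sdX", "/mnt/newpool"])

# 2. Copy one old volume at a time, verify it, then retire the old
#    drive(s) backing it.
run(["rsync", "-aHAX", "/mnt/oldvol/", "/mnt/newpool/oldvol/"])

# 3. Each freed-up drive joins the pool; a rebalance spreads the raid1
#    chunks across the new members.
run(["btrfs", "device", "add", "/dev/sdZ", "/mnt/newpool"])
run(["btrfs", "balance", "start", "/mnt/newpool"])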
Without replying to anything in particular: both you and Bob have
mentioned the importance of multiple redundancy.  Obviously risk goes
down as redundancy goes up.  If you protect 25 drives of data with 1
drive of parity, then only 2 of the 26 drives need to fail to hose 25
drives of data.  If you protect 1 drive of data with 25 drives of
redundancy (call them mirrors or parity or whatever - they're
functionally equivalent), then all 26 drives have to fail before you
lose that 1 drive of data.  RAID 1 is actually less effective - if you
protect 13 drives of data with 13 mirrors, 2 of the 26 drives failing
can still lose a drive of data (they just have to be the wrong 2).
There's a quick sanity check of these counts in the P.S. below.

However, you do need to consider that RAID is not the only way to
protect data, and I'm not sure that multiple-redundancy raid1 is the
most cost-effective strategy.  If I had 2 drives of data to protect and
4 spare drives to do it with, I doubt I'd set up a 3-way raid1/5/10
(or whatever you want to call it - imho raid "levels" are poorly named,
since there is really just striping, mirroring, and adding RS parity,
and everything else is a combination of those).  Instead I'd probably
set up a RAID1/5/10/whatever with single redundancy for faster storage
and recovery, plus an offline backup (compressed, with incrementals,
etc.).  The backup gets you more security, and you only need it in a
very unlikely double failure.

I'd only invest in multiple redundancy if the risk-weighted cost of
having the node go down exceeded the cost of the extra drives.
Frankly, in that case RAID still isn't the right solution - you need a
backup node somewhere else entirely, since hard drives aren't the only
thing that can break in your server.

This sort of rationale is why I don't like arguments like "RAM is
cheap" or "HDs are cheap" or whatever.  The fact is that wasting money
on any component means investing less in some other component that
could give you more space/performance/whatever-makes-you-happy.  If you
have $1000 that you can afford to blow on extra drives, then you have
$1000 you could blow on RAM, CPU, an extra server, or a trip to Disney.
Why not blow it on something useful?

Rich
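P.S.  In case anyone wants to check my arithmetic, here is a trivial
back-of-the-envelope script for the hypothetical 26-drive layouts
above.  It just counts drives - it ignores rebuild windows, correlated
failures, and everything else that matters in practice:

def min_failures_to_lose_data(data, redundancy, paired_mirrors=False):
    """Smallest number of failed drives that CAN cause data loss.

    A single erasure-coded group (striping plus parity, or one big
    N-way mirror) survives up to `redundancy` failures, so loss first
    becomes possible at redundancy + 1.  With raid1/raid10-style pairs,
    losing both members of any one pair is enough, i.e. 2, no matter
    how many pairs there are.
    """
    return 2 if paired_mirrors else redundancy + 1

print(min_failures_to_lose_data(25, 1))          # 25 data + 1 parity  -> 2
print(min_failures_to_lose_data(1, 25))          # 1 data + 25 copies  -> 26
print(min_failures_to_lose_data(13, 13, True))   # 13 mirrored pairs   -> 2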