From: Rich Freeman
Date: Fri, 1 Dec 2017 12:14:12 -0500
Subject: Re: [gentoo-user] OT: btrfs raid 5/6
To: gentoo-user@lists.gentoo.org

On Fri, Dec 1, 2017 at 11:58 AM, Wols Lists wrote:
> On 27/11/17 22:30, Bill Kenworthy wrote:
>> Hi all,
>> I need to expand two bcache-fronted 4-disk btrfs raid 10s. This
>> requires purchasing 4 drives (and one system does not have room for
>> two more drives), so I am trying to see if using raid 5 is an option.
>>
>> I have been trying to find out whether btrfs raid 5/6 is stable
>> enough to use, but while there is mention of improvements in kernel
>> 4.12 and fixes for the write-hole problem, I can't see any reports
>> that it's "working fine now", though there is a Phoronix article
>> saying Oracle has been using it since the fixes.
>>
>> Is anyone here successfully using btrfs raid 5/6? What is the status
>> of scrub and self-healing? The btrfs wiki is woefully out of date :(
>>
> Or put btrfs over md-raid?
>
> Thing is, with raid-6 over four drives, you have a 100% certainty of
> surviving a two-disk failure. With raid-10 you have a 33% chance of
> losing your array.
>

I tend to be a fan of parity raid in general for these reasons. (With
two mirrored pairs, a second failure is fatal exactly when it hits the
surviving partner of the already-degraded pair - one drive out of the
three remaining, hence the 33%.) I'm not sure the performance gains
with raid-10 are enough to warrant the waste of space.

With btrfs, though, I don't really see the point of "raid-10" vs just
a pile of individual disks in raid1 mode. Btrfs will do a so-so job of
balancing the IO across them already (nobody has really bothered to
optimize this yet).
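To make the pile-of-disks idea concrete, it is roughly this (just a
sketch - the device names are placeholders, so adjust for your
hardware):

  # One filesystem across three whole disks, with data and metadata
  # both in raid1 mode; every chunk is mirrored on two devices, and
  # the device count does not need to be even.
  mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc /dev/sdd

  # Growing it later is one drive at a time, not a matched set:
  mount /dev/sdb /mnt/pool
  btrfs device add /dev/sde /mnt/pool
  btrfs balance start /mnt/pool

That incremental growth is the practical win over a conventional
raid-10 in Bill's situation: no need to buy four drives at once.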
That said, I've moved away from btrfs entirely until they sort things
out, and I would not use btrfs for raid-5/6 under any circumstances.
That mode has NEVER been stable, and if anything it has gone
backwards. I'm sure they'll sort it out sometime, but I have no idea
when.

RAID-1 on btrfs is reasonably stable, but I've still run into issues
with it. Nothing ever kept me from reading the data off the array, but
by the time I finally migrated it to ZFS it would no longer run in
anything other than degraded mode.

You could run btrfs over md-raid, but other than the snapshots I think
this loses a lot of the benefit of btrfs in the first place. You are
still vulnerable to the write hole, btrfs can detect soft errors via
its checksums but can no longer repair them (it has no redundant copy
of its own to heal from), and you're potentially faced with extra
read-modify-write cycles when only part of a raid stripe changes.

Both zfs and btrfs were really designed to work best on raw block
devices with no layers underneath. They still work over md, of course,
but you lose some of those optimizations, because the filesystem has
no visibility into what is happening at the disk level.

-- 
Rich
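P.S. For reference, the layered setup Wol suggests would look
something like this (again only a sketch, with placeholder device
names):

  # md provides the raid-6 layer: any two of the four drives can
  # fail, at the cost of two drives' worth of capacity.
  mdadm --create /dev/md0 --level=6 --raid-devices=4 \
        /dev/sdb /dev/sdc /dev/sdd /dev/sde

  # btrfs then sits on top as an ordinary single-device filesystem.
  mkfs.btrfs /dev/md0
  mount /dev/md0 /mnt/pool

This is exactly the trade I described above: btrfs checksums will
still catch corruption coming up through the md layer, but with only
one copy of the data there is nothing to repair from.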