From mboxrd@z Thu Jan 1 00:00:00 1970
From: Rich Freeman
Date: Sat, 9 Dec 2017 18:36:45 -0500
Subject: Re: [gentoo-user] OT: btrfs raid 5/6
To: gentoo-user@lists.gentoo.org
In-Reply-To: <5A2C2B44.6060802@youngman.org.uk>
References: <20171207223545.GC18433@tp> <5A29D35D.1040901@youngman.org.uk>
 <1963563.zU3MYjX5FE@eve> <5A2C2B44.6060802@youngman.org.uk>
Content-Type: text/plain; charset="UTF-8"

On Sat, Dec 9, 2017 at 1:28 PM, Wols Lists wrote:
> On 09/12/17 16:58, J. Roeleveld wrote:
>> On Friday, December 8, 2017 12:48:45 AM CET Wols Lists wrote:
>>> On 07/12/17 22:35, Frank Steinmetzger wrote:
>>>>> (Oh - and md raid-5/6 also mix data and parity, so the same holds true
>>>>>
>>>>>> there.)
>>>>
>>>> Ok, wasn’t aware of that. I thought I read in a ZFS article that this
>>>> were a special thing.
>>>
>>> Say you've got a four-drive raid-6, it'll be something like
>>>
>>> data1   data2   parity1 parity2
>>> data3   parity3 parity4 data4
>>> parity5 parity6 data5   data6
>>>
>>> The only thing to watch out for (and zfs is likely the same) is that if
>>> a file fits inside a single chunk it will be recoverable from a single
>>> drive. And I think chunks can be anything up to 64MB.
>>
>> Except that ZFS doesn't have fixed on-disk chunk sizes (especially if
>> you use compression).
>>
>> See:
>> https://www.delphix.com/blog/delphix-engineering/zfs-raidz-stripe-width-or-how-i-learned-stop-worrying-and-love-raidz
>>
> Which explains nothing, sorry ... :-(
>
> It goes on about 4K or 8K database blocks (and I'm talking about 64 MEG
> chunk sizes). And the OP was talking about files being recoverable from
> a disk that was removed from an array. Are you telling me that a *small*
> file has bits of it scattered across multiple drives? That would be *crazy*.

I'm not sure why it would be "crazy." Granted, most parity RAID systems
seem to operate just as you describe, but I don't see why with Reed-Solomon
you couldn't store ONLY parity data on all the drives. All that matters is
that you generate enough parity to recover the data - the original data
contains no more information than an equivalent number of Reed-Solomon
sets. Of course, with the original data I imagine you need to do less
computation, assuming you aren't bothering to check its integrity against
the parity data.

In case my point isn't clear: a RAID would work perfectly fine if you had
5 drives with the capacity to store 4 drives' worth of data, but instead
of storing the original data across 4 drives and having 1 of parity, you
computed 5 sets of parity, so that you now have 9 sets of data that can
tolerate the loss of any 5. Then throw away the sets containing the
original 4 sets of data and store the remaining 5 sets of parity data
across the 5 drives.
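That construction can be sketched in a few lines of Python. This is a toy,
non-systematic Reed-Solomon code over the prime field GF(257) (a field
chosen purely because it makes the arithmetic easy to read); all the names
and parameters here are my own illustration, not how any real RAID
implementation lays things out:

```python
# Toy sketch of the scheme above: 4 data symbols define a degree-3
# polynomial over GF(257); any 4 of its evaluations recover the data.
# So generate 9 evaluations ("sets"), throw away the 4 that would
# normally be the systematic data, and keep 5 parity-only shares.
P = 257  # a prime, so every byte value 0..255 is a field element

def eval_poly(coeffs, x):
    """Evaluate the data polynomial at x using Horner's rule."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def encode(data, n_shares):
    """Return n_shares (x, y) evaluations of the data polynomial."""
    return [(x, eval_poly(data, x)) for x in range(1, n_shares + 1)]

def mul_linear(poly, x0):
    """Multiply a polynomial (coefficient list) by (x - x0) mod P."""
    out = [0] * (len(poly) + 1)
    for k, c in enumerate(poly):
        out[k] = (out[k] - x0 * c) % P
        out[k + 1] = (out[k + 1] + c) % P
    return out

def decode(shares, n_data):
    """Lagrange-interpolate the coefficients from any n_data shares."""
    shares = shares[:n_data]
    coeffs = [0] * n_data
    for i, (xi, yi) in enumerate(shares):
        basis, denom = [1], 1
        for j, (xj, _) in enumerate(shares):
            if j != i:
                basis = mul_linear(basis, xj)   # prod_{j!=i} (x - xj)
                denom = denom * (xi - xj) % P
        scale = yi * pow(denom, P - 2, P) % P   # Fermat inverse of denom
        for k in range(n_data):
            coeffs[k] = (coeffs[k] + scale * basis[k]) % P
    return coeffs

data = [10, 20, 30, 40]      # "4 drives' worth" of data symbols
shares = encode(data, 9)     # 9 sets: any 4 of them recover the data
parity_only = shares[4:]     # toss the first 4, keep 5 parity shares
survivors = parity_only[1:]  # lose any one of the 5 remaining drives
print(decode(survivors, 4))  # -> [10, 20, 30, 40]
```

The catch, and presumably why real arrays don't do this, is that a
systematic layout lets you service reads straight off the data chunks,
while a parity-only layout makes *every* read an interpolation.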
You can still tolerate the loss of one more set, but all 4 of the
original sets of data have been tossed already.

--
Rich