Subject: Re: [gentoo-user] OT: btrfs raid 5/6
From: Wols Lists
To: gentoo-user@lists.gentoo.org
Date: Sat, 9 Dec 2017 18:28:20 +0000
Message-ID: <5A2C2B44.6060802@youngman.org.uk>
In-Reply-To: <1963563.zU3MYjX5FE@eve>

On 09/12/17 16:58, J. Roeleveld wrote:
> On Friday, December 8, 2017 12:48:45 AM CET Wols Lists wrote:
>> On 07/12/17 22:35, Frank Steinmetzger wrote:
>>>> (Oh - and md raid-5/6 also mix data and parity, so the same holds
>>>> true there.)
>>>
>>> Ok, wasn’t aware of that. I thought I read in a ZFS article that
>>> this was a special thing.
>>
>> Say you've got a four-drive raid-6, it'll be something like
>>
>> data1   data2   parity1 parity2
>> data3   parity3 parity4 data4
>> parity5 parity6 data5   data6
>>
>> The only thing to watch out for (and zfs is likely the same) is that
>> if a file fits inside a single chunk it will be recoverable from a
>> single drive. And I think chunks can be anything up to 64MB.
>
> Except that ZFS doesn't have fixed on-disk chunk sizes (especially if
> you use compression).
>
> See:
> https://www.delphix.com/blog/delphix-engineering/zfs-raidz-stripe-width-or-how-i-learned-stop-worrying-and-love-raidz

Which explains nothing, sorry ... :-( It goes on about 4K or 8K
database blocks (and I'm talking about 64 MEG chunk sizes).

And the OP was talking about files being recoverable from a disk that
was removed from an array. Are you telling me that a *small* file has
bits of it scattered across multiple drives? That would be *crazy*.

If I have a file of, say, 10MB, and write it to an md-raid array, there
is a good chance it will fit inside a single chunk and be written -
*whole* - to a single disk, with parity on another disk.

How big does a file have to be on ZFS before it is too big to fit in a
typical chunk, so that it gets split up across multiple drives? THAT is
what I was on about, and that is what concerned the OP.
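To make that concrete, here's a toy sketch in Python. The parity
rotation matches my little table above - it's my own illustration, not
necessarily md's exact default layout, and the 64MB chunk is just the
upper end I mentioned:

    CHUNK  = 64 * 1024 * 1024   # 64MB chunk - the upper end mentioned above
    NDISKS = 4
    NDATA  = NDISKS - 2         # raid-6 spends two chunks per stripe on parity

    def data_disk(offset):
        """Which disk holds the data chunk containing this byte offset?
        Toy rotating-parity layout matching the table above."""
        chunk_no = offset // CHUNK           # nth data chunk, array-wide
        stripe   = chunk_no // NDATA         # stripe that chunk lives in
        slot     = chunk_no % NDATA          # data slot within that stripe
        p1 = (NDISKS - 2 - stripe) % NDISKS  # first parity disk this stripe
        p2 = (p1 + 1) % NDISKS               # second parity disk
        data_disks = [d for d in range(NDISKS) if d not in (p1, p2)]
        return data_disks[slot]

    # A 10MB file written at the start of a chunk never leaves one disk:
    print(data_disk(0))                      # disk 0
    print(data_disk(10 * 1024 * 1024 - 1))   # disk 0 again - same disk

Both ends of the file land on the same disk: the whole file on one
spindle, parity elsewhere, which is exactly my point.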
I was just warning the OP that a chunk is typically rather more than
just one disk block, so anybody harking back to the days of 512-byte
sectors could get a nasty surprise ...
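And just to put numbers on that (Python again; 512KB is, I believe,
md's default chunk these days, so treat that figure as my assumption):

    SECTOR = 512
    for chunk in (512 * 1024, 64 * 1024 * 1024):
        print(chunk // 1024, "KB chunk =", chunk // SECTOR, "sectors")
    # prints: 512 KB chunk = 1024 sectors
    #         65536 KB chunk = 131072 sectors

A single chunk holds anywhere from a thousand to over a hundred
thousand of those old sectors, so striping happens at a far coarser
grain than sector-era intuition suggests.

Cheers,
Wol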