From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 21 Jun 2013 06:28:35 -0400
Subject: Re: [gentoo-amd64] Re: Is my RAID performance bad possibly due to starting sector value?
From: Rich Freeman
To: gentoo-amd64@lists.gentoo.org

On Fri, Jun 21, 2013 at 3:31 AM, Duncan <1i5t5.duncan@cox.net> wrote:
> So with 4k block sizes on a 5-device raid6, you'd have 20k stripes, 12k
> in data across three devices, and 8k of parity across the other two
> devices.

With mdadm on a 5-device raid6 with 512K chunks you have 1.5M of data in
a stripe, not 20k. If you modify one block it needs to read the whole
1.5M, or it needs to read at least the old chunk on the single drive to
be modified plus both old parity chunks (which on such a small array is
3 disks either way).

> Fourth, back to the parity. Remember, raid5/6 has all that parity that
> it writes out (but basically never reads in normal mode, only when
> degraded, in order to reconstruct the data from the missing device(s)),
> but doesn't actually use it for integrity checking.

I wasn't aware of this - I can't believe it isn't even an option. Note
to self - start doing weekly scrubs...

> The single downside to raid1 as opposed to raid5/6 is the loss of the
> extra space made available by the data striping, 3*single-device-space
> in the case of 5-way raid6 (or 4-way raid5) vs. 1*single-device-space
> in the case of raid1. Otherwise, no contest, hands down, raid1 over
> raid6.

This is a HUGE downside. The only downside to raid1 over not having RAID
at all is that your disk space cost doubles; raid5/6 is considerably
cheaper in that regard. In a 5-disk raid5 the redundancy costs only 25%
extra, vs. a 100% additional cost for raid1. To get the same usable
space as a 5-disk raid5 you'd need 8 disks. Sure, read performance would
be vastly superior, but if you're going to spend $300 more on hard
drives and whatever it takes to get that many SATA ports on your system,
you could instead add an extra 32GB of RAM or put your OS on a mirrored
SSD.
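[The space arithmetic here can be sketched quickly. A minimal illustration, not from the original mail; the 4TB disk size is an arbitrary assumption:]

```python
def usable_and_overhead(n_disks, level, disk_tb=4):
    """Return (usable capacity in TB, redundancy cost as a fraction of usable)."""
    if level == "raid5":
        usable = (n_disks - 1) * disk_tb      # one disk's worth of parity
    elif level == "raid6":
        usable = (n_disks - 2) * disk_tb      # two disks' worth of parity
    elif level == "raid1":
        usable = n_disks * disk_tb / 2        # everything is stored twice
    else:
        raise ValueError(level)
    overhead = (n_disks * disk_tb - usable) / usable
    return usable, overhead

print(usable_and_overhead(5, "raid5"))  # 16 TB usable, 25% overhead
print(usable_and_overhead(8, "raid1"))  # 16 TB usable, 100% overhead
```

This reproduces the figures in the mail: 25% redundancy cost for a 5-disk raid5, and 8 mirrored disks needed to match its usable space.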
I suspect that on a typical workload both of those options are going to
make a far bigger improvement in performance. Which is better really
depends on your workload. In my case much of my RAID space is used by
mythtv, or for storage of stuff I only occasionally use. In these use
cases the performance of the raid5 is more than adequate, and I'd rather
be able to keep shows around for an extra 6 months in HD than have the
DVR respond a millisecond faster when I hit play. If you really have
sustained random access across the bulk of your data, then a raid1 would
make much more sense.

> So several points on btrfs:
>
> 1) It's still in heavy development.

That is what is keeping me away. I won't touch it until I can use it
with raid5, and the first commit containing that hit the kernel only
weeks ago, I think (and it has known gaps). Until it is stable I'm
sticking with my current setup.

> 2) RAID levels work QUITE a bit differently on btrfs. In particular,
> what btrfs calls raid1 mode (with the same applying to raid10) is
> simply two-way mirroring, NO MATTER THE NUMBER OF DEVICES. There's no
> multi-way mirroring yet available.

Odd, for some reason I thought it let you specify arbitrary numbers of
copies, but looking around I think you're right. It does store two
copies of metadata regardless of the number of drives unless you
override this. However, if one considers raid1 expensive, having
multiple layers of redundancy is REALLY expensive if you aren't using
Reed-Solomon and many data disks.

From my standpoint I don't think raid1 is the best use of money in most
cases, either for performance OR for data security. If you want
performance the money is probably better spent on other components. If
you want data security the money is probably better spent on offline
backups. However, this very much depends on how the disks will be used -
there are certainly cases where raid1 is your best option.

Rich
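[The stripe arithmetic from the top of the thread (5-device raid6, 512K chunks) works out as follows. A rough sketch, not from the original mail, assuming raid6 reserves two chunks per stripe for the P and Q parity:]

```python
def raid6_stripe(n_devices, chunk_kib=512):
    """Data per stripe, and the disks read when updating a single chunk."""
    data_chunks = n_devices - 2               # two chunks per stripe hold parity
    stripe_data_kib = data_chunks * chunk_kib
    # Read-modify-write reads the old data chunk plus both old parity chunks:
    rmw_reads = 3
    # Full-stripe reconstruction instead reads the untouched data chunks:
    reconstruct_reads = data_chunks - 1
    return stripe_data_kib, rmw_reads, reconstruct_reads

print(raid6_stripe(5))  # 1536 KiB (1.5M) of data per stripe, 3 vs 2 disks read
```

With 5 devices either strategy touches roughly the same number of disks, which is why the mail calls it "3 disks either way" on such a small array.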