Subject: Re: [gentoo-amd64] Re: Is my RAID performance bad possibly due to starting sector value?
From: Mark Knecht
To: Gentoo AMD64
Date: Fri, 21 Jun 2013 11:38:00 -0700

On Fri, Jun 21, 2013 at 10:57 AM, Rich Freeman wrote:
> On Fri, Jun 21, 2013 at 1:40 PM, Mark Knecht wrote:
>> One place where I wanted to double check your thinking. My thought
>> is that a RAID1 will _NEVER_ outperform the hdparm -tT read speeds as
>> it has to read from three drives and make sure they are all good
>> before returning data to the user.
>
> That isn't correct. In theory it could be done that way, but every
> raid1 implementation I've heard of makes writes to all drives
> (obviously), but reads from only a single drive (assuming it is
> correct). That means that read latency is greatly reduced since they
> can be split across two drives which effectively means two heads per
> "platter." Also, raid1 typically does not include checksumming, so if
> there is a discrepancy between the drives there is no way to know
> which one is right. With raid5 at least you can always correct
> discrepancies if you have all the disks (though as Duncan pointed out
> in practice this only happens if you do an explicit scrub on mdadm).
> With btrfs every block is checksummed and so as long as there is one
> good (err, consistent) copy somewhere it will be used.
>
> Rich
>

Humm... OK, we agree on RAID1 writes. All data must be written to all
drives, so there's no way to implement any real speed-up in that area.
If I simplistically assume that write speeds are similar to hdparm -tT
read speeds, then that's that.
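(For reference, when I talk about hdparm -tT numbers I mean something
like the below, run against both the individual member drives and the
assembled md device. The device names are just examples from my box,
adjust for the real layout.)

    # raw read speed of individual member drives (example device names)
    hdparm -tT /dev/sda
    hdparm -tT /dev/sdb
    # read speed of the assembled array itself
    hdparm -tT /dev/md0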
On the read side I'm not sure I'm following your point. I agree that a
RAID1 implementation designed that way could read different portions of
a larger request from the mirror drives in parallel, taking some data
from one drive and some from another, and then take corrective action
only if one of the drives had trouble. However, I don't know that
mdadm-based RAID1 actually does anything like that. Does it? It seems
to me that unless I at least _request_ data from all the drives, or at
minimum check some error flag from the controller telling me a drive
had trouble reading a sector, I have no way of knowing whether anything
bad is happening. Or maybe you're saying that's just the nature of
RAID1: I don't know whether anything bad is happening _unless_ I do a
scrub and specifically check all the drives for consistency? Just
trying to get clear on what you're saying.

I do mdadm scrubs at least once a week; I still kick them off by hand
(the exact commands I use are in the P.S. below). They've never looked
terribly expensive watching top or iotop, though sometimes when I'm
watching Netflix or Hulu in a VM I get a few more pauses while the
scrub is running. Nothing huge.

I agree that RAID5 gives you an opportunity to get things fixed, but
there are folks who lose a disk in a RAID5, start the rebuild, and then
lose a second disk during the rebuild. That was my main reason to go to
RAID6: not that I would ever run the array degraded, but that I could
still tolerate a second loss while the rebuild was happening and
hopefully get by. That's similar to my old 3-disk RAID1, where I'd have
to lose all 3 disks to be out of business.

Thanks,
Mark
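P.S. For reference, when I say I do the scrubs by hand, it's basically
the standard md sysfs interface, something like the below (md0 is just
an example array name, run as root):

    # kick off a consistency check on the array
    echo check > /sys/block/md0/md/sync_action
    # watch progress
    cat /proc/mdstat
    # afterwards, see whether any mismatched blocks were found
    cat /sys/block/md0/md/mismatch_cnt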