Subject: Re: [gentoo-amd64] Is my RAID performance bad possibly due to starting sector value?
From: B Vance
To: gentoo-amd64@lists.gentoo.org
Date: Sat, 22 Jun 2013 07:49:45 -0500
Message-ID: <1371905385.10227.50.camel@ShadowRookerie>

On Thu, 2013-06-20 at 12:10 -0700, Mark Knecht wrote:
> Hi,
>    Does anyone know of info on how the starting sector number might
> impact RAID performance under Gentoo? The drives are WD-500G RE3
> drives shown here:
>
> http://www.amazon.com/Western-Digital-WD5002ABYS-3-5-inch-Enterprise/dp/B001EMZPD0/ref=cm_cr_pr_product_top
>
> These are NOT 4k sector sized drives.
>
> Specifically, I'm running a 5-drive RAID6 for about 1.45TB of storage.
> My benchmarking seems abysmal at around 40MB/s using dd to copy large
> files. It's higher, around 80MB/s, if the file being transferred is
> coming from an SSD, but even 80MB/s seems slow to me. I see a LOT of
> wait time in top. My 'large file' copies might also not be large
> enough, as the machine has 24GB of DRAM and I've only been copying
> 21GB, so it's possible some of that is cached.
>
> Then I looked again at how I partitioned the drives originally and
> saw that the starting sector of partition 3 is 8594775. I started
> wondering if something like 4K block sizes at the file system level
> might be getting munged across 16k chunk sizes in the RAID. Maybe the
> blocks are being torn apart in bad ways for performance? That led me
> down a bunch of rabbit holes and I haven't found any light yet.
>
> Looking for some thoughtful ideas from those more experienced in this
> area.
>
> Cheers,
> Mark
>
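On the actual alignment question, the numbers are quick to sanity-check. This is only a sketch, assuming 512-byte logical sectors (you say these are not 4K-sector drives), a 4K filesystem block size, and the 16k chunk size you mention:

# Quick alignment sanity check; the block and chunk sizes below are
# assumptions taken from the post above, adjust to match the real array.
SECTOR_BYTES = 512
CHECKS = {"4K filesystem block": 4096, "16K RAID chunk": 16 * 1024}

def check_alignment(start_sector):
    offset = start_sector * SECTOR_BYTES
    for label, size in CHECKS.items():
        status = "aligned" if offset % size == 0 else "NOT aligned"
        print("start sector %d (%d bytes): %s to the %s"
              % (start_sector, offset, status, label))

check_alignment(8594775)  # partition 3 from the post above

8594775 is not a multiple of 8, so that partition does not even start on a 4K boundary, let alone a 16k chunk boundary. Whether that actually explains the slow dd numbers is another matter, but repartitioning so each member partition starts on a multiple of the chunk size (32 sectors here) is a cheap way to rule alignment out.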
Not necessarily the kind of answer you are looking for, but a year or so back I converted my NAS from hardware RAID1 to Linux software RAID1 to RAID1 on ZFS. Before the conversion to ZFS I had issues with the NAS being unable to keep up with requests. Since then I have been able to hit it relatively hard with no visible effects.

Just to give an idea, a normal load involves streaming an HD movie to the TV, streaming music to a second system, and serving as the shared storage for four computers, two of which hit the shared drive almost constantly (it holds the distfile directory for all the systems and acts as the local rsync mirror). Once a month I also transfer data to removable storage devices. All of this goes over cat6 Ethernet and occasionally USB2. I'm unsure how I would go about measuring the throughput, mainly because I've never cared as long as the files transferred at a reasonable pace and the video/audio didn't stutter.

By no means is my NAS a high-end system. Its specs are:

AMD64 X2 4200
ASUS A8V MoBo (I think)
4GB RAM
2 x Silicon Image Sil 3114 SATA RAID cards (4-port PCI cards)
3 x 1.5TB Seagate drives (on RAID cards)
4 x 2TB Western Digital drives (on RAID cards)
2 x Western Digital antique 80GB drives (mirrored on the motherboard for the OS)
Marvell GigE network card (I have a second card to add once I figure out how to load-balance across two cards automatically)
Case with 2 x 120mm fans on top, 3 x 120mm fans on the front, and 1 x 240mm fan on the side

Total storage available is 6.3TB, of which 3.4TB is used. An image of the pool is created daily via cron jobs and overwritten every 3 days (images for Day 1, Day 2, Day 3, then Day 4 overwrites Day 1; rough sketch in the P.S. below). The pool started with 5 x 750GB drives and has been grown slowly as I find deals on better drives. The main advantage of using ZFS on Linux is the ease of growing your pools: as long as you know the ID of the drive (preferably the hardware ID, not the delegated one), it's so simple that even I can manage it, and since I'm nowhere near the technical level of most folk here, anyone can do it.

For what it's worth (very little, I know), I think ZFS has too many advantages over Linux software RAID for there to be any real competition. YMMV.

B. Vance
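P.S. Since the daily images came up: the rotation really is nothing fancy. A rough sketch of the idea in Python, run once a day from cron, is below; the dataset name "tank" and the "daily-N" snapshot labels are made up for the example, and it just leans on the ordinary zfs snapshot and zfs destroy commands rather than anything clever.

#!/usr/bin/env python3
# Rough sketch of a three-day rotating snapshot, run daily from cron.
# The dataset name and snapshot labels below are hypothetical.
import datetime
import subprocess

DATASET = "tank"
SLOTS = 3  # keep three days' worth, then reuse the oldest slot

def rotate():
    # Day-of-year modulo 3 gives slots 0, 1, 2, 0, 1, 2, ...
    slot = datetime.date.today().toordinal() % SLOTS
    name = "%s@daily-%d" % (DATASET, slot)
    # Drop whatever occupied this slot three days ago; ignore "does not exist".
    subprocess.run(["zfs", "destroy", name], check=False)
    # Take today's snapshot in its place.
    subprocess.run(["zfs", "snapshot", name], check=True)

if __name__ == "__main__":
    rotate()

The same rotation would work just as well with an rsync copy to a separate dataset instead of snapshots, if you want a true second copy rather than a point-in-time view.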