From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 29 Nov 2011 11:53:04 -0500
Subject: Re: [gentoo-user] dmraid, mdraid, lvm, btrfs, what?
From: Michael Mol
To: gentoo-user@lists.gentoo.org

On Tue, Nov 29, 2011 at 9:10 AM, Mark Knecht wrote:
> On Mon, Nov 28, 2011 at 8:10 PM, Michael Mol wrote:
>
> Hi Michael,
>   Welcome to the world of whatever sort of multi-disk environment
> you choose. It's a HUGE topic and a conversation I look forward to
> having as you dig through it.
>
>   My main compute system here at home has six 500GB WD RE3 drives.
> Five are in use with one as a cold spare. I'm using md. It's pretty
> mature and you have good access to the main developer through the
> email list. I don't know much about dm. If this is your first time
> putting RAID on a box (it was for me) then I think md is a good
> choice. On the other hand you're more system software savvy than I am,
> so go with what you think is best for you.

Last time I set up RAID was three or four years ago. Two volumes: one
RAID5 of three 1.5TB drives (Seagate econo drives, but they worked well
enough for me), and one RAID0 of three 1TB drives (WD Caviar Black).
The RAID0 was for some video munging scratch space. The RAID5 I mounted
as /home. Those volumes lasted a couple of years before I rebuilt both
of them as two LVM volume groups, using the same drive sets.

> 1) First lesson - not all hard drives make good RAID hard drives. I
> started with six 1TB WD Green drives and found they made _terrible_
> RAID units so I took them out and bought _real_ RAID drives. They were
> only half as large for the same price but they have worked perfectly
> for nearly 2 years.

What makes a good RAID unit, and what makes a terrible RAID unit?
Unless we're talking rapid failure, I'd think anything striped would be
faster than the bare drive alone.

> 2) Second lesson - prepare to build a few RAID configurations and
> TEST, TEST, TEST __BEFORE__ (BEFORE!!!)
> you make _ANY_ decision about
> what sort of RAID you really want. There are a LOT of parameter
> choices that affect performance, reliability, capacity, and I think to
> some extent your ability to change RAID types later on. To name a few:
> the obvious RAID type (0,1,2,3,4,5,6,10, etc.), but also chunk size,
> metadata type, physical layout for certain RAID types, etc. I strongly
> suggest building 5-10 different configurations and testing them with
> bonnie++ to gauge speed. I didn't do enough of this before I built
> this system and I've been dealing with the effects ever since.

I'm familiar with the different RAID types and how they operate. I'm
familiar with some of the impacts of chunk size, and what that can mean
for caching and sector overlap (for SSDs and 2TB+ drives, at least).

The purpose of this array (or set of arrays) is volume aggregation with
a touch of redundancy. Speed is a tertiary concern, and if it becomes a
real issue, I'll adapt; I've got 730GB left free on the system's
primary disk which I can throw into the mix any which way (use it raw
as I currently am, or stripe a logical volume into it...).

> 3) Third lesson - think deeply about what happens when 1 drive goes
> bad and you are in the process of fixing the system. Do you have a
> spare drive ready?

I don't plan to, but I don't plan on storing vital or
operations-dependent data in the volume without backup. These are going
to be volumes of convenience.

> Is it in the box? Hot or cold? What happens if a
> second drive in the system fails while you're rebuilding the RAID?

Drop the failed drives, rebuild with the remaining drives, copy back a
backup.

> It's from the same manufacturing lot so it probably suffers from the
> same weaknesses. My decision for the most part was (for data or system
> drives) 3-drive RAID1 or 5-drive RAID6. For backup I went with 5-drive
> RAID5. It all makes me feel good, but it's too complicated.
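For anyone following along in the archives, the build-and-benchmark loop
Mark describes might be sketched like this. Device names, mount point,
and chunk sizes below are placeholders I made up, and these commands
destroy whatever is on those disks:

```shell
#!/bin/sh
# Sketch: rebuild a 3-disk RAID5 with several chunk sizes and benchmark
# each with bonnie++. /dev/sdb..sdd and /mnt/test are hypothetical.
set -e

for chunk in 64 128 256 512; do
    mdadm --create /dev/md0 --level=5 --raid-devices=3 \
          --chunk="$chunk" --metadata=1.2 /dev/sdb /dev/sdc /dev/sdd
    mkfs.ext4 /dev/md0
    mount /dev/md0 /mnt/test

    # -d: directory to test in; -u root: permit running as root
    bonnie++ -d /mnt/test -u root >> "bonnie-chunk-${chunk}.log"

    umount /mnt/test
    mdadm --stop /dev/md0
    mdadm --zero-superblock /dev/sdb /dev/sdc /dev/sdd
done
```

Each pass wipes the array and starts clean, so the bonnie++ numbers are
comparable across chunk sizes.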
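The "drop the failed drive, rebuild on the remainder" procedure above
maps onto mdadm roughly like this (device names hypothetical):

```shell
# Mark the bad drive failed and pull it out of the array in one go.
mdadm /dev/md0 --fail /dev/sdc --remove /dev/sdc

# Physically swap the disk, then add the replacement; md starts
# rebuilding onto it automatically.
mdadm /dev/md0 --add /dev/sdc

# Watch the resync progress -- a second failure during this window is
# exactly the scenario Mark warns about.
cat /proc/mdstat
```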
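As a back-of-the-envelope check on those layouts (my own arithmetic
sketch, not anything from the mdadm docs): usable capacity per level
works out as follows.

```shell
# Usable capacity in GB of an n-drive array of size_gb drives.
raid1_usable() { echo "$2"; }                    # all drives mirror one
raid5_usable() { echo $(( ($1 - 1) * $2 )); }    # one drive of parity
raid6_usable() { echo $(( ($1 - 2) * $2 )); }    # two drives of parity

raid1_usable 3 500   # 3-drive RAID1 of 500GB drives -> 500
raid6_usable 5 500   # 5-drive RAID6 -> 1500
raid5_usable 5 500   # 5-drive RAID5 backup array -> 2000
```

So Mark's scheme spends a lot of raw capacity on redundancy, which is
presumably the point.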
> > 4) Lastly - as they say all the time on the mdadm list: RAID is not
> > a backup.

Absolutely. I've had discussions of RAID and disk storage many times
with some rather apt and experienced friends, but dmraid and btrfs are
relatively new on the block, and the gentoo-user list is a new,
mostly-untapped resource of expertise. I wanted to pick up any
additional knowledge or references I hadn't heard before. :)

>   Personally I like your idea of one big RAID with lvm on top but I
> haven't done it myself. I think it's what I would look at today if I
> was starting from scratch, but I'm not sure. It would take some study.

It's probably the simplest way forward. I notice there are some
network-syncing block devices in the kernel (acting as RAID1 over a
network) I'd like to play with, but I haven't done anything with OCFS2
(or whatever other multi-operator filesystems are in the 3.0.6 kernel)
before.

> Hope this helps even a little,
> Mark

Certainly does. Also, your email has a permanent URL through at least a
couple of mailing list archivers, so it'll be a good thing to link to
in the future. :)

-- 
:wq
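[Archive note: the "one big RAID with lvm on top" layout discussed in
this thread, sketched in commands. Device names and sizes are made up,
and these commands destroy whatever is on the disks.]

```shell
# One big md RAID5 across hypothetical disks (wipes their contents).
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
      /dev/sdb /dev/sdc /dev/sdd /dev/sde

# LVM on top: the whole array becomes one physical volume in one volume
# group, and logical volumes are carved out (and later resized) at will.
pvcreate /dev/md0
vgcreate vg0 /dev/md0
lvcreate -L 500G -n home vg0
mkfs.ext4 /dev/vg0/home
```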