From: Rich Freeman
Date: Thu, 7 Dec 2017 10:26:34 -0500
Subject: Re: [gentoo-user] OT: btrfs raid 5/6
To: gentoo-user@lists.gentoo.org

On Thu, Dec 7, 2017 at 9:53 AM, Frank Steinmetzger wrote:
>
> I see.
> I'm always looking for ways to optimise expenses and cut down on
> environmental footprint by keeping stuff around until it really breaks. In
> order to increase capacity, I would have to replace all four drives, whereas
> with a mirror, two would be enough.

That is a good point. Though I would note that you can always replace
the raidz2 drives one at a time - you just get zero benefit until they're
all replaced. So, if your space use grows at a rate lower than the typical
hard drive turnover rate, that is an option.

> When I configured my kernel the other day, I discovered network block
> devices as an option. My PC has a hotswap bay[0]. Problem solved. :) Then I
> can do zpool replace with the drive-to-be-replaced still in the pool, which
> improves resilver read distribution and thus lessens the probability of a
> failure cascade.

If you want to get into the network storage space, I'd keep an eye on
cephfs. I don't think it is quite to the point of being a zfs/btrfs
replacement option yet, but it could get there. I don't think the checksums
are quite end-to-end, but they're getting better. Overall stability of
cephfs itself (as opposed to ceph object storage) is not as good, from what
I hear.

The biggest issue with it, though, is RAM use on the storage nodes. They
want 1 GB of RAM per TB of storage, which rules out a lot of the cheap
ARM-based solutions. Maybe you can get by with less, but finding ARM
systems with even 4 GB of RAM is tough, and even that means only one hard
drive per node, which means a lot of $40+ nodes on top of the cost of the
drives themselves.

Right now cephfs mainly seems to appeal to the scalability use case. If you
have 10k servers accessing 150 TB of storage and you want all of that in
one managed, well-performing pool, that is something cephfs can probably
deliver and almost any other solution can't (and the ones that can cost WAY
more than just one box running zfs on a couple of RAIDs).

-- 
Rich
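
P.S. A rough sketch of the one-at-a-time raidz2 replacement, assuming a
pool named "tank" and made-up device names (sda..sdd being swapped for
larger sde..sdh) - adjust to your own layout:

    # placeholder pool/device names - substitute your own
    # let the pool grow on its own once the last member is replaced
    zpool set autoexpand=on tank

    # swap one drive at a time; wait for each resilver to finish
    # before pulling the next disk
    zpool replace tank sda sde
    zpool status tank        # repeat until the resilver has completed

    # ... then sdb -> sdf, sdc -> sdg, sdd -> sdh; the extra capacity
    # only shows up after the fourth (last) replacement

With a spare hotswap bay (or a network block device) as Frank describes,
you can run zpool replace while the old disk is still attached, so the
resilver can read from it too.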