From: Pandu Poluan
To: gentoo-user@lists.gentoo.org
Subject: Re: [gentoo-user] Safeguarding strategies against SSD data loss
Date: Tue, 28 Oct 2014 07:41:42 +0700
References: <201410270924.40381.michaelkintzios@gmail.com> <544E2875.5000309@gmail.com> <201410271522.32452.michaelkintzios@gmail.com>

On Oct 28, 2014 12:31 AM, "Rich Freeman" wrote:
>
> On Mon, Oct 27, 2014 at 12:52 PM, Pandu Poluan wrote:
> >
> > ZoL (ZFS on Linux) nowadays is implemented using DKMS instead of FUSE,
> > thus running in kernelspace, and (relatively) easier to put into an
> > initramfs.
>
> Sorry about that. I should have known that, but for some reason I got
> that memory crossed in my brain... :)
>
> > vdevs can grow, but they can't (yet) shrink.
>
> Can you point to any docs on that, including any limitations/etc? The
> inability to expand raid-z the way you can with mdadm was one of the
> big things that has been keeping me away from ZFS. I understand that
> it isn't so important when you're dealing with large numbers of disks
> (Backblaze's storage pods come to mind), but when you have only a few
> disks, being able to manipulate them one at a time is very useful.
> Growing is the more likely use case than shrinking. Then again, at
> some point, if you want to replace smaller drives with larger ones,
> you might want a way to remove drives from a vdev.
>

First, you need to set your pool to "autoexpand=on".

Then, one by one, you offline a disk within the vdev and replace it with a larger one. After all disks have been replaced, do a scrub, and ZFS will automagically enlarge the vdev.

If you're not using whole disks for ZFS, then s/replace with a larger disk/enlarge the partition/.
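For the archives, the whole-disk procedure above sketches out roughly like this with zpool commands. The pool name "tank" and the device names are made up for illustration; adjust for your own layout, and wait for each resilver to complete before touching the next disk:

```shell
# Let the pool claim extra capacity automatically once all devices are larger:
zpool set autoexpand=on tank

# For each disk in the vdev, one at a time:
zpool offline tank sdb
# ...physically swap in the larger disk, then resilver onto it:
zpool replace tank sdb sdd
# Watch the resilver; only move on to the next disk when it's done:
zpool status tank

# After every disk has been replaced, verify the pool end to end:
zpool scrub tank
```

(If a disk was replaced while autoexpand was off, "zpool online -e tank <device>" can claim the extra space afterwards.)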
Rgds,
--