Subject: Re: [gentoo-user] Safeguarding strategies against SSD data loss
From: Pandu Poluan
To: gentoo-user@lists.gentoo.org
Date: Mon, 27 Oct 2014 23:52:32 +0700

On Oct 27, 2014 10:40 PM, "Rich Freeman" wrote:
>
> On Mon, Oct 27, 2014 at 11:22 AM, Mick wrote:
> >
> > Thanks Rich, I have been reading your posts about btrfs with interest,
> > but have not yet used it on my systems. Is btrfs agreeable with SSDs,
> > or should I be using f2fs:
> >
>
> Btrfs will auto-detect SSDs and optimize itself differently, and is
> generally considered to be fine on SSDs. Of course, btrfs itself is
> experimental and may eat your data, especially if you get it too full,
> but you'll be no worse off for running it on an SSD.
>
> I doubt you'll find any general-purpose filesystem that works as well
> overall on an SSD as something like f2fs, as it is log-based and
> designed with SSDs in mind. However, f2fs is also very immature and
> also carries risks, and the last time I checked it was missing some
> features like xattrs as well. It also doesn't have anything like
> btrfs send to serialize your data.
>
> zfs on linux might be another option. I don't know how well it
> handles SSDs in general, and you have to fuss with FUSE and a boot
> partition as I don't think grub supports it - it could be a bit of a
> PITA for a single-drive system. However, it is probably more mature
> than btrfs overall, and it certainly supports send.
>
> I just had a btrfs near-miss which caused me to rethink how I'm
> managing my own storage.
> I was half-tempted to blog on it - it is a bit frustrating, as I believe
> we're right in the middle of the shift between the traditional
> filesystems and the next-generation ones. Sticking with the old means
> giving up a lot of potential benefits, but there are a lot of issues
> with jumping ship as well, as the new systems all lack maturity or are
> not feature-complete yet. I was looking at f2fs, btrfs, and zfs again
> this weekend, and the issues I struggle with are the immaturity of btrfs
> and f2fs, the lack of working parity raid on btrfs, the lack of many
> features on f2fs, and the inability to resize vdevs on zfs, which means
> on a system with few drives you get locked in. I suspect all of those
> will change in time, but not yet!
>
> --
> Rich
>

ZoL (ZFS on Linux) nowadays is implemented using DKMS instead of FUSE, thus running in kernelspace, and it is (relatively) easy to put into an initramfs.

Updating is a beeyotch on binary-based distros, as it requires a recompile. Not a big deal for us Gentooers :-)

vdevs can grow, but they can't (yet) shrink. And putting ZFS on SSDs is not recommended. Rather, ZFS can employ SSDs to act as a 'write cache' for the spinning HDDs.

In my personal opinion, the 'killer' feature of ZFS is that it's built from the ground up to provide maximum data integrity. The second feature is its high-performance COW snapshot ability. You can take an obscene number of snapshots if you want (but don't actually do it; managing more than a hundred snapshots is a Royal PITA). ZFS is also able to serialize the snapshots, allowing perfect delta replication to another system. This saves a lot of time doing bit-perfect backups, because only changed blocks are transferred. And you can ship a snapshot instead of the whole filesystem, allowing online backup.

(And yes, I actually deployed ZoL on my previous employer's email system, with the aforementioned snapshot-shipping backup strategy.)
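For the curious, that snapshot-shipping workflow looks roughly like the following sketch. The pool/dataset names (tank/mail, backup/mail) and the host name (backuphost) are made up for illustration; it needs root and existing ZFS pools on both ends, so adjust to your layout:

```shell
# Take point-in-time snapshots of the dataset:
zfs snapshot tank/mail@monday
zfs snapshot tank/mail@tuesday

# Initial full replication to the backup box:
zfs send tank/mail@monday | ssh backuphost zfs receive backup/mail

# From then on, ship only the delta between two snapshots --
# only blocks changed since @monday go over the wire:
zfs send -i tank/mail@monday tank/mail@tuesday | \
    ssh backuphost zfs receive backup/mail
```

The receiving side ends up with an exact, mountable copy of the dataset, snapshots included.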
Other features include: much easier mounting (no need to mess with fstab), built-in NFS support for higher throughput, and the ability to easily rebuild a pool merely by installing the drives (in any order) into a new box and letting ZFS scan for all the metadata.

The most serious drawback, in my opinion, is ZoL's nearly insatiable appetite for RAM. Unless you purposefully limit its RAM usage, ZoL's cache will consume nearly all available memory, causing memory fragmentation and ending in OOM.

Rgds,
--
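P.S. The usual way to cap that RAM appetite is the zfs_arc_max module parameter. A sketch of the config fragment (the 2 GiB value is arbitrary; size it for your own box):

```shell
# /etc/modprobe.d/zfs.conf -- cap the ARC at 2 GiB (value is in bytes).
# Takes effect the next time the zfs module is loaded.
options zfs zfs_arc_max=2147483648
```

The same knob can also be poked at runtime through /sys/module/zfs/parameters/zfs_arc_max.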