From mboxrd@z Thu Jan 1 00:00:00 1970
From: William Kenworthy
Date: Thu, 08 May 2014 19:57:34 +0800
To: gentoo-user@lists.gentoo.org
Reply-to: gentoo-user@lists.gentoo.org
Message-ID: <536B712E.3040009@iinet.net.au>
In-Reply-To: <20140507015126.5b57fb88@marcec>
References: <20140506121832.678ae781@marcec> <5369688C.1040708@iinet.net.au> <20140507015126.5b57fb88@marcec>
List-Id: Gentoo Linux mail
Subject: Re: [gentoo-user] planned btrfs conversion: questions

On 05/07/14 07:51, Marc Joliet wrote:
> On Wed, 07 May 2014 06:56:12 +0800, William Kenworthy wrote:
>
>> On 05/06/14 18:18, Marc Joliet wrote:
>>> Hi all,
>>>
>>> I've become increasingly motivated to convert to btrfs. From what I've
>>> seen, it has become increasingly stable; enough so that it is apparently
>>> supposed to become the default FS on OpenSuse in 13.2.
>>>
>>> I am motivated by various reasons:
>> ....
>>
>> My btrfs experience:
>>
>> I have been using btrfs seriously (vs. testing) for a while now, with
>> mixed results, but the latest kernel/tools seem to be holding up quite
>> well.
>>
>> ~2 years on an Apple/Gentoo laptop (I handed it back to work a few
>> months back) - never a problem! (mounted with discard/trim)
> That's one HDD, right? From what I've read, that's the most tested and
> stable use case for btrfs, so it doesn't surprise me that much that it
> worked so well.

Yes, light duty, using the built-in SSD chips on the motherboard.

>> btrfs on a 128GB Intel SSD (Linux root drive): I had to secure-erase it
>> a few times, as btrfs said the filesystem was full when there was 60G+
>> free - this happened after multiple crashes, and it seemed the btrfs
>> metadata and the SSD disagreed on what was actually in use - reset the
>> drive and restored from backups :( Now running ext4 on that drive with
>> no problems - will move back to btrfs at some point.
> All the more reason to stick with EXT4 on the SSD for now.

I have had very poor luck with ext-anything and would hesitate to
recommend it except for this very specific case, where there is little
alternative - reiserfs is far better on platters, for instance.
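As an aside, the "full but 60G+ free" symptom is usually btrfs running out
of allocated chunks rather than raw space, and can often be cleared with a
balance instead of a secure erase. A rough sketch with btrfs-progs (the
mount point /mnt is illustrative; both commands need root and a mounted
btrfs filesystem):

```shell
# Show space allocated to data/metadata chunks vs. space actually used -
# "full" with lots of free space usually means all chunks are allocated.
btrfs filesystem df /mnt

# Compact data chunks that are less than half full, returning the freed
# chunks to the unallocated pool so new chunks can be created again.
btrfs balance start -dusage=50 /mnt
```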
>
> [snip interesting but irrelevant ceph scenario]

It's relevant because ceph keeps revealing bugs in btrfs by stressing it -
one of the bugs I reported to ceph was reported upstream by the ceph team
and fixed last year - bugs still exist in btrfs!

>> 3 x raid 0+1 (btrfs raid1 with 3 drives) - working well for about a
>> month
> That last one is particularly good to know. I expect RAID 0, 1 and 10 to
> work fairly well, since those are the oldest supported RAID levels.
>
>> ~10+ Gentoo VMs, one Ubuntu and 3 x Windows VMs with kvm/qemu storage
>> on btrfs - regular scrubs show an occasional VM problem after a system
>> crash (VM server), otherwise problem-free since moving to pure btrfs
>> from ceph. The Gentoo VMs were btrfs in raw qemu containers and are
>> now converted to qcow2 - no problems since moving from ceph.
>> Fragmentation of VM images is a problem, but "cp --reflink vm1 vm2"
>> for VMs is really, really cool!
> That matches the scenario from the ars technica article; the author is a
> huge fan of file cloning in btrfs :) .
>
> And yeah, too bad autodefrag is not yet stable.

It's not that it's unstable, but that it can't deal with large files that
change randomly on a continual basis, like VM virtual disks.

>
>> I have a clear impression that btrfs has been incrementally improving,
>> and the current kernel and recovery tools are quite good, but it's
>> still possible to end up with an unrecoverable partition (in the sense
>> that you might be able to get to some of the data using recovery
>> tools, but the btrfs mount itself is toast).
>>
>> Backups using dirvish - I was getting an occasional corruption (mainly
>> checksum errors) that seemed to coincide with network problems during
>> a backup sequence - I have not seen it for a couple of months now.
>> Only lost the whole partition once :( Dirvish really hammers a file
>> system, and ext4 usually dies very quickly, so even now btrfs is far
>> better here.
> I use rsnapshot here with an external hard drive formatted to EXT4.
> I'm not *that* worried about the FS dying, more that it dies at an
> inopportune moment where I can't immediately restore it.
>
> [again, snip interesting but irrelevant ceph scenario]

As I said above - if it fails under ceph, it's likely going to fail under
similar stresses from other software - I am not talking about ceph bugs
(of which there are many) but actual btrfs corruption.

>> I am slowly moving my systems from reiserfs to btrfs as my confidence
>> in it and its tools builds. I really dislike ext4 and its ability to
>> lose valuable data (though that has improved dramatically), but it
>> still seems better than btrfs on solid state under hard use - but
>> after getting burnt I am avoiding that scenario, so I need to retest.
> Rising confidence: good to hear :) .
>
> Perhaps this will turn out similarly to when I was using the
> xf86-video-ati release candidates and bleeding edge
> gentoo-sources/mesa/libdrm/etc. (for 3D support in the r600 driver): I
> start using it shortly before it starts truly stabilising :) .
>

More exposure, more bugs will surface and be fixed - it's getting there.

BillK
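The "cp --reflink" VM cloning mentioned earlier in the thread can be
sketched as follows (filenames are illustrative; on btrfs the clone shares
the original's extents, so it is near-instant and takes no extra space
until blocks diverge, while --reflink=auto falls back to an ordinary copy
on filesystems without reflink support):

```shell
# Create a dummy 4 MiB "VM image" to stand in for a real disk file.
dd if=/dev/zero of=vm1.img bs=1M count=4 2>/dev/null

# Clone it; on btrfs this is a copy-on-write reflink of the extents.
cp --reflink=auto vm1.img vm2.img

# The clone is byte-identical to the original.
cmp -s vm1.img vm2.img && echo "clone OK"
```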