To: gentoo-user@lists.gentoo.org
From: James
Subject: [gentoo-user] Re: ceph on btrfs
Date: Thu, 23 Oct 2014 19:41:22 +0000 (UTC)
References: <20141023174903.dfc6e5be9f0213c4cd884101@gmail.com>

Andrew Savchenko <...@gmail.com> writes:

> Ceph is optimized for btrfs by design; it has no configure options
> to enable or disable btrfs-related stuff:
> https://github.com/ceph/ceph/blob/master/configure.ac
> No configure option => no use flag.

Good to know; nice script.

> Just use the latest (0.80.7 ATM). You may just rename and rehash the
> 0.80.5 ebuild (usually this works fine). Or you may stay with
> 0.80.5, but with fewer bug fixes.

So do I just download the tarball from ceph.com, put it in distfiles,
and copy and edit a ceph-0.80.7 ebuild in my /usr/local/portage, or is
there an overlay somewhere that I missed?
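For my own notes, here is roughly the sequence I am picturing (untested;
it assumes the main tree is at /usr/portage, that /usr/local/portage is
already listed in PORTDIR_OVERLAY in /etc/portage/make.conf, and that the
package lives under sys-cluster/ceph as it does in the tree):

  # copy the in-tree 0.80.5 ebuild into the local overlay under the new version
  mkdir -p /usr/local/portage/sys-cluster/ceph
  cp /usr/portage/sys-cluster/ceph/ceph-0.80.5.ebuild \
     /usr/local/portage/sys-cluster/ceph/ceph-0.80.7.ebuild

  # also copy the files/ directory if the ebuild references patches from it
  cp -r /usr/portage/sys-cluster/ceph/files \
     /usr/local/portage/sys-cluster/ceph/

  # regenerate the Manifest ("rehash"); this should also fetch the
  # 0.80.7 tarball from upstream into distfiles
  ebuild /usr/local/portage/sys-cluster/ceph/ceph-0.80.7.ebuild manifest

  # then pull in the bumped version
  emerge -av =sys-cluster/ceph-0.80.7

If the rename-and-rehash really is that simple, I won't need a separate
overlay at all.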
> If raid is supposed to be read more frequently than written to,
> then my favourite solution is raid-10-f2 (2 far copies, perfectly
> fine for 2 disks). This will give you read performance of raid-0 and
> robustness of raid-1. Though write i/o will be somewhat slower due
> to more seeks. Also it depends on workload: if you'll have a lot of
> independent read requests, raid-1 will be fine too. But for large read
> i/o from a single or few clients raid-10-f2 is the best imo.

Interesting. For now I'm going to stay with simple mirroring. After some
time I might migrate to a more aggressive FS arrangement, once I have a
better idea of the i/o needs. With Spark (RDD) on top of Mesos, I'm
shooting for mostly "in-memory" usage, so i/o should not be heavily
taxed. We'll just have to see how things work out.

Last point: I'm using openrc, not systemd, at this time. Are there any
known ceph issues with openrc? I do see systemd-related items in ceph.

> Andrew Savchenko

Very good advice. Thanks,
James
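P.S. For anyone reading this later in the archives: if I understand the
raid-10-f2 layout correctly, it is mdadm's "far" layout with two copies,
created with something along these lines (device names are made up;
check them against your own disks before running anything):

  # two-disk RAID-10 with the far-2 layout:
  # close to raid-0 read throughput, raid-1 style redundancy
  mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=2 \
        /dev/sda1 /dev/sdb1

  # verify the layout afterwards
  mdadm --detail /dev/md0 | grep -i layout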