* [gentoo-user] ceph on btrfs
From: James @ 2014-10-22 20:05 UTC (permalink / raw
To: gentoo-user
Hello,
So looking at the package sys-cluster/ceph, I see these flags:
cryptopp debug fuse gtk +libaio libatomic +nss radosgw static-libs tcmalloc xfs zfs
No specific flag for btrfs?
ceph-0.67.9 is marked stable, while 0.67.10 and 0.80.5 are marked
(yellow) testing and *9999 is marked (red) masked. So what version
would anyone recommend, with what flags? [1]
Ceph will be the DFS on top of a 3-node mesos+spark cluster.
btrfs is being set up with 2 disks in raid 1 on each system. Btrfs
seems to be particularly well suited to ceph [2].
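As a sketch of the two-disk btrfs raid 1 setup described above (the device
names and mount point are placeholders, not taken from the thread):

```shell
# Mirror both data (-d) and metadata (-m) across two disks.
mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc

# Mounting either member device brings up the whole filesystem.
mount /dev/sdb /var/lib/ceph/osd

# Verify that the data and metadata profiles are both RAID1.
btrfs filesystem df /var/lib/ceph/osd
```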
Guidance and comments, warmly requested,
James
[1]
http://ceph.com/docs/v0.78/rados/configuration/filesystem-recommendations/
[2] http://ceph.com/docs/master/release-notes/#v0-80-firefly
* Re: [gentoo-user] ceph on btrfs
From: Andrew Savchenko @ 2014-10-23 13:49 UTC (permalink / raw
To: gentoo-user
Hi,
On Wed, 22 Oct 2014 20:05:48 +0000 (UTC) James wrote:
> Hello,
>
> So looking at the package sys-cluster/ceph, I see these flags:
> cryptopp debug fuse gtk +libaio libatomic +nss radosgw static-libs tcmalloc
> xfs zfs No specific flags for btrfs?
Ceph is optimized for btrfs by design; it has no configure options
to enable or disable btrfs-related functionality:
https://github.com/ceph/ceph/blob/master/configure.ac
No configure option => no USE flag.
> ceph-0.67.9 is marked stable, while 0.67.10 and 0.80.5 are marked
> (yellow) testing and *9999 is marked (red) masked. So what version
> would anyone recommend, with what flags? [1]
Just use the latest (0.80.7 ATM). You may simply rename and rehash
the 0.80.5 ebuild (usually this works fine). Or you may stay with
0.80.5, just with fewer bug fixes.
> Ceph will be the DFS on top of a (3) node mesos+spark cluster.
> btrfs is being set up with 2 disks in raid 1 on each system. Btrfs
> seems to be keenly compatible with ceph [2].
If the raid will be read more frequently than written to,
then my favourite solution is raid-10-f2 (2 far copies, which works
perfectly fine with 2 disks). This gives you the read performance of
raid-0 and the robustness of raid-1, though write i/o will be somewhat
slower due to extra seeks.
It also depends on the workload: if you'll have a lot of independent
read requests, raid-1 will be fine too. But for large read i/o from
a single client or a few clients, raid-10-f2 is the best imo.
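The raid-10-f2 layout suggested here is Linux md raid10 with "far 2"
copies. A minimal sketch with mdadm (device names are placeholders):

```shell
# Create an md raid10 array in the "far 2" layout over two disks.
mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=2 \
    /dev/sda /dev/sdb

# Watch the initial sync progress.
cat /proc/mdstat

# Confirm the layout; the detail output should report "far=2".
mdadm --detail /dev/md0
```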
> Guidance and comments, warmly requested,
> James
>
>
> [1]
> http://ceph.com/docs/v0.78/rados/configuration/filesystem-recommendations/
>
> [2] http://ceph.com/docs/master/release-notes/#v0-80-firefly
Best regards,
Andrew Savchenko
* [gentoo-user] Re: ceph on btrfs
From: James @ 2014-10-23 19:41 UTC (permalink / raw
To: gentoo-user
Andrew Savchenko <bircoph <at> gmail.com> writes:
> Ceph is optimized for btrfs by design, it has no configure options
> to enable or disable btrfs-related stuff:
> https://github.com/ceph/ceph/blob/master/configure.ac
> No configure option => no use flag.
Good to know; nice script.
> Just use the latest (0.80.7 ATM). You may simply rename and rehash
> the 0.80.5 ebuild (usually this works fine). Or you may stay with
> 0.80.5, just with fewer bug fixes.
So do I just download from ceph.com, put it in distfiles, and copy-edit
a ceph-0.80.7 ebuild in my /usr/local/portage, or is there an overlay
somewhere I missed?
> If raid is supposed to be read more frequently than written to,
> then my favourite solution is raid-10-f2 (2 far copies, perfectly
> fine for 2 disks). This will give you read performance of raid-0 and
> robustness of raid-1. Though write i/o will be somewhat slower due
> to more seeks. Also it depends on workload: if you'll have a lot of
> independent read requests, raid-1 will be fine too. But for large read
> i/o from a single or few clients raid-10-f2 is the best imo.
Interesting. For now I'm going to stay with simple mirroring. After
some time I might migrate to a more aggressive FS arrangement, once
I have a better idea of the i/o needs. With spark (RDD) on top of mesos,
I'm shooting for mostly "in-memory" usage, so i/o should not be heavily
used. We'll just have to see how things work out.
Last point: I'm using openrc, not systemd, at this time. Are there
any ceph issues with openrc? I do see systemd-related items in ceph.
> Andrew Savchenko
Very good advice.
Thanks,
James
* Re: [gentoo-user] Re: ceph on btrfs
From: Andrew Savchenko @ 2014-10-24 10:02 UTC (permalink / raw
To: gentoo-user
Hello,
On Thu, 23 Oct 2014 19:41:22 +0000 (UTC) James wrote:
[...]
> > Just use the latest (0.80.7 ATM). You may simply rename and rehash
> > the 0.80.5 ebuild (usually this works fine). Or you may stay with
> > 0.80.5, just with fewer bug fixes.
>
> So just download from ceph.com, put it in distfiles and copy-edit
> ceph-0.80.7 in my /usr/local/portage, or is there an overlay somewhere
> I missed?
I don't know of one. Just use a local overlay (or stay with 0.80.5;
the difference should not be huge).
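A sketch of the local-overlay version bump (paths and the ebuild version
are illustrative; this assumes /usr/local/portage is already listed in
PORTDIR_OVERLAY in make.conf):

```shell
# Copy the existing ebuild under the new version name in the overlay.
mkdir -p /usr/local/portage/sys-cluster/ceph
cp /usr/portage/sys-cluster/ceph/ceph-0.80.5.ebuild \
   /usr/local/portage/sys-cluster/ceph/ceph-0.80.7.ebuild

# Fetch the new tarball and regenerate the Manifest hashes.
cd /usr/local/portage/sys-cluster/ceph
ebuild ceph-0.80.7.ebuild manifest

# Install the bumped version.
emerge -av =sys-cluster/ceph-0.80.7
```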
[...]
> Last point. I'm using openrc and not systemd, at this time; any
> ceph issues with openrc, as I do see systemd related items with ceph.
We are using openrc too, no related issues. (systemd is banned on
all our setups: masked and its dirs are in INSTALL_MASK, so we don't
have its stuff floating around.)
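For reference, the mask-plus-INSTALL_MASK approach could look like the
following (the directory list is an example of the idea, not the actual
configuration used above):

```shell
# /etc/portage/make.conf (fragment)
# Prevent systemd units and related dirs from being installed at all.
INSTALL_MASK="/lib/systemd /usr/lib/systemd /etc/systemd"

# /etc/portage/package.mask (fragment)
# Keep the package itself from ever being pulled in.
sys-apps/systemd
```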
Best regards,
Andrew Savchenko
* [gentoo-user] Re: ceph on btrfs
From: James @ 2014-10-24 16:20 UTC (permalink / raw
To: gentoo-user
Andrew Savchenko <bircoph <at> gmail.com> writes:
> We are using openrc too, no related issues. (systemd is banned on
> all our setups: masked and its dirs are in INSTALL_MASK, so we don't
> have its stuff floating around.)
Wonderful!
I'm a fan of your work!
James
* Re: [gentoo-user] Re: ceph on btrfs
From: Andrew Savchenko @ 2014-10-25 10:00 UTC (permalink / raw
To: gentoo-user
On Fri, 24 Oct 2014 16:20:36 +0000 (UTC) James wrote:
> Andrew Savchenko <bircoph <at> gmail.com> writes:
> > We are using openrc too, no related issues. (systemd is banned on
> > all our setups: masked and its dirs are in INSTALL_MASK, so we don't
> > have its stuff floating around.)
>
> Wonderful!
Glad to help :)
Best regards,
Andrew Savchenko