public inbox for gentoo-user@lists.gentoo.org
* [gentoo-user] ceph on gentoo?
@ 2014-12-23 14:22 Stefan G. Weichinger
  2014-12-23 15:20 ` Andrew Savchenko
  2014-12-23 15:25 ` Tomas Mozes
  0 siblings, 2 replies; 32+ messages in thread
From: Stefan G. Weichinger @ 2014-12-23 14:22 UTC (permalink / raw
  To: gentoo-user


Anyone here running ceph / http://ceph.com/ on gentoo?

As server(s) or client or ... ?

I am learning about this right now and currently on my way to a first
small test cluster. Very interesting possibilities !

Stefan


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [gentoo-user] ceph on gentoo?
  2014-12-23 14:22 [gentoo-user] ceph on gentoo? Stefan G. Weichinger
@ 2014-12-23 15:20 ` Andrew Savchenko
  2014-12-23 15:36   ` Stefan G. Weichinger
  2014-12-27  1:47   ` Bruce Hill
  2014-12-23 15:25 ` Tomas Mozes
  1 sibling, 2 replies; 32+ messages in thread
From: Andrew Savchenko @ 2014-12-23 15:20 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 987 bytes --]

Hi,

On Tue, 23 Dec 2014 15:22:26 +0100 Stefan G. Weichinger wrote:
> Anyone here running ceph / http://ceph.com/ on gentoo?
> 
> As server(s) or client or ... ?
> 
> I am learning about this right now and currently on my way to a first
> small test cluster. Very interesting possibilities !

We used it about a year ago for our infrastructure (backup and
live sync of HA systems), obviously both servers and clients were
used, both on Gentoo. We stopped this because of numerous kernel
panics, not to mention that it was quite slow even after tuning. So
we switched to another solution for data sync and backups: clsync. (It
was developed from scratch for our needs; it is not a filesystem,
but may be considered a more powerful alternative to lsyncd.)

Though this was a year ago or so; your mileage may vary, and
it is likely that stability has improved since then.
Ceph is very promising in both design and capabilities.

Best regards,
Andrew Savchenko

[-- Attachment #2: Type: application/pgp-signature, Size: 819 bytes --]


* Re: [gentoo-user] ceph on gentoo?
  2014-12-23 14:22 [gentoo-user] ceph on gentoo? Stefan G. Weichinger
  2014-12-23 15:20 ` Andrew Savchenko
@ 2014-12-23 15:25 ` Tomas Mozes
  2014-12-23 15:28   ` Stefan G. Weichinger
  1 sibling, 1 reply; 32+ messages in thread
From: Tomas Mozes @ 2014-12-23 15:25 UTC (permalink / raw
  To: gentoo-user

On 2014-12-23 15:22, Stefan G. Weichinger wrote:
> Anyone here running ceph / http://ceph.com/ on gentoo?
> 
> As server(s) or client or ... ?
> 
> I am learning about this right now and currently on my way to a first
> small test cluster. Very interesting possibilities !
> 
> Stefan

I tried the filesystem with kernel 3.7 a year ago (to export distfiles
to several machines). Since it's kernel-based, a bug caused my system to
reboot, and sadly the machine was a database server. However, the project
mentioned that the filesystem wasn't production-ready at that time. Never
tried the object storage, though.



* Re: [gentoo-user] ceph on gentoo?
  2014-12-23 15:25 ` Tomas Mozes
@ 2014-12-23 15:28   ` Stefan G. Weichinger
  2014-12-23 20:27     ` Stefan G. Weichinger
  2014-12-23 22:53     ` [gentoo-user] " Bill Kenworthy
  0 siblings, 2 replies; 32+ messages in thread
From: Stefan G. Weichinger @ 2014-12-23 15:28 UTC (permalink / raw
  To: gentoo-user

On 23.12.2014 at 16:25, Tomas Mozes wrote:

> I tried the filesystem with kernel 3.7 a year ago (to export distfiles
> to several machines). Since it's kernel-based, a bug caused my system to
> reboot, and sadly the machine was a database server. However, the project
> mentioned that the filesystem wasn't production-ready at that time. Never
> tried the object storage, though.

cephfs still is mentioned as kind of beta in most of the talks I saw on
youtube.

I am going to try the object store ... and I am interested in using it
with qemu/kvm.

S




* Re: [gentoo-user] ceph on gentoo?
  2014-12-23 15:20 ` Andrew Savchenko
@ 2014-12-23 15:36   ` Stefan G. Weichinger
  2014-12-23 18:07     ` [gentoo-user] " James
  2014-12-24  1:02     ` [gentoo-user] " Andrew Savchenko
  2014-12-27  1:47   ` Bruce Hill
  1 sibling, 2 replies; 32+ messages in thread
From: Stefan G. Weichinger @ 2014-12-23 15:36 UTC (permalink / raw
  To: gentoo-user

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

On 23.12.2014 at 16:20, Andrew Savchenko wrote:
> Hi,
> 
> On Tue, 23 Dec 2014 15:22:26 +0100 Stefan G. Weichinger wrote:
>> Anyone here running ceph / http://ceph.com/ on gentoo?
>> 
>> As server(s) or client or ... ?
>> 
>> I am learning about this right now and currently on my way to a
>> first small test cluster. Very interesting possibilities !
> 
> We used it about a year ago for our infrastructure (backup and live
> sync of HA systems), obviously both servers and clients were used,
> both on Gentoo. We stopped this because of numerous kernel panics,
> not to mention that it was quite slow even after tuning. So we
> switched to another solution for data sync and backups: clsync. (It
> was developed from scratch for our needs; it is not a
> filesystem, but may be considered a more powerful alternative to
> lsyncd.)
> 
> Though this was a year ago or so. Your mileage may vary and it is
> likely that during this year stability was improved. Ceph is very
> promising by both design and capabilities.

I agree!

I expect that there were many changes over the course of a year ... they
went from v0.72 (5th stable release) in Nov 2013 to v0.80 in May 2014
(6th stable release) ... and v0.87 in Oct 2014 (7th ...)

We get 0.80.7 in ~amd64 now ... I will see.

Regarding "slow": what kind of hardware did you use and how many nodes/OSDs?

Thanks, Stefan

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2

iQIcBAEBAgAGBQJUmYv5AAoJEClcuD1V0PzmjTEP/2xdLh/rw9SEYzjeJwShrZOn
su0tl9bqgGLC+FdKjSJ5XYCkV7VEGe4UTG1SRbPIyF246y88LoRxKZNpMcrm367o
rw7dGI81uwYQPOe/ajDPXHeJTZjvfNslQugzxvHL9OxRhUcNrnw1kN3ymL3WlqCS
REYcVsvEh3JL8y2edPZLMVGT5FV1P6U6UgehGYYfwT0jNJQQINEq31jf/fB/3k4n
/jrnB45eKJRxwNXDm0HhtwmKWXOKWF9d2B9qvHKkYCtPnZt/5pPbDh1CX6OlU1Gm
SuaVZZzCTSA88umZKq7rBKmrs09v458OlvvdcsRb7EVQ/bF26KKM0RT2xzDXvx1A
KuEveiDcSulijFpxL+rk4GTNpyrc9/oz3SBKYK97VPIv+MS+IPsnAnXiVy3EYg1I
UGOdy3UjIVCaZn3FVnvgbgJq6hxmsYpFB+3YED6Ei0f80efHEn+L4oeXIgaF411n
57dOsjpnm3WnfHqht7BsU+qIDfD3haPgh0RBAVFk1KPBzwqvU0fJIFbEIPUN379E
iIZlZsX9BQocAsyGzTku3G2AZScRoSjYNXfT///vAYnm2BHb274mjBkzsjp3NExN
r8GZx/Rjb1qBnhIJsPWKe1sMXf4zeZ+bq4d9a8pzEycrFE2YYFJggOObLJFpF505
W9ahxI/qsoWQ/gX6+HkU
=Xl3X
-----END PGP SIGNATURE-----



* [gentoo-user] Re: ceph on gentoo?
  2014-12-23 15:36   ` Stefan G. Weichinger
@ 2014-12-23 18:07     ` James
  2014-12-24  1:02     ` [gentoo-user] " Andrew Savchenko
  1 sibling, 0 replies; 32+ messages in thread
From: James @ 2014-12-23 18:07 UTC (permalink / raw
  To: gentoo-user

Stefan G. Weichinger <lists <at> xunil.at> writes:


> > Though this was a year ago or so. Your mileage may vary and it is
> > likely that during this year stability was improved. Ceph is very
> > promising by both design and capabilities.

> I expect that there were many changes over the time of a year ... they
> went from v0.72 (5th stable release) in Nov 2013 to v0.80 in May 2014
> (6th stable release) ... and v0.87 in Oct 2014 (7th ...)
> We get 0.80.7 in ~amd64 now ... I will see.
> Ad "slow": what kind of hardware did you use and how many nodes/osds?

I too am building up a (3-node) cluster on btrfs/ceph.
My hardware is AMD 8350 (8 cores) with 32 GB of RAM on each mobo. I have water
coolers installed and intend to crank up to 6 GHz after the cluster is
stable. My work has been idle for about a month due to other, more pressing,
needs. My cluster will be openrc-centric; many others are systemd-centric. ymmv.

I intend to run mesos+spark to keep some codes "in-memory" and thus
only write out to HD when large jobs are finished. Here is the lab
that is pushing the state of the art on "in-memory" computations [1].
Spark is now managed under the Apache umbrella of projects.

I believe that most of the current problems folks encounter with btrfs+ceph
are related to the need to tune the underlying linux
kernels with advanced tools and testing [2].

I think there is an ebuild (don't remember where) that puts trace-cmd,
ftrace and KernelShark into a gentoo GUI package. I opened a bug on
BGO (Bug 517428), but so far it is still in search of a maintainer.


I hope an active gentoo-clustering group emerges after the herds/projects
at gentoo are re-organized. The science herd/project
is your best bet for finding folks with similar interests in gentoo clusters,
imho [3].


hth,
James


[1] https://amplab.cs.berkeley.edu/

[2] http://lwn.net/Articles/425583/

[3] http://wiki.gentoo.org/wiki/Project:Science/Overlay






* Re: [gentoo-user] ceph on gentoo?
  2014-12-23 15:28   ` Stefan G. Weichinger
@ 2014-12-23 20:27     ` Stefan G. Weichinger
  2014-12-23 20:40       ` Rich Freeman
  2014-12-23 22:53     ` [gentoo-user] " Bill Kenworthy
  1 sibling, 1 reply; 32+ messages in thread
From: Stefan G. Weichinger @ 2014-12-23 20:27 UTC (permalink / raw
  To: gentoo-user

On 23.12.2014 at 16:28, Stefan G. Weichinger wrote:
> On 23.12.2014 at 16:25, Tomas Mozes wrote:
> 
>> I tried the filesystem with kernel 3.7 a year ago (to export distfiles
>> to several machines). Since it's kernel-based, a bug caused my system to
>> reboot, and sadly the machine was a database server. However, the project
>> mentioned that the filesystem wasn't production-ready at that time. Never
>> tried the object storage, though.
> 
> cephfs still is mentioned as kind of beta in most of the talks I saw on
> youtube.
> 
> I am going to try the object store ... and I am interested in using it
> with qemu/kvm.

got my first two demo nodes up and in-sync ... what a success ;-)

As so often with new technology, one has to learn and understand things
first ... the next nodes should be way easier to set up.

I already set up a block device on the store and mounted it on my
desktop machine ... it works!

Performance aside ... right now the cluster runs on 2 VMs.

I should file a bug, too:

/usr/bin/rbd looks for /sbin/udevadm while it is located in /usr/bin.

Solved it with "ln -s" for now ... but, you know, this should be
configured correctly at build time.
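For the record, the workaround described above amounts to a single symlink (paths taken from the message; demonstrated here in a scratch directory so the sketch runs without root):

```shell
# rbd hard-codes /sbin/udevadm, but udev installs it in /usr/bin on this box.
# A symlink pointing at the real binary papers over it until the ebuild is fixed.
# A scratch directory stands in for /sbin so this can run unprivileged.
mkdir -p /tmp/fake-sbin
ln -sf /usr/bin/udevadm /tmp/fake-sbin/udevadm
readlink /tmp/fake-sbin/udevadm   # -> /usr/bin/udevadm
```

The actual fix on a live box would be `ln -s /usr/bin/udevadm /sbin/udevadm` as root.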

Now for some qemu/libvirt-testing ...

S



* Re: [gentoo-user] ceph on gentoo?
  2014-12-23 20:27     ` Stefan G. Weichinger
@ 2014-12-23 20:40       ` Rich Freeman
  2014-12-23 20:54         ` Stefan G. Weichinger
  0 siblings, 1 reply; 32+ messages in thread
From: Rich Freeman @ 2014-12-23 20:40 UTC (permalink / raw
  To: gentoo-user

On Tue, Dec 23, 2014 at 3:27 PM, Stefan G. Weichinger <lists@xunil.at> wrote:
>
> got my first two demo nodes up and in-sync ... what a success ;-)

I started to look into ceph, and my biggest issue is that they don't
protect against silent corruption. They do checksum data during
transit, but not at rest.  That means that you could end up with 3
different copies of a file and no way to know which one is the right
one.  Simply storing the data on btrfs isn't enough - that will
protect against files changing on the disk itself, but you could STILL
end up with 3 different copies of a file on different nodes and no way
to know which one is right, if the error happens at a higher level
than the btrfs filesystem/disk.

--
Rich



* Re: [gentoo-user] ceph on gentoo?
  2014-12-23 20:40       ` Rich Freeman
@ 2014-12-23 20:54         ` Stefan G. Weichinger
  2014-12-23 21:02           ` Rich Freeman
  2014-12-23 21:08           ` [gentoo-user] " Holger Hoffstätte
  0 siblings, 2 replies; 32+ messages in thread
From: Stefan G. Weichinger @ 2014-12-23 20:54 UTC (permalink / raw
  To: gentoo-user

On 23.12.2014 at 21:40, Rich Freeman wrote:
> On Tue, Dec 23, 2014 at 3:27 PM, Stefan G. Weichinger <lists@xunil.at> wrote:
>>
>> got my first two demo nodes up and in-sync ... what a success ;-)
> 
> I started to look into ceph, and my biggest issue is that they don't
> protect against silent corruption. They do checksum data during
> transit, but not at rest.  That means that you could end up with 3
> different copies of a file and no way to know which one is the right
> one.  Simply storing the data on btrfs isn't enough - that will
> protect against files changing on the disk itself, but you could STILL
> end up with 3 different copies of a file on different nodes and no way
> to know which one is right, if the error happens at a higher level
> than the btrfs filesystem/disk.

but ...  oh my. *sigh*

I assume the devs there have a clever answer to this as well?

At least for the future ... now that btrfs is declared stable at least
for the more trivial setups (read: not RAID5/6) by Chris Mason himself
... btrfs should be usable for ceph-OSDs soon.

In the other direction: what protects against these errors you mention?

S








* Re: [gentoo-user] ceph on gentoo?
  2014-12-23 20:54         ` Stefan G. Weichinger
@ 2014-12-23 21:02           ` Rich Freeman
  2014-12-23 21:08           ` [gentoo-user] " Holger Hoffstätte
  1 sibling, 0 replies; 32+ messages in thread
From: Rich Freeman @ 2014-12-23 21:02 UTC (permalink / raw
  To: gentoo-user

On Tue, Dec 23, 2014 at 3:54 PM, Stefan G. Weichinger <lists@xunil.at> wrote:
> On 23.12.2014 at 21:40, Rich Freeman wrote:
>> On Tue, Dec 23, 2014 at 3:27 PM, Stefan G. Weichinger <lists@xunil.at> wrote:
>>>
>>> got my first two demo nodes up and in-sync ... what a success ;-)
>>
>> I started to look into ceph, and my biggest issue is that they don't
>> protect against silent corruption. They do checksum data during
>> transit, but not at rest.  That means that you could end up with 3
>> different copies of a file and no way to know which one is the right
>> one.  Simply storing the data on btrfs isn't enough - that will
>> protect against files changing on the disk itself, but you could STILL
>> end up with 3 different copies of a file on different nodes and no way
>> to know which one is right, if the error happens at a higher level
>> than the btrfs filesystem/disk.
>
> but ...  oh my. *sigh*
>
> I assume the devs there have a clever answer to this as well?
>
> At least for the future ... now that btrfs is declared stable at least
> for the more trivial setups (read: not RAID5/6) by Chris Mason himself
> ... btrfs should be usable for ceph-OSDs soon.

Proclamations of stability do not stable make.  :)  I'm using btrfs
now, but I've had my share of headaches (especially with the 3.15/16
kernels).  I think the rate of change/regressions is a good long-term
sign, but I'd stick to longterm kernels if you use it (which is
different advice than I used to give).

>
> In the other direction: what protects against these errors you mention?
>

If I had a solution I'd be using it.  I don't use ceph.  Btrfs
protects against them just fine for a single system.  The problem with
ceph is cross-node consistency.

--
Rich



* [gentoo-user] Re: ceph on gentoo?
  2014-12-23 20:54         ` Stefan G. Weichinger
  2014-12-23 21:02           ` Rich Freeman
@ 2014-12-23 21:08           ` Holger Hoffstätte
  2014-12-23 21:12             ` Stefan G. Weichinger
  2014-12-24  3:24             ` Rich Freeman
  1 sibling, 2 replies; 32+ messages in thread
From: Holger Hoffstätte @ 2014-12-23 21:08 UTC (permalink / raw
  To: gentoo-user

On Tue, 23 Dec 2014 21:54:00 +0100, Stefan G. Weichinger wrote:

> At least for the future ... now that btrfs is declared stable at least

Yeah, no. 3.18 is finally OK-ish (after they missed the .17 merge window 
with a huge number of fixes) but you really want to wait for 3.19.

> In the other direction: what protects against these errors you mention?

ceph scrub :)

-h




* Re: [gentoo-user] Re: ceph on gentoo?
  2014-12-23 21:08           ` [gentoo-user] " Holger Hoffstätte
@ 2014-12-23 21:12             ` Stefan G. Weichinger
  2014-12-24  3:24             ` Rich Freeman
  1 sibling, 0 replies; 32+ messages in thread
From: Stefan G. Weichinger @ 2014-12-23 21:12 UTC (permalink / raw
  To: gentoo-user

On 23.12.2014 at 22:08, Holger Hoffstätte wrote:
> On Tue, 23 Dec 2014 21:54:00 +0100, Stefan G. Weichinger wrote:
> 
>> At least for the future ... now that btrfs is declared stable at least
> 
> Yeah, no. 3.18 is finally OK-ish (after they missed the .17 merge window 
> with a huge number of fixes) but you really want to wait for 3.19.

I don't plan on going into production with btrfs as OSDs soon, no!

Given my newbie-status with ceph this will take some time to test things
and then maybe set up some real hardware for first tests.

For now it's two small VMs with 3 OSDs (on XFS, btw) ... just a demo setup.

>> In the other direction: what protects against these errors you mention?
> 
> ceph scrub :)

ah, sure ;-)




* Re: [gentoo-user] ceph on gentoo?
  2014-12-23 15:28   ` Stefan G. Weichinger
  2014-12-23 20:27     ` Stefan G. Weichinger
@ 2014-12-23 22:53     ` Bill Kenworthy
  1 sibling, 0 replies; 32+ messages in thread
From: Bill Kenworthy @ 2014-12-23 22:53 UTC (permalink / raw
  To: gentoo-user

On 23/12/14 23:28, Stefan G. Weichinger wrote:
> On 23.12.2014 at 16:25, Tomas Mozes wrote:
> 
>> I tried the filesystem with kernel 3.7 a year ago (to export distfiles
>> to several machines). Since it's kernel-based, a bug caused my system to
>> reboot, and sadly the machine was a database server. However, the project
>> mentioned that the filesystem wasn't production-ready at that time. Never
>> tried the object storage, though.
> 
> cephfs still is mentioned as kind of beta in most of the talks I saw on
> youtube.
> 
> I am going to try the object store ... and I am interested in using it
> with qemu/kvm.
> 
> S
> 
> 

Tried that (qemu/kvm mostly gentoo VM's up to 64G) ... gave up as it was
too slow/unstable with the hardware I had.

You need a 10G network and a much larger number of hosts than 3 to do
serious I/O.  I do think it's something that is only practical with a
datacentre-sized installation.

Using it for VM images was very slow and unstable - I did use btrfs
under it and not xfs (the recommended choice for production?) - moved to
pure btrfs and it is SO much better :)

Rebuilding it every few weeks, and having to keep backups of a couple of
terabytes of disposable data because rebuilding was so slow, was the
last straw ...


What I would like (and what I was looking to ceph for) is a
distributable (across a relatively slow WAN) synced file system that
places only data in use close to the host using it - will have to get
back to it one day.



* Re: [gentoo-user] ceph on gentoo?
  2014-12-23 15:36   ` Stefan G. Weichinger
  2014-12-23 18:07     ` [gentoo-user] " James
@ 2014-12-24  1:02     ` Andrew Savchenko
  2014-12-24  9:58       ` Stefan G. Weichinger
  2014-12-26  6:38       ` Bruce Hill
  1 sibling, 2 replies; 32+ messages in thread
From: Andrew Savchenko @ 2014-12-24  1:02 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 1680 bytes --]

Hi,

On Tue, 23 Dec 2014 16:36:25 +0100 Stefan G. Weichinger wrote:
> On 23.12.2014 at 16:20, Andrew Savchenko wrote:
[...]
> > We used it about a year ago for our infrastructure (backup and live
> > sync of HA systems), obviously both servers and clients were used,
> > both on Gentoo. We stopped this because of numerous kernel panics,
> > not to mention that it was quite slow even after tuning. So we
> > switched to another solution for data sync and backups: clsync. (It
> > was developed from scratch for our needs; it is not a
> > filesystem, but may be considered a more powerful alternative to
> > lsyncd.)
> > 
> > Though this was a year ago or so. Your mileage may vary and it is
> > likely that during this year stability was improved. Ceph is very
> > promising by both design and capabilities.
> 
> I agree!
> 
> I expect that there were many changes over the time of a year ... they
> went from v0.72 (5th stable release) in Nov 2013 to v0.80 in May 2014
> (6th stable release) ... and v0.87 in Oct 2014 (7th ...)
> 
> We get 0.80.7 in ~amd64 now ... I will see.
> 
> Ad "slow": what kind of hardware did you use and how many nodes/osds?

We used 3 servers, where each server was both node and OSD (that's
our hardware limitation). Each machine had hardware akin to 2x
Xeon E5450, 16 GB RAM and 2 Gbps network connectivity (via bonding of
two 1 Gbps interfaces).

We went through a lot of software and kernel tuning; this helped to
solve many issues, but not all of them: ceph nodes still got kernel
panics once in a while. This was unacceptable, so we moved to
other approaches to our problems.

Best regards,
Andrew Savchenko

[-- Attachment #2: Type: application/pgp-signature, Size: 819 bytes --]


* Re: [gentoo-user] Re: ceph on gentoo?
  2014-12-23 21:08           ` [gentoo-user] " Holger Hoffstätte
  2014-12-23 21:12             ` Stefan G. Weichinger
@ 2014-12-24  3:24             ` Rich Freeman
  2014-12-24  4:34               ` Bill Kenworthy
  2014-12-24 10:16               ` Holger Hoffstätte
  1 sibling, 2 replies; 32+ messages in thread
From: Rich Freeman @ 2014-12-24  3:24 UTC (permalink / raw
  To: gentoo-user

On Tue, Dec 23, 2014 at 4:08 PM, Holger Hoffstätte
<holger.hoffstaette@googlemail.com> wrote:
> On Tue, 23 Dec 2014 21:54:00 +0100, Stefan G. Weichinger wrote:
>
>> In the other direction: what protects against these errors you mention?
>
> ceph scrub :)
>

Are you sure about that?  I was under the impression that it just
checked that everything was retrievable.  I'm not sure if it compares
all the copies of everything to make sure that they match, and if they
don't match I don't think that it has any way to know which one is
right.  I believe an algorithm just picks one as the official version,
and it may or may not be identical to the one that was originally
stored.

If the data is on btrfs then it is protected from silent corruption
since the filesystem will give an error when that node tries to read a
file, and presumably the cluster will find another copy elsewhere.  On
the other hand if the file were logically overwritten in some way
above the btrfs layer then btrfs won't complain and the cluster won't
realize the file has been corrupted.

If I'm wrong on this by all means point me to the truth.  From
everything I read though I don't think that ceph maintains a list of
checksums on all the data that is stored while it is at rest.

--
Rich



* Re: [gentoo-user] Re: ceph on gentoo?
  2014-12-24  3:24             ` Rich Freeman
@ 2014-12-24  4:34               ` Bill Kenworthy
  2014-12-24 10:16               ` Holger Hoffstätte
  1 sibling, 0 replies; 32+ messages in thread
From: Bill Kenworthy @ 2014-12-24  4:34 UTC (permalink / raw
  To: gentoo-user

On 24/12/14 11:24, Rich Freeman wrote:
> On Tue, Dec 23, 2014 at 4:08 PM, Holger Hoffstätte
> <holger.hoffstaette@googlemail.com> wrote:
>> On Tue, 23 Dec 2014 21:54:00 +0100, Stefan G. Weichinger wrote:
>>
>>> In the other direction: what protects against these errors you mention?
>>
>> ceph scrub :)
>>
> 
> Are you sure about that?  I was under the impression that it just
> checked that everything was retrievable.  I'm not sure if it compares
> all the copies of everything to make sure that they match, and if they
> don't match I don't think that it has any way to know which one is
> right.  I believe an algorithm just picks one as the official version,
> and it may or may not be identical to the one that was originally
> stored.
> 
> If the data is on btrfs then it is protected from silent corruption
> since the filesystem will give an error when that node tries to read a
> file, and presumably the cluster will find another copy elsewhere.  On
> the other hand if the file were logically overwritten in some way
> above the btrfs layer then btrfs won't complain and the cluster won't
> realize the file has been corrupted.
> 
> If I'm wrong on this by all means point me to the truth.  From
> everything I read though I don't think that ceph maintains a list of
> checksums on all the data that is stored while it is at rest.
> 
> --
> Rich
> 

Scrub used to pick up and fix errors - well, mostly fix.  Sometimes the
whole thing collapses in a heap.  The problem with small systems is that
they are already very I/O restricted, and adding either a scrub or deep
scrub slows them very noticeably more.  On terabytes of data it would
take many hours, after which checking the logs might find another error
message, so it had to be triggered again.  I suspect some errors I got
were btrfs-related, but ceph certainly contributed its share.  Not
sure of the cause, but they "seemed" to occur when the cluster was doing
anything other than idling.  As I used the "golden master/clone" approach
to VMs, corruption in the wrong place was very noticeable :(

By the time I gave up it was getting better, but I came to the
conclusion that the expensive upgrades I needed to fix the I/O problems
of running lots of VMs at once weren't worth it.

BillK




* Re: [gentoo-user] ceph on gentoo?
  2014-12-24  1:02     ` [gentoo-user] " Andrew Savchenko
@ 2014-12-24  9:58       ` Stefan G. Weichinger
  2014-12-24 18:15         ` Andrew Savchenko
  2014-12-26  6:38       ` Bruce Hill
  1 sibling, 1 reply; 32+ messages in thread
From: Stefan G. Weichinger @ 2014-12-24  9:58 UTC (permalink / raw
  To: gentoo-user

On 24.12.2014 at 02:02, Andrew Savchenko wrote:

>> Ad "slow": what kind of hardware did you use and how many nodes/osds?
> 
> We used 3 servers, where each server was both node and OSD (that's
> our hardware limitation). Each machine had hardware akin to 2x
> Xeon E5450, 16 GB RAM and 2 Gbps network connectivity (via bonding of
> two 1 Gbps interfaces).
> 
> We went through a lot of software and kernel tuning; this helped to
> solve many issues, but not all of them: ceph nodes still got kernel
> panics once in a while. This was unacceptable, so we moved to
> other approaches to our problems.

Hmm, that dampens my enthusiasm ;-)

I watched a presentation on youtube yesterday where they recommended one
SSD as journal per ~4 hard disks ... and 4-8 hard disks per OSD node
maximum (if I remember correctly). Plus ~1 GHz / 1 core of CPU per OSD
... as a rule of thumb. And 500 MB RAM per OSD ... those were the
recommendations in

http://youtu.be/C3lxGuAWEWU
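Taken literally, those rules of thumb make node sizing a quick back-of-the-envelope calculation. The node size here is a hypothetical example; the ratios are from the talk as noted above, so treat them as assumptions rather than gospel:

```shell
# Sizing sketch for a hypothetical 8-disk OSD node, one OSD per disk,
# using the talk's ratios: ~1 journal SSD per 4 HDDs, ~1 GHz / 1 core
# per OSD, ~500 MB RAM per OSD.
OSDS=8
echo "journal SSDs:            $(( (OSDS + 3) / 4 ))"   # round up
echo "CPU cores (~1 GHz each): ${OSDS}"
echo "RAM for OSD daemons:     $(( OSDS * 500 )) MB"
```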

-

Did you have the journal separated on SSDs?
I think that would make quite a difference both in performance and cost ;)

Do you remember the kernel version and ceph version?

How many disks / OSDs?

Sorry for being so curious ..

Thanks, Stefan




* [gentoo-user] Re: ceph on gentoo?
  2014-12-24  3:24             ` Rich Freeman
  2014-12-24  4:34               ` Bill Kenworthy
@ 2014-12-24 10:16               ` Holger Hoffstätte
  2014-12-24 12:40                 ` Rich Freeman
  1 sibling, 1 reply; 32+ messages in thread
From: Holger Hoffstätte @ 2014-12-24 10:16 UTC (permalink / raw
  To: gentoo-user

On Tue, 23 Dec 2014 22:24:30 -0500, Rich Freeman wrote:

> On Tue, Dec 23, 2014 at 4:08 PM, Holger Hoffstätte
> <holger.hoffstaette@googlemail.com> wrote:
>> On Tue, 23 Dec 2014 21:54:00 +0100, Stefan G. Weichinger wrote:
>>
>>> In the other direction: what protects against these errors you
>>> mention?
>>
>> ceph scrub :)
>>
>>
> Are you sure about that?  I was under the impression that it just
> checked that everything was retrievable.  I'm not sure if it compares
> all the copies of everything to make sure that they match, and if they
> don't match I don't think that it has any way to know which one is
> right.  I believe an algorithm just picks one as the official version,
> and it may or may not be identical to the one that was originally
> stored.

There's light and deep scrub; the former does what you described,
while deep does checksumming. In case of mismatch it should create
a quorum. Whether that actually happens and/or works is another
matter. ;)

Unfortunately a full point-in-time deep scrub and the resulting creation 
of checksums is more or less economically unviable with growing amounts 
of data; this really should be incremental. All distributed databases 
suffer from the same problem, and the better ones eventually adopted the 
incremental approach.

http://ceph.com/docs/master/rados/configuration/osd-config-ref/#scrubbing
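For reference, the scrub cadence discussed above is tunable per OSD in ceph.conf; a minimal sketch with option names from the linked docs (the values are illustrative placeholders, not recommendations):

```ini
[osd]
osd scrub min interval = 86400      # light scrub at most once a day per PG
osd deep scrub interval = 604800    # full checksumming (deep) pass weekly
osd max scrubs = 1                  # limit concurrent scrubs per OSD
```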

I know how btrfs scrub works, but it too (and in fact every storage system)
suffers from the problem of having to decide which copy is "good"; they 
all have different points in their timeline where they need to make a 
decision at which a checksum is considered valid. When we're talking 
about preventing bitrot, just having another copy is usually enough.

On top of that btrfs will at least tell you which file is suspected, 
thanks to its wonderful backreferences.

-h




* Re: [gentoo-user] Re: ceph on gentoo?
  2014-12-24 10:16               ` Holger Hoffstätte
@ 2014-12-24 12:40                 ` Rich Freeman
  0 siblings, 0 replies; 32+ messages in thread
From: Rich Freeman @ 2014-12-24 12:40 UTC (permalink / raw
  To: gentoo-user

On Wed, Dec 24, 2014 at 5:16 AM, Holger Hoffstätte
<holger.hoffstaette@googlemail.com> wrote:
>
> There's light and deep scrub; the former does what you described,
> while deep does checksumming. In case of mismatch it should create
> a quorum. Whether that actually happens and/or works is another
> matter. ;)
>

If you have 10 copies of a file and 9 are identical and 1 differs,
then there is little risk in this approach.  The problem is that if
you have two copies of the file and they are different, all it can do
is pick one, which is what I believe it does.  So, not only isn't this
as efficient as n+2 raid, or 2*n raid, but you end up needing 3-4*n
redundancy.  That is a LOT of wasted space simply to avoid having a
checksum.
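The value of a stored checksum can be made concrete with a toy sketch: with a write-time checksum, even a single pair of divergent replicas is decidable, whereas bare majority voting needs many copies. Plain shell, hypothetical file names, not how ceph stores anything:

```shell
# Three replicas of one object; corrupt one silently, then use the
# checksum recorded at write time to decide which copies are good.
cd "$(mktemp -d)"
echo "important data" > rep1
cp rep1 rep2; cp rep1 rep3
sha256sum rep1 | awk '{print $1}' > manifest   # checksum stored at write time
printf 'bitrot' > rep2                          # silent corruption of one replica
good=$(cat manifest)
for f in rep1 rep2 rep3; do
  h=$(sha256sum "$f" | awk '{print $1}')
  if [ "$h" = "$good" ]; then echo "$f: ok"; else echo "$f: corrupt"; fi
done
# Without the manifest, two replicas that disagree are a coin flip.
```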

> Unfortunately a full point-in-time deep scrub and the resulting creation
> of checksums is more or less economically unviable with growing amounts
> of data; this really should be incremental.

Since checksums aren't stored anywhere, you end up having to scan
every node and compare all the checksums across them.  Depending on
how that works it is likely to be a fairly synchronous operation,
which makes it much harder to deal with file access during the
operation.  If they just sequentially scan each disk, create an index,
sort the index, and then pass it on to some central node to do all the
comparisons, that would be better than doing it completely
synchronously.

>
> I know how btrfs scrub works, but it too (and in fact every storage system)
> suffers from the problem of having to decide which copy is "good"; they
> all have different points in their timeline where they need to make a
> decision at which a checksum is considered valid. When we're talking
> about preventing bitrot, just having another copy is usually enough.
>
> On top of that btrfs will at least tell you which file is suspected,
> thanks to its wonderful backreferences.

btrfs maintains checksums for every block on the disk, stored apart from
those blocks.  Sure, if your metadata and data all gets corrupted at once
you could have problems, but you'll at least know that you have
problems.  A btrfs scrub is asynchronous - each disk can be checked
independently of the others as there is no need to compare checksums
for files across disks, since the checksums are pre-calculated.  If a
bad extent is found, it is re-copied from one of the good disks (which
of course is synchronous).
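A toy model of that per-device scrub (the extent layout here is
invented for illustration, not btrfs's on-disk format):

```python
import zlib

# Each "device" stores (data, stored_crc) extents; the checksum lives
# apart from the data, so a device can be scrubbed on its own without
# ever reading its mirror.
def scrub_device(extents):
    """Return indices of extents whose data no longer matches the
    stored checksum -- no cross-device comparison needed."""
    return [i for i, (data, crc) in enumerate(extents)
            if zlib.crc32(data) != crc]

def repair(bad_idx, device, mirror):
    """Re-copy a bad extent from the good mirror (the one step that
    is synchronous across devices)."""
    device[bad_idx] = mirror[bad_idx]

disk_a = [(b"extent0", zlib.crc32(b"extent0")),
          (b"extentX", zlib.crc32(b"extent1"))]   # bit rot in extent 1
disk_b = [(b"extent0", zlib.crc32(b"extent0")),
          (b"extent1", zlib.crc32(b"extent1"))]

for i in scrub_device(disk_a):
    repair(i, disk_a, disk_b)
assert scrub_device(disk_a) == []
```

Only the repair touches a second device; detection stays local.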

Because the scans are asynchronous, this performs a lot better than a
RAID scrub: a read against a mirror need disrupt only one of the
device scrubs while the other proceeds.  Indeed, you could just scrub
the devices one at a time, and then only writes or parallel reads
take a hit (for mirrored mode).

Btrfs is of course immature and can't recover errors for raid5/6
modes, and of course those raid modes would not perform as well when
being scrubbed since a read requires access to n disks and a write
requires access to n+1/2 disks (note though that the use of checksums
makes it safe to do a read without reading full parity - I have no
idea if the btrfs implementation takes advantage of this).

For a single-node system btrfs (and of course zfs) has a much more
robust design IMHO.  Now, of course, the node itself becomes the
bottleneck, and that is what ceph is intended to handle.  The problem
is that, like pre-zfs RAID, it handles total failure well and data
corruption less well.  Indeed, unless it always checks multiple nodes
on every read, a silent corruption is probably not going to be detected
without a scrub (while btrfs and zfs compare checksums on EVERY read,
since that is much less expensive than reading multiple devices).

I'm sure this could be fixed in ceph, but it doesn't seem like anybody
is prioritizing that.

--
Rich


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [gentoo-user] ceph on gentoo?
  2014-12-24  9:58       ` Stefan G. Weichinger
@ 2014-12-24 18:15         ` Andrew Savchenko
  0 siblings, 0 replies; 32+ messages in thread
From: Andrew Savchenko @ 2014-12-24 18:15 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 514 bytes --]

On Wed, 24 Dec 2014 10:58:35 +0100 Stefan G. Weichinger wrote:
> Did you have the journal separated on SSDs?

We don't have SSDs at all.

> I think that would make quite a difference both in performance and cost ;)
> 
> Do you remember the kernel version and ceph version?

Not exactly :/ It was something rather new at that time, like 3.12.x.

> How many disks / OSDs?

3 OSDs with raid6 attached to each one.

> Sorry for being so curious ..

Not a problem :)

Best regards,
Andrew Savchenko

[-- Attachment #2: Type: application/pgp-signature, Size: 819 bytes --]

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [gentoo-user] ceph on gentoo?
  2014-12-24  1:02     ` [gentoo-user] " Andrew Savchenko
  2014-12-24  9:58       ` Stefan G. Weichinger
@ 2014-12-26  6:38       ` Bruce Hill
  2014-12-26  7:38         ` Thomas Mueller
  2014-12-27 15:19         ` Andrew Savchenko
  1 sibling, 2 replies; 32+ messages in thread
From: Bruce Hill @ 2014-12-26  6:38 UTC (permalink / raw
  To: gentoo-user

To whoever controls this list...

I just arrived home to find my mailbox spammed with hundreds of messages from
this luser Andrew Savchenko <bircoph@gentoo.org>

What is the explanation for this please?


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re:  [gentoo-user] ceph on gentoo?
  2014-12-26  6:38       ` Bruce Hill
@ 2014-12-26  7:38         ` Thomas Mueller
  2014-12-26  8:11           ` Dale
  2014-12-27 15:19         ` Andrew Savchenko
  1 sibling, 1 reply; 32+ messages in thread
From: Thomas Mueller @ 2014-12-26  7:38 UTC (permalink / raw
  To: gentoo-user


> from Bruce Hill: 

> To whoever controls this list...

> I just arrived home to find my mailbox spammed with hundreds of messages from
> this luser Andrew Savchenko <bircoph@gentoo.org>

> What is the explanation for this please?

I didn't get these spams.  Are you sure they are from Andrew Savchenko?

Check the headers: spammers are known to fake their email address.

Tom



^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [gentoo-user] ceph on gentoo?
  2014-12-26  7:38         ` Thomas Mueller
@ 2014-12-26  8:11           ` Dale
  2014-12-26  8:15             ` Stefan G. Weichinger
  0 siblings, 1 reply; 32+ messages in thread
From: Dale @ 2014-12-26  8:11 UTC (permalink / raw
  To: gentoo-user

Thomas Mueller wrote:
>> from Bruce Hill: 
>> To whoever controls this list...
>> I just arrived home to find my mailbox spammed with hundreds of messages from
>> this luser Andrew Savchenko <bircoph@gentoo.org>
>> What is the explanation for this please?
> I didn't get these spams.  Are you sure they are from Andrew Savchenko?
>
> Check the headers: spammers are known to fake their email address.
>
> Tom
>
>
>

I didn't get any here either.  Unless Gmail filtered it, which should
be disabled. 

Dale

:-)  :-)


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [gentoo-user] ceph on gentoo?
  2014-12-26  8:11           ` Dale
@ 2014-12-26  8:15             ` Stefan G. Weichinger
  2014-12-26 13:55               ` Matti Nykyri
  0 siblings, 1 reply; 32+ messages in thread
From: Stefan G. Weichinger @ 2014-12-26  8:15 UTC (permalink / raw
  To: gentoo-user

Am 26.12.2014 um 09:11 schrieb Dale:

> I didn't get any here either.  Unless Gmail filtered it which should be
> disabled. 

me = 3rd one not getting them.
Without gmail (but other antispam-measures ...).

S



^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [gentoo-user] ceph on gentoo?
  2014-12-26  8:15             ` Stefan G. Weichinger
@ 2014-12-26 13:55               ` Matti Nykyri
  0 siblings, 0 replies; 32+ messages in thread
From: Matti Nykyri @ 2014-12-26 13:55 UTC (permalink / raw
  To: gentoo-user@lists.gentoo.org

> On Dec 26, 2014, at 10:15, "Stefan G. Weichinger" <lists@xunil.at> wrote:
> 
>> Am 26.12.2014 um 09:11 schrieb Dale:
>> 
>> I didn't get any here either.  Unless Gmail filtered it which should be
>> disabled.
> 
> me = 3rd one not getting them.
> Without gmail (but other antispam-measures ...).

+1


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [gentoo-user] ceph on gentoo?
  2014-12-23 15:20 ` Andrew Savchenko
  2014-12-23 15:36   ` Stefan G. Weichinger
@ 2014-12-27  1:47   ` Bruce Hill
  2014-12-27  1:55     ` Rich Freeman
  1 sibling, 1 reply; 32+ messages in thread
From: Bruce Hill @ 2014-12-27  1:47 UTC (permalink / raw
  To: gentoo-user

On Tue, Dec 23, 2014 at 06:20:05PM +0300, Andrew Savchenko wrote:
> Hi,
> 
> On Tue, 23 Dec 2014 15:22:26 +0100 Stefan G. Weichinger wrote:
> > Anyone here running ceph / http://ceph.com/ on gentoo?
> > 
> > As server(s) or client or ... ?
> > 
> > I am learning about this right now and currently on my way to a first
> > small test cluster. Very interesting possibilities !
> 
> We used it about a year ago for our infrastructure (backup and
> live sync of HA systems), obviously both servers and clients were
> used, both on Gentoo. We stopped this because of numerous kernel
> panics, not to mention that it was quite slow even after tuning. So
> we switch to another solution for data sync and backups: clsync. (It
> was developed from scratch for our needs, this is not a filesystem,
> but may be considered as more powerful alternative to lsyncd.)
> 
> Though this was a year ago or so. Your mileage may vary and
> it is likely that during this year stability was improved.
> Ceph is very promising by both design and capabilities.
> 
> Best regards,
> Andrew Savchenko

Andrew,

Can you answer why my email client has HUNDREDS of the same reply from you in
this thread? I've never seen this behavior in my life.

Thanks


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [gentoo-user] ceph on gentoo?
  2014-12-27  1:47   ` Bruce Hill
@ 2014-12-27  1:55     ` Rich Freeman
  2014-12-27  8:49       ` Neil Bothwick
  0 siblings, 1 reply; 32+ messages in thread
From: Rich Freeman @ 2014-12-27  1:55 UTC (permalink / raw
  To: gentoo-user

On Fri, Dec 26, 2014 at 8:47 PM, Bruce Hill
<daddy@happypenguincomputers.com> wrote:
>
> Can you answer why my email client has HUNDREDS of the same reply from you in
> this thread? I've never seen this behavior in my life.
>

Can you take this off the list?  If you want somebody from Gentoo to
confirm that the list had nothing to do with this I suggest filing a
bug or contacting infra@g.o.  There are many things that could cause
the behavior you see, and most of them have nothing to do with Andrew.
If you'd like to receive a few thousand emails from
santaclaus@north.pole care of the list of your choice just let me
know.

--
Rich


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [gentoo-user] ceph on gentoo?
  2014-12-27  1:55     ` Rich Freeman
@ 2014-12-27  8:49       ` Neil Bothwick
  0 siblings, 0 replies; 32+ messages in thread
From: Neil Bothwick @ 2014-12-27  8:49 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 574 bytes --]

On Fri, 26 Dec 2014 20:55:07 -0500, Rich Freeman wrote:

> If you'd like to receive a few thousand emails from
> santaclaus@north.pole care of the list of your choice just let me
> know.

Well done Rich, you've just posted Santa's address in plain text where
all the spam address harvesters will find it. You won't be getting
anything from him next year!

Mind you, at his age, those offers of viagra may be useful...


-- 
Neil Bothwick

COBOL: (n.) an old computer language, designed to be read and not
       run. Unfortunately, it is often run anyway.

[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 181 bytes --]

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [gentoo-user] ceph on gentoo?
  2014-12-26  6:38       ` Bruce Hill
  2014-12-26  7:38         ` Thomas Mueller
@ 2014-12-27 15:19         ` Andrew Savchenko
  2014-12-30 13:11           ` Bruce Hill, Jr.
  1 sibling, 1 reply; 32+ messages in thread
From: Andrew Savchenko @ 2014-12-27 15:19 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 691 bytes --]

Hi,

On Fri, 26 Dec 2014 00:38:58 -0600 Bruce Hill wrote:
> To whoever controls this list...
> 
> I just arrived home to find my mailbox spammed with hundreds of messages from
> this luser Andrew Savchenko <bircoph@gentoo.org>

Please stop insults and offensive language. I just sent replies to
the list, this is verifiable by mail headers.

If you have mail problems, check your MTA or whatever you are
using to receive e-mail from this list. As you can see, other
people don't have these problems.

> What is the explanation for this please?
 
Just my guess: greylisting is broken (or had a temporary lag) on
the mail server you are using.

Best regards,
Andrew Savchenko

[-- Attachment #2: Type: application/pgp-signature, Size: 819 bytes --]

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [gentoo-user] ceph on gentoo?
  2014-12-27 15:19         ` Andrew Savchenko
@ 2014-12-30 13:11           ` Bruce Hill, Jr.
  2014-12-30 17:43             ` J. Roeleveld
  0 siblings, 1 reply; 32+ messages in thread
From: Bruce Hill, Jr. @ 2014-12-30 13:11 UTC (permalink / raw
  To: gentoo-user

> On December 27, 2014 at 10:19 AM Andrew Savchenko <bircoph@gentoo.org> wrote:
> 
> Please stop insults and offensive language. I just sent replies to
> the list, this is verifiable by mail headers.

My apologies to you sir.

> If you have mail problems, check your MTA or whatever you are
> using to receive e-mail from this list. As you can see, other
> people don't have this problems.

On my workstation mail is POP3 using mutt and mail-mta/msmtp is the MTA.

> Just my guess: greylisting is broken (or had a temporary lag) on
> mail server you are using.

There is no greylisting/blacklisting being done. 
I checked mail at the web interface for the hosting company, and there was no
repeat of messages here; only in Mutt. Now there is another account doing the
same thing.

Can you offer any technical suggestions as for what to check?

> Best regards,
> Andrew Savchenko

Kindest regards,
Bruce


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [gentoo-user] ceph on gentoo?
  2014-12-30 13:11           ` Bruce Hill, Jr.
@ 2014-12-30 17:43             ` J. Roeleveld
  2014-12-31 11:08               ` Bruce Hill
  0 siblings, 1 reply; 32+ messages in thread
From: J. Roeleveld @ 2014-12-30 17:43 UTC (permalink / raw
  To: gentoo-user

On Tuesday, December 30, 2014 08:11:15 AM Bruce Hill, Jr. wrote:
> > On December 27, 2014 at 10:19 AM Andrew Savchenko <bircoph@gentoo.org>
> > wrote:
> > 
> > If you have mail problems, check your MTA or whatever you are
> > using to receive e-mail from this list. As you can see, other
> > people don't have this problems.
> 
> On my workstation mail is POP3 using mutt and mail-mta/msmtp is the MTA.
> 
> > Just my guess: greylisting is broken (or had a temporary lag) on
> > mail server you are using.
> 
> There is no greylisting/blacklisting being done.
> I checked mail at the web interface for the hosting company, and there was
> no repeat of messages here; only in Mutt. Now there is another account
> doing the same thing.
> 
> Can you offer any technical suggestions as for what to check?

Do you leave the messages on the mailserver?
In that case, ensure your POP3 client keeps a list of per-message unique IDs 
(UIDL) and only downloads messages that haven't been downloaded before.

Here is what the man page for fetchmail says about it:
***

       -U | --uidl
              (Keyword: uidl)
              Force UIDL use (effective only with POP3).  Force
              client-side tracking of 'newness' of messages (UIDL
              stands for "unique ID listing" and is described in
              RFC1939).  Use with 'keep' to use a mailbox as a baby
              news drop for a group of users.  The fact that seen
              messages are skipped is logged, unless error logging is
              done through syslog while running in daemon mode.  Note
              that fetchmail may automatically enable this option
              depending on upstream server capabilities.  Note also
              that this option may be removed and forced enabled in a
              future fetchmail version.  See also: --idfile.

***

I don't know if there is an equivalent for mutt as I don't use that.
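The UIDL bookkeeping described above boils down to something like this
sketch using Python's stdlib poplib (the host, credentials, and how
the seen set is persisted are left to the caller):

```python
import poplib

def new_messages(uidl_listings, seen):
    """Given UIDL lines (b"<msgnum> <uid>") and the set of UIDs
    already downloaded, return the (msgnum, uid) pairs still to
    fetch."""
    todo = []
    for entry in uidl_listings:
        num, uid = entry.decode().split()
        if uid not in seen:
            todo.append((int(num), uid))
    return todo

def fetch_new(host, user, password, seen):
    """Download only messages whose UIDL hasn't been seen before,
    leaving everything on the server ('keep' behaviour)."""
    conn = poplib.POP3(host)
    conn.user(user)
    conn.pass_(password)
    _, listings, _ = conn.uidl()          # one b"<msgnum> <uid>" per message
    fetched = {}
    for num, uid in new_messages(listings, seen):
        _, lines, _ = conn.retr(num)      # retrieve only the new ones
        fetched[uid] = b"\r\n".join(lines)
        seen.add(uid)
    conn.quit()
    return fetched
```

A client that loses its seen set (or never keeps one) re-downloads the
whole mailbox - which would look exactly like the repeats reported in
this thread.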

--
Joost


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [gentoo-user] ceph on gentoo?
  2014-12-30 17:43             ` J. Roeleveld
@ 2014-12-31 11:08               ` Bruce Hill
  0 siblings, 0 replies; 32+ messages in thread
From: Bruce Hill @ 2014-12-31 11:08 UTC (permalink / raw
  To: gentoo-user

On Tue, Dec 30, 2014 at 06:43:26PM +0100, J. Roeleveld wrote:
> On Tuesday, December 30, 2014 08:11:15 AM Bruce Hill, Jr. wrote:
> > 
> > Can you offer any technical suggestions as for what to check?
> 
> Do you leave the messages on the mailserver?
> In that case, ensure your POP3-client keeps a list of message-ids (UIDL) and 
> only downloads messages that haven't been downloaded before.
> 
> I don't know if there is an equivalent for mutt as I don't use that.
> 
> --
> Joost

Thanks for this reply. That one original message hadn't been removed from the
mailserver, and I hadn't scrolled down quite far enough to see it. Once it was
removed, the repeating seems to have stopped, and a host of other messages to
the list that hadn't arrived came through (especially all the replies in this
thread which weren't previously seen in mutt).
--
Bruce


^ permalink raw reply	[flat|nested] 32+ messages in thread

end of thread, other threads:[~2014-12-31 11:08 UTC | newest]

Thread overview: 32+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2014-12-23 14:22 [gentoo-user] ceph on gentoo? Stefan G. Weichinger
2014-12-23 15:20 ` Andrew Savchenko
2014-12-23 15:36   ` Stefan G. Weichinger
2014-12-23 18:07     ` [gentoo-user] " James
2014-12-24  1:02     ` [gentoo-user] " Andrew Savchenko
2014-12-24  9:58       ` Stefan G. Weichinger
2014-12-24 18:15         ` Andrew Savchenko
2014-12-26  6:38       ` Bruce Hill
2014-12-26  7:38         ` Thomas Mueller
2014-12-26  8:11           ` Dale
2014-12-26  8:15             ` Stefan G. Weichinger
2014-12-26 13:55               ` Matti Nykyri
2014-12-27 15:19         ` Andrew Savchenko
2014-12-30 13:11           ` Bruce Hill, Jr.
2014-12-30 17:43             ` J. Roeleveld
2014-12-31 11:08               ` Bruce Hill
2014-12-27  1:47   ` Bruce Hill
2014-12-27  1:55     ` Rich Freeman
2014-12-27  8:49       ` Neil Bothwick
2014-12-23 15:25 ` Tomas Mozes
2014-12-23 15:28   ` Stefan G. Weichinger
2014-12-23 20:27     ` Stefan G. Weichinger
2014-12-23 20:40       ` Rich Freeman
2014-12-23 20:54         ` Stefan G. Weichinger
2014-12-23 21:02           ` Rich Freeman
2014-12-23 21:08           ` [gentoo-user] " Holger Hoffstätte
2014-12-23 21:12             ` Stefan G. Weichinger
2014-12-24  3:24             ` Rich Freeman
2014-12-24  4:34               ` Bill Kenworthy
2014-12-24 10:16               ` Holger Hoffstätte
2014-12-24 12:40                 ` Rich Freeman
2014-12-23 22:53     ` [gentoo-user] " Bill Kenworthy

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox