public inbox for gentoo-user@lists.gentoo.org
* [gentoo-user] experiences with zfsonlinux?
@ 2012-12-28 17:52 Volker Armin Hemmann
  2012-12-28 20:29 ` Stefan G. Weichinger
  0 siblings, 1 reply; 10+ messages in thread
From: Volker Armin Hemmann @ 2012-12-28 17:52 UTC
  To: gentoo-user

Hi,

So, in the "Good/better/best filesystem for large, static video
library?" thread ZFS was mentioned. Since I just ordered 3 new HDDs to
replace the current 5 in my box (3 in raid5, 2 in raid1), I asked
myself: instead of raid5+xfs or ext4 or whatever else might be a sane
solution, why not try ZFS?

But - there aren't many first-hand accounts from people using the
spl+zfs kernel modules on Linux.

Anybody done it? Any caveats?


-- 
#163933



* Re: [gentoo-user] experiences with zfsonlinux?
  2012-12-28 17:52 [gentoo-user] experiences with zfsonlinux? Volker Armin Hemmann
@ 2012-12-28 20:29 ` Stefan G. Weichinger
  2012-12-28 23:21   ` Scott Ellis
  0 siblings, 1 reply; 10+ messages in thread
From: Stefan G. Weichinger @ 2012-12-28 20:29 UTC
  To: gentoo-user

On 2012-12-28 18:52, Volker Armin Hemmann wrote:
> Hi,
> 
> So, in the "Good/better/best filesystem for large, static video
> library?" thread ZFS was mentioned. Since I just ordered 3 new HDDs
> to replace the current 5 in my box (3 in raid5, 2 in raid1), I asked
> myself: instead of raid5+xfs or ext4 or whatever else might be a sane
> solution, why not try ZFS?

Sure, go ahead :-)

> But - there aren't many first-hand accounts from people using the
> spl+zfs kernel modules on Linux.
> 
> Anybody done it? Any caveats?

I used it in a former server in my basement. Right now the zfs pool is
out of order, simply because I have no SATA ports available at the
moment (broken mainboard etc.).

It is the equivalent of a RAID1 mirror: two disks in a single pool
(named "tank").

As you may have researched already, it is not necessary to partition
the disks; back then it was recommended to create the pool/mirror
using the /dev/disk/by-id/ device notation.
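
If it helps, creating such a mirror boils down to roughly this (the
device names below are just placeholders, not my actual disks):

zpool create tank mirror \
    /dev/disk/by-id/ata-SOMEDISK_SERIAL1 \
    /dev/disk/by-id/ata-SOMEDISK_SERIAL2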

That pool worked very well for me and even caught SATA-related errors
during the occasional scrub run now and then.
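
A scrub is a single command, and the status output then tells you
whether anything had to be repaired:

zpool scrub tank
zpool status -v tank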

I was even able to migrate that mirror from zfs-fuse to zfs-on-linux
without any problems.
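
If I remember correctly, the migration itself was nothing more than
exporting the pool under zfs-fuse and re-importing it with the kernel
modules, roughly:

zpool export tank
zpool import -d /dev/disk/by-id tank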

As soon as I have a box with enough HDD bays again, I will re-import
that pool for sure.

Good luck, Stefan



* Re: [gentoo-user] experiences with zfsonlinux?
  2012-12-28 20:29 ` Stefan G. Weichinger
@ 2012-12-28 23:21   ` Scott Ellis
  2012-12-28 23:54     ` Volker Armin Hemmann
  2012-12-30 14:07     ` Volker Armin Hemmann
  0 siblings, 2 replies; 10+ messages in thread
From: Scott Ellis @ 2012-12-28 23:21 UTC
  To: gentoo-user


Yeah, I use ZoL for my home server (mostly pictures, videos, and mp3s) and
it works just fine.  SSD for the / and /boot, and then ZFS for all the
important data in a mirrored pool.  Highly recommended.  (Just updated to
3.7.1 kernel and 0.6.0-rc13 ZoL, with no issues, in case you were worried
about usage with "current" pieces.)

   ScottE


On Fri, Dec 28, 2012 at 12:29 PM, Stefan G. Weichinger <lists@xunil.at> wrote:

> On 2012-12-28 18:52, Volker Armin Hemmann wrote:
> > Hi,
> >
> > So, in the "Good/better/best filesystem for large, static video
> > library?" thread ZFS was mentioned. Since I just ordered 3 new HDDs
> > to replace the current 5 in my box (3 in raid5, 2 in raid1), I
> > asked myself: instead of raid5+xfs or ext4 or whatever else might
> > be a sane solution, why not try ZFS?
>
> Sure, go ahead :-)
>
> > But - there aren't many first-hand accounts from people using the
> > spl+zfs kernel modules on Linux.
> >
> > Anybody done it? Any caveats?
>
> I used it in a former server in my basement. Right now the zfs pool
> is out of order, simply because I have no SATA ports available at the
> moment (broken mainboard etc.).
>
> It is the equivalent of a RAID1 mirror: two disks in a single pool
> (named "tank").
>
> As you may have researched already, it is not necessary to partition
> the disks; back then it was recommended to create the pool/mirror
> using the /dev/disk/by-id/ device notation.
>
> That pool worked very well for me and even caught SATA-related errors
> during the occasional scrub run now and then.
>
> I was even able to migrate that mirror from zfs-fuse to zfs-on-linux
> without any problems.
>
> As soon as I have a box with enough HDD bays again, I will re-import
> that pool for sure.
>
> Good luck, Stefan
>
>


* Re: [gentoo-user] experiences with zfsonlinux?
  2012-12-28 23:21   ` Scott Ellis
@ 2012-12-28 23:54     ` Volker Armin Hemmann
  2012-12-30 14:07     ` Volker Armin Hemmann
  1 sibling, 0 replies; 10+ messages in thread
From: Volker Armin Hemmann @ 2012-12-28 23:54 UTC
  To: gentoo-user; +Cc: Scott Ellis

On Friday, 28 December 2012, 15:21:54 Scott Ellis wrote:
> Yeah, I use ZoL for my home server (mostly pictures, videos, and mp3s) and
> it works just fine.  SSD for the / and /boot, and then ZFS for all the
> important data in a mirrored pool.  Highly recommended.  (Just updated to
> 3.7.1 kernel and 0.6.0-rc13 ZoL, with no issues, in case you were worried
> about usage with "current" pieces.)

I am conservative with kernels:

uname -a
Linux localhost 3.4.24 #1 SMP Sun Dec 23 17:47:00 CET 2012 x86_64 AMD Phenom(tm) II X4 955 Processor AuthenticAMD GNU/Linux

so that is not a concern of mine. I am more worried about stability. I
plan to put /var and my data pile on it. While losing the first would
be a time-intensive incident, losing the second would be really
painful, even with backups.

But it would be something I could recover from, or I would not waste
time thinking about zfs - mdadm + whatever fs works well enough.

Glück Auf,

Volker

-- 
#163933



* Re: [gentoo-user] experiences with zfsonlinux?
  2012-12-28 23:21   ` Scott Ellis
  2012-12-28 23:54     ` Volker Armin Hemmann
@ 2012-12-30 14:07     ` Volker Armin Hemmann
  2012-12-30 14:22       ` Michael Hampicke
  1 sibling, 1 reply; 10+ messages in thread
From: Volker Armin Hemmann @ 2012-12-30 14:07 UTC
  To: gentoo-user; +Cc: Scott Ellis

Hi everybody,

So I did it: three disks as a raidz pool (zfstank), and I created
zfstank/var and zfstank/data.

I set both mountpoints to legacy and put them into fstab (roughly as
sketched below) - I know I could let zfs deal with that, but I feel
more comfortable that way.

Restoring all the data from backup took a long time, but not as long
as doing the backup in the first place. The write rate was impressive
most of the time. In fact, so far it has been a really painless
experience. There was a hard lockup - and zfs recovered nicely. But it
doesn't seem to react to 'sync' - is that true or just a feeling?
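
For reference, the legacy setup boils down to something like this (the
/data mountpoint and the fstab options are just examples, I don't have
the exact lines in front of me):

zfs set mountpoint=legacy zfstank/var
zfs set mountpoint=legacy zfstank/data

# /etc/fstab
zfstank/var   /var    zfs   defaults   0 0
zfstank/data  /data   zfs   defaults   0 0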

Glück Auf,

Volker
-- 
#163933



* Re: [gentoo-user] experiences with zfsonlinux?
  2012-12-30 14:07     ` Volker Armin Hemmann
@ 2012-12-30 14:22       ` Michael Hampicke
  2012-12-31  3:42         ` Volker Armin Hemmann
  0 siblings, 1 reply; 10+ messages in thread
From: Michael Hampicke @ 2012-12-30 14:22 UTC
  To: gentoo-user

On 2012-12-30 15:07, Volker Armin Hemmann wrote:
> 
> I set both mountpoints to legacy and put them into fstab - I know I
> could let zfs deal with that, but I feel more comfortable that way.

Howdy Volker,

that's a good idea. For some reason - with the latest sys-fs/zfs
upgrade (0.6.0_rc13) - both of my zfs file systems (single pool) no
longer get mounted automatically at boot time. This worked fine up
until now. I'll look into it in the new year; right now I'm on
vacation :)



* Re: [gentoo-user] experiences with zfsonlinux?
  2012-12-30 14:22       ` Michael Hampicke
@ 2012-12-31  3:42         ` Volker Armin Hemmann
  2012-12-31  7:07           ` Alan McKinnon
  2012-12-31 16:02           ` Scott Ellis
  0 siblings, 2 replies; 10+ messages in thread
From: Volker Armin Hemmann @ 2012-12-31  3:42 UTC
  To: gentoo-user; +Cc: Michael Hampicke

On Sunday, 30 December 2012, 15:22:58 Michael Hampicke wrote:
> On 2012-12-30 15:07, Volker Armin Hemmann wrote:
> > I set both mountpoints to legacy and put them into fstab - I know I
> > could let zfs deal with that, but I feel more comfortable that way.
> 
> Howdy Volker,
> 
> that's a good idea. For some reason - with the latest sys-fs/zfs upgrade
> (0.6.0_rc13) - I found that both of my zfs file systems (single pool) do
> not get mounted at boot time automatically. This worked fine up until
> now. I'll look into that in the new year, right now, I'm on vacation :)

One thing that baffled me was when I set the mountpoint for
zfstank/var - and it was gone. ;)

But seriously, setting up zfs was way easier than my first steps with
mdraid + fitting a filesystem on it. It is different (just like Office
2010 is easier than 2003, but people complain because it is
different), but a lot easier. I hope it stays that way.

One thing that scares me: it is way too easy to throw away
everything...

zpool history               
zsh: correct 'history' to '.history' [nyae]? n
History for 'zfstank':
2012-12-29.23:47:49 zpool create -f -o ashift=12 zfstank raidz ata-Hitachi_HDS5C3020ALA632_ML4230FA17X6EK ata-Hitachi_HDS5C3020ALA632_ML4230FA17X6HK ata-Hitachi_HDS5C3020ALA632_ML4230FA17X7YK
2012-12-29.23:48:05 zfs create zfstank/var
2012-12-29.23:48:10 zfs create zfstank/data
2012-12-29.23:50:10 zfs set compression=on zfstank/var
2012-12-29.23:50:48 zfs set compression=on zfstank/data
2012-12-29.23:51:04 zfs set atime=off zfstank
2012-12-29.23:54:00 zfs set quota=100G zfstank/var
2012-12-30.00:04:16 zfs set mountpoint=/var zfstank/var
2012-12-30.00:08:00 zfs destroy zfstank/var
2012-12-30.00:08:35 zfs create -o quota=100G zfstank/var
2012-12-30.00:14:38 zfs set compression=on zfstank/var
2012-12-30.00:19:09 zfs set mountpoint=legacy zfstank/data
2012-12-30.05:55:17 zfs set mountpoint=legacy zfstank/var

Compared with the hours of fiddling when I started using raid this was
pretty much straightforward - and yes, I know, destroying var was not
necessary, but at that point I just wanted to try it... as I said, way
too easy to throw away everything.

-- 
#163933



* Re: [gentoo-user] experiences with zfsonlinux?
  2012-12-31  3:42         ` Volker Armin Hemmann
@ 2012-12-31  7:07           ` Alan McKinnon
  2012-12-31 13:27             ` Volker Armin Hemmann
  2012-12-31 16:02           ` Scott Ellis
  1 sibling, 1 reply; 10+ messages in thread
From: Alan McKinnon @ 2012-12-31  7:07 UTC
  To: gentoo-user

On Mon, 31 Dec 2012 04:42:45 +0100
Volker Armin Hemmann <volkerarmin@googlemail.com> wrote:

> Compared with the hours of fiddling when I started using raid this
> was pretty much straightforward - and yes, I know, destroying var
> was not necessary, but at that point I just wanted to try it... as I
> said, way too easy to throw away everything.

If it's any comfort, 

rm -rf
fdisk

are just as easy to type :-)

I suppose the zfs tools fall in that category of "for root use ONLY"
and need to be protected like the root account itself. Great power/great
responsibility and all that

-- 
Alan McKinnon
alan.mckinnon@gmail.com




* Re: [gentoo-user] experiences with zfsonlinux?
  2012-12-31  7:07           ` Alan McKinnon
@ 2012-12-31 13:27             ` Volker Armin Hemmann
  0 siblings, 0 replies; 10+ messages in thread
From: Volker Armin Hemmann @ 2012-12-31 13:27 UTC
  To: gentoo-user; +Cc: Alan McKinnon

On Monday, 31 December 2012, 09:07:04 Alan McKinnon wrote:
> On Mon, 31 Dec 2012 04:42:45 +0100
> 
> Volker Armin Hemmann <volkerarmin@googlemail.com> wrote:
> > Compared with the hours of fiddling when I started using raid this
> > was pretty much straightforward - and yes, I know, destroying var
> > was not necessary, but at that point I just wanted to try it... as
> > I said, way too easy to throw away everything.
> 
> If it's any comfort,
> 
> rm -rf
> fdisk
> 
> are just as easy to type :-)

Yeah, but with fdisk I have to make changes AND save them, with lots
of warnings.

destroy, on the other hand... just destroys...
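
If I understand the tool right, the one cheap safety net is a
snapshot: as long as one exists, a plain destroy of the dataset is
refused and you have to type -r explicitly. Something like:

zfs snapshot zfstank/data@keepme
zfs destroy zfstank/data      # refuses while the snapshot exists
zfs destroy -r zfstank/data   # only this really wipes it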

> I suppose the zfs tools fall in that category of "for root use ONLY"
> and need to be protected like the root account itself. Great power/great
> responsibility and all that

The first thing I did was change the permissions on zpios. ;)

-- 
#163933



* Re: [gentoo-user] experiences with zfsonlinux?
  2012-12-31  3:42         ` Volker Armin Hemmann
  2012-12-31  7:07           ` Alan McKinnon
@ 2012-12-31 16:02           ` Scott Ellis
  1 sibling, 0 replies; 10+ messages in thread
From: Scott Ellis @ 2012-12-31 16:02 UTC
  To: gentoo-user; +Cc: Michael Hampicke


I think it's a bug, but "zfs mount -a" works around it quickly.

On Sun, Dec 30, 2012 at 7:42 PM, Volker Armin Hemmann <volkerarmin@googlemail.com> wrote:

> One thing that baffled me was when I set the mountpoint for
> zfstank/var - and it was gone. ;)
>

