public inbox for gentoo-user@lists.gentoo.org
 help / color / mirror / Atom feed
* [gentoo-user] tmp on tmpfs
@ 2017-05-24  5:16 Ian Zimmerman
  2017-05-24  5:34 ` gentoo-user
                   ` (3 more replies)
  0 siblings, 4 replies; 45+ messages in thread
From: Ian Zimmerman @ 2017-05-24  5:16 UTC (permalink / raw
  To: gentoo-user

So what are gentoo users' opinions on this matter of faith?

I have long been in the camp that thinks tmpfs for /tmp has no
advantages (and may have disadvantages) over a normal filesystem like
ext3, because the files there are normally so small that they will stay
in the page cache 100% of the time.

But I see that tmpfs is the default with systemd.  Surely they have a
good reason for this? :)

-- 
Please *no* private Cc: on mailing lists and newsgroups
Personal signed mail: please _encrypt_ and sign
Don't clear-text sign:
http://primate.net/~itz/blog/the-problem-with-gpg-signatures.html


^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [gentoo-user] tmp on tmpfs
  2017-05-24  5:16 [gentoo-user] tmp on tmpfs Ian Zimmerman
@ 2017-05-24  5:34 ` gentoo-user
  2017-05-24  6:00   ` [gentoo-user] " Kai Krakow
  2017-05-24 17:00   ` [gentoo-user] " R0b0t1
  2017-05-24  6:03 ` Andrew Tselischev
                   ` (2 subsequent siblings)
  3 siblings, 2 replies; 45+ messages in thread
From: gentoo-user @ 2017-05-24  5:34 UTC (permalink / raw
  To: gentoo-user

On 17-05-23 at 22:16, Ian Zimmerman wrote:
> So what are gentoo users' opinions on this matter of faith?
I use an ext4 partition backed by zram. Gives me ~3x compression on the
things I normally have lying around there (plain text files) and ensures
that anything I throw there (or programs throw there) gets cleaned up on
reboot.
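
One way such a setup can look, roughly (the device number, size and
compression algorithm below are only examples for illustration, not
necessarily what runs here):

  modprobe zram num_devices=2
  echo lz4 > /sys/block/zram1/comp_algorithm   # pick the compressor first
  echo 3G  > /sys/block/zram1/disksize         # then size the device
  mkfs.ext4 -q /dev/zram1
  mount -o noatime /dev/zram1 /tmp
  chmod 1777 /tmp                              # /tmp needs the sticky bit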

> I have long been in the camp that thinks tmpfs for /tmp has no
> advantages (and may have disadvantages) over a normal filesystem like
> ext3, because the files there are normally so small that they will stay
> in the page cache 100% of the time.
I've never actually benchmarked this. Most of the things I notice
ending up there are temporary build files generated during configure
stages or temporary log files used by various programs (the clang
static analyzer, for instance). Even if the entire file stays in the
page cache, it'll still generate IO overhead and extra seeks that might
slow down the rest of your system (unless your /tmp is on a different
drive). On spinning rust that means slowdowns, while on an SSD it eats
into your write endurance (which you may or may not have to worry
about).

> But I see that tmpfs is the default with systemd.  Surely they have a
> good reason for this? :)
Or someone decided they liked the idea and made it the default and
nobody ever complained (or if they did were told to just change it on
their system). 

Either way, it'd be nice if someone actually benchmarked this.

-- 
Simon Thelen


^ permalink raw reply	[flat|nested] 45+ messages in thread

* [gentoo-user] Re: tmp on tmpfs
  2017-05-24  5:34 ` gentoo-user
@ 2017-05-24  6:00   ` Kai Krakow
  2017-05-24 17:05     ` Kai Krakow
  2017-05-24 18:34     ` [gentoo-user] Re: tmp on tmpfs Ian Zimmerman
  2017-05-24 17:00   ` [gentoo-user] " R0b0t1
  1 sibling, 2 replies; 45+ messages in thread
From: Kai Krakow @ 2017-05-24  6:00 UTC (permalink / raw
  To: gentoo-user

On Wed, 24 May 2017 07:34:34 +0200,
gentoo-user@c-14.de wrote:

> On 17-05-23 at 22:16, Ian Zimmerman wrote:
> > So what are gentoo users' opinions on this matter of faith?  
> I use an ext4 partition backed by zram. Gives me ~3x compression on
> the things I normally have lying around there (plain text files) and
> ensures that anything I throw there (or programs throw there) gets
> cleaned up on reboot.
> 
> > I have long been in the camp that thinks tmpfs for /tmp has no
> > advantages (and may have disadvantages) over a normal filesystem
> > like ext3, because the files there are normally so small that they
> > will stay in the page cache 100% of the time.  
> I've never actually benchmarked this. Most of the things I notice that
> tend to end up there are temporary build files generated during
> configure stages or temporary log files used by various programs
> (clang static analyzer). Even if the entire file stays in the page
> cache, it'll still generate IO overhead and extra seeks that might
> slow down the rest of your system (unless your /tmp is on a different
> hard drive) which on spinning rust will cause slowdowns while on an
> ssd it'll eat away at your writes (which you may or may not have to
> worry about).
> 
> > But I see that tmpfs is the default with systemd.  Surely they have
> > a good reason for this? :)  
> Or someone decided they liked the idea and made it the default and
> nobody ever complained (or if they did were told to just change it on
> their system). 
> 
> Either way, it'd be nice if someone actually benchmarked this.

While I have no benchmarks and use the systemd default of tmpfs
for /tmp, I also put /var/tmp/portage on tmpfs, automounted through
systemd so it is cleaned up when no longer used (by unmounting).
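
For illustration, one way to express that is a single fstab line along
these lines (the size and idle timeout are just examples; a hand-written
.mount/.automount unit pair works equally well):

  # /etc/fstab
  tmpfs  /var/tmp/portage  tmpfs  size=30G,noauto,x-systemd.automount,x-systemd.idle-timeout=10min  0 0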

What can I say? It works so much faster: building packages is a lot
faster most of the time, even though you'd expect gcc to use a lot of
memory.

Well, why might that be? First, tmpfs is backed by swap space, which
means you need a swap partition, of course. Swap is a lot simpler than
a file system, so swapping out unused temporary files is fast, which is
a good thing. Also, unused memory sitting around may be swapped out
early. Why would you want inactive memory to stay resident? So this is
also a good thing. Portage can use memory much more efficiently this
way.

Applying the same reasoning to /tmp should now explain why it works so
well and why you may want it.

BTW: I also use zswap, so swapped-out tmpfs pages go through a
compressed write-back cache in RAM before eventually being written out
to the swap device. This should generally be much more efficient
(performance-wise) than putting /tmp on zram.
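
For reference, enabling zswap is only a matter of a few knobs, either
on the kernel command line or via sysfs (the values here are examples,
not a recommendation):

  # kernel command line:
  #   zswap.enabled=1 zswap.compressor=lz4 zswap.max_pool_percent=20
  # or at runtime:
  echo 1   > /sys/module/zswap/parameters/enabled
  echo lz4 > /sys/module/zswap/parameters/compressor
  echo 20  > /sys/module/zswap/parameters/max_pool_percent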

I configured tmpfs for portage to use up to 30GB of space, which is
almost twice the RAM I have. And it works, because tmpfs is not
required to be resident all the time: inactive parts will be swapped
out. The kernel handles this much like the page cache, with the
difference that your files aren't backed by your normal file system
but by swap. And swap has much lower IO overhead.

Overall, having less IO overhead (and less head movement for portage
builds) is a very efficient thing to do. GCC constantly needs all
sorts of files from your installation (libs for linking, header files,
etc), and writes a lot of transient files which are needed once and
then discarded. There's no point in putting them on a non-transient
file system.

I use the following measures to get more performance out of this setup:

  * I have three swap partitions spread across three HDDs
  * I have a lot of swap space (60 GB) to have space for tmpfs
  * I have bcache in front of my HDD filesystem
  * I have a relatively big SSD dedicated to bcache

My best recommendation is to separate swap and filesystem devices.
While I didn't do it that way, I still separate them through bcache
and thus decouple fs access and swap access although they are on the
same physical devices. My bcache is big enough that most accesses would
go to the SSD only. I enabled write-back to have that effect also for
write access.

If you cannot physically split swap from the fs, a tmpfs setup for
portage may not be recommended (unless you have a lot of memory, say
16GB or more). But YMMV.

Still, I recommend it for /tmp, especially if your system is on an SSD.
Unix semantics suggest that /tmp is not expected to survive reboots
anyway (in contrast, /var/tmp is expected to survive reboots), so tmpfs
is a logical choice for /tmp.


-- 
Regards,
Kai

Replies to list-only preferred.



^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [gentoo-user] tmp on tmpfs
  2017-05-24  5:16 [gentoo-user] tmp on tmpfs Ian Zimmerman
  2017-05-24  5:34 ` gentoo-user
@ 2017-05-24  6:03 ` Andrew Tselischev
  2017-05-24  9:34 ` Rich Freeman
  2017-05-24 18:46 ` [gentoo-user] " Nikos Chantziaras
  3 siblings, 0 replies; 45+ messages in thread
From: Andrew Tselischev @ 2017-05-24  6:03 UTC (permalink / raw
  To: gentoo-user

On Tue, May 23, 2017 at 10:16:56PM -0700, Ian Zimmerman wrote:
> So what are gentoo users' opinions on this matter of faith?
> 
> I have long been in the camp that thinks tmpfs for /tmp has no
> advantages (and may have disadvantages) over a normal filesystem like
> ext3, because the files there are normally so small that they will stay
> in the page cache 100% of the time.
> 
> But I see that tmpfs is the default with systemd.  Surely they have a
> good reason for this? :)

For most purposes, it avoids thrashing your storage media with useless I/O.
If your purposes are unusual, by all means change it back.


^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [gentoo-user] tmp on tmpfs
  2017-05-24  5:16 [gentoo-user] tmp on tmpfs Ian Zimmerman
  2017-05-24  5:34 ` gentoo-user
  2017-05-24  6:03 ` Andrew Tselischev
@ 2017-05-24  9:34 ` Rich Freeman
  2017-05-24  9:43   ` gentoo-user
  2017-05-24 12:45   ` Andrew Savchenko
  2017-05-24 18:46 ` [gentoo-user] " Nikos Chantziaras
  3 siblings, 2 replies; 45+ messages in thread
From: Rich Freeman @ 2017-05-24  9:34 UTC (permalink / raw
  To: gentoo-user

On Wed, May 24, 2017 at 1:16 AM, Ian Zimmerman <itz@primate.net> wrote:
>
> I have long been in the camp that thinks tmpfs for /tmp has no
> advantages (and may have disadvantages) over a normal filesystem like
> ext3, because the files there are normally so small that they will stay
> in the page cache 100% of the time.
>

The file being in the page cache only speeds up reads of the file.  On
a conventional filesystem the file will still be forced to be
committed to disk within 30 seconds, or whatever you've set your max
writeback delay to.  That means guaranteed disk write IO.  If the
drive is mostly idle it will have no impact on performance, but if the
disk is fairly busy then it will, especially for spinning disks.  For
an SSD /tmp would be a source of erase cycles (which also have
performance implications, but there it is more of a wear issue).  When
the file is removed that would also generate write IO.
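
For reference, the writeback delay I mean is controlled by the
dirty-page sysctls; the values below are the usual kernel defaults
(dirty data older than 30 seconds gets written out by a flusher pass
that wakes every 5 seconds):

  $ sysctl vm.dirty_expire_centisecs vm.dirty_writeback_centisecs
  vm.dirty_expire_centisecs = 3000
  vm.dirty_writeback_centisecs = 500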

The flip side is that on most systems /tmp probably doesn't get THAT much IO.

On Gentoo doing your builds in tmpfs definitely has a large
performance impact, because there are a lot of files created during
the build process that are sizable but which don't end up getting
installed (object files mostly).  Plus you have the extraction of the
source itself.  For a typical build that is many MB of data being
extracted and then deleted after maybe a minute, which is a lot of
useless IO, especially when the actual install is probably creating a
fairly sizable IO queue on its own.

To avoid a reply, I'll also note that tmpfs does NOT require swap to
work.  It does of course require plenty of memory, and as with any
situation where lots of memory is required swap may be useful, but it
is not a requirement.

Others have mentioned zram.  I've used it, but unless something has
changed one of its limitations is that it can't give up memory.  That
is less of an issue if you're using swap since it can be swapped out
if idle.  However, if you're not using swap then you're potentially
giving up a chunk of RAM to do it, though less RAM than a tmpfs if it
is full most of the time (which I doubt is typically the case).

-- 
Rich


^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [gentoo-user] tmp on tmpfs
  2017-05-24  9:34 ` Rich Freeman
@ 2017-05-24  9:43   ` gentoo-user
  2017-05-24  9:54     ` Rich Freeman
  2017-05-24 12:45   ` Andrew Savchenko
  1 sibling, 1 reply; 45+ messages in thread
From: gentoo-user @ 2017-05-24  9:43 UTC (permalink / raw
  To: gentoo-user

On 17-05-24 at 05:34, Rich Freeman wrote:
[..]
> Others have mentioned zram.  I've used it, but unless something has
> changed one of its limitations is that it can't give up memory.  That
> is less of an issue if you're using swap since it can be swapped out
> if idle.  However, if you're not using swap then you're potentially
> giving up a chunk of RAM to do it, though less RAM than a tmpfs if it
> is full most of the time (which I doubt is typically the case).
Seems to work fine here (with kernels newer than the late 3.x when I started using zram):

radiocarbon:~% dd if=/dev/urandom of=/tmp/foo
^C3405370+0 records in
3405370+0 records out
1743549440 bytes (1.7 GB, 1.6 GiB) copied, 10.8268 s, 161 MB/s
dd if=/dev/urandom of=/tmp/foo  8 MiB 10.853 (user: 0.339, kernel: 10.482)
radiocarbon:~% zramctl
NAME       ALGORITHM DISKSIZE  DATA COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram0 lz4             2G    4K   64B    4K       8 [SWAP]
/dev/zram1 lz4             3G  1.6G  1.6G  1.6G       8 /tmp
radiocarbon:~% free -m
              total        used        free      shared  buff/cache   available
Mem:           7920        3096          61         228        4763        4487
Swap:          2047           0        2047
radiocarbon:~% rm /tmp/foo
radiocarbon:~% zramctl
NAME       ALGORITHM DISKSIZE  DATA COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram0 lz4             2G    4K   64B    4K       8 [SWAP]
/dev/zram1 lz4             3G  3.9M    1M  1.3M       8 /tmp
radiocarbon:~% free -m
              total        used        free      shared  buff/cache   available
Mem:           7920        1412        3458         229        3049        6171
Swap:          2047           0        2047

-- 
Simon Thelen


^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [gentoo-user] tmp on tmpfs
  2017-05-24  9:43   ` gentoo-user
@ 2017-05-24  9:54     ` Rich Freeman
  0 siblings, 0 replies; 45+ messages in thread
From: Rich Freeman @ 2017-05-24  9:54 UTC (permalink / raw
  To: gentoo-user

On Wed, May 24, 2017 at 5:43 AM,  <gentoo-user@c-14.de> wrote:
> On 17-05-24 at 05:34, Rich Freeman wrote:
> [..]
>> Others have mentioned zram.  I've used it, but unless something has
>> changed one of its limitations is that it can't give up memory.  That
>> is less of an issue if you're using swap since it can be swapped out
>> if idle.  However, if you're not using swap then you're potentially
>> giving up a chunk of RAM to do it, though less RAM than a tmpfs if it
>> is full most of the time (which I doubt is typically the case).
> Seems to work fine here (with kernels newer than the late 3.x when I started using zram):
>

Thanks.  Useful to know.  Perhaps something changed.  Then again, I
don't think I actually tested it so it could have also been
misinformation somewhere.

-- 
Rich


^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [gentoo-user] tmp on tmpfs
  2017-05-24  9:34 ` Rich Freeman
  2017-05-24  9:43   ` gentoo-user
@ 2017-05-24 12:45   ` Andrew Savchenko
  2017-05-25  4:45     ` [gentoo-user] " Martin Vaeth
  2017-05-25 22:36     ` [gentoo-user] " Kent Fredric
  1 sibling, 2 replies; 45+ messages in thread
From: Andrew Savchenko @ 2017-05-24 12:45 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 3153 bytes --]

Hi,

On Wed, 24 May 2017 05:34:09 -0400 Rich Freeman wrote:
> On Wed, May 24, 2017 at 1:16 AM, Ian Zimmerman <itz@primate.net> wrote:
> >
> > I have long been in the camp that thinks tmpfs for /tmp has no
> > advantages (and may have disadvantages) over a normal filesystem like
> > ext3, because the files there are normally so small that they will stay
> > in the page cache 100% of the time.
> >
> 
> The file being in the page cache only speeds up reads of the file.  On
> a conventional filesystem the file will still be forced to be
> committed to disk within 30 seconds, or whatever you've set your max
> writeback delay to.  That means guaranteed disk write IO.  If the
> drive is mostly idle it will have no impact on performance, but if the
> disk is fairly busy then it will, especially for spinning disks.  For
> an SSD /tmp would be a source of erase cycles (which also have
> performance implications, but there it is more of a wear issue).  When
> the file is removed that would also generate write IO.
> 
> The flip side is that on most systems /tmp probably doesn't get THAT much IO.
> 
> On Gentoo doing your builds in tmpfs definitely has a large
> performance impact, because there are a lot of files created during
> the build process that are sizable but which don't end up getting
> installed (object files mostly).  Plus you have the extraction of the
> source itself.  For a typical build that is many MB of data being
> extracted and then deleted after maybe a minute, which is a lot of
> useless IO, especially when the actual install is probably creating a
> fairly sizable IO queue on its own.
> 
> To avoid a reply, I'll also note that tmpfs does NOT require swap to
> work.  It does of course require plenty of memory, and as with any
> situation where lots of memory is required swap may be useful, but it
> is not a requirement.
> 
> Others have mentioned zram.  I've used it, but unless something has
> changed one of its limitations is that it can't give up memory.  That
> is less of an issue if you're using swap since it can be swapped out
> if idle.  However, if you're not using swap then you're potentially
> giving up a chunk of RAM to do it, though less RAM than a tmpfs if it
> is full most of the time (which I doubt is typically the case).
 
For similar needs I found zswap the most suitable, it's so much
better than zram:

- smaller CPU overhead: not every I/O is compressed, e.g. if
there is still enough RAM available it is used without compression
overhead as usual, but when memory runs short, swapped-out pages
are compressed instead of being swapped out to disk;

- no size limitation: if the zswap pool is full, data is pushed out
to swap, and the same happens with non-compressible pages;

- pool size and compression type can be dynamically adjusted, I
prefer z3fold (see the example after this list).
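
For example, on a recent enough kernel the knobs can simply be poked
at runtime (the percentage is only an example, and z3fold must be
enabled in your kernel config):

  echo 25     > /sys/module/zswap/parameters/max_pool_percent
  echo z3fold > /sys/module/zswap/parameters/zpool
  echo lz4    > /sys/module/zswap/parameters/compressor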

So I have normal tmpfs on /tmp (and /var/tmp on hosts with lots of
RAM), but both tmpfs and running daemons/apps can benefit from
compressed memory for rarely used pages while enjoying full RAM
speed for frequently accessed ones.

Best regards,
Andrew Savchenko

[-- Attachment #2: Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [gentoo-user] tmp on tmpfs
  2017-05-24  5:34 ` gentoo-user
  2017-05-24  6:00   ` [gentoo-user] " Kai Krakow
@ 2017-05-24 17:00   ` R0b0t1
  1 sibling, 0 replies; 45+ messages in thread
From: R0b0t1 @ 2017-05-24 17:00 UTC (permalink / raw
  To: gentoo-user

On Wed, May 24, 2017 at 12:16 AM, Ian Zimmerman <itz@primate.net> wrote:
> So what are gentoo users' opinions on this matter of faith?
>

On Wed, May 24, 2017 at 12:34 AM,  <gentoo-user@c-14.de> wrote:
> Either way, it'd be nice if someone actually benchmarked this.
>

I don't have exhaustive benchmarks but moving PORTAGE_TMPDIR to a
tmpfs makes builds at least an order of magnitude faster. For general
usage with /tmp you may or may not notice, but the lack of normal IO
overhead can only make it faster.
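
For anyone who wants to try it, the change amounts to giving portage a
tmpfs build directory; something like this should do (the size is a
placeholder, and PORTAGE_TMPDIR only needs touching if you mount the
tmpfs somewhere other than the default /var/tmp):

  # /etc/fstab
  tmpfs  /var/tmp/portage  tmpfs  size=8G,noatime  0 0

  # /etc/portage/make.conf -- only if you use a non-default location
  #PORTAGE_TMPDIR="/var/tmp"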


^ permalink raw reply	[flat|nested] 45+ messages in thread

* [gentoo-user] Re: tmp on tmpfs
  2017-05-24  6:00   ` [gentoo-user] " Kai Krakow
@ 2017-05-24 17:05     ` Kai Krakow
  2017-05-25 18:46       ` [gentoo-user] Puzzled by zswap [Was: tmp on tmpfs] Ian Zimmerman
  2017-05-24 18:34     ` [gentoo-user] Re: tmp on tmpfs Ian Zimmerman
  1 sibling, 1 reply; 45+ messages in thread
From: Kai Krakow @ 2017-05-24 17:05 UTC (permalink / raw
  To: gentoo-user

On Wed, 24 May 2017 08:00:33 +0200,
Kai Krakow <hurikhan77@gmail.com> wrote:

> Am Wed, 24 May 2017 07:34:34 +0200
> schrieb gentoo-user@c-14.de:
> 
> > On 17-05-23 at 22:16, Ian Zimmerman wrote:  
> > > So what are gentoo users' opinions on this matter of faith?    
> > I use an ext4 partition backed by zram. Gives me ~3x compression on
> > the things I normally have lying around there (plain text files) and
> > ensures that anything I throw there (or programs throw there) gets
> > cleaned up on reboot.
> >   
> > > I have long been in the camp that thinks tmpfs for /tmp has no
> > > advantages (and may have disadvantages) over a normal filesystem
> > > like ext3, because the files there are normally so small that they
> > > will stay in the page cache 100% of the time.    
> > I've never actually benchmarked this. Most of the things I notice
> > that tend to end up there are temporary build files generated during
> > configure stages or temporary log files used by various programs
> > (clang static analyzer). Even if the entire file stays in the page
> > cache, it'll still generate IO overhead and extra seeks that might
> > slow down the rest of your system (unless your /tmp is on a
> > different hard drive) which on spinning rust will cause slowdowns
> > while on an ssd it'll eat away at your writes (which you may or may
> > not have to worry about).
> >   
> > > But I see that tmpfs is the default with systemd.  Surely they
> > > have a good reason for this? :)    
> > Or someone decided they liked the idea and made it the default and
> > nobody ever complained (or if they did were told to just change it
> > on their system). 
> > 
> > Either way, it'd be nice if someone actually benchmarked this.  
> 
> While I have no benchmarks and use the systemd default of tmpfs
> for /tmp, I also put /var/tmp/portage on tmpfs, automounted through
> systemd so it is cleaned up when no longer used (by unmounting).
> 
> What can I say? It works so much faster: Building packages is a lot
> faster most of the time, even if you'd expect gcc uses a lot of
> memory.
> 
> Well, why might that be? First, tmpfs is backed by swap space, that
> means, you need a swap partition of course.

To get in line with Rich Freeman: I didn't want to imply that zswap
only works with swap, nor that tmpfs only works with swap. Both work
without it. But if you want to put a serious amount of data into
tmpfs, you need swap as a backing device sooner or later.

> Swap is a lot simpler than
> file systems, so swapping out unused temporary files is fast and is a
> good thing. Also, unused memory sitting around may be swapped out
> early. Why would you want inactive memory resident? So this is also a
> good thing. Portage can use memory much more efficient by this.
> 
> Applying this reasoning over to /tmp should no explain why it works so
> well and why you may want it.
> 
> BTW: I also use zswap, so tmpfs sits in front of a compressed
> write-back cache before being written out to swap compressed. This
> should generally be much more efficient (performance-wise) than
> putting /tmp on zram.
> 
> I configured tmpfs for portage to use up to 30GB of space, which is
> almost twice the RAM I have. And it works because tmpfs is not
> required to be resident all the time: Inactive parts will be swapped
> out. The kernel handles this much similar to the page cache, with the
> difference that your files aren't backed by your normal file system
> but by swap. And swap has a lot lower IO overhead.
> 
> Overall, having less IO overhead (and less head movement for portage
> builds) is a very very efficient thing to do. GCC constantly needs all
> sorts of files from your installation (libs for linking, header files,
> etc), and writes a lot of transient files which are needed once later
> and then discarded. There's no point in putting it on a non-transient
> file system.
> 
> I use the following measures to get more performance out of this
> setup:
> 
>   * I have three swap partitions spread across three HDDs
>   * I have a lot of swap space (60 GB) to have space for tmpfs
>   * I have bcache in front of my HDD filesystem
>   * I have a relatively big SSD dedicated to bcache
> 
> My best recommendation is to separate swap and filesystem devices.
> While I didn't do it that way, I still separate them through bcache
> and thus decouple fs access and swap access although they are on the
> same physical devices. My bcache is big enough that most accesses
> would go to the SSD only. I enabled write-back to have that effect
> also for write access.
> 
> If you cannot physically split swap from fs, a tmpfs setup for portage
> may not be recommended (except you have a lot of memory, like 16GB or
> above). But YMMV.
> 
> Still, I recommend it for /tmp, especially if your system is on SSD.
> Unix semantics suggest that /tmp is not expected to survive reboots
> anyways (in contrast, /var/tmp is expected to survive reboots), so
> tmpfs is a logical consequence to use for /tmp.


-- 
Regards,
Kai

Replies to list-only preferred.



^ permalink raw reply	[flat|nested] 45+ messages in thread

* [gentoo-user] Re: tmp on tmpfs
  2017-05-24  6:00   ` [gentoo-user] " Kai Krakow
  2017-05-24 17:05     ` Kai Krakow
@ 2017-05-24 18:34     ` Ian Zimmerman
  2017-05-24 19:30       ` Rich Freeman
  2017-05-25  3:36       ` Kai Krakow
  1 sibling, 2 replies; 45+ messages in thread
From: Ian Zimmerman @ 2017-05-24 18:34 UTC (permalink / raw
  To: gentoo-user

On 2017-05-24 08:00, Kai Krakow wrote:

> While I have no benchmarks and use the systemd default of tmpfs for
> /tmp, I also put /var/tmp/portage on tmpfs, automounted through
> systemd so it is cleaned up when no longer used (by unmounting).
> 
> What can I say? It works so much faster: Building packages is a lot
> faster most of the time, even if you'd expect gcc uses a lot of
> memory.
> 
> Well, why might that be? First, tmpfs is backed by swap space, that
> means, you need a swap partition of course. Swap is a lot simpler than
> file systems, so swapping out unused temporary files is fast and is a
> good thing. Also, unused memory sitting around may be swapped out
> early. Why would you want inactive memory resident? So this is also a
> good thing. Portage can use memory much more efficient by this.
> 
> Applying this reasoning over to /tmp should no explain why it works so
> well and why you may want it.
> 
> BTW: I also use zswap, so tmpfs sits in front of a compressed
> write-back cache before being written out to swap compressed. This
> should generally be much more efficient (performance-wise) than
> putting /tmp on zram.
> 
> I configured tmpfs for portage to use up to 30GB of space, which is
> almost twice the RAM I have. And it works because tmpfs is not
> required to be resident all the time: Inactive parts will be swapped
> out. The kernel handles this much similar to the page cache, with the
> difference that your files aren't backed by your normal file system
> but by swap.  And swap has a lot lower IO overhead.
> 
> Overall, having less IO overhead (and less head movement for portage
> builds) is a very very efficient thing to do. GCC constantly needs all
> sorts of files from your installation (libs for linking, header files,
> etc), and writes a lot of transient files which are needed once later
> and then discarded. There's no point in putting it on a non-transient
> file system.
> 
> I use the following measures to get more performance out of this
> setup:
> 
>   * I have three swap partitions spread across three HDDs
>   * I have a lot of swap space (60 GB) to have space for tmpfs
>   * I have bcache in front of my HDD filesystem
>   * I have a relatively big SSD dedicated to bcache
> 
> My best recommendation is to separate swap and filesystem devices.
> While I didn't do it that way, I still separate them through bcache
> and thus decouple fs access and swap access although they are on the
> same physical devices. My bcache is big enough that most accesses
> would go to the SSD only. I enabled write-back to have that effect
> also for write access.
> 
> If you cannot physically split swap from fs, a tmpfs setup for portage
> may not be recommended (except you have a lot of memory, like 16GB or
> above). But YMMV.
> 
> Still, I recommend it for /tmp, especially if your system is on SSD.

All interesting points, and you convinced me to at least give tmpfs a
try on the desktop.

My laptop is different, though.  It doesn't have that much RAM by
comparison (4G) and it _only_ has an SSD.  Builds have been slow :(  I
am afraid to mess with it lest I increase the wear on the SSD.

> Unix semantics suggest that /tmp is not expected to survive reboots
> anyways (in contrast, /var/tmp is expected to survive reboots), so
> tmpfs is a logical consequence to use for /tmp.

/tmp is wiped by the bootmisc init job anyway.

-- 
Please *no* private Cc: on mailing lists and newsgroups
Personal signed mail: please _encrypt_ and sign
Don't clear-text sign:
http://primate.net/~itz/blog/the-problem-with-gpg-signatures.html


^ permalink raw reply	[flat|nested] 45+ messages in thread

* [gentoo-user] Re: tmp on tmpfs
  2017-05-24  5:16 [gentoo-user] tmp on tmpfs Ian Zimmerman
                   ` (2 preceding siblings ...)
  2017-05-24  9:34 ` Rich Freeman
@ 2017-05-24 18:46 ` Nikos Chantziaras
  3 siblings, 0 replies; 45+ messages in thread
From: Nikos Chantziaras @ 2017-05-24 18:46 UTC (permalink / raw
  To: gentoo-user

On 05/24/2017 08:16 AM, Ian Zimmerman wrote:
> So what are gentoo users' opinions on this matter of faith?
> 
> I have long been in the camp that thinks tmpfs for /tmp has no
> advantages (and may have disadvantages) over a normal filesystem like
> ext3, because the files there are normally so small that they will stay
> in the page cache 100% of the time.
> 
> But I see that tmpfs is the default with systemd.  Surely they have a
> good reason for this? :)

Their reason is described here:

   https://www.freedesktop.org/wiki/Software/systemd/APIFileSystems

It seems that they consider it an important *default* to have /tmp exist 
even if nothing else exists yet during boot-up.

Normally I wouldn't care too much whether /tmp is tmpfs or not. The only 
case where I do care is when unpacking a huge tarball with contents I 
didn't intend to keep around. But I stopped doing that in /tmp anyway. I 
have my own ~/tmp for that now. When using /tmp for that, you need to rm 
-rf what you don't need anymore, since it eats up RAM.

Another case is when I download something big that I intend to install 
(*.bin installers) or unpack into a final location on disk. In those 
cases, /tmp on tmpfs helps since it lowers disk fragmentation: you 
download it to RAM, then install to disk.



^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [gentoo-user] Re: tmp on tmpfs
  2017-05-24 18:34     ` [gentoo-user] Re: tmp on tmpfs Ian Zimmerman
@ 2017-05-24 19:30       ` Rich Freeman
  2017-05-24 21:16         ` Andrew Savchenko
  2017-05-25  3:38         ` Kai Krakow
  2017-05-25  3:36       ` Kai Krakow
  1 sibling, 2 replies; 45+ messages in thread
From: Rich Freeman @ 2017-05-24 19:30 UTC (permalink / raw
  To: gentoo-user

On Wed, May 24, 2017 at 11:34 AM, Ian Zimmerman <itz@primate.net> wrote:
> On 2017-05-24 08:00, Kai Krakow wrote:
>
>> Unix semantics suggest that /tmp is not expected to survive reboots
>> anyways (in contrast, /var/tmp is expected to survive reboots), so
>> tmpfs is a logical consequence to use for /tmp.
>
> /tmp is wiped by the bootmisc init job anyway.
>

In general I haven't found anything that is bothered by /var/tmp being
lost on reboot, but obviously that is something you need to be
prepared for if you put it on tmpfs.

One thing that wasn't mentioned is that having /tmp in tmpfs might
also have security benefits depending on what is stored there, since
it won't be written to disk.  If you have a filesystem on tmpfs and
your swap is encrypted (which you should consider setting up since it
is essentially "free") then /tmp also becomes a useful dumping ground
for stuff that is decrypted for temporary processing.  For example, if
you keep your passwords in a gpg-encrypted file you could copy it to
/tmp, decrypt it there, do what you need to, and then delete it.  That
wouldn't leave any recoverable traces of the file.

There are lots of guides about encrypted swap.  It is the sort of
thing that is convenient to set up since there is no value in
preserving a swap file across reboots, so you can just generate a
random key on each boot.  I suspect that would break down if you're
using hibernation / suspend to disk.
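
The usual recipe is a pair of lines like these, e.g. with systemd's
crypttab support (the partition and cipher options are placeholders,
just to illustrate the idea):

  # /etc/crypttab -- fresh random key on every boot
  cryptswap  /dev/sdX2  /dev/urandom  swap,cipher=aes-xts-plain64,size=512

  # /etc/fstab
  /dev/mapper/cryptswap  none  swap  sw  0 0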

-- 
Rich


^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [gentoo-user] Re: tmp on tmpfs
  2017-05-24 19:30       ` Rich Freeman
@ 2017-05-24 21:16         ` Andrew Savchenko
  2017-05-24 22:40           ` Rich Freeman
  2017-05-25  3:38         ` Kai Krakow
  1 sibling, 1 reply; 45+ messages in thread
From: Andrew Savchenko @ 2017-05-24 21:16 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 2316 bytes --]

On Wed, 24 May 2017 12:30:36 -0700 Rich Freeman wrote:
> On Wed, May 24, 2017 at 11:34 AM, Ian Zimmerman <itz@primate.net> wrote:
> > On 2017-05-24 08:00, Kai Krakow wrote:
> >
> >> Unix semantics suggest that /tmp is not expected to survive reboots
> >> anyways (in contrast, /var/tmp is expected to survive reboots), so
> >> tmpfs is a logical consequence to use for /tmp.
> > 
> > /tmp is wiped by the bootmisc init job anyway.
> >
> 
> In general I haven't found anything that is bothered by /var/tmp being
> lost on reboot, but obviously that is something you need to be
> prepared for if you put it on tmpfs.
> 
> One thing that wasn't mentioned is that having /tmp in tmpfs might
> also have security benefits depending on what is stored there, since
> it won't be written to disk.  If you have a filesystem on tmpfs and
> your swap is encrypted (which you should consider setting up since it
> is essentially "free") then /tmp also becomes a useful dumping ground
> for stuff that is decrypted for temporary processing.  For example, if
> you keep your passwords in a gpg-encrypted file you could copy it to
> /tmp, decrypt it there, do what you need to, and then delete it.  That
> wouldn't leave any recoverable traces of the file.
> 
> There are lots of guides about encrypted swap.  It is the sort of
> thing that is convenient to set up since there is no value in
> preserving a swap file across reboots, so you can just generate a
> random key on each boot.  I suspect that would break down if you're
> using hibernation / suspend to disk.

It is easy to use both encrypted swap and an encrypted hibernation
image (I do this on my laptop). Just before the s2disk call, disable
swap completely, then create an empty unencrypted swap area and run
s2disk (swappiness may be set to zero to protect against accidental
writes of unencrypted data before the fresh swap is created and s2disk
is called).

s2disk can then create an encrypted memory image and store it in that
swap partition. On resume, just reverse these actions.
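
In shell terms the pre-hibernate side is roughly this (partition and
mapping names are placeholders, and s2disk is the uswsusp tool,
configured separately to encrypt the image):

  swapoff -a                   # stop using the random-key encrypted swap
  cryptsetup close cryptswap   # tear down the dm-crypt mapping
  mkswap /dev/sdX2             # recreate a plain swap area for the image
  swapon /dev/sdX2
  s2disk                       # write the (s2disk-encrypted) image there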

Apparently it is pointless to encrypt swap if an unencrypted
hibernation image is used, because all memory is accessible through
that image (and even if it is deleted later, it can be restored from
an HDD and in some cases from an SSD).

Best regards,
Andrew Savchenko

[-- Attachment #2: Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [gentoo-user] Re: tmp on tmpfs
  2017-05-24 21:16         ` Andrew Savchenko
@ 2017-05-24 22:40           ` Rich Freeman
  2017-05-25  6:34             ` J. Roeleveld
  0 siblings, 1 reply; 45+ messages in thread
From: Rich Freeman @ 2017-05-24 22:40 UTC (permalink / raw
  To: gentoo-user

On Wed, May 24, 2017 at 2:16 PM, Andrew Savchenko <bircoph@gentoo.org> wrote:
>
> Apparently it is pointless to encrypt swap if unencrypted
> hibernation image is used, because all memory is accessible through
> that image (and even if it is deleted later, it can be restored
> from hdd and in some cases from ssd).
>

Yeah, that was my main concern with an approach like that.  I imagine
you could use a non-random key and enter it on each boot and restore
from the encrypted swap, though I haven't actually used hibernation on
linux so I'd have to look into how to make that work.  I imagine with
an initramfs it should be possible.

-- 
Rich


^ permalink raw reply	[flat|nested] 45+ messages in thread

* [gentoo-user] Re: tmp on tmpfs
  2017-05-24 18:34     ` [gentoo-user] Re: tmp on tmpfs Ian Zimmerman
  2017-05-24 19:30       ` Rich Freeman
@ 2017-05-25  3:36       ` Kai Krakow
  1 sibling, 0 replies; 45+ messages in thread
From: Kai Krakow @ 2017-05-25  3:36 UTC (permalink / raw
  To: gentoo-user

On Wed, 24 May 2017 11:34:20 -0700,
Ian Zimmerman <itz@primate.net> wrote:

> On 2017-05-24 08:00, Kai Krakow wrote:
> 
> > While I have no benchmarks and use the systemd default of tmpfs for
> > /tmp, I also put /var/tmp/portage on tmpfs, automounted through
> > systemd so it is cleaned up when no longer used (by unmounting).
> > 
> > What can I say? It works so much faster: Building packages is a lot
> > faster most of the time, even if you'd expect gcc uses a lot of
> > memory.
> > 
> > Well, why might that be? First, tmpfs is backed by swap space, that
> > means, you need a swap partition of course. Swap is a lot simpler
> > than file systems, so swapping out unused temporary files is fast
> > and is a good thing. Also, unused memory sitting around may be
> > swapped out early. Why would you want inactive memory resident? So
> > this is also a good thing. Portage can use memory much more
> > efficient by this.
> > 
> > Applying this reasoning over to /tmp should no explain why it works
> > so well and why you may want it.
> > 
> > BTW: I also use zswap, so tmpfs sits in front of a compressed
> > write-back cache before being written out to swap compressed. This
> > should generally be much more efficient (performance-wise) than
> > putting /tmp on zram.
> > 
> > I configured tmpfs for portage to use up to 30GB of space, which is
> > almost twice the RAM I have. And it works because tmpfs is not
> > required to be resident all the time: Inactive parts will be swapped
> > out. The kernel handles this much similar to the page cache, with
> > the difference that your files aren't backed by your normal file
> > system but by swap.  And swap has a lot lower IO overhead.
> > 
> > Overall, having less IO overhead (and less head movement for portage
> > builds) is a very very efficient thing to do. GCC constantly needs
> > all sorts of files from your installation (libs for linking, header
> > files, etc), and writes a lot of transient files which are needed
> > once later and then discarded. There's no point in putting it on a
> > non-transient file system.
> > 
> > I use the following measures to get more performance out of this
> > setup:
> > 
> >   * I have three swap partitions spread across three HDDs
> >   * I have a lot of swap space (60 GB) to have space for tmpfs
> >   * I have bcache in front of my HDD filesystem
> >   * I have a relatively big SSD dedicated to bcache
> > 
> > My best recommendation is to separate swap and filesystem devices.
> > While I didn't do it that way, I still separate them through bcache
> > and thus decouple fs access and swap access although they are on the
> > same physical devices. My bcache is big enough that most accesses
> > would go to the SSD only. I enabled write-back to have that effect
> > also for write access.
> > 
> > If you cannot physically split swap from fs, a tmpfs setup for
> > portage may not be recommended (except you have a lot of memory,
> > like 16GB or above). But YMMV.
> > 
> > Still, I recommend it for /tmp, especially if your system is on
> > SSD.  
> 
> All interesting points, and you convinced me to at least give tmpfs a
> try on the desktop.
> 
> My laptop is different, though.  It doesn't have that much RAM by
> comparison (4G) and it _only_ has a SSD.  Builds have been slow :(  I
> am afraid to mess with it lest I increase the wear on the SSD.

You may still want to test /var/tmp/portage as tmpfs for small
packages... Or manually call:

$ sudo PORTAGE_TMPDIR=/path/to/tmpfs emerge -1a small-package

For big packages, I suggest NFS-mounting some storage from your desktop.
It will probably still be slow (maybe a little bit slower) but should
be much better for your SSD's lifetime.


> > Unix semantics suggest that /tmp is not expected to survive reboots
> > anyways (in contrast, /var/tmp is expected to survive reboots), so
> > tmpfs is a logical consequence to use for /tmp.  
> 
> /tmp is wiped by the bootmisc init job anyway.

That's why such jobs exist, and why usually /tmp is wiped completely
while /var/tmp is wiped based on atime/mtime...


-- 
Regards,
Kai

Replies to list-only preferred.



^ permalink raw reply	[flat|nested] 45+ messages in thread

* [gentoo-user] Re: tmp on tmpfs
  2017-05-24 19:30       ` Rich Freeman
  2017-05-24 21:16         ` Andrew Savchenko
@ 2017-05-25  3:38         ` Kai Krakow
  2017-05-25  7:19           ` J. Roeleveld
  1 sibling, 1 reply; 45+ messages in thread
From: Kai Krakow @ 2017-05-25  3:38 UTC (permalink / raw
  To: gentoo-user

On Wed, 24 May 2017 12:30:36 -0700,
Rich Freeman <rich0@gentoo.org> wrote:

> On Wed, May 24, 2017 at 11:34 AM, Ian Zimmerman <itz@primate.net>
> wrote:
> > On 2017-05-24 08:00, Kai Krakow wrote:
> >  
> >> Unix semantics suggest that /tmp is not expected to survive reboots
> >> anyways (in contrast, /var/tmp is expected to survive reboots), so
> >> tmpfs is a logical consequence to use for /tmp.  
> >
> > /tmp is wiped by the bootmisc init job anyway.
> >  
> 
> In general I haven't found anything that is bothered by /var/tmp being
> lost on reboot, but obviously that is something you need to be
> prepared for if you put it on tmpfs.
> 
> One thing that wasn't mentioned is that having /tmp in tmpfs might
> also have security benefits depending on what is stored there, since
> it won't be written to disk.  If you have a filesystem on tmpfs and
> your swap is encrypted (which you should consider setting up since it
> is essentially "free") then /tmp also becomes a useful dumping ground
> for stuff that is decrypted for temporary processing.  For example, if
> you keep your passwords in a gpg-encrypted file you could copy it to
> /tmp, decrypt it there, do what you need to, and then delete it.  That
> wouldn't leave any recoverable traces of the file.

Interesting point... How much performance impact does encrypted swap
have? I don't mean any benchmark numbers but real life experience from
your perspective when the system experiences memory pressure?

> There are lots of guides about encrypted swap.  It is the sort of
> thing that is convenient to set up since there is no value in
> preserving a swap file across reboots, so you can just generate a
> random key on each boot.  I suspect that would break down if you're
> using hibernation / suspend to disk.


-- 
Regards,
Kai

Replies to list-only preferred.



^ permalink raw reply	[flat|nested] 45+ messages in thread

* [gentoo-user] Re: tmp on tmpfs
  2017-05-24 12:45   ` Andrew Savchenko
@ 2017-05-25  4:45     ` Martin Vaeth
  2017-05-25  7:24       ` Mick
  2017-05-25 22:36     ` [gentoo-user] " Kent Fredric
  1 sibling, 1 reply; 45+ messages in thread
From: Martin Vaeth @ 2017-05-25  4:45 UTC (permalink / raw
  To: gentoo-user

Andrew Savchenko <bircoph@gentoo.org> wrote:
> For similar needs I found zswap the most suitable, it's so much
> better than zram:

This sounds like one is an alternative to the other.
This is not the case. It can even make sense to use them together.
For instance, the swap device necessarily required for zswap
can be a zram device. Whether this is advantageous depends on your
usage pattern and swappiness value.
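
A rough sketch of that combination, just to make the idea concrete
(size and priority are placeholders):

  modprobe zram
  echo 2G > /sys/block/zram0/disksize
  mkswap /dev/zram0
  swapon -p 100 /dev/zram0                        # zram as the swap device
  echo 1 > /sys/module/zswap/parameters/enabled   # zswap in front of it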



^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [gentoo-user] Re: tmp on tmpfs
  2017-05-24 22:40           ` Rich Freeman
@ 2017-05-25  6:34             ` J. Roeleveld
  2017-05-25 11:04               ` Kai Krakow
  0 siblings, 1 reply; 45+ messages in thread
From: J. Roeleveld @ 2017-05-25  6:34 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 1062 bytes --]

It is possible. I have it set up like that on my laptop.
Apart from a small /boot partition, the whole drive is encrypted.
Decryption keys are stored encrypted in the initramfs, which is embedded in the kernel.

--
Joost

On May 25, 2017 12:40:12 AM GMT+02:00, Rich Freeman <rich0@gentoo.org> wrote:
>On Wed, May 24, 2017 at 2:16 PM, Andrew Savchenko <bircoph@gentoo.org>
>wrote:
>>
>> Apparently it is pointless to encrypt swap if unencrypted
>> hibernation image is used, because all memory is accessible through
>> that image (and even if it is deleted later, it can be restored
>> from hdd and in some cases from ssd).
>>
>
>Yeah, that was my main concern with an approach like that.  I imagine
>you could use a non-random key and enter it on each boot and restore
>from the encrypted swap, though I haven't actually used hibernation on
>linux so I'd have to look into how to make that work.  I imagine with
>an initramfs it should be possible.
>
>-- 
>Rich

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.

[-- Attachment #2: Type: text/html, Size: 1484 bytes --]

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [gentoo-user] Re: tmp on tmpfs
  2017-05-25  3:38         ` Kai Krakow
@ 2017-05-25  7:19           ` J. Roeleveld
  0 siblings, 0 replies; 45+ messages in thread
From: J. Roeleveld @ 2017-05-25  7:19 UTC (permalink / raw
  To: gentoo-user



On May 25, 2017 5:38:35 AM GMT+02:00, Kai Krakow <hurikhan77@gmail.com> wrote:
>Am Wed, 24 May 2017 12:30:36 -0700
>schrieb Rich Freeman <rich0@gentoo.org>:
>
>> On Wed, May 24, 2017 at 11:34 AM, Ian Zimmerman <itz@primate.net>
>> wrote:
>> > On 2017-05-24 08:00, Kai Krakow wrote:
>> >  
>> >> Unix semantics suggest that /tmp is not expected to survive
>reboots
>> >> anyways (in contrast, /var/tmp is expected to survive reboots), so
>> >> tmpfs is a logical consequence to use for /tmp.  
>> >
>> > /tmp is wiped by the bootmisc init job anyway.
>> >  
>> 
>> In general I haven't found anything that is bothered by /var/tmp
>being
>> lost on reboot, but obviously that is something you need to be
>> prepared for if you put it on tmpfs.
>> 
>> One thing that wasn't mentioned is that having /tmp in tmpfs might
>> also have security benefits depending on what is stored there, since
>> it won't be written to disk.  If you have a filesystem on tmpfs and
>> your swap is encrypted (which you should consider setting up since it
>> is essentially "free") then /tmp also becomes a useful dumping ground
>> for stuff that is decrypted for temporary processing.  For example,
>if
>> you keep your passwords in a gpg-encrypted file you could copy it to
>> /tmp, decrypt it there, do what you need to, and then delete it. 
>That
>> wouldn't leave any recoverable traces of the file.
>
>Interesting point... How much performance impact does encrypted swap
>have? I don't mean any benchmark numbers but real life experience from
>your perspective when the system experiences memory pressure?

I have my laptop encrypted. It has 16GB and occasionally it does use swap, with it all being on an SSD.
I am not noticing any slowdowns because of it.

--
Joost
-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.


^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [gentoo-user] Re: tmp on tmpfs
  2017-05-25  4:45     ` [gentoo-user] " Martin Vaeth
@ 2017-05-25  7:24       ` Mick
  2017-05-25 15:46         ` Martin Vaeth
  0 siblings, 1 reply; 45+ messages in thread
From: Mick @ 2017-05-25  7:24 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 647 bytes --]

On Thursday 25 May 2017 04:45:24 Martin Vaeth wrote:
> Andrew Savchenko <bircoph@gentoo.org> wrote:
> > For similar needs I found zswap the most suitable, it's so much
> > better than zram:
> 
> This sounds like one is an alternative to the other.
> This is not the case. It can even make sense to use them together.
> For instance, the swap device necessarily required for zswap
> can be a zram device. Whether this is advantegous depends on your
> usage pattern and swappiness value.

Do either of these reduce the effect of (spinning) drive thrashing and desktop 
latency increasing when swapping takes place?

-- 
Regards,
Mick

[-- Attachment #2: This is a digitally signed message part. --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 45+ messages in thread

* [gentoo-user] Re: tmp on tmpfs
  2017-05-25  6:34             ` J. Roeleveld
@ 2017-05-25 11:04               ` Kai Krakow
  2017-05-25 12:23                 ` Rich Freeman
  2017-05-25 14:16                 ` J. Roeleveld
  0 siblings, 2 replies; 45+ messages in thread
From: Kai Krakow @ 2017-05-25 11:04 UTC (permalink / raw
  To: gentoo-user

On Thu, 25 May 2017 08:34:10 +0200,
"J. Roeleveld" <joost@antarean.org> wrote:

> It is possible. I have it set up like that on my laptop.
> Apart from a small /boot partition. The whole drive is encrypted.
> Decryption keys are stored encrypted in the initramfs, which is
> embedded in the kernel.

And the kernel is on /boot, which is unencrypted, and so are your
encryption keys. This is not much better, I guess...

> On May 25, 2017 12:40:12 AM GMT+02:00, Rich Freeman
> <rich0@gentoo.org> wrote:
> >On Wed, May 24, 2017 at 2:16 PM, Andrew Savchenko
> ><bircoph@gentoo.org> wrote:  
> >>
> >> Apparently it is pointless to encrypt swap if unencrypted
> >> hibernation image is used, because all memory is accessible through
> >> that image (and even if it is deleted later, it can be restored
> >> from hdd and in some cases from ssd).
> >>  
> >
> >Yeah, that was my main concern with an approach like that.  I imagine
> >you could use a non-random key and enter it on each boot and restore
> >from the encrypted swap, though I haven't actually used hibernation
> >on linux so I'd have to look into how to make that work.  I imagine
> >with an initramfs it should be possible.


-- 
Regards,
Kai

Replies to list-only preferred.



^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [gentoo-user] Re: tmp on tmpfs
  2017-05-25 11:04               ` Kai Krakow
@ 2017-05-25 12:23                 ` Rich Freeman
  2017-05-25 14:16                 ` J. Roeleveld
  1 sibling, 0 replies; 45+ messages in thread
From: Rich Freeman @ 2017-05-25 12:23 UTC (permalink / raw
  To: gentoo-user

On Thu, May 25, 2017 at 7:04 AM, Kai Krakow <hurikhan77@gmail.com> wrote:
> Am Thu, 25 May 2017 08:34:10 +0200
> schrieb "J. Roeleveld" <joost@antarean.org>:
>
>> It is possible. I have it set up like that on my laptop.
>> Apart from a small /boot partition. The whole drive is encrypted.
>> Decryption keys are stored encrypted in the initramfs, which is
>> embedded in the kernel.
>
> And the kernel is on /boot which is unencrypted, so are your encryption
> keys. This is not much better, I guess...
>

Agree.  There are only a few ways to do persistent encryption in a secure way:
1.  Require entry of a key during boot, protected by lots of rounds to
deter brute force.
2.  Store the key on some kind of hardware token that the user keeps
on their person.
3.  Store the key in a TPM, protected either by:
   a. entry of a PIN/password of some sort with protections on attempt
frequency/total
   b. verification of the boot path (which should be possible with
existing software on linux, but I'm not aware of any distro that
actually implements this)

If you don't have hibernation then you can just generate a random key
on boot, and that is very secure, but your swap is unrecoverable after
power-off.

Of the options above, 3b is the only one that really lets you do this
with the same level of convenience.  This is how most full-drive
encryption solutions work in the Windows world.  Chromebooks use a
variation on 3a, I believe, using your Google account password as one
component of the key and putting it through the TPM to add a hardware
component and to throttle attempts.

-- 
Rich


^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [gentoo-user] Re: tmp on tmpfs
  2017-05-25 11:04               ` Kai Krakow
  2017-05-25 12:23                 ` Rich Freeman
@ 2017-05-25 14:16                 ` J. Roeleveld
  2017-05-25 16:06                   ` Rich Freeman
  1 sibling, 1 reply; 45+ messages in thread
From: J. Roeleveld @ 2017-05-25 14:16 UTC (permalink / raw
  To: gentoo-user

On May 25, 2017 1:04:07 PM GMT+02:00, Kai Krakow <hurikhan77@gmail.com> wrote:
>Am Thu, 25 May 2017 08:34:10 +0200
>schrieb "J. Roeleveld" <joost@antarean.org>:
>
>> It is possible. I have it set up like that on my laptop.
>> Apart from a small /boot partition. The whole drive is encrypted.
>> Decryption keys are stored encrypted in the initramfs, which is
>> embedded in the kernel.
>
>And the kernel is on /boot which is unencrypted, so are your encryption
>keys. This is not much better, I guess...

A file full of random characters is encrypted using GPG.
Unencrypted, this is passed to cryptsetup.
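
In other words, roughly this (assuming LUKS; the path and names are
placeholders, not my actual layout):

  gpg --quiet --decrypt /root.key.gpg | cryptsetup --key-file=- luksOpen /dev/sdX3 root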

The passphrase to decrypt the key needs to be entered upon boot.
How can this be improved?

--
Joost

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.


^ permalink raw reply	[flat|nested] 45+ messages in thread

* [gentoo-user] Re: tmp on tmpfs
  2017-05-25  7:24       ` Mick
@ 2017-05-25 15:46         ` Martin Vaeth
  0 siblings, 0 replies; 45+ messages in thread
From: Martin Vaeth @ 2017-05-25 15:46 UTC (permalink / raw
  To: gentoo-user

Mick <michaelkintzios@gmail.com> wrote:
> Do either of these reduce the effect of (spinning) drive thrashing and
> desktop latency increasing when swapping takes place?

I never made any benchmarks. I just heard that some people are using
the combination of both to avoid swap altogether (or only have a
fallback swap which is used only in emergency situations, although
swappiness values are kept normal).



^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [gentoo-user] Re: tmp on tmpfs
  2017-05-25 14:16                 ` J. Roeleveld
@ 2017-05-25 16:06                   ` Rich Freeman
  2017-05-25 16:28                     ` J. Roeleveld
  0 siblings, 1 reply; 45+ messages in thread
From: Rich Freeman @ 2017-05-25 16:06 UTC (permalink / raw
  To: gentoo-user

On Thu, May 25, 2017 at 10:16 AM, J. Roeleveld <joost@antarean.org> wrote:
> On May 25, 2017 1:04:07 PM GMT+02:00, Kai Krakow <hurikhan77@gmail.com> wrote:
>>Am Thu, 25 May 2017 08:34:10 +0200
>>schrieb "J. Roeleveld" <joost@antarean.org>:
>>
>>> It is possible. I have it set up like that on my laptop.
>>> Apart from a small /boot partition. The whole drive is encrypted.
>>> Decryption keys are stored encrypted in the initramfs, which is
>>> embedded in the kernel.
>>
>>And the kernel is on /boot which is unencrypted, so are your encryption
>>keys. This is not much better, I guess...
>
> A file full of random characters is encrypted using GPG.
> Unencrypted, this is passed to cryptsetup.
>
> The passphrase to decrypt the key needs to be entered upon boot.
> How can this be improved?
>

The need to enter a passphrase was the missing bit here.  I thought
you were literally just storing the key in the clear.

As far as I can tell gpg symmetric encryption does salting and
iterations by default, so you're probably fairly secure.  I'm not sure
if the defaults were always set up this way - if you set up that file
a long time ago you might just want to check that, unless your
passphrase is really complex.

-- 
Rich


^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [gentoo-user] Re: tmp on tmpfs
  2017-05-25 16:06                   ` Rich Freeman
@ 2017-05-25 16:28                     ` J. Roeleveld
  2017-05-25 16:41                       ` Rich Freeman
  0 siblings, 1 reply; 45+ messages in thread
From: J. Roeleveld @ 2017-05-25 16:28 UTC (permalink / raw
  To: gentoo-user

On May 25, 2017 6:06:45 PM GMT+02:00, Rich Freeman <rich0@gentoo.org> wrote:
>On Thu, May 25, 2017 at 10:16 AM, J. Roeleveld <joost@antarean.org>
>wrote:
>> On May 25, 2017 1:04:07 PM GMT+02:00, Kai Krakow
><hurikhan77@gmail.com> wrote:
>>>Am Thu, 25 May 2017 08:34:10 +0200
>>>schrieb "J. Roeleveld" <joost@antarean.org>:
>>>
>>>> It is possible. I have it set up like that on my laptop.
>>>> Apart from a small /boot partition. The whole drive is encrypted.
>>>> Decryption keys are stored encrypted in the initramfs, which is
>>>> embedded in the kernel.
>>>
>>>And the kernel is on /boot which is unencrypted, so are your
>encryption
>>>keys. This is not much better, I guess...
>>
>> A file full of random characters is encrypted using GPG.
>> Unencrypted, this is passed to cryptsetup.
>>
>> The passphrase to decrypt the key needs to be entered upon boot.
>> How can this be improved?
>>
>
>The need to enter a passphrase was the missing bit here.  I thought
>you were literally just storing the key in the clear.
>
>As far as I can tell gpg symmetric encryption does salting and
>iterations by default, so you're probably fairly secure.  I'm not sure
>if the defaults were always set up this way - if you set up that file
>a long time ago you might just want to check that, unless your
>passphrase is really complex.

Not sure how long ago this was. I'm planning on redoing the whole laptop in the near future anyway.

If anyone knows of a better way (that works without TPM) I would like to hear about it.

--
Joost
-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.


^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [gentoo-user] Re: tmp on tmpfs
  2017-05-25 16:28                     ` J. Roeleveld
@ 2017-05-25 16:41                       ` Rich Freeman
  0 siblings, 0 replies; 45+ messages in thread
From: Rich Freeman @ 2017-05-25 16:41 UTC (permalink / raw
  To: gentoo-user

On Thu, May 25, 2017 at 12:28 PM, J. Roeleveld <joost@antarean.org> wrote:
>
> Not sure how long ago this was. I'm planning on redoing the whole laptop in the near future anyway.
>
> If anyone knows of a better way (that works without TPM) I would like to hear about it.
>

I'd read up on LUKS.  That seems to be the way everybody is doing
stuff like this today.  It probably isn't much different in security
but it is more standard, which means more convenience when booting
from rescue disks and so on.  I bet with something like dracut you can
probably configure it more easily as well.  However, I've not looked
into the details.
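
A minimal sketch of that (the device name and keyfile path are only
examples, not a recommendation for your exact layout):

# format the partition as LUKS, protected by a passphrase
cryptsetup luksFormat /dev/sdX2

# optionally add a keyfile to a second key slot
cryptsetup luksAddKey /dev/sdX2 /root/keyfile

# open it with either the passphrase or the keyfile
cryptsetup luksOpen /dev/sdX2 root
cryptsetup luksOpen --key-file /root/keyfile /dev/sdX2 root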

-- 
Rich


^ permalink raw reply	[flat|nested] 45+ messages in thread

* [gentoo-user] Puzzled by zswap [Was: tmp on tmpfs]
  2017-05-24 17:05     ` Kai Krakow
@ 2017-05-25 18:46       ` Ian Zimmerman
  2017-05-25 19:16         ` [gentoo-user] " Martin Vaeth
  2017-05-26  6:00         ` Kai Krakow
  0 siblings, 2 replies; 45+ messages in thread
From: Ian Zimmerman @ 2017-05-25 18:46 UTC (permalink / raw
  To: gentoo-user

On 2017-05-24 19:05, Kai Krakow wrote:

> To get in line with Rich Freeman: I didn't want to imply that zswap
> only works with swap, neither that tmpfs only works with swap. Both
> work without. But if you want to put some serious amount of data into
> tmpfs, you need swap as a backing device sooner or later.

Looking at zswap, I have several questions
(even after reading linux/Documentation/vm/zswap.txt).

1.  How does it know which swap device to use as backing store, if any?
Clearly at boot time no swap configuration exists, even if
initrd/initramfs is used, which here it is not.  So when the kernel sees
zswap.enable=1 in the command line, what happens?

2.  The doc says it can be turned on at runtime by means of
/sys/module/zswap/parameters/enabled.  But kconfig doesn't make it
possible to build the support as a module, only built-in, and so it is
not surprising that this path doesn't exist.

3.  It seems to require zbud to also be turned on, but this is not
enforced by kconfig.  Is this a bug or what?

4.  Quoting:

 Zswap seeks to be simple in its policies.  Sysfs attributes allow for one user
 controlled policy:
 * max_pool_percent - The maximum percentage of memory that the compressed
     pool can occupy.

Does this mean this is another (hypothetical) node in
/sys/module/zswap/parameters/ ?

-- 
Please *no* private Cc: on mailing lists and newsgroups
Personal signed mail: please _encrypt_ and sign
Don't clear-text sign:
http://primate.net/~itz/blog/the-problem-with-gpg-signatures.html


^ permalink raw reply	[flat|nested] 45+ messages in thread

* [gentoo-user] Re: Puzzled by zswap [Was: tmp on tmpfs]
  2017-05-25 18:46       ` [gentoo-user] Puzzled by zswap [Was: tmp on tmpfs] Ian Zimmerman
@ 2017-05-25 19:16         ` Martin Vaeth
  2017-05-26  6:00         ` Kai Krakow
  1 sibling, 0 replies; 45+ messages in thread
From: Martin Vaeth @ 2017-05-25 19:16 UTC (permalink / raw
  To: gentoo-user

Ian Zimmerman <itz@primate.net> wrote:
>
> 1.  How does it know which swap device to use as backing store, if any?

My understanding is that zswap sits at a lower level: the zswap code is
called only at the moment when a page _would_ be swapped out. That's why
activating zswap has absolutely no effect if there is no swap device: it
simply is never called in this case.
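
(If you want to see whether it ever gets called, the counters under
debugfs show it, assuming debugfs is mounted at the usual place:

grep . /sys/kernel/debug/zswap/*

stored_pages and pool_total_size stay at 0 as long as no page has been
compressed.)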

Concerning your other questions: The interface has changed several times.
I guess the documentation is simply not up-to-date anymore...



^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [gentoo-user] tmp on tmpfs
  2017-05-24 12:45   ` Andrew Savchenko
  2017-05-25  4:45     ` [gentoo-user] " Martin Vaeth
@ 2017-05-25 22:36     ` Kent Fredric
  2017-05-28 10:07       ` Mick
  1 sibling, 1 reply; 45+ messages in thread
From: Kent Fredric @ 2017-05-25 22:36 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 797 bytes --]

On Wed, 24 May 2017 15:45:45 +0300
Andrew Savchenko <bircoph@gentoo.org> wrote:

> - smaller CPU overhead: not every i/o is being compressed, e.g. if
> there is still enough RAM available it is used without compression
> overhead as usual, but if memory is not enough, swapped out pages
> are being compressed instead of swapping out to disk;

I found the opposite problem somehow. The CPU started becoming frequently
pegged in zswap for no obvious reason, while the underlying IO that zswap
was doing was only measurable in kB/s, far, far, far below the noise
thresholds and by no means a strain on even my crappy spinning-rust-based swap.

And to add to that, zswap introduced general protection faults and kernel panics.

So nah, I'm glad I turned that off, it was a huge mistake.

[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 45+ messages in thread

* [gentoo-user] Re: Puzzled by zswap [Was: tmp on tmpfs]
  2017-05-25 18:46       ` [gentoo-user] Puzzled by zswap [Was: tmp on tmpfs] Ian Zimmerman
  2017-05-25 19:16         ` [gentoo-user] " Martin Vaeth
@ 2017-05-26  6:00         ` Kai Krakow
  1 sibling, 0 replies; 45+ messages in thread
From: Kai Krakow @ 2017-05-26  6:00 UTC (permalink / raw
  To: gentoo-user

Am Thu, 25 May 2017 11:46:45 -0700
schrieb Ian Zimmerman <itz@primate.net>:

> On 2017-05-24 19:05, Kai Krakow wrote:
> 
> > To get in line with Rich Freeman: I didn't want to imply that zswap
> > only works with swap, neither that tmpfs only works with swap. Both
> > work without. But if you want to put some serious amount of data
> > into tmpfs, you need swap as a backing device sooner or later.  
> 
> Looking at zswap, I have several questions
> (even after reading linux/Documentation/vm/zswap.txt).
> 
> 1.  How does it know which swap device to use as backing store, if
> any? Clearly at boot time no swap configuration exists, even if
> initrd/initramfs is used, which here it is not.  So when the kernel
> sees zswap.enable=1 in the command line, what happens?

You simply don't assign a swap device to zswap. It's transparently
inserted into the swapping chain of the kernel. Thus pages are first
compressed, and later swapped out by normal kernel processing.
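
A typical set of boot parameters would look something like this (only an
example; the exact compressor and pool size are a matter of taste):

zswap.enabled=1 zswap.compressor=lzo zswap.zpool=zbud zswap.max_pool_percent=20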

> 2.  The doc says it can be turned on at runtime by means of
> /sys/module/zswap/parameters/enabled.  But kconfig doesn't make it
> possible to build the support as a module, only built-in, and so it is
> not surprising that this path doesn't exist.

I wonder why this doesn't exist. All my builtin modules have their
parameters in /sys/module:

# lsmod | fgrep zswap | wc -l
0
# ls -ald /sys/module/zswap
drwxr-xr-x 3 root root 0 26. Mai 07:54 /sys/module/zswap

> 3.  It seems to require zbud to also be turned on, but this is not
> enforced by kconfig.  Is this a bug or what?

No idea, I enabled it...

> 4.  Quoting:
> 
>  Zswap seeks to be simple in its policies.  Sysfs attributes allow
> for one user controlled policy:
>  * max_pool_percent - The maximum percentage of memory that the
> compressed pool can occupy.
> 
> Does this mean this is another (hypothetical) node in
> /sys/module/zswap/parameters/ ?

grep ^ /sys/module/zswap/parameters/*
/sys/module/zswap/parameters/compressor:lzo
/sys/module/zswap/parameters/enabled:Y
/sys/module/zswap/parameters/max_pool_percent:20
/sys/module/zswap/parameters/zpool:zbud 

This also implies that zbud is required for zswap to even operate. If
you didn't include it, that may be the reason why zswap is missing
from /sys/module.
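
A quick way to check is the kernel config, e.g. (assuming
CONFIG_IKCONFIG_PROC is set; otherwise grep the .config in
/usr/src/linux):

zgrep -E 'CONFIG_(ZSWAP|ZBUD|ZPOOL)=' /proc/config.gz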


-- 
Regards,
Kai

Replies to list-only preferred.



^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [gentoo-user] tmp on tmpfs
  2017-05-25 22:36     ` [gentoo-user] " Kent Fredric
@ 2017-05-28 10:07       ` Mick
  2017-05-31  0:36         ` Kent Fredric
  0 siblings, 1 reply; 45+ messages in thread
From: Mick @ 2017-05-28 10:07 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 955 bytes --]

On Friday 26 May 2017 10:36:40 Kent Fredric wrote:
> On Wed, 24 May 2017 15:45:45 +0300
> 
> Andrew Savchenko <bircoph@gentoo.org> wrote:
> > - smaller CPU overhead: not every i/o is being compressed, e.g. if
> > there is still enough RAM available it is used without compression
> > overhead as usual, but if memory is not enough, swapped out pages
> > are being compressed instead of swapping out to disk;
> 
> I found the opposite problem somehow. The CPU started becoming frequently
> pegged in zswap for no obvious reason, while the underlying IO that zswap
> was doing was only measurable in kB/s, far, far, far below the noise
> thresholds and by no means a strain on even my crappy spinning-rust-based
> swap.
> 
> And to add to that, zswap introduced general protection faults and kernel
> panics.
> 
> So nah, I'm glad I turned that off, it was a huge mistake.

Did you also have zbud enabled at the time?

-- 
Regards,
Mick

[-- Attachment #2: This is a digitally signed message part. --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 45+ messages in thread

* [gentoo-user] Kernel did not finding root partition
@ 2017-05-29 11:09 Raphael MD
  2017-05-29 11:19 ` Rasmus Thomsen
  2017-05-29 17:48 ` [gentoo-user] " Kai Krakow
  0 siblings, 2 replies; 45+ messages in thread
From: Raphael MD @ 2017-05-29 11:09 UTC (permalink / raw
  To: Gentoo User

[-- Attachment #1: Type: text/plain, Size: 851 bytes --]

I'm trying to install Gentoo on my notebook, but during boot the kernel
does not find the root partition.

I'm using UEFI boot. I've tried Genkernel, I've checked XFS support in
the kernel's menuconfig and re-checked the GRUB config files, but it's a
pain and it does not work.

I had installed Funtoo with a Debian kernel first, but Funtoo's KDE ebuild
was pointing to an invalid URL, so I switched to Gentoo and now I'm
suffering this boot problem.

Does anyone have some information about the kernel not finding the root
partition? Is it better to configure the kernel without Genkernel? Do I
need to pass some options to the kernel via GRUB?

PS: Configuring UEFI appears to be very simple, because I'm using rEFInd
and it was working with Funtoo, so I believe this problem is with the
Gentoo kernel's config, but I do not know where I need to configure it.

Any suggestions?

Thanks!

[-- Attachment #2: Type: text/html, Size: 1023 bytes --]

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [gentoo-user] Kernel did not finding root partition
  2017-05-29 11:09 [gentoo-user] Kernel did not finding root partition Raphael MD
@ 2017-05-29 11:19 ` Rasmus Thomsen
  2017-05-29 17:48 ` [gentoo-user] " Kai Krakow
  1 sibling, 0 replies; 45+ messages in thread
From: Rasmus Thomsen @ 2017-05-29 11:19 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 1314 bytes --]

Hi,

make sure you have XFS compiled into the kernel (not as a module), or include it in the initramfs (dunno how you would do that with genkernel though, I don't use an initramfs). GRUB should include a line saying root=xxx; maybe you also have to set rootfstype, I had to do that for BTRFS. Also make sure that you have SATA (or NVMe, if you use that) support compiled into your kernel.
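
Roughly, something like this (the device name is just an example). In /etc/default/grub:

GRUB_CMDLINE_LINUX="root=/dev/sda3 rootfstype=xfs"

then regenerate the config and check that XFS really is built in:

grub-mkconfig -o /boot/grub/grub.cfg
grep CONFIG_XFS_FS /usr/src/linux/.config   # should say =y, not =m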

Rasmus

-------- Original Message --------
On 29 May 2017, 13:09, Raphael MD wrote:

I'm trying to install Gentoo on my notebook, but during boot the kernel does not find the root partition.

I'm using UEFI boot. I've tried Genkernel, I've checked XFS support in the kernel's menuconfig and re-checked the GRUB config files, but it's a pain and it does not work.

I had installed Funtoo with a Debian kernel first, but Funtoo's KDE ebuild was pointing to an invalid URL, so I switched to Gentoo and now I'm suffering this boot problem.

Does anyone have some information about the kernel not finding the root partition? Is it better to configure the kernel without Genkernel? Do I need to pass some options to the kernel via GRUB?

PS: Configuring UEFI appears to be very simple, because I'm using rEFInd and it was working with Funtoo, so I believe this problem is with the Gentoo kernel's config, but I do not know where I need to configure it.

Any suggestions?

Thanks!

[-- Attachment #2: Type: text/html, Size: 1593 bytes --]

^ permalink raw reply	[flat|nested] 45+ messages in thread

* [gentoo-user] Re: Kernel did not finding root partition
  2017-05-29 11:09 [gentoo-user] Kernel did not finding root partition Raphael MD
  2017-05-29 11:19 ` Rasmus Thomsen
@ 2017-05-29 17:48 ` Kai Krakow
  2017-05-29 18:07   ` Raphael MD
  1 sibling, 1 reply; 45+ messages in thread
From: Kai Krakow @ 2017-05-29 17:48 UTC (permalink / raw
  To: gentoo-user

Am Mon, 29 May 2017 08:09:02 -0300
schrieb Raphael MD <raphaxx@gmail.com>:

> I'm trying to install Gentoo in my notebook, but kernel, during the
> boot, do not find the root partition.
> 
> I'm using UEFI boot, I've tried Genkernel, I've checked XFS's support
> in kernel's menuconfig and re-cheked GRUB config files, but is a
> pain, do not work.
> 
> I've installed Funtoo with Debian Kernel first, but Funtoo KDE's
> ebuild was pointing to a invalid URL and I've switched to Gentoo and
> now I'm suffering this problem to boot.
> 
> Have anyone some information, about this Kernel's boot didn't finding
> root partition? Is better configure kernel without Genkernel? I need
> to pass some commands to Kernel via GRUB?
> 
> PS.: Appear to be very simple configure UEFI, because I'm using
> Refind and it was working with Funtoo, and I realized this problem is
> with gentoo kernel's config, but I do not know where I need to config.
> 
> Any suggestions?

For UEFI boot the best way is to install the kernel to the ESP,
especially if it is directly loaded by EFI. Which exact message do you
see? It is not clear if the kernel already booted and just cannot find
the rootfs, or if even the kernel cannot load.

I don't know reFind, but some EFI loaders like gummiboot / systemd-boot
expect the kernel to have an EFI stub because the kernel is
chain-loaded through EFI...

So we need to know a few things:

1. partition layout
2. kernel cmdline
3. boot-loader config
4. exact error message on screen


-- 
Regards,
Kai

Replies to list-only preferred.



^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [gentoo-user] Re: Kernel did not finding root partition
  2017-05-29 17:48 ` [gentoo-user] " Kai Krakow
@ 2017-05-29 18:07   ` Raphael MD
  2017-05-29 18:16     ` Neil Bothwick
  0 siblings, 1 reply; 45+ messages in thread
From: Raphael MD @ 2017-05-29 18:07 UTC (permalink / raw
  To: Gentoo User

[-- Attachment #1: Type: text/plain, Size: 1977 bytes --]

On May 29, 2017 14:51, "Kai Krakow" <hurikhan77@gmail.com> wrote:
>
> Am Mon, 29 May 2017 08:09:02 -0300
> schrieb Raphael MD <raphaxx@gmail.com>:
>
> > I'm trying to install Gentoo in my notebook, but kernel, during the
> > boot, do not find the root partition.
> >
> > I'm using UEFI boot, I've tried Genkernel, I've checked XFS's support
> > in kernel's menuconfig and re-cheked GRUB config files, but is a
> > pain, do not work.
> >
> > I've installed Funtoo with Debian Kernel first, but Funtoo KDE's
> > ebuild was pointing to a invalid URL and I've switched to Gentoo and
> > now I'm suffering this problem to boot.
> >
> > Have anyone some information, about this Kernel's boot didn't finding
> > root partition? Is better configure kernel without Genkernel? I need
> > to pass some commands to Kernel via GRUB?
> >
> > PS.: Appear to be very simple configure UEFI, because I'm using
> > Refind and it was working with Funtoo, and I realized this problem is
> > with gentoo kernel's config, but I do not know where I need to config.
> >
> > Any suggestions?
>
> For UEFI boot the best way is to install the kernel to the ESP,
> especially if it is directly loaded by EFI. Which exact message do you
> see? It is not clear if the kernel already booted and just cannot find
> the rootfs, or if even the kernel cannot load.
>
> I don't know reFind, but some EFI loaders like gummiboot / systemd-boot
> expect the kernel to have an EFI stub because the kernel is
> chain-loaded through EFI...
>
> So we need to know a few things:
>
> 1. partition layout
> 2. kernel cmdline
> 3. boot-loader config
> 4. exact error message on screen
>
>
> --
> Regards,
> Kai
>
> Replies to list-only preferred.
>
>

1. partition layout
/dev/sda1 vfat boot
/dev/sda3 xfs   root
/dev/sda2 swap

> 2. kernel cmdline
None

> 3. boot-loader config
Grub, without any different config.

> 4. exact error message on screen
The kernel boots up, starts to load drivers and then stops, asking for the root partition.

[-- Attachment #2: Type: text/html, Size: 2719 bytes --]

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [gentoo-user] Re: Kernel did not finding root partition
  2017-05-29 18:07   ` Raphael MD
@ 2017-05-29 18:16     ` Neil Bothwick
  2017-05-29 19:42       ` Kai Krakow
  0 siblings, 1 reply; 45+ messages in thread
From: Neil Bothwick @ 2017-05-29 18:16 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 1109 bytes --]

On Mon, 29 May 2017 15:07:48 -0300, Raphael MD wrote:

> > > PS.: Appear to be very simple configure UEFI, because I'm using
> > > Refind and it was working with Funtoo, and I realized this problem
> > > is with gentoo kernel's config, but I do not know where I need to
> > > config.

> > 1. partition layout
> > 2. kernel cmdline
> > 3. boot-loader config
> > 4. exact error message on screen

> 1. partition layout
> /dev/sda1 vfat boot
> /dev/sda3 xfs   root
> /dev/sda2 swap

That looks OK.

> > 2. kernel cmdline  
> None

Are you letting rEFInd auto-detect it? Maybe you need to configure it
manually with a root= setting.
 
> > 3. boot-loader config  
> Grub, without any different config.

You said you were using rEFInd, so why have you got GRUB as well? rEFInd
can work without a config; GRUB cannot.

> > 4. exact error message on screen  
> Kernel boot up, start to load drivers and stop asking for root
> partition.

That's a summary, not an exact message. As such it gives no useful
information.


-- 
Neil Bothwick

Life's a cache, and then you flush...

[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 45+ messages in thread

* [gentoo-user] Re: Kernel did not finding root partition
  2017-05-29 18:16     ` Neil Bothwick
@ 2017-05-29 19:42       ` Kai Krakow
  2017-05-30  8:26         ` Peter Humphrey
  0 siblings, 1 reply; 45+ messages in thread
From: Kai Krakow @ 2017-05-29 19:42 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 2069 bytes --]

Am Mon, 29 May 2017 19:16:11 +0100
schrieb Neil Bothwick <neil@digimed.co.uk>:

> On Mon, 29 May 2017 15:07:48 -0300, Raphael MD wrote:
> 
>  [...]  
> 
> > > 1. partition layout
> > > 2. kernel cmdline
> > > 3. boot-loader config
> > > 4. exact error message on screen  
> 
> > 1. partition layout
> > /dev/sda1 vfat boot
> > /dev/sda3 xfs   root
> > /dev/sda2 swap  
> 
> That looks OK.

Yes, but I am missing some info:

Is sda1 marked as ESP?

Also, you should mark sda3 as root partition through gptfdisk.

That way, any modern EFI boot loader should be able to auto-configure
everything.
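
With sgdisk from gptfdisk that is roughly (partition numbers taken from
the layout you posted, so double-check them against your disk):

sgdisk --typecode=1:ef00 /dev/sda   # EFI System Partition
sgdisk --typecode=3:8304 /dev/sda   # Linux x86-64 root (/)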

> > > 2. kernel cmdline    
> > None  
> 
> Are you letting rEFInd auto-detect it? Maybe you need to configure it
> manually with a root= setting.

I think you need a working initrd for auto-detection to work. At least,
systemd is able to assemble the partitions from GPT partition type
settings and can autodetect boot, swap and rootfs.

Otherwise, you should give at least a root= cmdline.
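
With rEFInd the usual place for that is a refind_linux.conf next to the
kernel on the ESP, e.g. (a sketch; the device name is an assumption):

"Boot with standard options"  "root=/dev/sda3 rootfstype=xfs rw"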

> > > 3. boot-loader config    
> > Grub, without any different config.  
> 
> You said you were using rEFInd, why have you got GRUB as well. rEFInd
> can work without a config, GRUB cannot.

This puzzles me, too... Maybe rEFInd was installed to sda and grub
installed to sda1, so rEFInd would chain-boot through grub.

Grub, however, won't work without a config file. I'd also suggest
skipping grub completely and using just one loader.

> > > 4. exact error message on screen    
> > Kernel boot up, start to load drivers and stop asking for root
> > partition.  
> 
> That's a summary, not an exact message. As such it gives no useful
> information.

Yes, this is not helpful. How could one expect us to be helpful if
she/he refuses to give details? Nobody is asking you to copy the screen
contents by hand. For me, a useful screen shot taken with a mobile
phone camera would be a first step.

I think there are even services which can OCR such a screen shot...


-- 
Regards,
Kai

Replies to list-only preferred.

[-- Attachment #2: Digitale Signatur von OpenPGP --]
[-- Type: application/pgp-signature, Size: 195 bytes --]

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [gentoo-user] Re: Kernel did not finding root partition
  2017-05-29 19:42       ` Kai Krakow
@ 2017-05-30  8:26         ` Peter Humphrey
  2017-05-30 17:08           ` Raphael MD
  2017-05-30 19:28           ` Kai Krakow
  0 siblings, 2 replies; 45+ messages in thread
From: Peter Humphrey @ 2017-05-30  8:26 UTC (permalink / raw
  To: gentoo-user

On Monday 29 May 2017 21:42:28 Kai Krakow wrote:
> Am Mon, 29 May 2017 19:16:11 +0100
> 
> schrieb Neil Bothwick <neil@digimed.co.uk>:
> > On Mon, 29 May 2017 15:07:48 -0300, Raphael MD wrote:
[...]
> > > 3. boot-loader config
> > > 
> > > Grub, without any different config.
> > 
> > You said you were using rEFInd, why have you got GRUB as well. rEFInd
> > can work without a config, GRUB cannot.
> 
> This puzzles me, too... Maybe rEFInd was installed to sda and grub
> installed to sda1, so rEFInd would chain-boot through grub.
> 
> Grub, however, won't work without a config file. I'd also suggest to
> skip grub completely and use just one loader.

Not only that, but for some reason I couldn't get grub to work at all on my 
Asus UEFI system. I use systemd-boot only, with a separate config file for 
each kernel I might want to boot. (I do not have the rest of systemd in this 
openrc system; just its boot program.)
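
A single entry looks something like this (just a sketch; the paths and
options here are examples, not my exact files):

# cat /boot/loader/entries/gentoo.conf
title   Gentoo
linux   /EFI/Boot/bootX64.efi
options root=/dev/sda3 rootfstype=xfs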

It might not help the OP but this is my script for compiling a kernel:

# cat /usr/local/bin/kmake 
#!/bin/bash 
mount /boot 
cd /usr/src/linux 
time (make -j12 && make modules_install && make install &&\ 
	/bin/ls -lh --color=auto /boot &&\ 
	echo &&\ 
	cp -v ./arch/x86/boot/bzImage /boot/EFI/Boot/bootX64.efi
) &&\ 
echo; echo "Rebuilding modules..."; echo &&\ 
emerge --jobs --load-average=48 @module-rebuild @x11-module-rebuild

He may be missing the copying step; that would explain his inability either 
to boot or to supply the info you asked him for.

-- 
Regards
Peter



^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [gentoo-user] Re: Kernel did not finding root partition
  2017-05-30  8:26         ` Peter Humphrey
@ 2017-05-30 17:08           ` Raphael MD
  2017-05-30 18:05             ` Mick
  2017-05-30 19:28           ` Kai Krakow
  1 sibling, 1 reply; 45+ messages in thread
From: Raphael MD @ 2017-05-30 17:08 UTC (permalink / raw
  To: Gentoo User

[-- Attachment #1: Type: text/plain, Size: 1187 bytes --]

Thank you all for the help so far.

I haven't solved my problem yet, but I've realised some troubles and mistakes
that I've been making.

First, I'll break down the problematic situations I've run into:

1. I was using Genkernel to configure and build the kernel, but Genkernel's
menuconfig doesn't work like make menuconfig. Genkernel replaces my .config
every time I run genkernel --menuconfig all; because of this, my .config lost
the built-in XFS support, since the default Genkernel .config sets XFS as a
module.

2. I'm using rEFInd, installed from Windows 10, because I'll need dual boot.
Now I understand that rEFInd can substitute for GRUB, but I've read a lot of
wikis and it became a little bit confusing. Based on the wikis I configured my
kernel with the EFI stub, thinking that it was necessary in order to boot with
GRUB under UEFI.

3. GRUB has booted my kernel, but this over-configured EFI-stub kernel may
have complicated the situation with GRUB. (I've only supposed that.)

Given that, I'll configure the kernel to use either GRUB or rEFInd.

I'm leaning towards rEFInd, but I struggled to create an initramfs with
genkernel once; in fact, I do not like genkernel at all.

[-- Attachment #2: Type: text/html, Size: 1323 bytes --]

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [gentoo-user] Re: Kernel did not finding root partition
  2017-05-30 17:08           ` Raphael MD
@ 2017-05-30 18:05             ` Mick
  0 siblings, 0 replies; 45+ messages in thread
From: Mick @ 2017-05-30 18:05 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 2337 bytes --]

On Tuesday 30 May 2017 14:08:08 Raphael MD wrote:
> Thank you all, for the help until now.
> 
> I didn't solve my problem yet, but I realised some troubles and mistakes
> that I've being made.
> 
> First I'll divide those problematic situations I've suffered:
> 
> 1.I was using Genkernel to configure and build the kernel, but Genkernel’s
> menuconfig doesn’t work like make menuconfig. Genkernel replace my .config
> everytime I run genkernel –menuconfig all, with this, I my .config has lost
> XFS build-in, because default Genkernel .config has setted XFS as a module.

I don't use genkernel myself, but in capable hands it can be a quick process
compared to manual kernel configuration and installation.  You probably have
not read this guide, which should help you build what you need:

 https://wiki.gentoo.org/wiki/Genkernel
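
For example, something like this should keep your config across runs (a
sketch; check the guide and the man page for the exact options in your
genkernel version):

 genkernel --menuconfig --save-config all

The saved config then lands under /etc/kernels/, and later runs can pick
it up again (or you can point --kernel-config at it) instead of starting
from the default.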


> 2. I’m using rEFInd, installed from Windows 10, I’ll need dual boot. Now I
> understand that rEFInd can substitute GRUB, but I’ve read a lots of wikis,
> and it became a little bit confusing. Based on wikis I’ve configured my
> kernel with EFI stub thinking that is necessary to boot with GRUB only
> because UEFI.

rEFInd is a fine tool, if you need to multiboot.  I don't know what wikis you 
may have read, but here you go:

 https://wiki.gentoo.org/wiki/Refind


> 3. GRUB has booted my kernel, but this EFI Stub’s over-configured kernel,
> maybe has complicated the situation with GRUB. (I've only supposed that).
> 
> Expose that, or I configure kernel to use GRUB or rEFInd.

You can use either rEFInd or GRUB.  It does not make sense to use both, unless 
you enjoy slowing your boot process by chainloading one boot manager after 
another.

If for some reason you want to use GRUB, then have a read here:

 https://wiki.gentoo.org/wiki/GRUB2


> I’m leaned to use on rEFInd, but I suffered to create initramfs with
> gernkernel once, in fact I do not like genkernel at all.

Genkernel will create an initramfs for you and some people use genkernel 
mainly for this reason alone.  Have a read here to get some grounding on 
initramfs:

 https://wiki.gentoo.org/wiki/Initramfs/Guide

and here to use dracut as an alternative initramfs builder application:

 https://wiki.gentoo.org/wiki/Dracut
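
By way of example, building a host-only image for the running kernel is a
one-liner (a sketch; adjust the output path to your layout):

 dracut --hostonly /boot/initramfs-$(uname -r).img $(uname -r)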

HTH.
-- 
Regards,
Mick

[-- Attachment #2: This is a digitally signed message part. --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 45+ messages in thread

* [gentoo-user] Re: Kernel did not finding root partition
  2017-05-30  8:26         ` Peter Humphrey
  2017-05-30 17:08           ` Raphael MD
@ 2017-05-30 19:28           ` Kai Krakow
  1 sibling, 0 replies; 45+ messages in thread
From: Kai Krakow @ 2017-05-30 19:28 UTC (permalink / raw
  To: gentoo-user

Am Tue, 30 May 2017 09:26:03 +0100
schrieb Peter Humphrey <peter@prh.myzen.co.uk>:

> On Monday 29 May 2017 21:42:28 Kai Krakow wrote:
> > Am Mon, 29 May 2017 19:16:11 +0100
> > 
> > schrieb Neil Bothwick <neil@digimed.co.uk>:  
> > > On Mon, 29 May 2017 15:07:48 -0300, Raphael MD wrote:  
> [...]
>  [...]  
> > > 
> > > You said you were using rEFInd, why have you got GRUB as well.
> > > rEFInd can work without a config, GRUB cannot.  
> > 
> > This puzzles me, too... Maybe rEFInd was installed to sda and grub
> > installed to sda1, so rEFInd would chain-boot through grub.
> > 
> > Grub, however, won't work without a config file. I'd also suggest to
> > skip grub completely and use just one loader.  
> 
> Not only that, but for some reason I couldn't get grub to work at all
> on my Asus UEFI system. I use systemd-boot only, with a separate
> config file for each kernel I might want to boot. (I do not have the
> rest of systemd in this openrc system; just its boot program.)
> 
> It might not help the OP but this is my script for compiling a kernel:
> 
> # cat /usr/local/bin/kmake 
> #!/bin/bash 
> mount /boot 
> cd /usr/src/linux 
> time (make -j12 && make modules_install && make install &&\ 
> 	/bin/ls -lh --color=auto /boot &&\ 
> 	echo &&\ 
> 	cp -v ./arch/x86/boot/bzImage /boot/EFI/Boot/bootX64.efi
> ) &&\ 
> echo; echo "Rebuilding modules..."; echo &&\ 
> emerge --jobs --load-average=48 @module-rebuild @x11-module-rebuild
> 
> He may be missing the copying step; that would explain his inability
> either to boot or to supply the info you asked him for.

I hooked into the install hook infrastructure of the kernel instead:

$ cat /etc/kernel/postinst.d/70_rebuild-modules
#!/bin/bash
exec env -i PATH=$PATH /usr/bin/emerge -1v --usepkg=n @module-rebuild

$ cat /etc/kernel/postinst.d/90_systemd
#!/bin/bash
/usr/bin/kernel-install remove $1 $2
/usr/bin/kernel-install add $1 $2

This takes care of everything and the kernel-install script from
systemd also rebuilds the dracut initrd (because it installed hooks
to /usr/lib/kernel/install.d).

eclean-kernel can then be used to properly clean up obsolete kernel
versions. I'm running it through cron to keep only the most recent 5
kernels at weekly intervals.
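
The cron entry is nothing exotic, roughly (assuming eclean-kernel's -n
option for the number of kernels to keep):

@weekly eclean-kernel -n 5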

For the hooks to properly execute at the right time, it is important to
give the "make install" target last:

$ cd /usr/src/linux
$ make oldconfig
# make -j9 
# make modules_install firmware_install install

The "install" target triggers the hooks, so modules have to be already
installed at that time.

Additionally I have a script to rebuild dracut easily on demand (e.g.,
when early boot components were updated or changed):

$ cat /usr/local/sbin/rebuild-dracut.sh
#!/bin/bash
set -e
if [ "$1" == "-a" ]; then
        versions=$(cd /boot && ls vmlinuz-* | fgrep -v .old | sed 's/vmlinuz-//')
else
        versions="$@"
fi
versions=${versions:=$(uname -r)}
for hook in $(ls /etc/kernel/postinst.d/*_{dracut,grub,systemd} 2>/dev/null); do
        for version in $versions; do
                ${hook} ${version%.old} /boot/vmlinuz-${version}
        done
done


-- 
Regards,
Kai

Replies to list-only preferred.



^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [gentoo-user] tmp on tmpfs
  2017-05-28 10:07       ` Mick
@ 2017-05-31  0:36         ` Kent Fredric
  2017-05-31  7:33           ` Mick
  0 siblings, 1 reply; 45+ messages in thread
From: Kent Fredric @ 2017-05-31  0:36 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 5589 bytes --]

On Sun, 28 May 2017 11:07:03 +0100
Mick <michaelkintzios@gmail.com> wrote:

> Did you also have zbud enabled at the time?

Historical kernel configs say yes:

xzcat /root/kernels/04.04.26-gentoo/2016-11-30-23-33-29_success.xz  | grep -E "Z(SWAP|BUD)"
CONFIG_ZSWAP=y
CONFIG_ZBUD=y

Though I should mention that box has other issues on top of this which could
be exacerbated by zswap, and which are only occasionally a problem without it.

But it used to be that all of these would trigger kernel panics.

[1262560.644640] irq/30-eth0: page allocation failure: order:0, mode:0x2080020(GFP_ATOMIC)
[1262560.644750] irq/30-eth0: page allocation failure: order:0, mode:0x2080020(GFP_ATOMIC)
[1262560.644860] irq/30-eth0: page allocation failure: order:0, mode:0x2080020(GFP_ATOMIC)
[1262560.644970] irq/30-eth0: page allocation failure: order:0, mode:0x2080020(GFP_ATOMIC)
[1262566.614082] irq/30-eth0: page allocation failure: order:0, mode:0x2080020(GFP_ATOMIC)
[1262566.614213] irq/30-eth0: page allocation failure: order:0, mode:0x2080020(GFP_ATOMIC)
[1262566.614321] irq/30-eth0: page allocation failure: order:0, mode:0x2080020(GFP_ATOMIC)
[1262566.656214] irq/30-eth0: page allocation failure: order:0, mode:0x2080020(GFP_ATOMIC)
[1262566.656329] irq/30-eth0: page allocation failure: order:0, mode:0x2080020(GFP_ATOMIC)
[1262566.656440] irq/30-eth0: page allocation failure: order:0, mode:0x2080020(GFP_ATOMIC)
[1262566.656550] irq/30-eth0: page allocation failure: order:0, mode:0x2080020(GFP_ATOMIC)
[1262566.656660] irq/30-eth0: page allocation failure: order:0, mode:0x2080020(GFP_ATOMIC)
[1262566.656770] irq/30-eth0: page allocation failure: order:0, mode:0x2080020(GFP_ATOMIC)
[1262566.670106] irq/30-eth0: page allocation failure: order:0, mode:0x2080020(GFP_ATOMIC)
[1349283.357400] ksoftirqd/0: page allocation failure: order:0, mode:0x2280020(GFP_ATOMIC|__GFP_NOTRACK)
[1606358.941209] irq/30-eth0: page allocation failure: order:0, mode:0x2080020(GFP_ATOMIC)
[1606358.941565] irq/30-eth0: page allocation failure: order:0, mode:0x2080020(GFP_ATOMIC)
[1606358.941680] irq/30-eth0: page allocation failure: order:0, mode:0x2080020(GFP_ATOMIC)
[1606358.941789] irq/30-eth0: page allocation failure: order:0, mode:0x2080020(GFP_ATOMIC)
[1606358.941896] irq/30-eth0: page allocation failure: order:0, mode:0x2080020(GFP_ATOMIC)
[1606358.942013] irq/30-eth0: page allocation failure: order:0, mode:0x2080020(GFP_ATOMIC)
[1606358.942120] irq/30-eth0: page allocation failure: order:0, mode:0x2080020(GFP_ATOMIC)
[1606358.942226] irq/30-eth0: page allocation failure: order:0, mode:0x2080020(GFP_ATOMIC)
[1606358.942331] irq/30-eth0: page allocation failure: order:0, mode:0x2080020(GFP_ATOMIC)
[1606358.942469] irq/30-eth0: page allocation failure: order:0, mode:0x2080020(GFP_ATOMIC)
[1607776.644830] irq/30-eth0: page allocation failure: order:0, mode:0x2280020(GFP_ATOMIC|__GFP_NOTRACK)
[1607776.687657] irq/30-eth0: page allocation failure: order:0, mode:0x2280020(GFP_ATOMIC|__GFP_NOTRACK)
[1612837.743021] irq/30-eth0: page allocation failure: order:0, mode:0x2280020(GFP_ATOMIC|__GFP_NOTRACK)
[1658262.328936] irq/30-eth0: page allocation failure: order:0, mode:0x2280020(GFP_ATOMIC|__GFP_NOTRACK)
[1666011.039154] irq/30-eth0: page allocation failure: order:0, mode:0x2280020(GFP_ATOMIC|__GFP_NOTRACK)
[1668636.093637] irq/30-eth0: page allocation failure: order:0, mode:0x2280020(GFP_ATOMIC|__GFP_NOTRACK)
[1669722.355688] irq/30-eth0: page allocation failure: order:0, mode:0x2280020(GFP_ATOMIC|__GFP_NOTRACK)
[1680913.653645] irq/30-eth0: page allocation failure: order:0, mode:0x2280020(GFP_ATOMIC|__GFP_NOTRACK)
[1680919.640022] irq/30-eth0: page allocation failure: order:0, mode:0x2280020(GFP_ATOMIC|__GFP_NOTRACK)
[1680962.743563] irq/30-eth0: page allocation failure: order:0, mode:0x2280020(GFP_ATOMIC|__GFP_NOTRACK)
[1680962.755535] irq/30-eth0: page allocation failure: order:0, mode:0x2280020(GFP_ATOMIC|__GFP_NOTRACK)
[1681008.201625] irq/30-eth0: page allocation failure: order:0, mode:0x2280020(GFP_ATOMIC|__GFP_NOTRACK)
[1681008.513501] irq/30-eth0: page allocation failure: order:0, mode:0x2280020(GFP_ATOMIC|__GFP_NOTRACK)
[1690596.427305] irq/30-eth0: page allocation failure: order:0, mode:0x2280020(GFP_ATOMIC|__GFP_NOTRACK)
[1690596.427499] irq/30-eth0: page allocation failure: order:0, mode:0x2280020(GFP_ATOMIC|__GFP_NOTRACK)
[1690596.435733] irq/30-eth0: page allocation failure: order:0, mode:0x2280020(GFP_ATOMIC|__GFP_NOTRACK)
[1690851.884134] irq/30-eth0: page allocation failure: order:0, mode:0x2280020(GFP_ATOMIC|__GFP_NOTRACK)
[1691003.944968] irq/30-eth0: page allocation failure: order:0, mode:0x2280020(GFP_ATOMIC|__GFP_NOTRACK)
[1691037.167644] irq/30-eth0: page allocation failure: order:0, mode:0x2280020(GFP_ATOMIC|__GFP_NOTRACK)
[1691037.173233] irq/30-eth0: page allocation failure: order:0, mode:0x2280020(GFP_ATOMIC|__GFP_NOTRACK)
[1691386.668001] irq/30-eth0: page allocation failure: order:0, mode:0x2280020(GFP_ATOMIC|__GFP_NOTRACK)
[1691386.668170] irq/30-eth0: page allocation failure: order:0, mode:0x2280020(GFP_ATOMIC|__GFP_NOTRACK)
[1691472.820944] irq/30-eth0: page allocation failure: order:0, mode:0x2280020(GFP_ATOMIC|__GFP_NOTRACK)
[1691929.615462] irq/30-eth0: page allocation failure: order:0, mode:0x2280020(GFP_ATOMIC|__GFP_NOTRACK)
[1692993.908335] irq/30-eth0: page allocation failure: order:0, mode:0x2280020(GFP_ATOMIC|__GFP_NOTRACK)
[1693396.399589] irq/30-eth0: page allocation failure: order:0, mode:0x2280020(GFP_ATOMIC|__GFP_NOTRACK)

[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [gentoo-user] tmp on tmpfs
  2017-05-31  0:36         ` Kent Fredric
@ 2017-05-31  7:33           ` Mick
  0 siblings, 0 replies; 45+ messages in thread
From: Mick @ 2017-05-31  7:33 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 1128 bytes --]

On Wednesday 31 May 2017 12:36:37 Kent Fredric wrote:
> On Sun, 28 May 2017 11:07:03 +0100
> 
> Mick <michaelkintzios@gmail.com> wrote:
> > Did you also have zbud enabled at the time?
> 
> Historical kernel configs say yes:
> 
> xzcat /root/kernels/04.04.26-gentoo/2016-11-30-23-33-29_success.xz  | grep
> -E "Z(SWAP|BUD)" CONFIG_ZSWAP=y
> CONFIG_ZBUD=y
> 
> Though I should mention there are other issues with that box on top of this
> that could be exacerbated by this, which are only occasionally a problem
> without this.
> 
> But it used to be all these would trigger kernel panics.
> 
> [1262560.644640] irq/30-eth0: page allocation failure: order:0,
> mode:0x2080020(GFP_ATOMIC) [1262560.644750] irq/30-eth0: page allocation
> failure: order:0, mode:0x2080020(GFP_ATOMIC) [1262560.644860] irq/30-eth0:
> page allocation failure: order:0, mode:0x2080020(GFP_ATOMIC)
[snip ...]

Fair enough.  I didn't have any such problems here, but I noticed desktop
latency going through the roof when paging started taking place.  I have
disabled zswap and will see if things improve.
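
(For what it's worth, it can also be switched off at runtime without a
reboot, assuming it was built in:

echo 0 > /sys/module/zswap/parameters/enabled

New swap-outs then go straight to disk again.)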

-- 
Regards,
Mick

[-- Attachment #2: This is a digitally signed message part. --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 45+ messages in thread

end of thread, other threads:[~2017-05-31  7:34 UTC | newest]

Thread overview: 45+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-05-29 11:09 [gentoo-user] Kernel did not finding root partition Raphael MD
2017-05-29 11:19 ` Rasmus Thomsen
2017-05-29 17:48 ` [gentoo-user] " Kai Krakow
2017-05-29 18:07   ` Raphael MD
2017-05-29 18:16     ` Neil Bothwick
2017-05-29 19:42       ` Kai Krakow
2017-05-30  8:26         ` Peter Humphrey
2017-05-30 17:08           ` Raphael MD
2017-05-30 18:05             ` Mick
2017-05-30 19:28           ` Kai Krakow
  -- strict thread matches above, loose matches on Subject: below --
2017-05-24  5:16 [gentoo-user] tmp on tmpfs Ian Zimmerman
2017-05-24  5:34 ` gentoo-user
2017-05-24  6:00   ` [gentoo-user] " Kai Krakow
2017-05-24 17:05     ` Kai Krakow
2017-05-25 18:46       ` [gentoo-user] Puzzled by zswap [Was: tmp on tmpfs] Ian Zimmerman
2017-05-25 19:16         ` [gentoo-user] " Martin Vaeth
2017-05-26  6:00         ` Kai Krakow
2017-05-24 18:34     ` [gentoo-user] Re: tmp on tmpfs Ian Zimmerman
2017-05-24 19:30       ` Rich Freeman
2017-05-24 21:16         ` Andrew Savchenko
2017-05-24 22:40           ` Rich Freeman
2017-05-25  6:34             ` J. Roeleveld
2017-05-25 11:04               ` Kai Krakow
2017-05-25 12:23                 ` Rich Freeman
2017-05-25 14:16                 ` J. Roeleveld
2017-05-25 16:06                   ` Rich Freeman
2017-05-25 16:28                     ` J. Roeleveld
2017-05-25 16:41                       ` Rich Freeman
2017-05-25  3:38         ` Kai Krakow
2017-05-25  7:19           ` J. Roeleveld
2017-05-25  3:36       ` Kai Krakow
2017-05-24 17:00   ` [gentoo-user] " R0b0t1
2017-05-24  6:03 ` Andrew Tselischev
2017-05-24  9:34 ` Rich Freeman
2017-05-24  9:43   ` gentoo-user
2017-05-24  9:54     ` Rich Freeman
2017-05-24 12:45   ` Andrew Savchenko
2017-05-25  4:45     ` [gentoo-user] " Martin Vaeth
2017-05-25  7:24       ` Mick
2017-05-25 15:46         ` Martin Vaeth
2017-05-25 22:36     ` [gentoo-user] " Kent Fredric
2017-05-28 10:07       ` Mick
2017-05-31  0:36         ` Kent Fredric
2017-05-31  7:33           ` Mick
2017-05-24 18:46 ` [gentoo-user] " Nikos Chantziaras

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox