public inbox for gentoo-amd64@lists.gentoo.org
From: Duncan <1i5t5.duncan@cox.net>
To: gentoo-amd64@lists.gentoo.org
Subject: [gentoo-amd64]  Re: tmpfs help
Date: Wed, 13 Feb 2008 11:24:37 +0000 (UTC)
Message-ID: <pan.2008.02.13.11.24.37@cox.net>
In-Reply-To: <47B17CA6.9000506@corobor.com>

Pascal BERTIN <pascal.bertin@corobor.com> posted
47B17CA6.9000506@corobor.com, excerpted below, on  Tue, 12 Feb 2008
12:01:58 +0100:

> Beso wrote:
[Pascal wrote...]
>>     You can give a size for tmpfs in the option here is an extract from
>>     my /etc/fstab :
>> 
>>     tmp   /tmp            tmpfs           size=1000000000 0 0
>> 
>> i'll try that. setting it to about 3/4 of swap is good?! i have 8gb
>> swap and 1 gb ram but ram is always full. after setting tmpfs the ram
>> is full but also the swap fills-up quite well.
> 
> On one of my systems, with 1 GB of RAM and 6 GB of swap, I set it to
> 6 GB (so that I can compile openoffice). Although it's slow (anyway, I
> start the openoffice compilation at the end of the day and check the
> next morning), it works well, and openoffice compiles.

Agreed.

On the question of whether it's worth it: certainly 1 gig of real memory 
is a bit low for compiling in tmpfs, but for the reasons Richard F gave, 
it should, in theory at least, remain faster than compiling into a temp 
location on disk.

As he points out, if it's compiled to disk, it (1) sits in cache and is 
thus in memory anyway, and (2) if it's in cache more than a few seconds, 
it's flushed to disk thus incurring the slowdown of writing to disk.

If it's compiled to tmpfs, then as the kernel needs memory, it writes out 
the least recently accessed pages to swap.  That applies regardless of 
whether those pages belong to apps or to tmpfs data.  (How cache is 
weighed against them depends on the swappiness setting, 
/proc/sys/vm/swappiness, default 60: 0 means drop all cache before 
swapping apps, 100 means swap apps before dropping cache, so 60 leans 
just slightly toward keeping cache and swapping apps.)  Thus, the active 
and most recently active parts of the compiler won't be swapped out while 
there are less recently used tmpfs pages to swap first, and likewise for 
the scratch data on tmpfs.  In theory, that should be the most efficient 
use of memory possible, certainly more so than using a conventional disk 
location for temp data, since that forces it to disk while likely keeping 
it in cache anyway.
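For reference, here's how to check and change that knob; the persistent 
form in the comments is the usual sysctl.conf convention, not anything 
Gentoo-specific:

```shell
# Check the current value (kernel default is 60):
cat /proc/sys/vm/swappiness

# To favor swapping idle app pages over dropping cache, raise it
# (root required; takes effect immediately but is lost on reboot):
#   echo 100 > /proc/sys/vm/swappiness
# or persistently, via /etc/sysctl.conf:
#   vm.swappiness = 100
```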

However, the theory doesn't always match reality, as I'm sure most of us 
realize by now.  Whether it does here or not, I haven't tested, and as 
I have 8 gigs of memory, I don't care to test the 1-gig situation.

Regardless, I think we can all agree that it'll be far more practical 
with 2-4 gigs of memory, and for most, the upgrade from 1 gig to at 
least 2 should be well worth it in terms of value for cost.

Meanwhile, I say let the folks dealing with the problem decide their 
policy, testing it and reporting the results if they feel the desire to, 
or just going with what they think works best for them if not.

Given that some are choosing to try it, regardless of why, the original 
question deserves an answer, as Pascal posted above.  It may, however, be 
worth noting that mount also accepts human-readable sizes, 6g or 
whatever, instead of the long string of digits.  Thus, here's my entry:

/tmp   /tmp      tmpfs  size=6g,nodev,nosuid                      0 0
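Those size= suffixes are binary multiples (k/m/g = 1024, 1024^2, 1024^3), 
so 6g is a bit more than Pascal's size=1000000000 would suggest at a 
glance.  A tiny helper (hypothetical, purely for illustration) makes the 
expansion concrete:

```shell
#!/bin/sh
# Expand mount-style k/m/g size suffixes into bytes (binary multiples),
# the same interpretation tmpfs gives its size= option.
human_to_bytes() {
  case "$1" in
    *[kK]) echo $(( ${1%?} * 1024 )) ;;
    *[mM]) echo $(( ${1%?} * 1024 * 1024 )) ;;
    *[gG]) echo $(( ${1%?} * 1024 * 1024 * 1024 )) ;;
    *)     echo "$1" ;;
  esac
}

human_to_bytes 6g   # 6442450944 -- i.e. 6*1024^3, not 6*10^9
```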

Again, that's with 8 gigs RAM, so I don't have to worry about swapping so 
much, tho even when I do it's to 4-way striped swap, so 4 times as fast 
as a single disk would be, and I have swappiness set to 100, so the 
kernel swaps apps (at 4-way-striped swap, twice as fast as reading them 
in off 4-way RAID-6, thus two-way-striped), keeping cache intact.  That 
works great for me! =8^)  Whether it works better than a disk based /tmp 
and PORTAGE_TMPDIR for those with only a gig of memory is something they 
ultimately must decide, regardless of the arguments pro and con here.

FWIW and for clarity, since there seems to be a bit of confusion between 
tmpfs as used here and the FHS/LSB mandated /dev/shm, I have an entry for 
that as well:

shm    /dev/shm  tmpfs  size=20m,noexec,nodev,nosuid              0 0

Note that it's a separate entry.  More than one tmpfs mount is allowed 
and they are all kept separate.  Also note that I have the max size set 
far smaller for it than for /tmp, since I don't have much that uses it 
and only keep it around in case something wants to.  (I do have 
PORTAGE_TMPFS, which is used for very small files, lock files and the 
like, set to /dev/shm, so portage uses it for that even tho 
PORTAGE_TMPDIR is set to /tmp, for the much bigger stuff.)
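In make.conf terms, that split looks like the following; the values are 
just my setup as described above, adjust to taste:

```shell
# /etc/make.conf (excerpt) -- big build trees go to the /tmp tmpfs:
PORTAGE_TMPDIR="/tmp"
# ...while tiny lock files and the like go to the small /dev/shm tmpfs:
PORTAGE_TMPFS="/dev/shm"
```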

I actually have another tmpfs mount as well, /dev, for udev:

dev    /dev      tmpfs  mode=0755,size=2m,noexec,noauto          0 0

baselayout-1 users won't have that entry in fstab, as it sets it 
differently, but may have the following instead (tho I think this was 
left over from the never stabilized AFAIK baselayout-1.13-alphas):

init.d /lib64/rcscripts/init.d tmpfs  mode=0755,size=512k,noauto 0 0

So I actually have four tmpfs entries in fstab, altho only three are 
active as that last one is left from an earlier time because the scripts 
no longer call for it to be mounted and the general system mount ignores 
it due to the noauto.  All four are entirely separate mounts, separate 
options, separate mountpoints, and separately treated by the system.

Also, a warning I've started noting every time this comes up: those who 
point PORTAGE_TMPDIR at a tmpfs AND use ccache will want to make sure 
CCACHE_DIR points somewhere other than a subdir of PORTAGE_TMPDIR, since 
keeping ccache files around between reboots is rather the point of 
running it, yet the default found in make.conf.example points it at a 
subdir of PORTAGE_TMPDIR.
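A minimal sketch of the safe arrangement; the /var/tmp/ccache path here 
is just an example location on persistent disk, not something from this 
thread:

```shell
# /etc/make.conf (excerpt) -- keep the persistent ccache OFF the tmpfs:
CCACHE_DIR="/var/tmp/ccache"
# ...even though PORTAGE_TMPDIR itself points at the tmpfs:
PORTAGE_TMPDIR="/tmp"
```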

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman

-- 
gentoo-amd64@lists.gentoo.org mailing list



Thread overview: 23+ messages
2008-02-12  9:51 [gentoo-amd64] tmpfs help Beso
2008-02-12 10:21 ` Mateusz Mierzwinski
2008-02-12 10:37   ` Beso
2008-02-12 10:31 ` Pascal BERTIN
2008-02-12 10:47   ` Beso
2008-02-12 11:01     ` Pascal BERTIN
2008-02-12 11:25       ` Beso
2008-02-12 23:36       ` Mateusz Mierzwinski
2008-02-13 11:24       ` Duncan [this message]
2008-02-13 12:46         ` [gentoo-amd64] " Volker Armin Hemmann
2008-02-13 15:26           ` Richard Freeman
2008-02-13 17:22           ` Duncan
2008-02-13 19:27             ` Beso
2008-02-13 21:30               ` Duncan
2008-02-13 22:12               ` Mateusz Mierzwinski
2008-02-13  0:49 ` [gentoo-amd64] " Volker Armin Hemmann
2008-02-13  1:17   ` Richard Freeman
2008-02-13  3:04     ` Volker Armin Hemmann
2008-02-13  6:47       ` Steve Buzonas
2008-02-13 15:16       ` Richard Freeman
2008-02-13 16:17         ` Volker Armin Hemmann
2008-02-13 17:04           ` [gentoo-amd64] " Duncan
2008-02-13 19:42           ` [gentoo-amd64] " Richard Freeman
