From: Duncan <1i5t5.duncan@cox.net>
To: gentoo-amd64@lists.gentoo.org
Subject: [gentoo-amd64] Re: tmpfs help
Date: Wed, 13 Feb 2008 21:30:30 +0000 (UTC)
Message-ID: <pan.2008.02.13.21.30.30@cox.net>
In-Reply-To: <d257c3560802131127g51bccaf6s920e73d1f2416766@mail.gmail.com>
Beso <givemesugarr@gmail.com> posted
d257c3560802131127g51bccaf6s920e73d1f2416766@mail.gmail.com, excerpted
below, on Wed, 13 Feb 2008 19:27:44 +0000:
> i'll try out duncan's speedups for shm and but for the dev one i don't
> use baselayout 2 and i'm still with the 1st version, since i don't feel
> like upgrading to it yet. but i'd like to know some more about what are
> the improvements of the second version.
Well, among other things it's possible to /really/ do parallel startup
now. The option was there before, but it didn't parallelize that much.
Of course, if any of your startup scripts have dependencies that aren't
quite straight, the problem may be hidden now, but it will very likely
show itself once the system actually does try things in parallel. (For
those who prefer traditional serial startup, that remains the safer
default.)
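The parallel switch itself lives in the baselayout config; with
baselayout 1 it was RC_PARALLEL_STARTUP in /etc/conf.d/rc, and
baselayout 2 has an equivalent in /etc/rc.conf. From memory, so check
the comments in the file itself since the exact name has moved around
between versions, it's something like:

  # /etc/rc.conf (baselayout 2): start services in parallel
  RC_PARALLEL="yes"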
Startup's also much faster as certain parts are written in C now, instead
of as scripts. Of course, the various service scripts remain just that,
scripts.
With baselayout-1, the very early core scripts, clock, modules, lvm,
raid, etc, were actually ordered based on a list rather than on their
normal Gentoo dependencies (before, after, uses, needs, etc). That was
because the dependency resolver didn't work quite right that early on.
That has now been fixed, and all scripts, including the early stuff like
clock, start in the order the dependencies indicate, not based on an
arbitrary list.
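For reference, those dependencies are what each service declares in its
initscript's depend() function, along these lines (a generic sketch, not
any particular script):

  depend() {
      need localmount   # hard dependency: must be started first
      use logger        # soft: order after it, if it's in the runlevel
      after clock       # pure ordering, no dependency
  }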
Various settings are in more logical/traditional locations with
baselayout 2. An example is the /dev filesystem, mounted if you have udev
active. Previously, its configuration lived in one or more of the
baselayout config files (probably /etc/conf.d/rc, but that was quite a
while ago here, and I've forgotten the details so can't be sure). Now,
the setting in /etc/fstab for that filesystem is honored, as one might
ordinarily expect for any filesystem.
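So with baselayout 2 you control it like any other mount, with an
/etc/fstab line along these lines (the size option here is purely
illustrative; udev supplies sane defaults if you omit it):

  none  /dev  tmpfs  size=10M,mode=0755  0 0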
Former addons like lvm and raid now have their own initscripts, just as
any other boot service.
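That also means they're managed like any other service, via rc-update;
for example (the exact script names may differ slightly depending on
your baselayout-2 version):

  rc-update add lvm boot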
> 5. as for the raid stuff i cannot do it since i've only got one disk.
> i'll try to see what happens with swap set to 100.
For a single disk, it's possible you'll actually want it set the other
way, down toward 0 from the default of 60, especially if you're annoyed
at the time it takes to swap programs back in after big compiles or a
night's system scan for slocate (updatedb, I believe it's called; I
don't have slocate installed here, so I don't run the database
updater/scanner). On a system
such as mine with swap roughly twice as fast as the main filesystems,
however, keeping cache and swapping out apps makes much more sense, since
it's faster to read back in the apps from swap than it is to reload the
data I'd dump from cache to keep the apps in memory. I actually don't
even notice swap usage unless I happen to look at ksysguard, or unless
it's swapping over a gig, which doesn't happen too often with 8 gigs of
RAM. Still, it's quite common to have a quarter to three quarters of a
gig of swapped-out apps, since I've set swappiness to 100, thereby
telling the kernel to keep cache if at all possible. I also routinely do
-j12 compiles, often several at a time, so several gigs of tmpfs plus
several gigs of gcc instances in memory, forcing minor swapping even
with 8 gigs of RAM, isn't unusual. (Of course, I use emerge --pretend
to ensure the packages I'm emerging in parallel don't conflict with or
depend on each other, so the merges remain parallel.)
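For anyone wanting to experiment, swappiness is runtime-tunable, so you
can try a value live and only make it permanent once you're happy:

  echo 100 > /proc/sys/vm/swappiness   # takes effect immediately
  # and in /etc/sysctl.conf, to make it stick across reboots:
  vm.swappiness = 100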
> 6. if i use some
> other programs while compiling into tmpfs bigones i need to nice the
> process or i'll get some misbehaviour from the other programs.
Here's another hint. Consider setting PORTAGE_NICENESS=19 in make.conf.
Not only does it stop the hogging; the system then considers the merge
batch priority and gives it longer timeslices. Thus, counter-intuitively for
lowering the priority, a merge can actually take less real clock time,
because the CPU is spending less time shuffling processes around and more
time actually doing work. Of course, if you run folding@home or the
like, you'll either want to turn that off while merging, or set portage
to 18 or lower instead of 19, so it's not competing directly with your
idle-time client.
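It's a one-liner in /etc/make.conf:

  # nice emerge and everything it spawns down to the lowest priority
  PORTAGE_NICENESS="19"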
With kernel 2.6.24, there's also a new scheduling option that groups by
user. By default, root gets 2048 while other users get 1024, half the
share of root. If you have portage set to compile with userprivs, it'll
be using the portage user, with its own 1024 default setting, but if it's
using root privs, it'll be getting root's default 2048 share.
Particularly if you use a -j setting >2 jobs per cpu/core, you may wish
to tweak this some. I use FEATURES=userpriv and userfetch, so most of
the work gets done as the portage user (although the actual qmerge step
is obviously done as root), and MAKEOPTS="-j20 -l12" (twenty jobs, but don't
start new ones if the load is above 12) and my load average runs pretty
close, actually about 12.5-13 when emerging stuff. With four cores (two
CPUs times two cores each), that's a load average of just over three jobs
per core. As mentioned, I have PORTAGE_NICENESS=19. Given all that, I
find increasing my regular user to 2048 works about the best, keeping X
and my various scrollers updating smoothly, amarok playing smoothly (it
normally does) and updating its voiceprint almost smoothly (it doesn't,
if I don't adjust the user share to 2048), etc.
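For reference, the make.conf lines behind all that (the -j and -l
numbers fit my four cores and 8 gigs of RAM; scale them to your own
hardware):

  FEATURES="userpriv userfetch"
  MAKEOPTS="-j20 -l12"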
This setting can be found under /sys/kernel/uids/<uid>/cpu_share. Read
it to see what a user's share is; write to it to set a new share for
that user. (/etc/passwd lists the user/uid correspondence.)
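For example, assuming your user is uid 1000 (run as root, since the
files are root-writable):

  cat /sys/kernel/uids/1000/cpu_share           # show the current share
  echo 2048 > /sys/kernel/uids/1000/cpu_share   # bump it to root's level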
In order to have these files and this feature, you'll need to have Fair
group CPU scheduling (FAIR_GROUP_SCHED), under General setup, and then
its suboption, Basis for grouping tasks, set to user id
(FAIR_USER_SCHED). Unfortunately, the files appear as a user logs in (or
as a daemon user is invoked by the system) and disappear at logout, so
it's not as simple as setting an initscript to set these up at boot and
forgetting about it. There's a sample script available that's supposed
to automate setting and resetting these shares based on kernel events,
but I couldn't get it to work last time I tried, which was during the
release candidates, so it's possible it wasn't all working just right
yet. However, altering the share files manually works, and isn't too bad
for simple changes, as long as you don't log in and out a lot, since the
file, and any changes made to it, vanish whenever nothing is running as
that user any longer.
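In .config terms, what you're after is (as of 2.6.24; these options have
a habit of being reshuffled in later kernels):

  CONFIG_FAIR_GROUP_SCHED=y
  CONFIG_FAIR_USER_SCHED=y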
Another setting that may be useful is the kernel's I/O scheduler. That's
configured under Block layer, I/O Schedulers. CFQ (IOSCHED_CFQ) is what
I choose. Among other things, it automatically prioritizes I/O requests
based on CPU job priority (although less granularly, as there's only a
handful of I/O priority levels compared to the 40 CPU priority levels).
You can tweak priorities further if desired, but this seems a great
default, and it's something deadline definitely doesn't have; I don't
believe anticipatory has it either. With memory as tight as a gig, having
the ability to batch-priority schedule i/o along with CPU is a very good
thing, particularly when PORTAGE_NICENESS is set. Due to the nature of
disk I/O, it's not going to stop the worst thrashing entirely, but it
should significantly lessen its impact, and at less than worst case, it
will likely make the system significantly more workable than it might be
otherwise under equivalent load.
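The active scheduler can also be checked and switched per-disk at
runtime, and individual processes reprioritized with the ionice tool
(substitute your own disk and pid, of course):

  cat /sys/block/sda/queue/scheduler    # the active one shows in [brackets]
  echo cfq > /sys/block/sda/queue/scheduler
  ionice -c3 -p 1234                    # drop pid 1234 to the idle i/o class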
Of course, YMMV, but I'd look at those, anyway. They could further
increase efficiency.
--
Duncan - List replies preferred. No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master." Richard Stallman
--
gentoo-amd64@lists.gentoo.org mailing list