From: Duncan <1i5t5.duncan@cox.net>
To: gentoo-amd64@lists.gentoo.org
Subject: [gentoo-amd64] Memory usage Was: [OT] AGPART [SOLVED]
Date: Sun, 6 May 2007 14:36:16 +0000 (UTC)
Message-ID: <pan.2007.05.06.14.36.15@cox.net>
In-Reply-To: 200705050856.32180.bss03@volumehost.net
"Boyd Stephen Smith Jr." <bss03@volumehost.net> posted
200705050856.32180.bss03@volumehost.net, excerpted below, on Sat, 05 May
2007 08:56:27 -0500:
> On Saturday 05 May 2007, Duncan <1i5t5.duncan@cox.net> wrote about
> '[gentoo-amd64] Re: [OT] AGPART [SOLVED]':
>> Actually, here, I have 8 gigs. That's a bit overkill. I'd probably
>> stick with four if I were doing it over, as over four gigs remains
>> entirely empty, most of the time, not even used for cache.
>
> Odd, here I run 4G and it's consistently filled. It's mostly cache and
> buffers, but it is most definitely used. I've even got a few 100Mio
> swapped out.
It's probably just usage patterns. After a while up, I'll have serious
cache, but there are several things that keep it from getting too big
most of the time.
1) I swsusp to disk fairly frequently (every day or two, generally).
That dumps cache, so I start over when I resume. (OTOH, swsusp also
means I too carry some swapped out stuff, generally ~120-200 MB that
never swaps back in between suspends.)
2) I run MAKEOPTS=-j1000. (Why? Mainly just because I can! =8^) Few
merges split into even 100 jobs, but some of them do (it's really fun
watching the one-minute load average jump up and up and up to peak at
500 or so, compiling the kernel! =8^), and it's not entirely unusual for
a single C++ compile job to use a gig or more of memory. Since I also
run parallel merges on occasion, it's not unusual at all for me to see
2-3 gigs of temporary application memory (lasting maybe two minutes,
peaking for just a few seconds) in use by portage jobs, in addition to
the half gig to gig of regular app memory in use, and the possibly
several gigs of tmpfs PORTAGE_TMPDIR in use as scratch space by those
parallel merges (a rough config sketch follows the list). Of course,
that squeezes out regular cache, and I often see memory use, cache
included, drop by four gigs, sometimes more, from peak merge usage to
post-merge.
3) I don't run the indexer for slocate. In fact, I don't even have it
merged. On a lot of systems, that's the big daily cache gobbler right
there. If it's indexing 50 gigs of disk files, pretty moderate by
today's standards, it'd fill 50 gigs of cache memory, if it had that
much to fill. Obviously, anyone who runs that is going to have a full
cache until they do something that grabs the memory and then releases
it, no matter /what/ their memory size (within reason).
4) My actual daily working fileset isn't that large. When I play music,
it's often off the net, not off my disk, so I'm not using disk for that.
I don't have the big movie cache many have. I don't play gigabytes'
worth of games. Etc. I have gigs of files, but I don't tend to touch
them daily, and with swsusp every day or two, and with me running many
of the kernel rcs and sometimes even the daily git snapshots (not to
mention rebooting new kernel builds multiple times a day when I have a
kernel bug open), many times I just don't actually /read/ (or write,
since writes get cached as well) multiple gigs of files between cache
dumps.
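(For anyone curious about the setup behind point 2, the relevant bits
look roughly like this. The -j value is real; the tmpfs mountpoint and
size here are just illustrative, adjust to taste:

  # /etc/make.conf (excerpt)
  MAKEOPTS="-j1000"
  PORTAGE_TMPDIR="/var/tmp"

  # /etc/fstab: put that dir on tmpfs so merges build in RAM
  tmpfs   /var/tmp   tmpfs   size=6g   0 0

With PORTAGE_TMPDIR on tmpfs, every parallel merge's build tree counts
as memory until it's cleaned up, which is exactly why regular cache gets
squeezed out during big merges.)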
So as I said, practically speaking, four gigs of memory would be plenty,
as I'd be a bit more conservative on my merges then, and would figure 2-3
gigs of cache and 1-2 gigs of app memory most of the time.
(Right now, after returning from swsusp a few hours ago and spending
most of my time since in the text groups/lists, I'm running about 200 MB
still swapped out from the suspend, and total memory use (app, buffer,
and cache) of only ~1/2 GB. That's as displayed by ksysguard, with KDE
up, kmail and amarok in the system tray, and pan open to read and reply
to the lists with (via the gmane list2news gateway), all started before
my last swsusp, so only the apps and state I've actually used since then
have been swapped back in. If I closed and reopened pan, so it had to
reread its lists, and ran an emerge --pretend world to recache that
info, I'd probably be back up at a gig to a gig and a half total usage,
cache included.)
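(If you'd rather watch those numbers without ksysguard, the kernel
reports them directly; something along these lines works on any recent
2.6 box:

  # page cache, buffers, and swap figures straight from the kernel
  grep -E '^(MemFree|Buffers|Cached|SwapCached|SwapTotal|SwapFree):' /proc/meminfo
  # or the condensed procps view
  free -m

Nothing exotic there, just the same counters the graphical monitors
read.)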
--
Duncan - List replies preferred. No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master." Richard Stallman
--
gentoo-amd64@gentoo.org mailing list