* [gentoo-user] Idle Process Scheduling
From: Jason Lynch @ 2009-06-13 5:46 UTC
To: gentoo-user
I'm having a strange problem on my Q6600 that cropped up starting with
the 2.6.29 series of the kernel, and is still present in 2.6.30.
Essentially, at all times, I have four nice 19 processes running, which,
for the sake of this post, we'll call "dnetc". All four cores are
utilized. At this point, if I start another CPU-bound process that isn't
niced, it begins to take up an entire core. This is expected. What isn't
expected, however, is that another core begins idling inexplicably. As a
result, despite 5 processes currently being available to run, only 3 are
actually running at any given time (the non-niced process and two
instances of dnetc).
I have no idea where to begin diagnosing this, so if anyone has any
pointers or knows anything, I'd like to hear about it. I've done numerous
searches of mailing lists, bug trackers, etc., but haven't found
anything. Maybe I just can't find the right keywords.
* Re: [gentoo-user] Idle Process Scheduling
From: Sascha Hlusiak @ 2009-06-13 21:39 UTC
To: gentoo-user; +Cc: Jason Lynch
On Saturday 13 June 2009 07:46:34, Jason Lynch wrote:
> I'm having a strange problem on my Q6600 that cropped up starting with
> the 2.6.29 series of the kernel, and is still present in 2.6.30.
>
> Essentially, at all times, I have four nice 19 processes running, which,
> for the sake of this post, we'll call "dnetc". All four cores are
> utilized. At this point, if I start another CPU-bound process that isn't
> niced, it begins to take up an entire core. This is expected. What isn't
> expected, however, is that another core begins idling inexplicably. As a
> result, despite 5 processes currently being available to run, only 3 are
> actually running at any given time (the non-niced process and two
> instances of dnetc).
How do you know how many processes are running? What does 'top' say about
CPU usage and load? Maybe dnetc has two threads, which can each occupy a
core, so you still have 4 running threads across 3 processes. You should
still get a load of 5 or higher.
You don't have a lot of IO load, do you?
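If you want something scriptable instead of eyeballing top, a quick and
dirty way is to read the load average and the per-CPU counters straight
out of /proc. A rough, untested sketch (the /proc/stat numbers are
cumulative jiffies since boot, so take two samples and diff them if you
want a current rate):

# rough sketch: print load averages and per-CPU user/nice/system/idle
import os

print("load averages: %.2f %.2f %.2f" % os.getloadavg())

with open("/proc/stat") as f:
    for line in f:
        # per-CPU lines look like: cpu0 user nice system idle iowait ...
        if line.startswith("cpu") and line[3].isdigit():
            fields = line.split()
            user, nice, system, idle = [int(x) for x in fields[1:5]]
            print("%s: user=%d nice=%d system=%d idle=%d" %
                  (fields[0], user, nice, system, idle))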
* [gentoo-user] Re: Idle Process Scheduling
From: Jason Lynch @ 2009-06-14 6:07 UTC
To: gentoo-user
On Sat, 13 Jun 2009 23:39:52 +0200, Sascha Hlusiak wrote:
> How do you know how many processes are running? What does 'top' say
> about CPU usage and load? Maybe dnetc has two threads, which can each
> occupy a core, so you still have 4 running threads across 3 processes.
> You should still get a load of 5 or higher.
> You don't have a lot of IO load, do you?
Technically, in the scenario I described, I only have two processes, as
dnetc is running with four threads. To simplify the situation, I created
a simple Python script that does nothing other than loop indefinitely. I
then start four separate nice 19 copies of it in four separate terminals.
At this point, top reports that each CPU is almost entirely executing
niced code. Load average is a little bit above 4, as expected.
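For reference, the script is nothing more than an infinite loop along
these lines (spin.py is just what I'm calling it here for illustration):

# spin.py -- does nothing but burn CPU until killed
while True:
    pass

Each of the niced copies is started as 'nice -n 19 python spin.py'.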
I then leave these four copies running and execute a fifth copy without
nicing it, so it ends up with a nice value of 0. Now cpu0 is executing
almost 100% user, cpu2 and cpu3 are executing almost 100% nice, and cpu1
is almost 100% idle. (The actual CPU numbering seems to shift around
every so often.)
Thus, I have five processes, four at nice 19, one at nice 0, a load
average of just over 5, but only 3 out of the 4 cores are actually doing
anything.
* Re: [gentoo-user] Re: Idle Process Scheduling
From: Mike Kazantsev @ 2009-06-14 7:39 UTC
To: gentoo-user
On Sun, 14 Jun 2009 06:07:16 +0000 (UTC)
Jason Lynch <jason@calindora.com> wrote:
> Thus, I have five processes, four at nice 19, one at nice 0, a load
> average of just over 5, but only 3 out of the 4 cores are actually doing
> anything.
That's an interesting observation with quite a trivial scenario.
So I thought I'd check it out and ran 8 niced copies of a "while True:
pass" script on an 8-core machine; atop showed 799-800% load, 100% for
each core. A ninth, non-niced process indeed drops the load to 700-710%,
with one core absolutely free.
Then I tried to take nice out of the equation, and the load held at 800%
with 8, 9, 10 and more processes. Nice-only processes behave similarly,
loading all eight cores.
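For the record, the test looked roughly like this (a sketch from memory
rather than the exact script; the worker count is hardcoded for the box
I tried):

# rough sketch of the test: fork N CPU-bound workers, optionally at
# nice 19, and leave them spinning until killed
import os, sys

N_WORKERS = 8                      # one per core on the box I tried
RUN_NICED = "--nice" in sys.argv   # pass --nice to renice the workers

for _ in range(N_WORKERS):
    if os.fork() == 0:             # in the child
        if RUN_NICED:
            os.nice(19)            # drop this worker's priority to nice 19
        while True:                # burn CPU forever
            pass

os.wait()                          # parent just blocks; Ctrl-C stops everything

The ninth, non-niced process can then just be one more plain busy loop
started on top of the niced ones.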
So I guess the problem (or feature?) is related to the scheduling of
niced vs. non-niced processes and exists at least in the 2.6.29 kernel.
Gotta google it a bit later; bet someone on lkml has noticed it already.
--
Mike Kazantsev // fraggod.net