From: thegeezer
Date: Sun, 19 Oct 2014 13:02:44 +0100
To: gentoo-user@lists.gentoo.org
Subject: Re: [gentoo-user] Re: gigabyte mobo latency
Message-ID: <5443A864.2030508@thegeezer.net>
References: <5442DAC8.2030106@thegeezer.net> <5442F17C.7040904@thegeezer.net>

On 19/10/14 04:15, James wrote:
> thegeezer thegeezer.net> writes:
>
>> there is a little more here
>> http://gentoo-en.vfose.ru/wiki/Improve_responsiveness_with_cgroups
>> which will allow you to script creating a cgroup with the process ID
>> of an interactive shell, so that starting things from that shell
>> saves you hunting down all the threads spawned by chrome.
>> you can then do fun stuff with
>> echo $$ > /sys/fs/cgroup/cpu/high_priority/tasks
> Yea this is cool. But when it's a cluster, with thousands of processes,

cgroups are hierarchical, so for example if you start a bash script in
cgroup "cpu/high_prio" and it then starts your processes, all of the
called programs go into the same cgroup, which makes things a bit
simpler. openrc will also start your services in the correct cgroup.

> this seems to be limited by the manual parsing and CLI actions that
> are necessary for large/busy environments. (We shall see.)
>
>> hopefully this will give you a bit more control over all of that though
> Gmane mandates that the previous lines be culled. That said, you have
> given me much to think about, test and refine.
>
> In /sys/fs/cgroup/cpu I have:
>
> cgroup.clone_children  cgroup.procs          cpu.shares         release_agent
> cgroup.event_control   cgroup.sane_behavior  notify_on_release  tasks
>
> So I'll have to research creating and prioritizing dirs like
> "high_priority".
>
> I certainly appreciate your lucid and direct explanations.
> Let me play with this a bit and I'll post back when I munge things
> up... Are there any "graphical tools" for adjusting and managing
> cgroups?

i thought that htop did this, but i was wrong.. it only shows which
cgroup each process is in. that would be a killer feature though.
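fwiw, creating and prioritising a dir like "high_priority" is only a
couple of commands. a minimal sketch, assuming the cgroup v1 cpu
controller is mounted at /sys/fs/cgroup/cpu and you are root (the name
"high_priority" and the 4096 weight are just example values):

    # create the cgroup; the kernel populates the control files for you
    mkdir /sys/fs/cgroup/cpu/high_priority

    # the default weight is 1024, so 4096 gets roughly 4x the cpu time
    # when the box is contended
    echo 4096 > /sys/fs/cgroup/cpu/high_priority/cpu.shares

    # move the current shell in; everything started from this shell
    # inherits the cgroup
    echo $$ > /sys/fs/cgroup/cpu/high_priority/tasks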
> Surely when I apply this to the myriad of things running
> on my mesos+spark cluster I'm going to need a well thought-out tool
> for cgroup management, especially for non-local systems,

other distros have apps such as "cgclassify" which provide a shortcut
to managing cgroups -- creating them, and moving processes in and out.
you can also have a nohup'd process that parses ps -eLf for the
processes you want to classify and moves them into the appropriate
cgroup (rough sketch at the end of this mail); for default cgroups you
can also use inotify. a quick search shows http://libcg.sourceforge.net/
which daemonises this process. all of this is a bit hack'n'slash, i
appreciate, so if anyone else knows of suitable tools i'd also be
interested to hear of them.

> particularly on memory resources organization
> and allocations, as spark is an "in_memory" environment that seems
> sensitive to OOM issues of all sorts.
>
> thx,
> James
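ps. here is a rough sketch of the classifier loop mentioned above --
strictly hack'n'slash, not production quality, and the pattern and
cgroup name are just examples:

    #!/bin/bash
    # sweep every thread of any process matching PATTERN into CGROUP.
    # needs root. the [c] trick stops the regex matching its own ps entry.
    PATTERN='[c]hrome'
    CGROUP=/sys/fs/cgroup/cpu/high_priority

    while true; do
        # ps -eLf prints one row per thread; field 4 is the thread id (LWP)
        for tid in $(ps -eLf | awk -v p="$PATTERN" '$0 ~ p { print $4 }'); do
            # the v1 tasks file takes one thread id per write
            echo "$tid" > "$CGROUP/tasks" 2>/dev/null
        done
        sleep 5
    done

start it with something like "nohup ./classify.sh &" and it will keep
catching newly spawned threads every few seconds.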