From mboxrd@z Thu Jan 1 00:00:00 1970
To: gentoo-user@lists.gentoo.org
From: James
Subject: [gentoo-user] Re: gigabyte mobo latency
Date: Sun, 19 Oct 2014 03:15:51 +0000 (UTC)
References: <5442DAC8.2030106@thegeezer.net> <5442F17C.7040904@thegeezer.net>
Reply-to: gentoo-user@lists.gentoo.org
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
User-Agent: Loom/3.14 (http://gmane.org/)

thegeezer thegeezer.net> writes:

> there is a little more here
> http://gentoo-en.vfose.ru/wiki/Improve_responsiveness_with_cgroups
> which will allow you to script creating a cgroup with the processID of
> an interactive shell, that you can start from to help save hunting down
> all the threads spawned by chrome.
> you can then do fun stuff with
> echo $$ > /sys/fs/cgroup/cpu/high_priority/tasks

Yea, this is cool. But when it's a cluster with thousands of processes,
this seems to be limited by the manual parsing and CLI actions that
become necessary in large/busy environments. (We shall see.)

> hopefully this will give you a bit more control over all of that though

Gmane mandates that the previous lines be culled. That said, you have
given me much to think about, test, and refine.

In /sys/fs/cgroup/cpu I have:

cgroup.clone_children  cgroup.procs          cpu.shares         release_agent
cgroup.event_control   cgroup.sane_behavior  notify_on_release  tasks

So I'll have to research creating and prioritizing dirs like
"high_priority". I certainly appreciate your lucid and direct
explanations.
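As a first sketch of what I think that would look like (assuming the
cgroup-v1 "cpu" controller; the group name "high_priority" and the
shares value are just illustrative, and the root is parameterized so
this can be dry-run against a scratch directory without root):

```shell
# Sketch only: on a real box CGROOT would be /sys/fs/cgroup/cpu and
# this would need root. A scratch directory stands in for it here.
CGROOT="${CGROOT:-$(mktemp -d)}"

# Create the (illustrative) high_priority group. On the real controller
# mount the kernel auto-populates cpu.shares, tasks, etc. in the new dir;
# a scratch directory obviously will not.
mkdir -p "$CGROOT/high_priority"

# Give the group twice the default CPU weight (default cpu.shares = 1024).
echo 2048 > "$CGROOT/high_priority/cpu.shares"

# Move the current shell (and its future children) into the group.
echo $$ > "$CGROOT/high_priority/tasks"
```

On the real mount, the `mkdir` alone makes the kernel create the control
files, so only the two `echo`s actually carry configuration.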
Let me play with this a bit and I'll post back when I munge things up.

Are there any "graphical tools" for adjusting and managing cgroups?
Surely, when I apply this to the myriad of things running on my
mesos+spark cluster, I'm going to need a well-thought-out tool for
cgroup management, particularly for organizing and allocating memory
resources, since spark is an "in-memory" environment that seems
sensitive to OOM issues of all sorts.

thx,
James
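P.S. For the memory side, the same pattern sketches out like this
(again cgroup-v1, this time the "memory" controller; the group name
"spark_workers", the 8G cap, and $WORKER_PID are all illustrative
assumptions, and the root is again parameterized for a rootless dry run):

```shell
# Sketch only: on a real box MEMROOT would be /sys/fs/cgroup/memory
# and this would need root. A scratch directory stands in for it here.
MEMROOT="${MEMROOT:-$(mktemp -d)}"

# Illustrative group for Spark executor processes.
mkdir -p "$MEMROOT/spark_workers"

# Cap the group's memory; memory.limit_in_bytes accepts K/M/G suffixes.
echo 8G > "$MEMROOT/spark_workers/memory.limit_in_bytes"

# Put a worker in the group ($WORKER_PID is hypothetical; it falls
# back to this shell's PID for the dry run).
echo "${WORKER_PID:-$$}" > "$MEMROOT/spark_workers/tasks"
```

On the real controller, processes in the group that exceed the cap get
the cgroup's OOM handling rather than taking down the whole box, which
is the behavior that matters for an in-memory workload like spark.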