From: Dale
Date: Thu, 06 Sep 2012 07:21:57 -0500
To: gentoo-user@lists.gentoo.org
Subject: Re: [gentoo-user] Re: aligning SSD partitions
Message-ID: <50489565.3020404@gmail.com>
In-Reply-To: <20120906113732.GD2442@nicolas-desktop>

Nicolas Sebrecht wrote:
> On 06/09/12, Dale wrote:
>
>> The point you are missing is this. Between those tests, I CLEARED that
>> cache. The thing you and Neil claim makes a difference does not exist
>> after you clear the cache. I CLEARED that cache between EACH and every
>> test that was run, whether using tmpfs or not. I did this instead of
>> rebooting my system after each test.
>
> We clearly understand that you cleared the cache between the tests. We
> maintain that it is not very relevant to your tests, because of another
> process.
>
>> So, in theory I would say that using tmpfs would result in faster
>> compile times. After testing, theory left the building and reality
>> showed that it did not make much if any difference.
>
> Yes, because you did the tests on a system with a lot of RAM.
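
Just to be clear about what "using tmpfs" means in these tests: it is
nothing more than portage's work directory sitting on a tmpfs mount. A
minimal sketch of that setup, assuming the default PORTAGE_TMPDIR of
/var/tmp (the mount point, size and mode here are only examples, not
necessarily what was used for the tests):

    # example only: put portage's build area on tmpfs for the current session
    mount -t tmpfs -o size=8G,mode=775 tmpfs /var/tmp/portage

An equivalent line in /etc/fstab makes the mount permanent across
reboots.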
> If the kernel needs to retrieve a file, there is basically the
> following workflow:
>
> 1. retrieve file from kernel cache;
> 2. if not found, retrieve file from tmpfs cache;
> 3. if not found, retrieve file from swap cache;
> 4. if not found, retrieve file from disk cache;
> 5. if not found, retrieve file from disk.
>
> This is a simplified workflow, but you get the idea.

I do get it. I CLEARED #1 and #2, there is no usage of #3, and #4 is
not large enough here to matter. So it is left with #5. See the point?
The test was a NORMAL emerge with portage's work directory on tmpfs and
a NORMAL emerge with portage's work directory on disk, comparing the
results. The test resulted in little if any difference.

If I ran the test and did not clear the cache, then I would expect
skewed results, because after the first emerge some files would be
cached in RAM and the drive would not be used. If you clear the cache,
then it has to take the same steps regardless of whether it was run the
first, second or third time.

> Now, what we are saying is that *when you have a lot of RAM*, the
> kernel never hits 2, 3, 4 and 5. The problem with the kernel cache is
> that files stored in this cache are dropped from it very fast. tmpfs
> allows better file persistence in RAM. But if you have a lot of RAM,
> the files stored in the kernel cache are /not/ dropped from it, which
> allows the kernel to work with files in RAM only.
>
> Clearing the kernel cache between the tests does not change much,
> since files are stored in RAM again at unpack time. What makes
> compilation very slow from the disk are all the _next reads and
> writes_ required by the compilation.
>
>> Well, why say that caching makes a difference, then say it doesn't
>> matter when those caches are cleared? Either caches matter or they
>> don't.
>
> It does make a difference if you don't have enough RAM for the kernel
> cache to store all the files involved in the whole emerge process and
> every other process run by the kernel during the emerge.

But if you CLEAR the kernel cache between each test, then it doesn't
matter either. I am clearing the KERNEL cache, which includes the
pagecache, dentries and inodes. I can see the difference in gkrellm, in
top, and in what the command free gives me.

Put another way: I run an emerge on tmpfs and note the emerge times. I
reboot. I run the same emerge again with it not on tmpfs. Do we agree
that that would give an actual, real result? If yes, then using the
command to clear the cache is the same as rebooting. It's the whole
point of having the feature in the kernel. The file drop_caches, when
set to 3 with the echo command, erases, deletes, or whatever you want
to call it, the caches. That's from the kernel folks, as linked to in
another reply. That's not me saying it, it is the kernel folks saying
it.

Dale  :-)  :-)

--
I am only responsible for what I said ... Not for what you understood
or how you interpreted my words!
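
For anyone wanting to repeat the test, the cache-clearing step
described above usually comes down to the following (a minimal sketch;
it assumes you are root, and adds a sync first since drop_caches only
frees clean, unused cache entries):

    sync                               # flush dirty pages; drop_caches only frees clean cache
    echo 3 > /proc/sys/vm/drop_caches  # 3 = drop pagecache plus dentries and inodes
    free -m                            # the "cached" figure should drop right after

Writing 1 drops only the pagecache and 2 drops only the dentries and
inodes; 3 does both, which matches what is being cleared between the
emerge runs here.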