From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <5048693C.7020601@gmail.com>
Date: Thu, 06 Sep 2012 04:13:32 -0500
From: Dale
To: gentoo-user@lists.gentoo.org
Reply-to: gentoo-user@lists.gentoo.org
List-Id: Gentoo Linux mail
Subject: Re: [gentoo-user] aligning SSD partitions
References: <20120904072003.GD3095@ca.inter.net> <20120905092358.7bd9915f@hactar.digimed.co.uk> <20120905090249.GB3097@ca.inter.net> <201209051023.52973.peter@humphrey.ukfsn.org> <50473261.9040704@gmail.com> <20120905133146.2ad0ffa1@hactar.digimed.co.uk> <50474B1D.8070306@gmail.com> <20120905161743.1f2ecd9d@hactar.digimed.co.uk> <504791EB.2030103@gmail.com> <20120906004251.7b9dca2a@digimed.co.uk>
In-Reply-To: <20120906004251.7b9dca2a@digimed.co.uk>

Neil Bothwick wrote:
> On Wed, 05 Sep 2012 12:54:51 -0500, Dale wrote:
>
>>>>>>> I might also add, I see no speed improvements in putting portage's
>>>>>>> work directory on tmpfs. I have tested this a few times and the
>>>>>>> difference in compile times is just not there.
>>>>>> Probably because with 16GB everything stays cached anyway.
>>>>> I cleared the cache between the compiles. This is the command I
>>>>> use:
>>>>>
>>>>> echo 3 > /proc/sys/vm/drop_caches
>>>> But you are still using the RAM as disk cache during the emerge; the
>>>> data doesn't stay around long enough to need to get written to disk
>>>> with so much RAM for cache.
>>> Indeed.
>>> Try setting the mount to write-through to see the difference.
>> When I run that command, it clears all the cache. It is the same as if
>> I rebooted. Certainly you are not thinking that cache survives a
>> reboot?
> You clear the cache between the two emerge runs, not during them.

If I recall the process correctly, I cleared the cache, ran emerge with
the portage work directory on disk, then cleared the cache again and ran
it on tmpfs. If the cache made any difference for the second run, the
tmpfs run would be faster just because it was the second run. Thing is,
it wasn't faster. In some tests it was actually slower.

I'm trying to understand why you think the cache is still there after I
clear it. The whole point of clearing the cache is that it is gone. When
I was checking on drop_caches, my understanding was that clearing it was
the same as a reboot. This is from kernel.org:

    drop_caches

    Writing to this will cause the kernel to drop clean caches, dentries
    and inodes from memory, causing that memory to become free.

    To free pagecache:
        echo 1 > /proc/sys/vm/drop_caches
    To free dentries and inodes:
        echo 2 > /proc/sys/vm/drop_caches
    To free pagecache, dentries and inodes:
        echo 3 > /proc/sys/vm/drop_caches

According to that, the 3 option clears all cache. One site I found that
on even recommended running sync first, just in case something in RAM
was not yet written to disk.

>> If you are talking about RAM on the drive itself: when the work
>> directory is on tmpfs, it is not on the drive to be cached. That's the
>> whole point of tmpfs, to get the slow drive out of the way. By the
>> way, there are others who ran tests with the same results. It just
>> doesn't speed up anything, since drives are so much faster nowadays.
> Drives are still orders of magnitude slower than RAM, that's why using
> swap is so slow.
> What appears to be happening here is that because files are written
> and then read again in short succession, they are still in the
> kernel's disk cache, so the speed of the disk is irrelevant. Bear in
> mind that tmpfs is basically a cached disk without the disk, so you
> are effectively comparing the same thing twice.

I agree that drives are slower, but for whatever reason it doesn't seem
to matter as much as one would think. I was expecting a huge difference
here, with tmpfs being much faster. Thing is, that is NOT what I got.
Theory met reality, and the result was not what I expected, or what
others expected either. Putting portage's work directory on tmpfs makes
little if any difference in emerge times.

Dale

:-)  :-)

--
I am only responsible for what I said ... Not for what you understood
or how you interpreted my words!
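[For the archive: the test procedure described in this thread could be
sketched roughly as below. This is a hypothetical sketch, not Dale's
actual script; the package name and tmpfs mount point are placeholder
assumptions, and both steps need root.]

```shell
# Sketch of the benchmark described above: time an emerge run with
# PORTAGE_TMPDIR on disk, then on tmpfs, dropping caches before each run.

drop_caches() {
    # Flush dirty pages first, then drop pagecache, dentries and inodes --
    # the same "echo 3" step quoted from kernel.org above.
    sync
    echo 3 > /proc/sys/vm/drop_caches
}

bench() {
    # $1 = directory to use as PORTAGE_TMPDIR for this run
    drop_caches
    time PORTAGE_TMPDIR="$1" emerge --oneshot app-editors/nano
}

# Usage (as root; paths and package are placeholders):
#   bench /var/tmp                                    # on-disk run
#   mount -t tmpfs -o size=8G tmpfs /var/tmp/tmpfs    # then:
#   bench /var/tmp/tmpfs                              # tmpfs run
```

[Per the thread, the two timings come out about the same, since the
kernel's page cache already keeps the short-lived build files in RAM.]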