From: Duncan <1i5t5.duncan@cox.net>
To: gentoo-dev@lists.gentoo.org
Subject: [gentoo-dev] Re: Packages up for grabs
Date: Mon, 24 Jun 2013 15:27:19 +0000 (UTC)

Tom Wijsman posted on Sun, 16 Jun 2013 23:24:27 +0200 as excerpted:

> On Sun, 16 Jun 2013 19:33:53 +0000 (UTC)
> Duncan <1i5t5.duncan@cox.net> wrote:
>
>> TL;DR: SSDs help. =:^)
>
> TL;DR: SSDs help, but they don't solve the underlying problem. =:-(

Well, there's the long-term fix to the underlying problem, and there
are coping strategies to help with where things stand now.
I was simply saying that an SSD helps a LOT in dealing with the
inefficiencies of the current code. See the "quite apart... practical
question of ... dealing with the problem /now/" bit quoted below.

> I have one; it's great to help make my boot short, but it isn't
> really a great improvement for the Portage tree. Better I/O isn't a
> solution to computational complexity; it doesn't deal with the CPU
> bottleneck.

But here, in agreement with ciaranm, the CPU's not the bottleneck, at
least not from cold-cache. It doesn't even raise the CPU clock from
minimum, since the work is mostly filesystem access. Once the cache is
warm, then yes, the CPU clocks up and I see the single-core behavior
you mention, but cold-cache, no way; it's I/O-bound. And with an SSD,
the portage tree update (the syncs both of gentoo and the overlays)
went from a /crawling/ console scroll to scrolling so fast I can't
read it.

>> Quite apart from the theory and question of making the existing
>> code faster vs. a new from-scratch implementation, there's the
>> practical question of what options one can actually use to deal
>> with the problem /now/.
>
> Don't rush it: Do you know the problem well? Does the solution
> properly deal with it? Is it still usable some months / years from
> now?

Not necessarily. But first we must /get/ to some months / years from
now, and that's a lot easier if the best is made of the current
situation while a long-term fix is being developed.

>> FWIW, one solution (particularly for folks who don't claim to have
>> reasonable coding skills and thus have limited options in that
>> regard) is to throw hardware at the problem.
>
> Improvements in algorithmic complexity (exponential) are much bigger
> than improvements you can achieve by buying new hardware (linear).

Same song, different verse. Fixing the algorithmic complexity is fine
and certainly a good idea longer term, but it's not something I can
use at my next update. Throwing hardware at the problem is usable now.

>> ---
>> [1] I'm running ntp, and the initial ntp-client connection and time
>> sync takes ~12 seconds a lot of the time, just over the initial 10
>> seconds down, 50 to go, trigger on openrc's 1-minute timeout.
>
> Why do you make your boot wait for NTP to sync its time?

Well, ntpd is waiting for the initial step so it doesn't have to slew
so hard for so long if the clock's multiple seconds off. And ntpd is
in my default runlevel, along with a few local service tasks that come
after * and need a good clock time anyway, so...

> How could hardware make this time sync go any faster?

Which is what I said: as a practical matter, my boot didn't speed up
much /because/ I'm running (and waiting for) the ntp-client
time-stepper. Thus I'd not /expect/ a hardware upgrade (unless it's to
a more direct net connection) to help much there.

>> [2] ... SNIP ... runs ~1 hour ... SNIP ...
>
> Sounds great, but the same thing could run in much less time. I have
> worse hardware, and it doesn't take much longer than yours does; so
> I don't really see the benefit new hardware brings to the table. And
> that HDD-to-SSD change, that's really a once-in-a-lifetime flood.

I expect I'm more particular than most about checking changelogs. I
certainly don't read them all, but if there's a revision bump, for
instance, I like to see what the gentoo devs considered important
enough to do a revision bump for.
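(FWIW, a minimal sketch of that kind of changelog check, assuming the
standard rsync tree at /usr/portage; sys-apps/openrc is only an
example package here, and equery comes from app-portage/gentoolkit:)

    # read the per-package ChangeLog shipped in the tree
    less /usr/portage/sys-apps/openrc/ChangeLog

    # or let equery dig the ChangeLog entries out
    equery changes sys-apps/openrc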
And I religiously check portage logs, selecting mentioned bug numbers
probably about half the time, which pops up a menu with a gentoo bug
search on the number, from which I check the bug details and sometimes
the actual git commit code.

For all my overlays I check the git whatchanged logs as well, and I
have a helper script that lets me fetch and then check git whatchanged
for a number of my live packages, including openrc. (I switched openrc
to live-git precisely /because/ I was following it closely enough to
find the git whatchanged logs useful, both for general information and
for troubleshooting when something went wrong -- release versions
simply didn't have enough resolution; too many things changed in each
openrc release to easily track down problems and file bugs as
appropriate.)

And you're probably not rebuilding well over a hundred live packages
(thank $DEITY and the devs in question for ccache!) at every update,
in addition to the usual (deep) @world version-bump and newuse
updates, are you? Of course maybe you are, but I did specify that, and
I didn't see anything in your comments indicating anything like an
apples-to-apples comparison.

>> [3] Also relevant, 16 gigs RAM, PORTAGE_TMPDIR on tmpfs.
>
> Sounds all cool, but think about your CPU again; saturate it...
>
> Building the Linux kernel with `make -j32 -l8` versus `make -j8` is
> a huge difference; most people follow the latter instructions,
> without really thinking through what actually happens with the
> underlying data. The former queues up jobs for your processor, so
> the moment a job is done a new job will be ready; you don't need to
> wait on the disk.

Truth is, I used to run a plain make -j (no number and no -l at all)
on my kernel builds, just to watch the system stress and then so
elegantly recover. It's an amazing thing to watch, this Linux kernel
thing and how it deals with CPU oversaturation. =:^)

But I suppose I've gotten more conservative in my old age. =:^P
Needlessly oversaturating the CPU (and RAM) only slows things down and
forces cache-dumping and swappage. These days, according to my
kernel-build-script configuration, I only run -j24, which seems a
reasonable balance: it keeps the CPUs busy but stays safely within a
few gigs of RAM, so I don't dump cache or hit swap. Timing a kernel
build from make clean suggests build times stay within a sub-second
spread anywhere from -j10 or so up to (from memory) -j50 or so, after
which build time starts to go up, not down. (I've sketched the -j/-l
variants in a P.S. below, for reference.)

> Something completely different; look at the history of data mining:
> today's algorithms are much, much faster than those of years ago.
>
> Just to point out that different implementations and configurations
> have much more power in cutting time than the typical hardware
> change does.

I agree, and I'm not arguing that. All I'm saying is that there are
measures a sysadmin can take today to at least help work around the
problem, today, while all those faster algorithms are being developed,
implemented, tested and deployed. =:^)

-- 
Duncan - List replies preferred. No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman
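P.S. A quick sketch of the make variants discussed above; the numbers
are illustrative, not recommendations -- tune them to your own core
count and RAM:

    # queue up to 32 jobs, but only start new ones while the load
    # average stays below 8; keeps the CPUs fed without thrashing
    make -j32 -l8

    # fixed job count, no load cap; simple, but can over- or
    # under-commit the machine
    make -j8

    # unbounded -j: spawn a job for every ready target; maximal
    # stress on RAM and the scheduler, as described above
    make -j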