From: Duncan <1i5t5.duncan@cox.net>
To: gentoo-dev@lists.gentoo.org
Subject: [gentoo-dev] Re: RFC: Hosting daily gx86 squashfs images and deltas
Date: Fri, 17 Jan 2014 23:31:32 +0000 (UTC)

Michał Górny posted on Fri, 17 Jan 2014 20:30:00 +0100 as excerpted:

> On 2014-01-17, at 19:19:14, Duncan <1i5t5.duncan@cox.net> wrote:
>
>> Michał Górny posted on Fri, 17 Jan 2014 17:27:30 +0100 as excerpted:
>>
>> > 96M portage-20140108.sqfs
>> > For deltas [...]
>> >
>> > 6,3M portage-20140109.sqfs-portage-20140110.sqfs.vcdiff.djw
>> > applying it takes ~2.5 seconds on my 2 GHz Athlon64.
>>
>> diffs are ~1/16 the full squashfs size[.]  So people updating once a
>> week [or every] 10 days would see a bandwidth savings, provided the
>> sync script was intelligent enough to apply updates serially.
>>
>> The breakover point would be roughly an update every two weeks, or
>> twice a month.
>
> However, it may actually be beneficial to provide other durations, like
> weekly deltas.  In my tests, the daily updates for this week summed up
> to almost 50M, while the weekly one was barely 20M.

That's useful additional data.  Thanks.

And yes, a weekly delta would be quite useful, taking the breakover
point out to about a month or so.  (A rough sketch of the serial-delta
idea, using the figures above, follows at the end of this message.)

Practically speaking, I'd guess most Gentooers update once a month or
more often, so that should cover the vast majority.  Beyond a month,
just downloading a new full squashfs makes as much sense anyway, and
since the cutover would be automated, users on the borderline wouldn't
have to worry about whether to do a normal sync or download an entirely
new tarball, as they have to decide now, if they even bother at all.
For those users, it'd be an even BIGGER win. =:^)

-- 
Duncan - List replies preferred.  No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman
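
For illustration only, here is a minimal sketch (in Python, not any
actual sync-script code) of how the "apply the deltas serially" step
could work.  It assumes the file-naming scheme visible in the quoted
listing (portage-YYYYMMDD.sqfs, with deltas named <old>-<new>.vcdiff.djw),
that the needed deltas are already downloaded into the working directory,
and that xdelta3 can decode them; the 96M and ~6.3M figures are simply
the ones quoted above, and the dates are made up.

#!/usr/bin/env python3
"""Sketch: bring an old daily portage squashfs up to date via serial deltas."""

import datetime as dt
import subprocess

FULL_IMAGE_MB = 96      # one full squashfs, per the figures quoted above
DAILY_DELTA_MB = 6.3    # rough size of one daily delta, per the same figures


def plan(local_date: dt.date, today: dt.date) -> str:
    """Return "deltas" if fetching daily deltas is cheaper than a full image."""
    days_behind = (today - local_date).days
    return "deltas" if days_behind * DAILY_DELTA_MB < FULL_IMAGE_MB else "full"


def apply_deltas(local_date: dt.date, today: dt.date) -> None:
    """Apply one delta per day, oldest first, each step producing the next image."""
    cur = local_date
    while cur < today:
        nxt = cur + dt.timedelta(days=1)
        old = f"portage-{cur:%Y%m%d}.sqfs"
        new = f"portage-{nxt:%Y%m%d}.sqfs"
        delta = f"{old}-{new}.vcdiff.djw"
        # xdelta3: -d decodes, -s names the source file the delta was made against
        subprocess.run(["xdelta3", "-d", "-s", old, delta, new], check=True)
        cur = nxt


if __name__ == "__main__":
    today = dt.date(2014, 1, 17)
    have = dt.date(2014, 1, 10)   # hypothetical: last synced a week ago
    if plan(have, today) == "deltas":
        apply_deltas(have, today)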
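
As a usage note, plan() with those figures reproduces the breakover
arithmetic from the thread: 96M divided by ~6.3M per daily delta is
roughly 15 days, so daily deltas stop paying off around the two-week
mark, while at ~20M per weekly delta the breakover moves out to
96M / 20M ≈ 5 weeks, i.e. about a month.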