To: gentoo-dev@lists.gentoo.org
From: Duncan <1i5t5.duncan@cox.net>
Subject: [gentoo-dev] Re: udev <-> mdev
Date: Sun, 15 Jul 2012 18:25:40 +0000 (UTC)

Rich Freeman posted on Sun, 15 Jul 2012 08:30:31 -0400 as excerpted:

> Looking at the docs it seems like you'd need a hook for the cmdline
> stage that sets rootok (assuming it gets that far without a root, or if
> you set it to something like root=TMPFS).  Then you'd install a mount
> hook to mount the tmpfs, and then use the fstab-sys module to mount
> everything else.  You'd need to create mountpoints for everything of
> course, and not just the boot-critical stuff, since otherwise openrc
> won't be able to finish mounting everything.
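If I'm reading the same docs, the skeleton might look something like
this (a rough, untested sketch; the module name, hook priorities, and
file names are my invention, while inst_hook, getarg, rootok and
$NEWROOT are standard dracut):

    # module-setup.sh for a hypothetical "tmpfsroot" dracut module
    check() { return 0; }
    depends() { echo fstab-sys; }
    install() {
        inst_hook cmdline 20 "$moddir/parse-tmpfs-root.sh"
        inst_hook mount 01 "$moddir/mount-tmpfs-root.sh"
    }

    # parse-tmpfs-root.sh (cmdline stage): claim the root so dracut
    # doesn't sit around waiting for a real root device to appear
    [ "$(getarg root=)" = "TMPFS" ] && rootok=1

    # mount-tmpfs-root.sh (mount stage): a tmpfs becomes the new root;
    # fstab-sys then mounts everything else on top of it
    mount -t tmpfs -o mode=0755 tmpfs "$NEWROOT"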
The last bit I had already anticipated, as I'm doing something similar
with my tmpfs-based /tmp and /var/tmp (symlinked to /tmp).  Nothing
mounted on top, but I'm creating subdirs inside it, setting permissions,
etc.  A critical difference is that this is on a full rootfs, so I don't
have to worry about not yet having the necessary tools available.  And
I'm doing some bind-mounts as well, which require a remount to let all
the options take effect, and of course there's mount ordering to worry
about.  So I have the general idea, but doing it from an initr* with
limited tools available will be interesting.

As for the tmpfs rootfs itself, I have the vague idea that I'd "simply"
(note the scare-quotes) use what's normally the initial root that's
essentially thrown away, only I'd not throw it away.  I'd just mount
everything on top, keep using it, and /somehow/ ensure that anything
running directly from it terminates one way or another, so that I don't
have old processes stuck around using the mounted-over points.

>> The big problem with btrfs subvolumes from my perspective is that
>> they're still all on a single primary filesystem, and if that
>> filesystem develops problems... all your eggs/data are in one big
>> basket, good luck if the bottom drops out of it!
>
> Maybe, but does it really buy you much if you only lose /lib, and not
> /usr?  I guess it is less data to restore from backup, but...

Which is why I keep /usr (and /lib64 and /usr/lib64) on rootfs
currently, tho the traditional /usr/src, /usr/local, and /usr/portage
are either pointed elsewhere with the appropriate vars or are
mountpoints/symlinks to elsewhere.  Of course that'd have to change a
bit for a tmpfs rootfs, since /lib64, /usr and /etc would obviously be
mounted from elsewhere, but they could still be either symlinked or
bind-mounted to the appropriate location on the single (read-only)
system-filesystem.

FWIW I remember being truly fascinated with the power of symlinks when I
first switched from MS.  Now I consider them normal, but the power and
flexibility of bind-mounts still amazes me.  As with symlinks, it's
possible to bind-mount individual files, but unlike symlinks (bind
mounts behave more like hard-links, only cross-filesystem), some of the
binds can be read-write (or dev, exec, etc.) while others are read-only
(or nodev, noexec...).
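A minimal illustration (the paths here are made up):

    # bind a single file; like a symlink, but it works across
    # filesystems (the target must already exist as a file):
    mount --bind /data/resolv.conf /etc/resolv.conf

    # two views of the same tree, one writable, one locked down;
    # the restrictive options only take effect on the remount pass:
    mount --bind /data/shared /mnt/shared-rw
    mount --bind /data/shared /mnt/shared-ro
    mount -o remount,bind,ro,nodev,noexec /mnt/shared-ro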
> The beauty of btrfs subvolumes is that they let you manage all your
> storage as a single pool, even more flexibly than LVM.  Sure, chopping
> it up does reduce the impact of failure a bit, but I'd hate to have to
> maintain such a system.  Filesystem failure should be a very rare
> occurrence for any decent filesystem (of course, this is why I won't
> be using btrfs in production for a while).

Very rare, yes.  Hardware issues happen tho.  I remember the a/c failing
at one point, causing ambient temps (Phoenix summer) to reach 50C or so,
and who knows how much inside the computer.  Head-crash time.  But after
cooling off, the filesystems that hadn't been mounted at the time were
damaged very little, while a couple of the mounted filesystems surely
had physical grooves in the platter.  Had that all been one filesystem,
the damage would have been far less confined.

That's one example.  Another, from back when I was beta-testing IE4 on
MS, was due to a system software error on their part.  IE started
bypassing the filesystem and writing to the cache index directly, but
the file didn't have its system attribute set, so the defragger moved it
and put something else in that physical disk location.  I had my
temp-inet-files on tmp, which was its own partition and didn't have
significant issues, but some of the other betatesters lost valuable
data, overwritten by IE, which was still bypassing the filesystem and
writing directly to what it thought was its cache index file.

So it's not always failure of the filesystem itself.  But I tried btrfs
for a bit just to get an idea what it was all about, and agree totally
with you there.  I'm off of it entirely now, and won't be touching it
again until, I'd guess, early next year at the earliest.  The thing
simply isn't ready for the expectations I have of my filesystems, and
anybody using it now without backups is simply playing Russian Roulette
with their data.

-- 
Duncan - List replies preferred.  No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman