To: gentoo-dev@lists.gentoo.org
From: Duncan <1i5t5.duncan@cox.net>
Subject: [gentoo-dev] Re: udev <-> mdev
Date: Sun, 15 Jul 2012 01:02:47 +0000 (UTC)

Rich Freeman posted on Sat, 14 Jul 2012 19:57:41 -0400 as excerpted:

> On Sat, Jul 14, 2012 at 7:38 PM, Duncan <1i5t5.duncan@cox.net> wrote:
>> BTW, any "gentooish" documentation out there on rootfs as tmpfs, with
>> /etc and the like mounted on top of it, operationally ro, rw remounted
>> for updates?
>>
>> That's obviously going to take an initr*, which I've never really
>> understood to the point I'm comfortable with my ability to recover from
>> problems, so I've not run one since my Mandrake era, but that's a status
>> that can change, and what with the /usr move and some computer problems
>> I just finished dealing with, I've been thinking about the possibility
>> lately.  So if there's some good docs on the topic someone can point me
>> at, I'd be grateful. =:^)
>
> I doubt anybody has tried it, so you'll have to experiment.

"Anybody" /anybody/, or "anybody" on gentoo?
FWIW, there are people running it in general (IIRC much of the discussion
was on Debian, some on Fedora/RH), but I didn't see anything out there
written from a gentoo perspective.  Gentoo-based docs/perspective do help,
as one isn't constantly having to translate binary-based assumptions into
"gentooese", but there's enough out there in general that a suitably
determined/motivated person at the usual experienced gentoo user level
should be able to do it, without having to be an /extreme/ wizard.  But so
far I've not been /that/ motivated, and if there were gentoo docs
available, it would bring the barriers down far enough that I likely
/would/ then have the (now lower) required motivation/determination.

Just looking for that shortcut, is all. =:^)

> I imagine you could do it with a dracut module.  There is already a
> module that will parse a pre-boot fstab (/etc/fstab.sys).  The trick is
> that you need to create the root filesystem and the mountpoints within
> it first.  The trick will be how dracut handles not specifying a root
> filesystem.

While I do know dracut is an initr* helper, you just made me quite aware
of just how much research I'd have to do on the topic. =:^\  I wasn't
aware dracut even /had/ modules, while you're referring to them with the
ease of familiarity...

> However, if anything I think the future trend will be towards having
> everything back on the root filesystem, since with btrfs you can set
> quotas on subvolumes and have a lot more flexibility in general, which
> you start to lose if you chop up your disks.  However, I guess you could
> still have one big btrfs filesystem and mount individual subvolumes out
> of it onto your root.  I'm not really sure what that gets you.  Having
> the root itself be a subvolume does have benefits, since you can then
> snapshot it and easily boot back off a snapshot if something goes wrong.
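For anyone who wants to try that snapshot-and-boot-back trick, a rough
sketch follows.  The device, subvolume, and snapshot names are all made up,
and the commands need root plus an actual btrfs filesystem, so treat this
as untested illustration, not a recipe:

```shell
# Sketch only: /dev/sda2 holds a btrfs filesystem whose "root" subvolume
# is mounted at /, and /snapshots is a directory on that same filesystem.

# Take a read-only snapshot of the running root before an update:
btrfs subvolume snapshot -r / /snapshots/root-pre-update

# If the update goes wrong, boot the snapshot instead by editing the
# kernel command line in the bootloader entry, e.g.:
#   root=/dev/sda2 rootflags=subvol=snapshots/root-pre-update
```

The snapshot is cheap (copy-on-write), which is presumably why Rich
suggests keeping root itself a subvolume rather than the top level of the
filesystem.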
The big problem with btrfs subvolumes from my perspective is that they're
still all on a single primary filesystem, and if that filesystem develops
problems... all your eggs/data are in one big basket, good luck if the
bottom drops out of it!

One lesson I've had drilled into my head repeatedly over now two decades
of computer experience... don't put all your data in one basket!  It's a
personal policy that's saved my @$$ more than a few times over the years.

Even with raid, when I first set up md/raid, I set it up as a nice big
(partitioned) raid, with a second (similarly partitioned) raid as a
backup.  With triple-digit gigs of data (this was the pre-terabyte-drive
era), a system-crash-related re-add and resync would take /hours/.

So when I rebuilt the setup, I created over a dozen individual raids
(including working and backup copies of many of them), each in its own
set of partitions on the physical devices.  Some of those raids were
further partitioned, some not, but only the media raid (and its backup)
was anything like 100 gigs, and many of even the working raids (plus all
backups) weren't activated for normal operation unless I was actually
working on whatever data was on that raid.  With most of the assembled
rw-mounted raids not actively writing at the time of a crash, re-add and
resync tended to take seconds or minutes, not hours.

So I'm about as strong a partitioning-policy advocate as you'll get, tho
I do keep everything the pm installs, along with the installation
database (so /etc, /usr, /var, but not for instance /var/log or
/usr/src, which are mountpoints), on the same (currently) 8-ish-gig
rootfs, with a backup root partition (actually two of them now) that I
can point the kernel at from grub if the working rootfs breaks for some
reason.  So the separate-/usr thing hasn't affected me at all, because
/usr is on rootfs.
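Incidentally, the hours-long resync problem is also what mdadm's
write-intent bitmaps are for.  A hedged sketch, with entirely hypothetical
device names (this needs root and real block devices, so it's untested
here):

```shell
# Hypothetical layout: one small mirror per data set, per the scheme
# described above, built from matching partitions on two drives.
mdadm --create /dev/md/media --level=1 --raid-devices=2 \
    /dev/sda5 /dev/sdb5

# An internal write-intent bitmap records which regions are dirty, so a
# post-crash re-add only resyncs those regions instead of the whole
# array -- seconds or minutes rather than hours:
mdadm --grow --bitmap=internal /dev/md/media
```

The bitmap costs a little write performance, which is why it's optional
rather than the default on older mdadm versions.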
But as I said, I had some computer hardware issues recently, and they
made me aware of just how nice it'd be to have that rootfs mounted
read-only for normal operation -- no fsck/log-replay needed on mounts
that were read-only at the time of a crash! =:^)

So I'm pondering just how hard it would be...

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman