On Wed, 17 Feb 2016 22:26:32 -0500 Richard Yao wrote:
> On 02/17/2016 02:01 PM, Andrew Savchenko wrote:
> > On Tue, 16 Feb 2016 15:18:46 -0500 Rich Freeman wrote:
> >> On Tue, Feb 16, 2016 at 2:31 PM, Patrick Lauer wrote:
> >>>
> >>> The failure message comes from rc-mount.sh when the list of PIDs using a
> >>> mountpoint includes "$$" which is shell shorthand for self. How can the
> >>> current shell claim to be using /usr when it is a shell that only has
> >>> dependencies in $LIBDIR ?
> >>> As far as I can tell the code at this point calls fuser -k ${list of
> >>> pids}, and fuser outputs all PIDs that still use it. I don't see how $$
> >>> can end up in there ...
> >>
> >> What does openrc do when the script fails? Just shut down the system anyway?
> >>
> >> If you're going to shut down the system anyway then I'd just force the
> >> read-only mount even if it is in use. That will cause less risk of
> >> data loss than leaving it read-write.
> >>
> >> Of course, it would be better still to kill anything that could
> >> potentially be writing to it.
> >
> > This is not always possible. Two practical cases from my experience:
> >
> > 1) NFS v4 shares can't be unmounted if the server is unreachable (even
> > with -f). If a filesystem (e.g. /home or /) contains such unmounted
> > mount points, it can't be unmounted either, because it is still in
> > use. This happens quite often when both the NFS server and the client
> > are running from a UPS on a low power event (AC power failed and the
> > battery is almost empty).
>
> Does `umount -l /path/to/mnt` work on those?

No, if the mount point is already stalled, -l is of no use.

> > 2) A LUKS device is in the frozen state. I use this as a security
> > precaution if LUKS fails to unmount (or it takes too long), e.g.
> > due to a dead mount point.
>
> This gives me another reason to justify being a fan of integrating
> encryption directly into a filesystem

Ext4 and f2fs do this, but with only a limited set of ciphersuites
available.

Actually, problems with LUKS are not critical: I have never lost data or
integrity there and had only one minor security issue. The only failure I
can remember was the libgcrypt Whirlpool issue (an invalid implementation
in old versions and an incompatible fix in new ones).

> or using ecryptfs on top of the VFS.

No, never. Not on my setups. Ecryptfs is 1) insecure (it leaks several
bytes of data) and 2) unreliable, as it depends on boost and other
high-level C++ stuff. I once lost the ability to decrypt data because of
a boost XML versioning change.

> The others were possible integrity concerns (which definitely
> happen with a frozen state,

In theory, maybe. In real life, no. I have been using LUKS for over 8
years, I have often had frozen shutdowns, and I have never had data loss
there. In terms of data integrity LUKS + ext4 is as solid as it gets:
this combination survived for several years even on a host with failing
RAM.

> although mine were about excessive layering
> adding opportunities for bugs) and performance concerns from doing
> unnecessary calculations on filesystems that span multiple disks (e.g.
> each mirror member gets encrypted independently).

Ehh... why independently? Just create an mdadm array with a proper chunk
size, put LUKS on top of it, align both LUKS and the inner filesystem to
that chunk size and the relevant stride, and performance will be optimal.
On SSDs this is harder because it is very difficult to determine the
erase block size properly, but that is another issue.

Best regards,
Andrew Savchenko
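
P.S. To make the layering above a bit more concrete, here is a rough
sketch. The device names, RAID level, chunk size and filesystem
parameters are only illustrative placeholders, not a recommendation;
adjust them to the actual hardware:

  # RAID6 over four disks with a 512 KiB chunk (placeholder devices)
  mdadm --create /dev/md0 --level=6 --raid-devices=4 --chunk=512 \
      /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

  # LUKS on top of the whole array; --align-payload is in 512-byte
  # sectors, so 2048 sectors = 1 MiB, a multiple of the chunk size
  cryptsetup luksFormat --align-payload=2048 /dev/md0
  cryptsetup luksOpen /dev/md0 cryptdata

  # ext4 inside the LUKS mapping, aligned to the RAID geometry:
  # stride = chunk / block size = 512 KiB / 4 KiB = 128 blocks
  # stripe-width = stride * data disks = 128 * 2 = 256
  # (RAID6 on 4 devices has 2 data disks per stripe)
  mkfs.ext4 -b 4096 -E stride=128,stripe-width=256 /dev/mapper/cryptdata

The stride/stripe-width arithmetic assumes a 4 KiB ext4 block size and
two data disks per stripe; for other RAID levels or disk counts the
numbers change accordingly.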