From: davecode@nospammail.net
To: gentoo-releng@lists.gentoo.org
Subject: [gentoo-releng] [OT] Filesystem Realities
Date: Tue, 05 Feb 2008 10:21:30 -0800
Message-ID: <1202235690.17451.1235221525@webmail.messagingengine.com>

Alex Howells wrote:
> Anyone advising you to deploy
> XFS in a production environment without
> UPS on 'critical' data is a fool.

Who, me?  It's not like I haven't asked for UPSes.  And I never advised
that strawman case.

I don't know about "fool," but sometimes even a fool gets lucky...I knew
squat about XFS; it was just the next thing to try when ext3 ate my
data...

> Just my two cents, of course, and lets get back on topic? :)

Sure, but...may I piss on your two-cent epithet first?

I personally run XFS all day long *synced from an ext2 ramdisk*, so hey,
double fool points for me...but I haven't lost ten minutes of work
since.  I'm very familiar with 'sync' too - no problem flushing at
whatever comfort level you like.
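
For the curious, the loop is roughly this - a minimal sketch, with
made-up paths and my own choice of rsync flags, not a blessed recipe:

    #!/bin/sh
    # Work happens on the ext2 ramdisk; flush it to the XFS disk periodically.
    # /mnt/ram (the ramdisk) and /data (XFS) are hypothetical mount points.
    rsync -a --delete /mnt/ram/work/ /data/work/
    sync    # push the dirty buffers all the way out to the platters

Run it from cron or a hotkey, at whatever interval matches your comfort
level.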

If power crashes, my disk doesn't crash with it - just RAM.  My main
concern is not having disks spinning when power fails.  Basically my
disks almost never spin: swapoff, a RAM disk, and liberal tmpfs use.
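
In fstab terms it looks something like this - mount points and sizes
invented for illustration, tune to taste:

    # /etc/fstab - scratch space lives in RAM, so idle disks stay idle
    tmpfs      /tmp      tmpfs  size=512m,mode=1777  0 0
    # the ext2 ramdisk (noauto: a local init script runs
    # 'mke2fs -q /dev/ram0 && mount /mnt/ram' each boot)
    /dev/ram0  /mnt/ram  ext2   noauto               0 0

And swap is simply off: 'swapoff -a', or just no swap line in fstab at
all.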

A database is another thing.  It has multiple users, transaction
integrity, throughput requirements, yada yada.  I'm not sure any fs is
good enough except ZFS.  But I would probably use RAID for databases
nowadays.  The point is, if you have a multi-gigabyte database, design
and tune, or *you will hurt no matter which fs you're using.*

The filesystem wars blame the wrong targets...the real problems are the
legacy *nix holdovers like /var/log and the sort of server mentality
that goes with them.  I love Firefox, but have you ever looked at its
insane backup behavior?  It's unreal - they imitate server cron jobs,
with pretty ugly performance hits.  You maybe thought your bookmarks
were "private," but no - they live in ten different places.  And of
course "profile" management is a sore spot, because in the legacy
server mentality, disks don't move around.

I fstablish craptastic legacy server stuff like /var/log in tmpfs -
where no disks spin.  A server *is* another story; servers need disk
logs, but desktops don't.  I mean, it's absurd.  We have L1 cache, L2
cache, L3 cache, gigahertz and megabytes, and it all goes to waste
because *nix wants to write /var/log to disk - disk access being the
absolute worst single performance killer.  So yeah, forgive me if I'm
too fond of RAM.
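
Concretely it's one fstab line - the size is a guess, and yes, logs
evaporate at reboot, which on a desktop I'd call a feature:

    # /etc/fstab - keep the log chatter off the spinning disk
    tmpfs   /var/log   tmpfs   size=64m,mode=0755   0 0

Some daemons expect their subdirectories under /var/log to exist, so a
little boot script to recreate them may be needed.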

Even Linus himself finally woke up recently about the atime
default..."hey guys, what a ridiculous self-defeating behavior!" or
words to that effect.  It only took 15 years, too...
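
Until the defaults catch up, it's one mount option away - noatime is
real and long-standing; whether you want it on every filesystem is your
call:

    # /etc/fstab - stop turning every read into a write
    /dev/sda1   /   xfs   noatime   0 1

    # or, on a running system:
    #   mount -o remount,noatime /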

There is a lot of room for improvement in Linux...sort of at the "wow
that was dumb" level...

Well, bye 'til beta time, then...
-- 
  
  davecode@nospammail.net
