From: Sean Cook <scook@kinex.net>
To: gentoo-server@lists.gentoo.org
Subject: Re: [gentoo-server] SAN Clustered Filesystem
Date: Fri, 26 Jan 2007 14:32:04 -0500
Message-ID: <20070126193204.GA14401@gandalf.squishychicken.com>
In-Reply-To: <45B92F9B.8080103@wisc.edu>
Sorry... I didn't understand exactly what you were trying to do. If you
want to present the same disks to multiple hosts, you would generally use a
cluster-aware application or a cluster-aware filesystem that establishes a
quorum and, with heartbeat, acts as a true clustered environment.
Going back to GFS, it is actually capable of doing just that. So I guess
we are back to GFS :)
http://mail.digicola.com/wiki/index.php?title=User:Martin:GFS
http://www.yolinux.com/TUTORIALS/LinuxClustersAndFileSystems.html
Those should give you enough reading to get started. Most of the applications
we do this with either serve read-only data sets and don't need a clustering
filesystem, or are Oracle databases that use OCFS, so I can only speak in
theory here. I would also check out what the Veritas products can do; they
have an amazing track record on almost all *nix platforms (Solaris, HP-UX)
and have a lot of clustering capabilities.
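To make the quorum idea concrete, here is a toy sketch (my own illustration,
not GFS code) of the majority rule a cluster filesystem enforces before it
lets a group of nodes keep writing:

```python
# Toy majority-quorum check: a partition of the cluster may only keep
# the shared filesystem writable while it can see a strict majority of
# the configured nodes. This prevents "split brain", where two isolated
# halves each believe they own the disk and both write to it.

def has_quorum(visible_nodes: int, total_nodes: int) -> bool:
    """True if this partition holds a strict majority of the votes."""
    return visible_nodes > total_nodes // 2

print(has_quorum(3, 5))  # True: 3 of 5 is a majority, safe to stay mounted
print(has_quorum(2, 4))  # False: a 2/2 split could leave two writing halves
```

Real implementations (GFS's cluster manager, Veritas VCS) add fencing on
top of this: the quorate side forcibly cuts the losing nodes off from the
disk before it continues writing.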
Regards,
sean
On 25-Jan-2007, Brian Kroth wrote:
>
>
> paul kölle wrote:
> >Sean Cook schrieb:
> >>
> >>GFS is ok if you don't want to mess around with a SAN, but it has nowhere
> >>near the performance of fiber- or iSCSI-attached storage.
> >Aren't those apples and oranges? I thought iSCSI is a block level
> >protocol and doesn't do locking and such whereas GFS does...
>
> This is what I was getting at. I know the basics of working with the
> SAN to get a set of machines to at least see a storage array. The next
> step is getting them to read and write to, say, the same file on a
> filesystem on that storage array without stepping on each other's toes or
> corrupting the filesystem that lives on top of that storage array.
> That's where I haven't learned too much yet.
>
> I hadn't actually planned on using the SAN to boot off of, but that
> might be an option for easier configuration/software management. I
> simply wanted to use it almost as if it were an NFS mount that a group
> of servers stored web content on. The problem I had with that model is
> that the NFS server is a single point of failure. If on the other hand
> all the servers are directly attached to the data, any one of them can
> go down and the others won't care or notice. At least that's the
> working theory behind it right now.
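That "stepping on each other's toes" worry is exactly a locking problem.
Here is a single-host sketch of it (my own illustration, using Python's
fcntl advisory locks, not cluster code): flock() serializes writers that
share one kernel, and a cluster filesystem like GFS provides the equivalent
across hosts through its distributed lock manager, which is precisely what
a plain ext3 on a shared LUN lacks.

```python
# Single-host analogy for the coordination a cluster filesystem adds.
# fcntl.flock() serializes writers that share one kernel; GFS does the
# same job across hosts with a distributed lock manager. A plain ext3
# on a SAN LUN mounted by several hosts has neither, hence corruption.
import fcntl
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "shared.txt")

def append_record(text: str) -> None:
    """Append one line, holding an exclusive advisory lock meanwhile."""
    with open(path, "a") as f:
        fcntl.flock(f, fcntl.LOCK_EX)   # blocks until we own the lock
        f.write(text + "\n")            # critical section
        fcntl.flock(f, fcntl.LOCK_UN)

for i in range(3):
    append_record("record %d" % i)

with open(path) as f:
    print(f.read().splitlines())  # ['record 0', 'record 1', 'record 2']
```

Note the limitation: these locks only coordinate processes on one machine,
so they do nothing for two hosts writing the same raw block device.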
--
gentoo-server@gentoo.org mailing list
Thread overview: 11+ messages
2007-01-25 19:59 [gentoo-server] SAN Clustered Filesystem Brian Kroth
2007-01-25 20:15 ` Sean Cook
2007-01-25 20:24 ` Brian Kroth
2007-01-25 20:34 ` Sean Cook
2007-01-25 22:01 ` paul kölle
2007-01-25 22:07 ` Sean Cook
2007-01-25 22:30 ` Brian Kroth
2007-01-26 19:32 ` Sean Cook [this message]
2007-01-26 13:42 ` [gentoo-server] Re: [gentoo-cluster] " Ramon van Alteren
2007-02-04 15:01 ` [gentoo-server] " Kevin
2007-02-04 16:04 ` Brian Kroth