Date: Fri, 26 Jan 2007 14:32:04 -0500
From: Sean Cook
To: gentoo-server@lists.gentoo.org
Subject: Re: [gentoo-server] SAN Clustered Filesystem
Message-ID: <20070126193204.GA14401@gandalf.squishychicken.com>
In-Reply-To: <45B92F9B.8080103@wisc.edu>

Sorry... I didn't understand exactly what you were trying to do. If you
want to present the same disks to multiple hosts, you would generally
use a cluster "aware" application or a cluster "aware" filesystem that
establishes quorum and, with heartbeat, gives you a more truly
clustered environment. Going back to GFS, it is actually capable of
doing just that. So I guess we are back to GFS :)

http://mail.digicola.com/wiki/index.php?title=User:Martin:GFS
http://www.yolinux.com/TUTORIALS/LinuxClustersAndFileSystems.html

Those should give you enough reading to get started... Most of the
applications we do this with are either read-only data sets that don't
need a clustered filesystem or Oracle databases that use OCFS, so I can
only speak in theory here.
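To give you an idea of the moving parts, here is roughly what the GFS
side looks like with the Red Hat cluster tools. Take it as a sketch
only, since as I said I haven't run this setup myself; the cluster name
(webfarm), filesystem name (webdata), and device are invented:

  # On one node, with cman and fencing running and every node that will
  # mount the volume listed in /etc/cluster/cluster.conf:
  gfs_mkfs -p lock_dlm -t webfarm:webdata -j 2 /dev/sdb1

  # Then on each node:
  mount -t gfs /dev/sdb1 /var/www

The -t argument is cluster:fsname and has to match the cluster name in
cluster.conf, and -j creates one journal per node that will mount the
filesystem. The lock_dlm protocol is what gives you the cluster-wide
locking that a plain iSCSI or fibre attach doesn't.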
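The OCFS2 side (the current version of OCFS) is similar. Again just a
sketch with the same invented names; OCFS2 brings its own cluster
stack, so you would describe the nodes in /etc/ocfs2/cluster.conf and
bring the o2cb service online first:

  mkfs.ocfs2 -L webdata -N 2 /dev/sdb1
  mount -t ocfs2 /dev/sdb1 /var/www

Here -N is the number of node slots, i.e. how many nodes can have the
filesystem mounted at once.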
Beyond that, I would check out what Veritas products can do... they
have an amazing track record on almost all *nix platforms (Solaris,
HP-UX) and a lot of clustering capability.

Regards,

Sean

On 25-Jan-2007, Brian Kroth wrote:
> Paul Kölle wrote:
> >Sean Cook schrieb:
> >>GFS is ok if you don't want to mess around with a SAN, but it has
> >>nowhere near the performance of Fibre Channel or iSCSI attached
> >>storage.
> >Aren't those apples and oranges? I thought iSCSI is a block-level
> >protocol and doesn't do locking and such, whereas GFS does...
>
> This is what I was getting at. I know the basics of working with the
> SAN to get a set of machines to at least see a storage array. The next
> step is getting them to read and write to, say, the same file on a
> filesystem on that storage array without stepping on each other's toes
> or corrupting the filesystem that lives on top of that storage array.
> That's where I haven't learned too much yet.
>
> I hadn't actually planned on using the SAN to boot off of, but that
> might be an option for easier configuration/software management. I
> simply wanted to use it almost as if it were an NFS mount that a group
> of servers stored web content on. The problem I had with that model is
> that the NFS server is a single point of failure. If, on the other
> hand, all the servers are directly attached to the data, any one of
> them can go down and the others won't care or notice. At least that's
> the working theory behind it right now.
--
gentoo-server@gentoo.org mailing list