* [gentoo-user] Linux Fiber SAN
From: Nick Khamis @ 2013-06-11 14:19 UTC
To: gentoo-user
Hello Everyone,
I was wondering what people are running these days, and how they compare
to the 10,000 dollar SAN boxes. We are looking to build a fiber SAN using
IET and GlusterFS, and was wondering what kind of luck people were having
with this approach, or any other, for that matter.
Kind Regards,
Nick.
* Re: [gentoo-user] Linux Fiber SAN
From: Norman Rieß @ 2013-06-12 4:57 UTC
To: gentoo-user
On 11.06.2013 16:19, Nick Khamis wrote:
> Hello Everyone,
>
> I was wondering what people are running these days, and how they
> compare to the 10,000 dollar SAN boxes. We are looking to build a fiber
> SAN using IET and GlusterFS, and was wondering what kind of luck people
> were having with this approach, or any other, for that matter.
>
> Kind Regards,
>
> Nick.
Hello Nick,
The question is: what are you doing with it, and why do you think you
need a Fibre Channel SAN?
Our goal is in fact to get rid of the SAN infrastructure, as it is
vulnerable to all kinds of failure with nearly zero fault tolerance.
An example: you have a hiccup or a power failure in your network. The SAN
is dead from then on and must be reinitialized on the server. Simple NFS
comes back up without any fuss.
Another: you reboot your storage systems for an OS update or something
like that. Your SAN will be dead. NFS will just go on as if nothing
happened.
We use NetApp storage systems, which are both NAS and SAN capable.
Another point is that if you have a SAN LUN, there is often no way to
increase or decrease its size on the fly; with CIFS or NFS you can resize
your share on the go.
So if you do not have a _really_ good reason to use a Fibre Channel
SAN, don't!
Regards,
Norman
* Re: [gentoo-user] Linux Fiber SAN
From: Dan Johansson @ 2013-06-12 6:33 UTC
To: gentoo-user
On 12.06.2013 06:57, Norman Rieß wrote:
> On 11.06.2013 16:19, Nick Khamis wrote:
>> Hello Everyone,
>>
>> I was wondering what people are running these days, and how they
>> compare to the 10,000 dollar SAN boxes. We are looking to build a fiber
>> SAN using IET and GlusterFS, and was wondering what kind of luck people
>> were having with this approach, or any other, for that matter.
>
> The question is: what are you doing with it, and why do you think you
> need a Fibre Channel SAN?
> Our goal is in fact to get rid of the SAN infrastructure, as it is
> vulnerable to all kinds of failure with nearly zero fault tolerance.
> An example: you have a hiccup or a power failure in your network. The SAN
> is dead from then on and must be reinitialized on the server. Simple NFS
> comes back up without any fuss.
> Another: you reboot your storage systems for an OS update or something
> like that. Your SAN will be dead. NFS will just go on as if nothing
> happened.
> We use NetApp storage systems, which are both NAS and SAN capable.
> Another point is that if you have a SAN LUN, there is often no way to
> increase or decrease its size on the fly; with CIFS or NFS you can resize
> your share on the go.
>
> So if you do not have a _really_ good reason to use a Fibre Channel
> SAN, don't!
Hello,
I tend to disagree. A correctly designed SAN (using dual fabrics, among
other things) is a lot more stable and has a lot better performance than
any NAS (NFS, CIFS, iSCSI) solution. Another thing that needs to be
correctly configured for a stable SAN infrastructure is the servers on it
(multipathing, partition alignment, queue depth, ...), according to the
storage vendor's recommendations.
LUN expansion/shrinking is storage-vendor specific; some cannot do it
(NetApp, apparently), but others can.
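To make the partition alignment point concrete, here is a tiny Python
sketch of the usual check, i.e. whether a partition's start offset lands
on the array's stripe boundary (the sector and stripe values below are
just examples, not a vendor recommendation):

    SECTOR_SIZE = 512  # bytes per logical sector

    def is_aligned(start_sector, stripe_kib):
        """True if the partition start offset is a multiple of the array stripe size."""
        return (start_sector * SECTOR_SIZE) % (stripe_kib * 1024) == 0

    # Modern partitioners start at sector 2048 (1 MiB), which lines up with a 64 KiB stripe:
    print(is_aligned(2048, 64))  # True
    # The old DOS default of sector 63 does not, so I/O may straddle two stripes:
    print(is_aligned(63, 64))    # False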
Just my 2 cents.
Regards,
--
Dan Johansson, <http://www.dmj.nu>
***************************************************
This message is printed on 100% recycled electrons!
***************************************************
* Re: [gentoo-user] Linux Fiber SAN
From: Norman Rieß @ 2013-06-12 7:21 UTC
To: gentoo-user
On 12.06.2013 08:33, Dan Johansson wrote:
> On 12.06.2013 06:57, Norman Rieß wrote:
>> On 11.06.2013 16:19, Nick Khamis wrote:
>>> Hello Everyone,
>>>
>>> I was wondering what people are running these days, and how they
>>> compare to the 10,000 dollar SAN boxes. We are looking to build a fiber
>>> SAN using IET and GlusterFS, and was wondering what kind of luck people
>>> were having with this approach, or any other, for that matter.
>>
>> The question is: what are you doing with it, and why do you think you
>> need a Fibre Channel SAN?
>> Our goal is in fact to get rid of the SAN infrastructure, as it is
>> vulnerable to all kinds of failure with nearly zero fault tolerance.
>> An example: you have a hiccup or a power failure in your network. The SAN
>> is dead from then on and must be reinitialized on the server. Simple NFS
>> comes back up without any fuss.
>> Another: you reboot your storage systems for an OS update or something
>> like that. Your SAN will be dead. NFS will just go on as if nothing
>> happened.
>> We use NetApp storage systems, which are both NAS and SAN capable.
>> Another point is that if you have a SAN LUN, there is often no way to
>> increase or decrease its size on the fly; with CIFS or NFS you can resize
>> your share on the go.
>>
>> So if you do not have a _really_ good reason to use a Fibre Channel
>> SAN, don't!
>
> Hello,
>
> I tend to disagree. A correctly designed SAN (using dual fabrics, among
> other things) is a lot more stable and has a lot better performance than
> any NAS (NFS, CIFS, iSCSI) solution. Another thing that needs to be
> correctly configured for a stable SAN infrastructure is the servers on it
> (multipathing, partition alignment, queue depth, ...), according to the
> storage vendor's recommendations.
> LUN expansion/shrinking is storage-vendor specific; some cannot do it
> (NetApp, apparently), but others can.
>
> Just my 2 cents.
>
> Regards,
>
Hello,
You are right, I did not elaborate on our SAN setup, but dual fabrics,
correctly configured HBAs, proper timeout settings, multipathing,
alignment and proper block sizes were all taken care of.
And yes, it is stable as long as there is no glitch in power, network,
etc., and no maintenance is due. Here NFS is far more fault tolerant.
Our servers are equipped with 10GE ports, which are bonded. Performance
is not the issue. Furthermore, the configuration is far easier and more
robust.
According to the roadmaps, Ethernet will soon outperform SAN
infrastructure by a wide margin.
Oh, you can resize the LUN, but on the server side you have a block
device exposed and need to unmount, resize (if possible) and mount again.
On NFS it is a df showing the old size, a resize on the storage side, and
a df showing the new size, with no service downtime.
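As a rough Python sketch of the difference described above (the device
name, mount point and ext4 tools are assumed examples; newer ext
filesystems can actually be grown online, though shrinking still needs
the offline cycle):

    import os
    import subprocess

    def grow_fs_on_lun(device, mountpoint):
        """The unmount/resize/mount cycle after the storage side has grown the LUN.
        The service using the mount is down for the duration."""
        subprocess.run(["umount", mountpoint], check=True)
        subprocess.run(["e2fsck", "-f", device], check=True)   # required before an offline resize
        subprocess.run(["resize2fs", device], check=True)      # grow the filesystem to the new LUN size
        subprocess.run(["mount", device, mountpoint], check=True)

    def nfs_share_size(mountpoint):
        """With NFS the server-side resize simply shows up in the next df/statvfs."""
        st = os.statvfs(mountpoint)
        return st.f_blocks * st.f_frsize  # already reflects the new share size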
Regards,
Norman
* Re: [gentoo-user] Linux Fiber SAN
From: Nick Khamis @ 2013-06-12 14:20 UTC
To: gentoo-user
>
> Hello Nick,
>
> The question is: what are you doing with it, and why do you think you
> need a Fibre Channel SAN?
> Our goal is in fact to get rid of the SAN infrastructure, as it is
> vulnerable to all kinds of failure with nearly zero fault tolerance.
> An example: you have a hiccup or a power failure in your network. The SAN
> is dead from then on and must be reinitialized on the server. Simple NFS
> comes back up without any fuss.
> Another: you reboot your storage systems for an OS update or something
> like that. Your SAN will be dead. NFS will just go on as if nothing
> happened.
> We use NetApp storage systems, which are both NAS and SAN capable.
> Another point is that if you have a SAN LUN, there is often no way to
> increase or decrease its size on the fly; with CIFS or NFS you can resize
> your share on the go.
>
> So if you do not have a _really_ good reason to use a Fibre Channel
> SAN, don't!
>
> Regards,
> Norman
>
>
Hello Norman,
Thank you so much for your response. That is very interesting! We
currently use NFS to house home directories etc., and I love how it
just bloody works! We do however need block-level sharing. The idea is
the typical host with multiple VMs whose virtual HDDs reside on a SAN.
We figured fibre would give us better performance (for the time being!).
It was my understanding that a SAN, whether implemented using iSCSI
or Fibre Channel, is essentially susceptible to the same types
of faults that lead to those failures? The only difference being, of
course, that one is on fibre and the other uses Ethernet. Given the price
of fibre right now, it's quite cheap, and we thought: double the
throughput, why not?
We could have the VMs taking storage from DAS, and mount an
external NFS share for home/ etc. Not sure how it would perform in terms
of IO rates, and also, the idea of block-level allocation just seems so
much cleaner, no?
P.S. I am new to SANs, please excuse me.
Kind Regards,
Nick
* Re: [gentoo-user] Linux Fiber SAN
From: Alan McKinnon @ 2013-06-12 14:53 UTC
To: gentoo-user
On 12/06/2013 16:20, Nick Khamis wrote:
> It was my understanding that a SAN, whether implemented using iSCSI
> or Fibre Channel, is essentially susceptible to the same types
> of faults that lead to those failures?
Old cynic speaking here:
Yes, they both have the same weak point: humans.
In my experience the only storage technology that ever let me down badly
was a decrepit, badly designed, locally-attached Arena POS.
The humans that *run* the storage failed me many times. The SAN never
deleted a LUN, the humans did - more than once.
If you are assessing risk, do keep that one in mind.
Other than that, no storage technology is really inherently better than
any other; some are just better suited to what you need and have budget for.
--
Alan McKinnon
alan.mckinnon@gmail.com
* Re: [gentoo-user] Linux Fiber SAN
From: Nick Khamis @ 2013-06-12 15:12 UTC
To: gentoo-user
>
> Hello,
>
> I tend to disagree. A correctly designed SAN (using dual fabrics, among
> other things) is a lot more stable and has a lot better performance than
> any NAS (NFS, CIFS, iSCSI) solution. Another thing that needs to be
> correctly configured for a stable SAN infrastructure is the servers on it
> (multipathing, partition alignment, queue depth, ...), according to the
> storage vendor's recommendations.
> LUN expansion/shrinking is storage-vendor specific; some cannot do it
> (NetApp, apparently), but others can.
>
> Just my 2 cents.
>
> Regards,
> --
> Dan Johansson, <http://www.dmj.nu>
> ***************************************************
> This message is printed on 100% recycled electrons!
> ***************************************************
>
Hello Dan,
Thank you so much. As mentioned earlier, I am new to SANs, and the approach
we are taking, given our limited budget, is to purchase an IBM server with
sufficient HDD bays and PCI slots, plug in a PCIe RAID card as well as an
HBA (or two, as you mentioned), install SCST or ESOS, and go from
there. Would you be kind enough to give more details about your SAN setup
with respect to HBAs, RAID adapters, software, etc.? I understand that you
could be using a black box from HP or the like, but just a general idea.
Kind Regards,
Nick.
* Re: [gentoo-user] Linux Fiber SAN
From: Nick Khamis @ 2013-06-12 15:45 UTC
To: gentoo-user
On Wed, Jun 12, 2013 at 10:53 AM, Alan McKinnon <alan.mckinnon@gmail.com> wrote:
> Old cynic speaking here:
>
> Yes, they both have the same weak point: humans.
>
> In my experience the only storage technology that ever let me down badly
> was a decrepit, badly designed, locally-attached Arena POS.
>
> The humans that *run* the storage failed me many times. The SAN never
> deleted a LUN, the humans did - more than once.
>
> If you are assessing risk, do keep that one in mind.
>
> Other than that, no storage technology is really inherently better than
> any other; some are just better suited to what you need and have budget
> for.
>
>
> --
> Alan McKinnon
> alan.mckinnon@gmail.com
>
>
>
Hello Alan,
Thanks for joining us! I am a big believer in KISS, and was also hoping
eventually to get some up-to-date, simple and efficient strategies for
deploying and managing SANs in a virtualized environment to mitigate
things like human error.
Things like zoning (using World Wide Names / N_Port ID Virtualization),
LUN mapping and masking, etc.
Using the typical architecture Host (VM1, VM2, VMn) <------> SAN <------->
Virtual Storage.
It would be interesting to know how people are handling the above, and
also flexible ways of backing up virtual storage drives, snapshots, etc.
Kind Regards,
Nick.
* Re: [gentoo-user] Linux Fiber SAN
From: Norman Rieß @ 2013-06-13 4:58 UTC
To: gentoo-user
On 12.06.2013 16:20, Nick Khamis wrote:
> Hello Nick,
>
> The question is: what are you doing with it, and why do you think you
> need a Fibre Channel SAN?
> Our goal is in fact to get rid of the SAN infrastructure, as it is
> vulnerable to all kinds of failure with nearly zero fault tolerance.
> An example: you have a hiccup or a power failure in your network. The SAN
> is dead from then on and must be reinitialized on the server. Simple NFS
> comes back up without any fuss.
> Another: you reboot your storage systems for an OS update or something
> like that. Your SAN will be dead. NFS will just go on as if nothing
> happened.
> We use NetApp storage systems, which are both NAS and SAN capable.
> Another point is that if you have a SAN LUN, there is often no way to
> increase or decrease its size on the fly; with CIFS or NFS you can resize
> your share on the go.
>
> So if you do not have a _really_ good reason to use a Fibre Channel
> SAN, don't!
>
> Regards,
> Norman
>
>
> Hello Norman,
>
> Thank you so much for your response. That is very interesting! We
> currently use NFS to house home directories etc., and I love how it
> just bloody works! We do however need block-level sharing. The idea is
> the typical host with multiple VMs whose virtual HDDs reside on a SAN.
> We figured fibre would give us better performance (for the time being!).
>
> It was my understanding that a SAN, whether implemented using iSCSI
> or Fibre Channel, is essentially susceptible to the same types
> of faults that lead to those failures? The only difference being, of
> course, that one is on fibre and the other uses Ethernet. Given the price
> of fibre right now, it's quite cheap, and we thought: double the
> throughput, why not?
>
> We could have the VMs taking storage from DAS, and mount an
> external NFS share for home/ etc. Not sure how it would perform in terms
> of IO rates, and also, the idea of block-level allocation just seems so
> much cleaner, no?
>
> P.S. I am new to SANs, please excuse me.
>
> Kind Regards,
>
> Nick
Hello,
Our setup is that we run pools of up to 20 hosts which all mount the
same NFS share, which holds sparse file images as the virtual HDDs of the
VMs. So live migration is possible, unlike keeping the VMs on local
storage.
Our newer clusters are equipped with hosts using 10 gigabit Ethernet.
Two 10GE ports are bonded to provide redundancy and balancing. Every
host has two bonds, one for the storage VLANs and one for the production
VLANs. Performance is not an issue.
Our older clusters do this with 1 gigabit Ethernet and three bonds.
We have some high-performance services and throughput has never been a problem.
So I recommend NFS. But it really depends on your preference.
Regards,
Norman
* Re: [gentoo-user] Linux Fiber SAN
From: Nick Khamis @ 2013-06-13 12:31 UTC
To: gentoo-user
Hello Norman,
Thank you so much for your response, and that is an interesting setup.
>> we run pools of up to 20 hosts which all mount the same NFS
>> share, which holds sparse file images as the virtual HDDs of the
>> VMs.
How are these sparse file images initially built for each VM's virtual HDD? And
can this process be automated?
>> So live migration is possible, unlike keeping the VMs on local
>> storage.
I can understand that.
>> Our newer clusters are equipped with hosts using 10 gigabit Ethernet.
>> Two 10GE ports are bonded to provide redundancy and balancing. Every
>> host has two bonds, one for the storage VLANs and one for the production
>> VLANs. Performance is not an issue.
Good network engineering.
I guess that with this setup, replication would also be handled by rsync? If so,
the potential of this setup really starts to shine.
WOW, from NAS to SAN?
Kind Regards,
Nick.
* Re: [gentoo-user] Linux Fiber SAN
From: Norman Rieß @ 2013-06-14 4:48 UTC
To: gentoo-user
On 13.06.2013 14:31, Nick Khamis wrote:
> Hello Norman,
>
> Thank you so much for your response, and that is an interesting setup.
>
>>> we run pools of up to 20 hosts which all mount the same NFS
>>> share, which holds sparse file images as the virtual HDDs of the
>>> VMs.
>
> How are these sparse file images initially built for each VM's virtual HDD? And
> can this process be automated?
There are many ways; in our case virt-install creates them automatically.
But you could just dd a file from /dev/zero or random (seeking past the
end rather than writing out every block, if you want it to stay sparse).
It is a raw sparse file; there is no internal logic behind it.
Of course this process can be fully automated.
We automated the complete installation process, so it now takes one
command to install and deploy a completely from-scratch VM in about
8 to 11 minutes.
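For illustration, a minimal Python sketch of creating such a raw sparse
image (the path and size are made-up examples; dd with seek=, the
truncate tool, or virt-install's own disk handling do the same job):

    import os

    def create_sparse_image(path, size_gib):
        """Create a raw sparse file to back a VM's virtual disk.
        No data blocks are allocated; unwritten regions read back as zeros."""
        with open(path, "wb") as f:
            f.truncate(size_gib * 1024 ** 3)
        st = os.stat(path)
        # apparent size vs. blocks actually allocated on the NFS share
        return st.st_size, st.st_blocks * 512

    # hypothetical image path on the shared NFS mount
    print(create_sparse_image("/srv/vm-images/vm01.img", 20))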
>
>>> So life migration is possible, other than holding the VMs on local
>>> storage.
>
> I can understand that.
>
>>> Our newer clusters are equipped with hosts using 10 gigabit Ethernet.
>>> Two 10GE ports are bonded to provide redundancy and balancing. Every
>>> host has two bonds, one for the storage VLANs and one for the production
>>> VLANs. Performance is not an issue.
>
> Good network engineering.
>
> I guess that with this setup, replication would also be handled by rsync? If so,
> the potential of this setup really starts to shine.
What do you mean by replication?
>
> WOW, from NAS to SAN?
>
>
> Kind Regards,
>
> Nick.
>
* Re: [gentoo-user] Linux Fiber SAN
From: Nick Khamis @ 2013-06-16 0:25 UTC
To: gentoo-user
Hello Norman,
Sorry for the delayed response.
>> What do you mean by replication?
Oh, I was referring to replication of the entire NFS server, with the virtual
drive images etc., to other machines for failover and maybe load balancing.
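Just to make the idea concrete, a rough sketch of pushing the image
directory to a standby machine with rsync (the hostnames and paths are
hypothetical, and running VMs would need to be snapshotted or paused
first; a real failover setup needs more than a periodic copy):

    import subprocess

    SRC = "/srv/vm-images/"          # hypothetical local image directory
    DST = "standby:/srv/vm-images/"  # hypothetical standby host

    def replicate_images():
        """One-shot rsync to the standby host: -a preserves metadata,
        -H keeps hard links, -S keeps the images sparse on the far side."""
        subprocess.run(["rsync", "-aHS", "--delete", SRC, DST], check=True)

    replicate_images()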
Kind Regards,
Nick.
* Re: [gentoo-user] Linux Fiber SAN
From: Nick Khamis @ 2013-06-16 0:28 UTC
To: gentoo-user
Is anyone using Hadoop for managing virtual machines and/or drives?
Kind Regards,
Nick.
* Re: [gentoo-user] Linux Fiber SAN
From: Norman Rieß @ 2013-06-17 4:40 UTC
To: gentoo-user
On 16.06.2013 02:25, Nick Khamis wrote:
> Hello Norman,
>
> Sorry for the delayed response.
>
>>> What do you mean by replication?
>
> Oh, I was referring to replication of the entire NFS server, with the virtual
> drive images etc., to other machines for failover and maybe load balancing.
>
> Kind Regards,
>
> Nick.
>
Hi,
The NFS server is a NetApp dual-head high-availability storage system,
which takes care of all this.
Regards,
Norman