* [gentoo-user] {OT} Need a new server
From: Grant @ 2013-09-13 20:00 UTC
To: Gentoo mailing list
It's time to switch hosts. I'm looking at the following:
Dual Xeon E5-2690
32GB RAM
4x SSD RAID10
This would be my first experience with multiple CPUs and RAID. Advice
on any of the following would be greatly appreciated.
Are there any administrative variations for a dual-CPU system or do I
just need to make sure I enable the right kernel option(s)?
Is the Gentoo Software RAID + LVM guide the best place for RAID
install info if I'm not using LVM and I'll have a hardware RAID
controller?
http://www.gentoo.org/doc/en/gentoo-x86+raid+lvm2-quickinstall.xml
Since RAM is so nice for buffers/cache, how do I know when to stop
adding it to my server?
Can I count on this system to keep running if I lose an SSD?
Is a 100M uplink enough if this is my only system on the LAN?
Is hyperthreading worthwhile?
Any opinions on Soft Layer?
- Grant
* Re: [gentoo-user] {OT} Need a new server
From: Alan McKinnon @ 2013-09-13 20:39 UTC
To: gentoo-user
On 13/09/2013 22:00, Grant wrote:
> It's time to switch hosts. I'm looking at the following:
>
> Dual Xeon E5-2690
> 32GB RAM
> 4x SSD RAID10
>
> This would be my first experience with multiple CPUs and RAID. Advice
> on any of the following would be greatly appreciated.
>
> Are there any administrative variations for a dual-CPU system or do I
> just need to make sure I enable the right kernel option(s)?
Just use the right kernel options; nothing special needs to be done.
Individual packages may or may not benefit from lots of CPUs, and such
packages must of course be configured individually.
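For instance, a quick sanity check on a running box (a sketch; it assumes
your kernel exposes its config via CONFIG_IKCONFIG_PROC):

  $ zgrep -E 'CONFIG_(SMP|NR_CPUS|NUMA)' /proc/config.gz
  $ grep -c ^processor /proc/cpuinfo   # counts every core/thread the kernel sees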
> Is the Gentoo Software RAID + LVM guide the best place for RAID
> install info if I'm not using LVM and I'll have a hardware RAID
> controller?
Exactly what RAID controller are you getting?
My personal rule of thumb: on-board RAID controllers are not worth the
silicon they are written on. Decent hardware raid controllers do exist,
but they plug into big meaty slots and cost a fortune. By "a fortune" I
mean a number that will make you gulp then head off to the nearest pub
and make the barkeep's day. (Expensive, very expensive).
Sans such decent hardware, best bet is always to do it using Linux
software RAID, and the Gentoo guide is a fine start.
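For reference, a 4-disk RAID10 with mdadm is a one-liner (a sketch only;
device names are hypothetical and you would partition the disks first):

  $ mdadm --create /dev/md0 --level=10 --raid-devices=4 \
      /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
  $ cat /proc/mdstat   # watch the initial sync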
> http://www.gentoo.org/doc/en/gentoo-x86+raid+lvm2-quickinstall.xml
>
> Since RAM is so nice for buffers/cache, how do I know when to stop
> adding it to my server?
When more RAM stops making a difference.
The proper answer to your question is "mu", meaning it can't really be
satisfactorily answered with the info available. Only you can really
answer it, and only after you have examined your system in
detail. But, assuming you will use this hardware for mostly routine
tasks, 32G RAM is heaps and should be plenty for a long time to come.
Nothing you've ever posted leads me to believe you need crazy amounts of
RAM. It's not like your business model is to, e.g., load every public blog
at wordpress.com with all comments and store it all in an in-memory
database :-)
>
> Can I count on this system to keep running if I lose an SSD?
Yes. If you do RAID even half-way right, you can always tolerate the
loss of one disk out of four. It's only if you do striping that you have
no redundancy at all.
>
> Is a 100M uplink enough if this is my only system on the LAN?
You mean 100M Ethernet, right?

100M is actually a lot of traffic. However, if you have a file server
with big files (> 1G) on it, it can become a drag waiting that extra
minute to push 1G through the network.

Your NICs on that hardware are 99.9% guaranteed to be 1G. It is well
worth the money to replace your switch with a 1000Mb model and invest in
decent cables. It's not expensive (a fraction of what that hardware will
cost) and you will be glad you did it, even if all the other clients are
100M.

The law of diminishing returns doesn't apply here. It's a whole lot of
bang for relatively little buck.
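Back-of-envelope arithmetic, assuming you get roughly 80% of the wire
speed in practice:

  $ for mbps in 100 1000; do echo "$mbps Mb/s: ~$((8192 / (mbps * 8 / 10))) s per GiB"; done

That is on the order of 100 seconds per GiB at 100M versus about 10 at
gigabit.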
>
> Is hyperthreading worthwhile?
Yes. Horror stories about hyperthreading being bad and badly implemented
date back to 2004 or thereabouts. All that stuff got fixed.
Some software out there does not like current hyperthreading models, but
these are a) rather specialized and b) the issue is known and the vendor
will tell you upfront.
Software that uses threads in the modern style tends to fly if
hyperthreading is available. But again, this is a very general answer
and YMMV.
>
> Any opinions on Soft Layer?
Never heard of it.
What is it?
--
Alan McKinnon
alan.mckinnon@gmail.com
* Re: [gentoo-user] {OT} Need a new server
From: Michael Orlitzky @ 2013-09-13 20:44 UTC
To: gentoo-user
On 09/13/2013 04:00 PM, Grant wrote:
> It's time to switch hosts. I'm looking at the following:
>
> Dual Xeon E5-2690
> 32GB RAM
> 4x SSD RAID10
>
> This would be my first experience with multiple CPUs and RAID. Advice
> on any of the following would be greatly appreciated.
>
> Are there any administrative variations for a dual-CPU system or do I
> just need to make sure I enable the right kernel option(s)?
Just enable it in the kernel.
> Is the Gentoo Software RAID + LVM guide the best place for RAID
> install info if I'm not using LVM and I'll have a hardware RAID
> controller?
>
> http://www.gentoo.org/doc/en/gentoo-x86+raid+lvm2-quickinstall.xml
No need. Hardware RAID is handled on the RAID controller. Gentoo won't
even know about it.
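All the kernel needs is the controller's low-level driver, and the whole
array then appears as one ordinary disk. A sketch of what to look for
(the exact driver depends on the card; aacraid covers many Adaptec
boards):

  $ lspci | grep -i raid            # identify the controller
  $ zgrep AACRAID /proc/config.gz   # e.g. CONFIG_SCSI_AACRAID for Adaptec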
LVM is (optionally) up to you.
> Since RAM is so nice for buffers/cache, how do I know when to stop
> adding it to my server?
Run `htop` every once in a while. If you're using it all and you're not
out of money, add more RAM. Otherwise, stop.
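You can also check non-interactively; on the `free` output it's the
"-/+ buffers/cache" line that shows what applications actually use:

  $ free -m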
> Can I count on this system to keep running if I lose an SSD?
Yes. RAID10 both stripes and mirrors. So you can lose one, and it's
definitely mirrored on another drive. Now you have three drives. If you
lose another one, is it mirrored? Well, maybe, if you're lucky. There's
a 2/3 chance that the second drive you lose will be one of the remaining
mirror pair.
Recommendation: add a hot spare to the system.
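With Linux software RAID that's a single command, for what it's worth;
hardware controllers do the same thing from their own BIOS or CLI
(device names hypothetical):

  $ mdadm --add /dev/md0 /dev/sde1   # a disk beyond the active set becomes a spare
  $ mdadm --detail /dev/md0          # should now list it as a spare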
> Is a 100M uplink enough if this is my only system on the LAN?
If you're using it all and you're not out of money, add more bandwidth.
Otherwise, stop.
* Re: [gentoo-user] {OT} Need a new server
From: Grant @ 2013-09-13 21:39 UTC
To: Gentoo mailing list
>> It's time to switch hosts. I'm looking at the following:
>>
>> Dual Xeon E5-2690
>> 32GB RAM
>> 4x SSD RAID10
>>
>> This would be my first experience with multiple CPUs and RAID. Advice
>> on any of the following would be greatly appreciated.
>>
>> Is the Gentoo Software RAID + LVM guide the best place for RAID
>> install info if I'm not using LVM and I'll have a hardware RAID
>> controller?
>
> Exactly what RAID controller are you getting?
>
> My personal rule of thumb: on-board RAID controllers are not worth the
> silicon they are written on. Decent hardware raid controllers do exist,
> but they plug into big meaty slots and cost a fortune. By "a fortune" I
> mean a number that will make you gulp then head off to the nearest pub
> and make the barkeep's day. (Expensive, very expensive).
>
> Sans such decent hardware, best bet is always to do it using Linux
> software RAID, and the Gentoo guide is a fine start.
I'm told it will likely be an "Adaptec 7000 series controller".
>> Since RAM is so nice for buffers/cache, how do I know when to stop
>> adding it to my server?
>
> When more RAM stops making a difference.
>
> The proper answer to your question is "mu", meaning it can't really be
> satisfactorily answered with the info available. Only you can really
> answer it, and only after you have examined your system in
> detail. But, assuming you will use this hardware for mostly routine
> tasks, 32G RAM is heaps and should be plenty for a long time to come.
>
> Nothing you've ever posted leads me to believe you need crazy amounts of
> RAM. It's not like your business model is to, e.g., load every public blog
> at wordpress.com with all comments and store it all in an in-memory
> database :-)
In that case maybe I'll go with 16GB instead. It's easy to add more
later I suppose.
>> Any opinions on Soft Layer?
>
> Never heard of it.
> What is it?
It's a host in the US. I should have said so.
http://www.softlayer.com
- Grant
* Re: [gentoo-user] {OT} Need a new server
From: Grant @ 2013-09-13 21:47 UTC
To: Gentoo mailing list
>> It's time to switch hosts. I'm looking at the following:
>>
>> Dual Xeon E5-2690
>> 32GB RAM
>> 4x SSD RAID10
>>
>> This would be my first experience with multiple CPUs and RAID. Advice
>> on any of the following would be greatly appreciated.
>>
>> Is the Gentoo Software RAID + LVM guide the best place for RAID
>> install info if I'm not using LVM and I'll have a hardware RAID
>> controller?
>>
>> http://www.gentoo.org/doc/en/gentoo-x86+raid+lvm2-quickinstall.xml
>
> No need. Hardware RAID is handled on the RAID controller. Gentoo won't
> even know about it.
I had no idea. How awesome. So the entire array shows up as /dev/sda
when using a real hardware controller? Just enable an extra kernel
config option or two and it works?
>> Can I count on this system to keep running if I lose an SSD?
>
> Yes. RAID10 both stripes and mirrors. So you can lose one, and it's
> definitely mirrored on another drive. Now you have three drives. If you
> lose another one, is it mirrored? Well, maybe, if you're lucky. There's
> a 2/3 chance that the second drive you lose will be one of the remaining
> mirror pair.
>
> Recommendation: add a hot spare to the system.
Would the hot spare be in case I lose 2 drives at once? Isn't that
extraordinarily unlikely?
Are modern SSDs reliable enough to negate the need for mirroring or do
they still crap out?
- Grant
* Re: [gentoo-user] {OT} Need a new server
From: thegeezer @ 2013-09-13 21:58 UTC
To: gentoo-user
On 09/13/2013 09:00 PM, Grant wrote:
> It's time to switch hosts. I'm looking at the following:
>
> Dual Xeon E5-2690
> 32GB RAM
> 4x SSD RAID10
nice
> Can I count on this system to keep running if I lose an SSD?
If it's a built-in RAID controller, yes. One thing you might want to check is
Linux tools for management -- you wouldn't want to reboot just to go
into the RAID tools and check if it requires a rebuild, and you want to
be able to schedule regular scrubs and maybe get a report.
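For an Adaptec card that would be the arcconf CLI -- a sketch, with the
controller and logical-drive numbers assumed:

  # /etc/cron.weekly/raid-verify (sketch)
  arcconf task start 1 logicaldrive 0 verify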
You might also like to consider OOB management such as IPMI; Dell and HP
do very lovely web-based control panels that are independent of your
main OS, allowing you to get alerts when bad things happen and,
crucially, watch the reboot process from remote locations.
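Once a BMC is present, the basics are scriptable from the OS side too --
a sketch using ipmitool (it assumes the kernel's IPMI modules are
loaded):

  $ ipmitool sensor list     # temperatures, fans, voltages and thresholds
  $ ipmitool sel list        # the hardware event log
  $ ipmitool chassis status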
>
> Is a 100M uplink enough if this is my only system on the LAN?
Gigabit NICs are pretty cheap; I'd be surprised if any new machine didn't
have gigabit. I would suggest that if you ever want to transfer more than
10GB across the network, you should request gigabit.
>
> Is hyperthreading worthwhile?
>
> Any opinions on Soft Layer?
>
> - Grant
Are you putting this server in colocation at SoftLayer? If so, OOB is a
requirement, and gigabit is not.
* Re: [gentoo-user] {OT} Need a new server
From: Grant @ 2013-09-13 22:14 UTC
To: Gentoo mailing list
>> It's time to switch hosts. I'm looking at the following:
>>
>> Dual Xeon E5-2690
>> 32GB RAM
>> 4x SSD RAID10
> nice
>> Can I count on this system to keep running if I lose an SSD?
> If it's a built-in RAID controller, yes. One thing you might want to check is
> Linux tools for management -- you wouldn't want to reboot just to go
> into the RAID tools and check if it requires a rebuild, and you want to
> be able to schedule regular scrubs and maybe get a report.
> You might also like to consider OOB management such as IPMI; Dell and HP
> do very lovely web-based control panels that are independent of your
> main OS, allowing you to get alerts when bad things happen and,
> crucially, watch the reboot process from remote locations.
Good idea, I will look into IPMI.
>> Is a 100M uplink enough if this is my only system on the LAN?
> Gigabit NICs are pretty cheap; I'd be surprised if any new machine didn't
> have gigabit. I would suggest that if you ever want to transfer more than
> 10GB across the network, you should request gigabit.
I should be OK with 100M. I shouldn't be copying anything across the LAN.
>> Any opinions on Soft Layer?
>>
>> - Grant
> Are you putting this server in colocation at SoftLayer? If so, OOB is a
> requirement, and gigabit is not.
I decided against colocation because I don't want to be responsible
for fixing hardware problems. It would be a hosted machine.
- Grant
* Re: [gentoo-user] {OT} Need a new server
From: Peter Humphrey @ 2013-09-13 22:47 UTC
To: gentoo-user
On Friday 13 Sep 2013 14:47:35 Grant wrote:
> Would the hot spare be in case I lose 2 drives at once? Isn't that
> extraordinarily unlikely?
Not really. One fails and you don't notice for a while, or it takes a while to
recover from it. Then a second one fails. You're up queer street.
--
Regards,
Peter
* Re: [gentoo-user] {OT} Need a new server
From: Daniel Frey @ 2013-09-13 22:54 UTC
To: gentoo-user
On 09/13/2013 03:47 PM, Peter Humphrey wrote:
> On Friday 13 Sep 2013 14:47:35 Grant wrote:
>
>> Would the hot spare be in case I lose 2 drives at once? Isn't that
>> extraordinarily unlikely?
>
> Not really. One fails and you don't notice for a while, or it takes a while to
> recover from it. Then a second one fails. You're up queer street.
>
I like to do RAID6 now because I've been burned by this. The hot spare
did work and automatically start rebuilding, but another drive failed
during the rebuild process. Not that RAID6 will help if three drives
fail, but hey.
Another thing I've read is that firmware bugs on SSDs can wipe out a
whole array. I suspect it happens when the RAID has all the same
manufacturer/model in it and a bug appears on multiple drives at once,
killing the array. I can't remember the details, but I believe the
rebuild procedure causes lots of writes and the drives bug out because
of all the writes. I'll admit this is not something I've seen directly,
but you may want to consider it -- maybe even have two sets of two
different models in the array. My google-fu is failing me; I can't find
the article where I read this.
Dan
* Re: [gentoo-user] {OT} Need a new server
From: Michael Orlitzky @ 2013-09-13 23:17 UTC
To: gentoo-user
On 09/13/2013 05:47 PM, Grant wrote:
>
> I had no idea. How awesome. So the entire array shows up as /dev/sda
> when using a real hardware controller? Just enable an extra kernel
> config option or two and it works?
>
Yep.
>> Yes. RAID10 both stripes and mirrors. So you can lose one, and it's
>> definitely mirrored on another drive. Now you have three drives. If you
>> lose another one, is it mirrored? Well, maybe, if you're lucky. There's
>> a 2/3 chance that the second drive you lose will be one of the remaining
>> mirror pair.
>>
>> Recommendation: add a hot spare to the system.
>
> Would the hot spare be in case I lose 2 drives at once?
It's just to minimize the amount of time that you're running with a
busted drive. The RAID controller will switch to the hot spare
automatically without any human intervention, so you only have to keep
your fingers crossed for e.g. 3 hours while the array rebuilds. This is
as opposed to 3 hours + (however long it took the admin to notice that a
drive has failed).
> Isn't that extraordinarily unlikely?
If the failures were random, yes, but they aren't -- they just seem that
way. The drives that you use in a hardware RAID array should ideally be
exactly the same size and have the same firmware. It's therefore not
uncommon to wind up with a set of drives that all came off the same
manufacturing line at around the same time.
If there's a minor defect in a component, like say a solder joint that
melts at too low a temperature, then the drives are all much more likely
to fail at around the same time as the first one.
> Are modern SSDs reliable enough to negate the need for mirroring or do
> they still crap out?
I don't have any experience with SSDs, but a general principle: ignore
what anyone says, mirror them anyway, and make lots of backups.
* Re: [gentoo-user] {OT} Need a new server
From: Alan McKinnon @ 2013-09-14 8:14 UTC
To: gentoo-user
On 13/09/2013 23:39, Grant wrote:
>> Exactly what RAID controller are you getting?
>>
>> My personal rule of thumb: on-board RAID controllers are not worth the
>> silicon they are written on. Decent hardware raid controllers do exist,
>> but they plug into big meaty slots and cost a fortune. By "a fortune" I
>> mean a number that will make you gulp then head off to the nearest pub
>> and make the barkeep's day. (Expensive, very expensive).
>>
>> Sans such decent hardware, best bet is always to do it using Linux
>> software RAID, and the Gentoo guide is a fine start.
> I'm told it will likely be an "Adaptec 7000 series controller".
>
I'm not familiar with that model, but the white paper at the vendor's
site indicates it's of the decent variety. You might as well use it then :-)
Adaptec's stuff is rather good on the whole; we use Dell exclusively, and
Adaptec is by far the most common controller shipped. I can recall only
one hardware failure or problem since 2003 across 300+ machines. The odds
are in your favour today :-)
--
Alan McKinnon
alan.mckinnon@gmail.com
* Re: [gentoo-user] {OT} Need a new server
From: Grant @ 2013-09-14 8:50 UTC
To: Gentoo mailing list
>>> Would the hot spare be in case I lose 2 drives at once? Isn't that
>>> extraordinarily unlikely?
>>
>> Not really. One fails and you don't notice for a while, or it takes a while to
>> recover from it. Then a second one fails. You're up queer street.
>
> I like to do RAID6 now because I've been burned by this. The hot spare
> did work and automatically start rebuilding, but another drive failed
> during the rebuild process. Not that RAID6 will help if three drives
> fail, but hey.
This article references the same scenario:
http://blog.open-e.com/why-a-hot-spare-hard-disk-is-a-bad-idea/
"Based on our long years of experience we have learned that during a
RAID rebuild the probability of an additional drive failure is quite
high – a rebuild is stressful on the existing drives."
Instead, how about a 6-drive RAID 10 array with no hot spare? My
guess is this would mean much greater fault-tolerance both overall and
during the rebuild process (once a new drive is swapped in). That
would mean not only potentially increased uptime but decreased
monitoring responsibility.
- Grant
* Re: [gentoo-user] {OT} Need a new server
From: Grant @ 2013-09-14 8:52 UTC
To: Gentoo mailing list
>> Are modern SSDs reliable enough to negate the need for mirroring or do
>> they still crap out?
>
> I don't have any experience with SSDs, but a general principle: ignore
> what anyone says, mirror them anyway, and make lots of backups.
I'm onboard with that.
- Grant
* Re: [gentoo-user] {OT} Need a new server
From: Grant @ 2013-09-14 8:54 UTC
To: Gentoo mailing list
>>> Exactly what RAID controller are you getting?
>>>
>>> My personal rule of thumb: on-board RAID controllers are not worth the
>>> silicon they are written on. Decent hardware raid controllers do exist,
>>> but they plug into big meaty slots and cost a fortune. By "a fortune" I
>>> mean a number that will make you gulp then head off to the nearest pub
>>> and make the barkeep's day. (Expensive, very expensive).
>>>
>>> Sans such decent hardware, best bet is always to do it using Linux
>>> software RAID, and the Gentoo guide is a fine start.
>> I'm told it will likely be an "Adaptec 7000 series controller".
>
> I'm not familiar with that model, but the white paper at the vendor's
> site indicates it's of the decent variety. You might as well use it then :-)
>
> Adaptec's stuff is rather good on the whole; we use Dell exclusively, and
> Adaptec is by far the most common controller shipped. I can recall only
> one hardware failure or problem since 2003 across 300+ machines. The odds
> are in your favour today :-)
Can a controller like that handle a 6-drive RAID 10 array?
Is a hot spare handled by the controller or is it configured in the OS?
- Grant
* [gentoo-user] Re: {OT} Need a new server
From: Grant @ 2013-09-14 8:59 UTC
To: Gentoo mailing list
> It's time to switch hosts. I'm looking at the following:
>
> Dual Xeon E5-2690
> 32GB RAM
> 4x SSD RAID10
If I make this 6x SSD RAID10 with redundant power supplies, what is my
weakest link as far as hardware? If a CPU craps out, will the system
keep running?
- Grant
* Re: [gentoo-user] {OT} Need a new server
From: Alan McKinnon @ 2013-09-14 9:02 UTC
To: gentoo-user
On 14/09/2013 10:54, Grant wrote:
>>>> Exactly what RAID controller are you getting?
>>>>
>>>> My personal rule of thumb: on-board RAID controllers are not worth the
>>>> silicon they are written on. Decent hardware raid controllers do exist,
>>>> but they plug into big meaty slots and cost a fortune. By "a fortune" I
>>>> mean a number that will make you gulp then head off to the nearest pub
>>>> and make the barkeep's day. (Expensive, very expensive).
>>>>
>>>> Sans such decent hardware, best bet is always to do it using Linux
>>>> software RAID, and the Gentoo guide is a fine start.
>>> I'm told it will likely be an "Adaptec 7000 series controller".
>>
>> I'm not familiar with that model, but the white paper at the vendor's
>> site indicates it's of the decent variety. You might as well use it then :-)
>>
>> Adaptec's stuff is rather good on the whole; we use Dell exclusively, and
>> Adaptec is by far the most common controller shipped. I can recall only
>> one hardware failure or problem since 2003 across 300+ machines. The odds
>> are in your favour today :-)
>
> Can a controller like that handle a 6-drive RAID 10 array?
>
> Is a hot spare handled by the controller or is it configured in the OS?
The problem with questions of that nature is that the answer is always
"It depends".

With hardware, the vendor can release almost any imaginable
configuration; it's up to them what they want to build into their
product, and the variations are endless.

Typically, a Series designation is a bunch of products built to a
certain form factor with the same basic silicon on board. The difference
between the models is how many drives they support and the feature list.
"Series 7000" tells us very little. You will need to get the exact model
number from your hardware vendor and then consult Adaptec's tech docs to
find out the supported feature set.
--
Alan McKinnon
alan.mckinnon@gmail.com
* Re: [gentoo-user] Re: {OT} Need a new server
From: Alan McKinnon @ 2013-09-14 9:10 UTC
To: gentoo-user
On 14/09/2013 10:59, Grant wrote:
>> It's time to switch hosts. I'm looking at the following:
>>
>> Dual Xeon E5-2690
>> 32GB RAM
>> 4x SSD RAID10
>
> If I make this 6x SSD RAID10 with redundant power supplies, what is my
> weakest link as far as hardware? If a CPU craps out, will the system
> keep running?
Your weakest link is not having redundant power feeds. Two PSUs don't
help much when they both draw power from the same place :-)

Second is inadequate cooling in the data centre.

Third is idiots in the data centre doing stupid things like activating
the fire suppression or any of the other truly epic fail tricks clueless
customers get up to.

Then there is drive failure - it's the hardest-working component. SSDs
less so, but they still draw considerable power and get hot.

Everything else is a distant concern. When did you last hear of a CPU
failure anywhere at any time? CPUs do not fail for the most part. When
they do, it's because everything else got hot, which brings us back to #2
in the list.
--
Alan McKinnon
alan.mckinnon@gmail.com
* Re: [gentoo-user] {OT} Need a new server
From: Grant @ 2013-09-14 9:14 UTC
To: Gentoo mailing list
>>>> I'm told it will likely be an "Adaptec 7000 series controller".
>>>
>> Can a controller like that handle a 6-drive RAID 10 array?
>>
>> Is a hot spare handled by the controller or is it configured in the OS?
>
> The problem with questions of that nature is that the answer is always
> "It depends".
>
> With hardware, the vendor can release almost any imaginable
> configuration; it's up to them what they want to build into their
> product, and the variations are endless.
>
> Typically, a Series designation is a bunch of products built to a
> certain form factor with the same basic silicon on board. The difference
> between the models is how many drives they support and the feature list.
> "Series 7000" tells us very little. You will need to get the exact model
> number from your hardware vendor and then consult Adaptec's tech docs to
> find out the supported feature set.
Yeah, that should have been a question for the host, sorry about that.
- Grant
* Re: [gentoo-user] Re: {OT} Need a new server
From: Grant @ 2013-09-14 9:29 UTC
To: Gentoo mailing list
>>> It's time to switch hosts. I'm looking at the following:
>>>
>>> Dual Xeon E5-2690
>>> 32GB RAM
>>> 4x SSD RAID10
>>
>> If I make this 6x SSD RAID10 with redundant power supplies, what is my
>> weakest link as far as hardware? If a CPU craps out, will the system
>> keep running?
>
> Your weakest link is not having redundant power feeds. Two PSUs don't
> help much when they both draw power from the same place :-)
At Soft Layer redundant power supplies are actually powered by
redundant power feeds.
> Second is inadequate cooling in the data centre
Easy to monitor though.
> Third is idiots in the data centre doing stupid things like activating
> the fire suppression or any of the other truly epic fail tricks clueless
> customers get up to.
Ouch....
> Then there is drive failure - it's the hardest-working component. SSDs
> less so, but they still draw considerable power and get hot.
6-SSD RAID 10!
> Everything else is a distant concern. When did you last hear of a CPU
> failure anywhere at any time? CPUs do not fail for the most part. When
> they do, it's because everything else got hot, which brings us back to #2
> in the list.
I had one fail a number of years ago but I do think it was because of
heat. Plus I think that was in my overclocking days. Out of
curiosity though, would the system continue if I were to lose a CPU
and it didn't fry anything?
- Grant
* Re: [gentoo-user] Re: {OT} Need a new server
From: thegeezer @ 2013-09-14 11:04 UTC
To: gentoo-user
On 09/14/2013 09:59 AM, Grant wrote:
>> It's time to switch hosts. I'm looking at the following:
>>
>> Dual Xeon E5-2690
>> 32GB RAM
>> 4x SSD RAID10
> If I make this 6x SSD RAID10 with redundant power supplies, what is my
> weakest link as far as hardware? If a CPU craps out, will the system
> keep running?
>
> - Grant
>
Consider making the main memory ECC too, and flick the correct switches
in the kernel to ensure ECC is monitored.
There's no point in ensuring the data is resilient if the content is garbled.
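The switches in question are the EDAC options -- a sketch of checking
them and reading the corrected-error counters:

  $ zgrep CONFIG_EDAC /proc/config.gz                 # EDAC core + chipset driver
  $ grep . /sys/devices/system/edac/mc/mc*/ce_count   # corrected errors per controller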
Also consider what happens if the RAID controller fails due to a
popped capacitor five years from now:
will you still be able to get a like-for-like replacement?
Bear in mind that you may have to keep the RAID card firmware up to date
in order to stay compatible with newer cards.
Of course, this is all relative to how long you stay with your host, but
you have to decide how much resilience you want to build in.
It's the mechanical parts of spinning rust, or the pseudo-mechanical NAND
gate switching of an SSD, that will tend to fail.
Secondary to that, in most places the PSU acts as a static-cling dust
collector with a blower attached, and any slight knock shakes the dust
loose and causes a short circuit, especially if any humidity is caught
in the air.
Also consider the fans blowing the air around inside the machine.
You can go on to think about earthquakes or flooding in the area --
surely you want two geographically diverse locations?
CPU/motherboard failures on server-spec hardware tend not to be very
likely, especially if the environment is controlled (air filters/temp/power).
A great many things happen that are beyond anyone's sphere of control --
just look at the New York datacentres during hurricane Sandy; would it
have been better to have more diesel on site, or just everything
replicated at another site?
The real question is what your expectation of uptime is and how your
budget can match it.
Don't forget that uptime is affected by software as well as hardware.
* Re: [gentoo-user] Re: {OT} Need a new server
From: Michael Hampicke @ 2013-09-14 11:07 UTC
To: gentoo-user
On 14.09.2013 11:29, Grant wrote:
>>>> It's time to switch hosts. I'm looking at the following:
>>>>
>>>> Dual Xeon E5-2690
>>>> 32GB RAM
>>>> 4x SSD RAID10
>>>
>>> If I make this 6x SSD RAID10 with redundant power supplies, what is my
>>> weakest link as far as hardware? If a CPU craps out, will the system
>>> keep running?
>>
>> Your weakest link is not having redundant power feeds. Two PSUs don't
>> help much when they both draw power from the same place :-)
>
> At Soft Layer redundant power supplies are actually powered by
> redundant power feeds.
>
>> Second is inadequate cooling in the data centre
>
> Easy to monitor though.
>
True, it's easy to monitor. But that does not help you when the cooling
fails. I had that happen once a few years back, in July: the air
conditioning _and_ the backup air conditioning failed in the offsite
data center where we had some servers.

I still have the munin graph from that day. Hard drives at almost 70°C
are not a pretty sight :-)

And there's nothing you can do, except shut down the server and wait
until the refrigeration engineers have fixed the air conditioning.
[Attachment: hddtemp-day.png -- the munin hard-drive temperature graph]
* Re: [gentoo-user] {OT} Need a new server
From: Tanstaafl @ 2013-09-14 11:18 UTC
To: gentoo-user
On 2013-09-13 4:00 PM, Grant <emailgrant@gmail.com> wrote:
> Is the Gentoo Software RAID + LVM guide the best place for RAID
> install info if I'm not using LVM and I'll have a hardware RAID
> controller?
Not ready to take the ZFS plunge? That would greatly reduce the
complexity of RAID+LVM, since ZFS best practice is to set your hardware
raid controller to JBOD mode and let ZFS take care of the RAID - and no
LVM required (ZFS has mucho better tools). That is my next big project
for when I switch to my next new server.
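For flavour, the RAID10 equivalent in ZFS is just a stripe of mirrors,
spare included (a sketch; disk names are hypothetical):

  # zpool create tank mirror sda sdb mirror sdc sdd
  # zpool add tank spare sde
  # zpool status tank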
I'm just hoping I can get comfortable with a process for getting ZFS
compiled into the kernel that is workable/tenable for ongoing kernel
updates (with minimal fear of breaking things due to a complex/fragile
methodology)...
* Re: [gentoo-user] {OT} Need a new server
From: Tanstaafl @ 2013-09-14 11:32 UTC
To: gentoo-user; +Cc: Grant
On 2013-09-14 4:50 AM, Grant <emailgrant@gmail.com> wrote:
> http://blog.open-e.com/why-a-hot-spare-hard-disk-is-a-bad-idea/
>
> "Based on our long years of experience we have learned that during a
> RAID rebuild the probability of an additional drive failure is quite
> high – a rebuild is stressful on the existing drives."
This is NOT true on a RAID 10... a rebuild is only stressful on the
other drive in the mirrored pair, not the other drives.
But, it is true for that one drive.
That said, it would be nice if the auto-rebuild could be scripted such
that a backup could be triggered and the auto-rebuild queued until the
backup was complete.

But here is the problem there... a backup will stress the drive almost
as much as the rebuild, because all the rebuild does is read/copy the
contents of the one drive to the other one (i.e., it re-mirrors).
> Instead, how about a 6-drive RAID 10 array with no hot spare? My
> guess is this would mean much greater fault-tolerance both overall and
> during the rebuild process (once a new drive is swapped in). That
> would mean not only potentially increased uptime but decreased
> monitoring responsibility.
I would still prefer a hot spare to none... in the real world, it has
saved me exactly 3 out of 3 times...
* Re: [gentoo-user] Re: {OT} Need a new server
From: Tanstaafl @ 2013-09-14 11:34 UTC
To: gentoo-user
On 2013-09-14 5:10 AM, Alan McKinnon <alan.mckinnon@gmail.com> wrote:
> On 14/09/2013 10:59, Grant wrote:
>> If I make this 6x SSD RAID10 with redundant power supplies, what is my
>> weakest link as far as hardware? If a CPU craps out, will the system
>> keep running?
> Your weakest link is not having redundant power feeds. Two PSUs don't
> help much when they both draw power from the same place :-)
Right... so get two high-quality online UPS's, and plug one PSU into
one UPS and the other into the other UPS.
Most hosting providers have generator backups, so as long as you
buy/specify high quality UPS's, you should be fine.
> Everything else is a distant concern. When did you last hear of a CPU
> failure anywhere at any time? CPUs do not fail for the most part. When
> they do, it's because everything else got hot, which brings us back to #2
> in the list.
Don't most newer server boards detect over-temp conditions and shut down
automatically?
* Re: [gentoo-user] {OT} Need a new server
From: Tanstaafl @ 2013-09-14 11:35 UTC
To: gentoo-user
On 2013-09-13 5:47 PM, Grant <emailgrant@gmail.com> wrote:
> Are modern SSDs reliable enough to negate the need for mirroring or do
> they still crap out?
You definitely want to mirror, but I'd be very interested in some
statistics comparing rebuild times on a RAID5 and RAID6 with SSDs, vs
15K SAS drives, vs 7200 SATA drives.

My gut feeling is that the rebuild times on SSDs just might eliminate
the biggest problem with RAID5/6, which has always been that the more
drives (and the larger the RAID), the longer the rebuild times when (not
if) you lose a drive.

With spinning SATA drives, rebuild times can be DAYS. If this can be
reduced to a few hours (or less?) using SSDs, then I'd seriously
consider RAID6, since you don't lose nearly as much usable storage as
you do with RAID10 (where you always lose 50%).
But of course, with ZFS, most of these questions become moot...
If you can, I'd go with JBOD and ZFS RAID...
* Re: [gentoo-user] Re: {OT} Need a new server
From: Alan McKinnon @ 2013-09-14 14:36 UTC
To: gentoo-user
On 14/09/2013 11:29, Grant wrote:
>> Everything else is a distant concern. When did you last hear of a CPU
>> failure anywhere at any time? CPUs do not fail for the most part. When
>> they do, it's because everything else got hot, which brings us back to #2
>> in the list.
> I had one fail a number of years ago but I do think it was because of
> heat. Plus I think that was in my overclocking days. Out of
> curiosity though, would the system continue if I were to lose a CPU
> and it didn't fry anything?
That's an interesting question and I honestly don't know the answer -
I've never had a CPU fail on me in over 30 years :-)

If we are lucky, someone might have experienced it and will pipe up as
to what happened. And I'm sure someone does regular testing on
hot-pluggable CPUs by popping one out and monitoring the results. I have
no first-hand knowledge though.
--
Alan McKinnon
alan.mckinnon@gmail.com
* Re: [gentoo-user] {OT} Need a new server
From: Michael Orlitzky @ 2013-09-14 14:37 UTC
To: gentoo-user
On 09/14/2013 04:50 AM, Grant wrote:
>
> Instead, how about a 6-drive RAID 10 array with no hot spare? My
> guess is this would mean much greater fault-tolerance both overall and
> during the rebuild process (once a new drive is swapped in). That
> would mean not only potentially increased uptime but decreased
> monitoring responsibility.
>
RAID10 with six drives can be implemented in one of two ways:
Type 1: A B A B A B
Type 2: A B C A B C
If your controller can do Type 1, then going with six drives gives you
better fault tolerance than four with a hot spare.
I've only ever seen Type 2, so I would bet that's what your controller
will do. It's easy to check: set up RAID10 with four drives, then with
six. Did the resulting logical drive get bigger? If so, it's Type 2.
If it's Type 2, then four drives with a spare is equally tolerant.
Slightly better, even, if you take into account the reduced probability
of 2/5 of the drives failing compared to 2/6.
No one believes me when I say this, so here are all possibilities for a
two-drive failure enumerated for four-drive Type 2 (with a spare) and
six-drive Type 2. Both have a 20% uh-oh ratio (the script after the
tables double-checks the counting).
Layout: A B A B S
1. A-bad B-bad A B S -- OK
2. A-bad B A-bad B S -- UH OH
3. A-bad B A B-bad S -- OK
4. A-bad B A B S-bad -- OK
5. A B-bad A-bad B S -- OK
6. A B-bad A B-bad S -- UH OH
7. A B-bad A B S-bad -- OK
8. A B A-bad B-bad S -- OK
9. A B A-bad B S-bad -- OK
10. A B A B-bad S-bad -- OK
Layout: A B C A B C
1. A-bad B-bad C A B C -- OK
2. A-bad B C-bad A B C -- OK
3. A-bad B C A-bad B C -- UH OH
4. A-bad B C A B-bad C -- OK
5. A-bad B C A B C-bad -- OK
6. A B-bad C-bad A B C -- OK
7. A B-bad C A-bad B C -- OK
8. A B-bad C A B-bad C -- UH OH
9. A B-bad C A B C-bad -- OK
10. A B C-bad A-bad B C -- OK
11. A B C-bad A B-bad C -- OK
12. A B C-bad A B C-bad -- UH OH
13. A B C A-bad B-bad C -- OK
14. A B C A-bad B C-bad -- OK
15. A B C A B-bad C-bad -- OK
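If you want to sanity-check the counting, or try other layouts, here is
a throwaway script (fatal = both copies of the same stripe letter lost;
the spare never matches anything):

  #!/bin/bash
  check() {
    local d=($@) t=0 b=0 i j
    for ((i=0; i<${#d[@]}; i++)); do
      for ((j=i+1; j<${#d[@]}; j++)); do
        t=$((t+1))
        [[ ${d[i]} == ${d[j]} ]] && b=$((b+1))
      done
    done
    echo "$b of $t two-drive failures are fatal"
  }
  check A B A B S     # -> 2 of 10 (20%)
  check A B C A B C   # -> 3 of 15 (20%)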
* Re: [gentoo-user] Re: {OT} Need a new server
From: Alan McKinnon @ 2013-09-14 14:42 UTC
To: gentoo-user
On 14/09/2013 13:34, Tanstaafl wrote:
> On 2013-09-14 5:10 AM, Alan McKinnon <alan.mckinnon@gmail.com> wrote:
>> On 14/09/2013 10:59, Grant wrote:
>>> If I make this 6x SSD RAID10 with redundant power supplies, what is my
>>> weakest link as far as hardware? If a CPU craps out, will the system
>>> keep running?
>
>> Your weakest link is not having redundant power feeds. Two PSUs don't
>> help much when they both draw power from the same place :-)
>
> Right... so get two high-quality online UPS's, and plug one PSU into
> one UPS and the other into the other UPS.
Grant is looking at renting hosting space in someone's data centre. No
such company is ever going to let him buy rack space to install his UPSs
- rack space is far too valuable for that.
>
> Most hosting providers have generator backups, so as long as you
> buy/specify high quality UPS's, you should be fine.
You aren't reading between the lines I've been hinting at :-)
All decent providers claim redundant power feeds with battery/generator
backup. I'm not talking about that. I'm talking about which connector
the electrician connects which wire to.
The number of stories I hear about THAT going wrong is frightening.
>> Everything else is a distant concern. When did you last hear of a CPU
>> failure anywhere at any time? CPUs do not fail for the most part. When
>> they do, it's because everything else got hot, which brings us back to #2
>> in the list.
>
> Don't most newer server boards detect over-temp conditions and shut down
> automatically?
--
Alan McKinnon
alan.mckinnon@gmail.com
* Re: [gentoo-user] Re: {OT} Need a new server
From: Grant @ 2013-09-15 11:05 UTC
To: Gentoo mailing list
>>> It's time to switch hosts. I'm looking at the following:
>>>
>>> Dual Xeon E5-2690
>>> 32GB RAM
>>> 4x SSD RAID10
>> If I make this 6x SSD RAID10 with redundant power supplies, what is my
>> weakest link as far as hardware? If a CPU craps out, will the system
>> keep running?
>>
>> - Grant
>>
> Consider making the main memory ECC too, and flick the correct switches
> in the kernel to ensure ECC is monitored.
> There's no point in ensuring the data is resilient if the content is garbled.
Soft Layer does use ECC memory and I will make sure I'm monitoring it.
> Also consider what happens if the RAID controller fails due to a
> popped capacitor five years from now:
> will you still be able to get a like-for-like replacement?
> Bear in mind that you may have to keep the RAID card firmware up to date
> in order to stay compatible with newer cards.
> Of course, this is all relative to how long you stay with your host, but
> you have to decide how much resilience you want to build in.
Can the RAID controller's firmware be updated in a running system?
> It's the mechanical parts of spinning rust, or the pseudo-mechanical NAND
> gate switching of an SSD, that will tend to fail.
> Secondary to that, in most places the PSU acts as a static-cling dust
> collector with a blower attached, and any slight knock shakes the dust
> loose and causes a short circuit, especially if any humidity is caught
> in the air.
> Also consider the fans blowing the air around inside the machine.
That's great. I'm planning on 6x SSD RAID 10 and redundant power
supplies/feeds, so I should be pretty well covered.
- Grant
* Re: [gentoo-user] Re: {OT} Need a new server
From: Grant @ 2013-09-15 11:07 UTC
To: Gentoo mailing list
>>>>> It's time to switch hosts. I'm looking at the following:
>>>>>
>>>>> Dual Xeon E5-2690
>>>>> 32GB RAM
>>>>> 4x SSD RAID10
>>>>
>>>> If I make this 6x SSD RAID10 with redundant power supplies, what is my
>>>> weakest link as far as hardware? If a CPU craps out, will the system
>>>> keep running?
>>>
>>> Your weakest link is not having redundant power feeds. Two PSUs don't
>>> help much when they both draw power from the same place :-)
>>
>> At Soft Layer redundant power supplies are actually powered by
>> redundant power feeds.
>>
>>> Second is inadequate cooling in the data centre
>>
>> Easy to monitor though.
>>
>
> True, it's easy to monitor. But that does not help you when the cooling
> fails. I had that happen once a few years back, in July: the air
> conditioning _and_ the backup air conditioning failed in the offsite
> data center where we had some servers.
>
> I still have the munin graph from that day. Hard drives at almost 70°C
> are not a pretty sight :-)
>
> And there's nothing you can do, except shut down the server and wait
> until the refrigeration engineers have fixed the air conditioning.
Nasty. It looks like you didn't shut down though. I use munin too.
- Grant
* Re: [gentoo-user] {OT} Need a new server
From: Grant @ 2013-09-15 11:10 UTC
To: Gentoo mailing list
>> Is the Gentoo Software RAID + LVM guide the best place for RAID
>> install info if I'm not using LVM and I'll have a hardware RAID
>> controller?
>
> Not ready to take the ZFS plunge? That would greatly reduce the complexity
> of RAID+LVM, since ZFS best practice is to set your hardware raid controller
> to JBOD mode and let ZFS take care of the RAID - and no LVM required (ZFS
> has mucho better tools). That is my next big project for when I switch to my
> next new server.
>
> I'm just hoping I can get comfortable with a process for getting ZFS
> compiled into the kernel that is workable/tenable for ongoing kernel updates
> (with minimal fear of breaking things due to a complex/fragile
> methodology)...
That sounds interesting. I don't think I'm up to it this time around,
but ZFS manages a RAID array better than a good hardware card?
It sounds like ZFS isn't included in the mainline kernel. Is it on its way in?
- Grant
* Re: [gentoo-user] {OT} Need a new server
From: Grant @ 2013-09-15 11:15 UTC
To: Tanstaafl; +Cc: Gentoo mailing list
>> http://blog.open-e.com/why-a-hot-spare-hard-disk-is-a-bad-idea/
>>
>> "Based on our long years of experience we have learned that during a
>> RAID rebuild the probability of an additional drive failure is quite
>> high – a rebuild is stressful on the existing drives."
>
> This is NOT true on a RAID 10... a rebuild is only stressful on the other
> drive in the mirrored pair, not the other drives.
>
> But, it is true for that one drive.
Why wouldn't it be true of RAID 10? Each drive only has one mirror,
so if a drive fails, its only mirror will be stressed by the rebuild,
won't it?
> That said, it would be nice if the auto-rebuild could be scripted such that
> a backup could be triggered and the auto-rebuild queued until the backup was
> complete.
>
> But here is the problem there... a backup will stress the drive almost as
> much as the rebuild, because all the rebuild does is read/copy the contents
> of the one drive to the other one (i.e., it re-mirrors).
>
>> Instead, how about a 6-drive RAID 10 array with no hot spare? My
>> guess is this would mean much greater fault-tolerance both overall and
>> during the rebuild process (once a new drive is swapped in). That
>> would mean not only potentially increased uptime but decreased
>> monitoring responsibility.
>
> I would still prefer a hot spare to none... in the real world, it has saved
> me exactly 3 out of 3 times...
You would prefer 4-drive RAID 10 plus a hot spare to 6-drive RAID 10?
Isn't 6-drive RAID 10 superior in every way except for cost (1 extra
drive)?
- Grant
* Re: [gentoo-user] {OT} Need a new server
From: Grant @ 2013-09-16 6:49 UTC
To: Gentoo mailing list
>> Instead, how about a 6-drive RAID 10 array with no hot spare? My
>> guess is this would mean much greater fault-tolerance both overall and
>> during the rebuild process (once a new drive is swapped in). That
>> would mean not only potentially increased uptime but decreased
>> monitoring responsibility.
>
> RAID10 with six drives can be implemented in one of two ways:
>
> Type 1: A B A B A B
>
> Type 2: A B C A B C
>
> If your controller can do Type 1, then going with six drives gives you
> better fault tolerance than four with a hot spare.
>
> I've only ever seen Type 2, so I would bet that's what your controller
> will do. It's easy to check: set up RAID10 with four drives, then with
> six. Did the resulting logical drive get bigger? If so, it's Type 2.
>
> If it's Type 2, then four drives with a spare is equally tolerant.
> Slightly better, even, if you take into account the reduced probability
> of 2/5 of the drives failing compared to 2/6.
Thank you very much for this info. I had no idea. Is there another
label for these RAID types besides "Type 1" and "Type 2"? I can't
find reference to those designations via Google.
- Grant
* Re: [gentoo-user] {OT} Need a new server
From: Tanstaafl @ 2013-09-16 9:54 UTC
To: gentoo-user
On 2013-09-15 7:15 AM, Grant <emailgrant@gmail.com> wrote:
> You would prefer 4-drive RAID 10 plus a hot spare to 6-drive RAID 10?
> Isn't 6-drive RAID 10 superior in every way except for cost (1 extra
> drive)?
I would prefer X-drive RAID10 plus hot spare in *any* situation.
But, this always loses 50+% of the potential storage space available...
Again, I'd love to see some comparisons of rebuild times on RAID5/RAID6
systems, using slow SATA drives vs fast 15K SAS drives vs fastest SSD
drives.
The problem with RAID5/6 has always been that the larger the array, the
longer the rebuild time - and the longer the rebuild time, the greater
the chance of another drive failure during the rebuild.
* Re: [gentoo-user] {OT} Need a new server
From: Michael Orlitzky @ 2013-09-16 13:10 UTC
To: gentoo-user
On 09/16/2013 02:49 AM, Grant wrote:
>>
>> If it's Type 2, then four drives with a spare is equally tolerant.
>> Slightly better, even, if you take into account the reduced probability
>> of 2/5 of the drives failing compared to 2/6.
>
> Thank you very much for this info. I had no idea. Is there another
> label for these RAID types besides "Type 1" and "Type 2"? I can't
> find reference to those designations via Google.
Nothing standard. RAID 10 pretty intuitively comes from RAID 1+0, which
can be read aloud to figure out what it means: "RAID 1, plus RAID 0,"
i.e. you do RAID 1, then stripe (RAID 0) the result.
The trick is that RAID 1 can refer to either mirroring (2-way) or
multi-mirroring (3-way) [1]. In the end, the designation is the same:
RAID 1. So if you stripe either of them, you wind up with RAID 10. In
other words, "RAID 10" doesn't tell you which one you're going to get.
If I ever find a controller that will do multi-mirroring + RAID 0, I'll
let you know what they call it =)
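Linux software md can do it, for the record: its raid10 personality
takes a --layout flag, and n3 keeps three copies of every chunk --
effectively the Type 1 arrangement from earlier in the thread. A sketch,
with hypothetical devices:

  $ mdadm --create /dev/md0 --level=10 --layout=n3 --raid-devices=6 \
      /dev/sd[a-f]1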
[1] http://www.snia.org/tech_activities/standards/curr_standards/ddf
* Re: [gentoo-user] {OT} Need a new server
From: Pandu Poluan @ 2013-09-17 5:36 UTC
To: gentoo-user
On Sun, Sep 15, 2013 at 6:10 PM, Grant <emailgrant@gmail.com> wrote:
>>> Is the Gentoo Software RAID + LVM guide the best place for RAID
>>> install info if I'm not using LVM and I'll have a hardware RAID
>>> controller?
>>
>> Not ready to take the ZFS plunge? That would greatly reduce the complexity
>> of RAID+LVM, since ZFS best practice is to set your hardware raid controller
>> to JBOD mode and let ZFS take care of the RAID - and no LVM required (ZFS
>> has mucho better tools). That is my next big project for when I switch to my
>> next new server.
>>
>> I'm just hoping I can get comfortable with a process for getting ZFS
>> compiled into the kernel that is workable/tenable for ongoing kernel updates
>> (with minimal fear of breaking things due to a complex/fragile
>> methodology)...
>
> That sounds interesting. I don't think I'm up to it this time around,
> but ZFS manages a RAID array better than a good hardware card?
>
Yes. If you use ZFS to wrestle a JBOD array into its version of
RAID1+0, when it comes time for resilvering (i.e., rebuilding a failed
drive), ZFS smartly copies only the used blocks and skips over unused
blocks.
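For illustration (pool and device names made up, commands from memory):

  zpool status tank            # shows the pool DEGRADED after a failure
  zpool replace tank sdc sdf   # resilver begins onto the new drive
  zpool status tank            # reports resilver progress and an ETA

Because only used blocks get copied, a half-empty pool resilvers in
roughly half the time a dumb block-for-block rebuild would take.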
> It sounds like ZFS isn't included in the mainline kernel. Is it on its way in?
>
Unlikely. There has been a discussion on that in this list, and there
is some concern that ZFS' license (CDDL) is not compatible with the
Linux kernel license (GPL), so never the twain shall be integrated.
That said, if your kernel supports modules, it's a piece of cake to
compile the ZFS modules on your own. @ryao has a zfs-overlay you can
use to emerge ZFS as a module.
If you have configured your kernel to not support modules, it's a bit
more work, but ZFS can still be integrated statically into the kernel.
But the onus is on us ZFS users to do the necessary steps.
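With modules enabled it's roughly this - package atoms from memory,
so check the overlay's docs before trusting me:

  emerge sys-kernel/spl sys-fs/zfs-kmod sys-fs/zfs
  modprobe zfs
  rc-update add zfs boot   # import pools at boot under OpenRC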
Rgds,
--
FdS Pandu E Poluan
~ IT Optimizer ~
• LOPSA Member #15248
• Blog : http://pepoluan.tumblr.com
• Linked-In : http://id.linkedin.com/in/pepoluan
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [gentoo-user] {OT} Need a new server
2013-09-17 5:36 ` Pandu Poluan
@ 2013-09-17 6:32 ` Grant
2013-09-17 9:24 ` Neil Bothwick
2013-09-17 10:01 ` Tanstaafl
2 siblings, 0 replies; 49+ messages in thread
From: Grant @ 2013-09-17 6:32 UTC (permalink / raw
To: Gentoo mailing list
>>>> Is the Gentoo Software RAID + LVM guide the best place for RAID
>>>> install info if I'm not using LVM and I'll have a hardware RAID
>>>> controller?
>>>
>>> Not ready to take the ZFS plunge? That would greatly reduce the complexity
>>> of RAID+LVM, since ZFS best practice is to set your hardware raid controller
>>> to JBOD mode and let ZFS take care of the RAID - and no LVM required (ZFS
>>> has mucho better tools). That is my next big project for when I switch to my
>>> next new server.
>>>
>>> I'm just hoping I can get comfortable with a process for getting ZFS
>>> compiled into the kernel that is workable/tenable for ongoing kernel updates
>>> (with minimal fear of breaking things due to a complex/fragile
>>> methodology)...
>>
>> That sounds interesting. I don't think I'm up to it this time around,
>> but ZFS manages a RAID array better than a good hardware card?
>
> Yes. If you use ZFS to wrestle a JBOD array into its version of
> RAID1+0, when it comes time for resilvering (i.e., rebuilding a failed
> drive), ZFS smartly copies only the used blocks and skips over unused
> blocks.
I'm seriously considering ZFS now. I'm going to start a new thread on
that topic.
- Grant
>> It sounds like ZFS isn't included in the mainline kernel. Is it on its way in?
>>
>
> Unlikely. There has been a discussion on that in this list, and there
> is some concern that ZFS' license (CDDL) is not compatible with the
> Linux kernel license (GPL), so never the twain shall be integrated.
>
> That said, if your kernel supports modules, it's a piece of cake to
> compile the ZFS modules on your own. @ryao has a zfs-overlay you can
> use to emerge ZFS as a module.
>
> If you have configured your kernel to not support modules, it's a bit
> more work, but ZFS can still be integrated statically into the kernel.
>
> But the onus is on us ZFS users to do the necessary steps.
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [gentoo-user] {OT} Need a new server
2013-09-16 13:10 ` Michael Orlitzky
@ 2013-09-17 6:43 ` Grant
2013-09-17 12:30 ` Michael Orlitzky
0 siblings, 1 reply; 49+ messages in thread
From: Grant @ 2013-09-17 6:43 UTC (permalink / raw
To: Gentoo mailing list
>>> If it's Type 2, then four drives with a spare is equally tolerant.
>>> Slightly better, even, if you take into account the reduced probability
>>> of 2/5 of the drives failing compared to 2/6.
>>
>> Thank you very much for this info. I had no idea. Is there another
>> label for these RAID types besides "Type 1" and "Type 2"? I can't
>> find reference to those designations via Google.
>
> Nothing standard. RAID 10 pretty intuitively comes from RAID 1+0, which
> can be read aloud to figure out what it means: "RAID 1, plus RAID 0,"
> i.e. you do RAID 1, then stripe (RAID 0) the result.
>
> The trick is that RAID 1 can refer to either mirroring (2-way) or
> multi-mirroring (3-way) [1]. In the end, the designation is the same:
> RAID 1. So if you stripe either of them, you wind up with RAID 10. In
> other words, "RAID 10" doesn't tell you which one you're going to get.
>
> If I ever find a controller that will do multi-mirroring + RAID 0, I'll
> let you know what they call it =)
Is multi-mirroring (3-disk RAID1) support without RAID0 common in
hardware RAID cards?
- Grant
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [gentoo-user] {OT} Need a new server
2013-09-14 11:18 ` [gentoo-user] " Tanstaafl
2013-09-15 11:10 ` Grant
@ 2013-09-17 7:28 ` Grant
2013-09-17 7:37 ` Pandu Poluan
1 sibling, 1 reply; 49+ messages in thread
From: Grant @ 2013-09-17 7:28 UTC (permalink / raw
To: Gentoo mailing list
>> Is the Gentoo Software RAID + LVM guide the best place for RAID
>> install info if I'm not using LVM and I'll have a hardware RAID
>> controller?
>
> Not ready to take the ZFS plunge? That would greatly reduce the complexity
> of RAID+LVM, since ZFS best practice is to set your hardware raid controller
> to JBOD mode and let ZFS take care of the RAID - and no LVM required (ZFS
> has mucho better tools). That is my next big project for when I switch to my
> next new server.
>
> I'm just hoping I can get comfortable with a process for getting ZFS
> compiled into the kernel that is workable/tenable for ongoing kernel updates
> (with minimal fear of breaking things due to a complex/fragile
> methodology)...
Can't you just emerge zfs-kmod? Or maybe you're trying to do it
without module support?
- Grant
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [gentoo-user] {OT} Need a new server
2013-09-17 7:28 ` Grant
@ 2013-09-17 7:37 ` Pandu Poluan
2013-09-17 9:49 ` Grant
0 siblings, 1 reply; 49+ messages in thread
From: Pandu Poluan @ 2013-09-17 7:37 UTC (permalink / raw
To: gentoo-user
On Tue, Sep 17, 2013 at 2:28 PM, Grant <emailgrant@gmail.com> wrote:
>>> Is the Gentoo Software RAID + LVM guide the best place for RAID
>>> install info if I'm not using LVM and I'll have a hardware RAID
>>> controller?
>>
>> Not ready to take the ZFS plunge? That would greatly reduce the complexity
>> of RAID+LVM, since ZFS best practice is to set your hardware raid controller
>> to JBOD mode and let ZFS take care of the RAID - and no LVM required (ZFS
>> has mucho better tools). That is my next big project for when I switch to my
>> next new server.
>>
>> I'm just hoping I can get comfortable with a process for getting ZFS
>> compiled into the kernel that is workable/tenable for ongoing kernel updates
>> (with minimal fear of breaking things due to a complex/fragile
>> methodology)...
>
> Can't you just emerge zfs-kmod? Or maybe you're trying to do it
> without module support?
>
@tanstaafl's kernels have no module support.
Rgds,
--
FdS Pandu E Poluan
~ IT Optimizer ~
• LOPSA Member #15248
• Blog : http://pepoluan.tumblr.com
• Linked-In : http://id.linkedin.com/in/pepoluan
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [gentoo-user] {OT} Need a new server
2013-09-17 5:36 ` Pandu Poluan
2013-09-17 6:32 ` Grant
@ 2013-09-17 9:24 ` Neil Bothwick
2013-09-17 10:01 ` Tanstaafl
2 siblings, 0 replies; 49+ messages in thread
From: Neil Bothwick @ 2013-09-17 9:24 UTC (permalink / raw
To: gentoo-user
On Tue, 17 Sep 2013 12:36:20 +0700, Pandu Poluan wrote:
> That said, if your kernel supports modules, it's a piece of cake to
> compile the ZFS modules on your own. @ryao has a zfs-overlay you can
> use to emerge ZFS as a module.
It's also in the main portage tree.
--
Neil Bothwick
Get your grubby hands off my tagline! I stole it first!
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [gentoo-user] {OT} Need a new server
2013-09-17 7:37 ` Pandu Poluan
@ 2013-09-17 9:49 ` Grant
2013-09-17 10:10 ` Alan McKinnon
0 siblings, 1 reply; 49+ messages in thread
From: Grant @ 2013-09-17 9:49 UTC (permalink / raw
To: Gentoo mailing list
>>>> Is the Gentoo Software RAID + LVM guide the best place for RAID
>>>> install info if I'm not using LVM and I'll have a hardware RAID
>>>> controller?
>>>
>>> Not ready to take the ZFS plunge? That would greatly reduce the complexity
>>> of RAID+LVM, since ZFS best practice is to set your hardware raid controller
>>> to JBOD mode and let ZFS take care of the RAID - and no LVM required (ZFS
>>> has mucho better tools). That is my next big project for when I switch to my
>>> next new server.
>>>
>>> I'm just hoping I can get comfortable with a process for getting ZFS
>>> compiled into the kernel that is workable/tenable for ongoing kernel updates
>>> (with minimal fear of breaking things due to a complex/fragile
>>> methodology)...
>>
>> Can't you just emerge zfs-kmod? Or maybe you're trying to do it
>> without module support?
>
> @tanstaafl's kernels have no module support.
OK, but why exclude module support?
- Grant
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [gentoo-user] {OT} Need a new server
2013-09-17 5:36 ` Pandu Poluan
2013-09-17 6:32 ` Grant
2013-09-17 9:24 ` Neil Bothwick
@ 2013-09-17 10:01 ` Tanstaafl
2 siblings, 0 replies; 49+ messages in thread
From: Tanstaafl @ 2013-09-17 10:01 UTC (permalink / raw
To: gentoo-user
On 2013-09-17 1:36 AM, Pandu Poluan <pandu@poluan.info> wrote:
> On Sun, Sep 15, 2013 at 6:10 PM, Grant <emailgrant@gmail.com> wrote:
>> It sounds like ZFS isn't included in the mainline kernel. Is it on its way in?
> Unlikely. There has been a discussion on that in this list, and there
> is some concern that ZFS' license (CDDL) is not compatible with the
> Linux kernel license (GPL), so never the twain shall be integrated.
You must have missed the part of that discussion which concluded that
integrating ZFS is easily doable via a simple ebuild (they said it
didn't even need to be in an overlay) containing the code to do the
integration at compile time.
So, yes, it *could* easily be done without any fear of licensing issues.
The question is whether someone with the knowledge and skill to do it
right will also have the desire to do the work.
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [gentoo-user] {OT} Need a new server
2013-09-17 9:49 ` Grant
@ 2013-09-17 10:10 ` Alan McKinnon
2013-09-17 13:11 ` Grant
0 siblings, 1 reply; 49+ messages in thread
From: Alan McKinnon @ 2013-09-17 10:10 UTC (permalink / raw
To: gentoo-user
On 17/09/2013 11:49, Grant wrote:
>>>>> Is the Gentoo Software RAID + LVM guide the best place for RAID
>>>>> install info if I'm not using LVM and I'll have a hardware RAID
>>>>> controller?
>>>>
>>>> Not ready to take the ZFS plunge? That would greatly reduce the complexity
>>>> of RAID+LVM, since ZFS best practice is to set your hardware raid controller
>>>> to JBOD mode and let ZFS take care of the RAID - and no LVM required (ZFS
>>>> has mucho better tools). That is my next big project for when I switch to my
>>>> next new server.
>>>>
>>>> I'm just hoping I can get comfortable with a process for getting ZFS
>>>> compiled into the kernel that is workable/tenable for ongoing kernel updates
>>>> (with minimal fear of breaking things due to a complex/fragile
>>>> methodology)...
>>>
>>> Can't you just emerge zfs-kmod? Or maybe you're trying to do it
>>> without module support?
>>
>> @tanstaafl's kernels have no module support.
>
> OK, but why exclude module support?
Noooooooooo, please for the love of god and all that's holy, let's not
go there again :-)
tanstaafl has his reasons for using fully monolithic kernels without
module support. This works for him and nothing will dissuade him from
this strategy - we tried, we really did. He won.
--
Alan McKinnon
alan.mckinnon@gmail.com
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [gentoo-user] {OT} Need a new server
2013-09-17 6:43 ` Grant
@ 2013-09-17 12:30 ` Michael Orlitzky
2013-09-17 13:13 ` Grant
0 siblings, 1 reply; 49+ messages in thread
From: Michael Orlitzky @ 2013-09-17 12:30 UTC (permalink / raw
To: gentoo-user
On 09/17/2013 02:43 AM, Grant wrote:
>
> Is multi-mirroring (3-disk RAID1) support without RAID0 common in
> hardware RAID cards?
>
Nope. Not at my pay grade, anyway. The only ones I know of are the
Hewlett-Packard MSA/EVA, but they don't call it plain RAID1. They allow
you to create virtual disk groups, though, so you can mirror a mirror to
achieve the same effect.
The only other place I've seen it in real life is Linux's mdraid.
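(There it's just a three-device RAID1 - something like
"mdadm --create /dev/md0 --level=1 --raid-devices=3 /dev/sd[abc]1",
device names invented - with no striping layer on top.)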
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [gentoo-user] {OT} Need a new server
2013-09-17 10:10 ` Alan McKinnon
@ 2013-09-17 13:11 ` Grant
2013-09-17 16:49 ` Alan McKinnon
0 siblings, 1 reply; 49+ messages in thread
From: Grant @ 2013-09-17 13:11 UTC (permalink / raw
To: Gentoo mailing list
>>>>>> Is the Gentoo Software RAID + LVM guide the best place for RAID
>>>>>> install info if I'm not using LVM and I'll have a hardware RAID
>>>>>> controller?
>>>>>
>>>>> Not ready to take the ZFS plunge? That would greatly reduce the complexity
>>>>> of RAID+LVM, since ZFS best practice is to set your hardware raid controller
>>>>> to JBOD mode and let ZFS take care of the RAID - and no LVM required (ZFS
>>>>> has mucho better tools). That is my next big project for when I switch to my
>>>>> next new server.
>>>>>
>>>>> I'm just hoping I can get comfortable with a process for getting ZFS
>>>>> compiled into the kernel that is workable/tenable for ongoing kernel updates
>>>>> (with minimal fear of breaking things due to a complex/fragile
>>>>> methodology)...
>>>>
>>>> Can't you just emerge zfs-kmod? Or maybe you're trying to do it
>>>> without module support?
>>>
>>> @tanstaafl's kernels have no module support.
>>
>> OK, but why exclude module support?
>
> Noooooooooo, please for the love of god and all that's holy, let's not
> go there again :-)
Oopsie!
> taanstafl has his reasons for using fully monolithic kernels without
> module support. This works for him and nothing will dissuade him from
> this strategy - we tried, we really did. He won.
It must be for security.
- Grant
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [gentoo-user] {OT} Need a new server
2013-09-17 12:30 ` Michael Orlitzky
@ 2013-09-17 13:13 ` Grant
2013-09-17 16:46 ` Alan McKinnon
0 siblings, 1 reply; 49+ messages in thread
From: Grant @ 2013-09-17 13:13 UTC (permalink / raw
To: Gentoo mailing list
>> Is multi-mirroring (3-disk RAID1) support without RAID0 common in
>> hardware RAID cards?
>
> Nope. Not at my pay grade, anyway. The only ones I know of are the
> Hewlett-Packard MSA/EVA, but they don't call it plain RAID1. They allow
> you to create virtual disk groups, though, so you can mirror a mirror to
> achieve the same effect.
>
> The only other place I've seen it in real life is Linux's mdraid.
Thanks Michael. This really pushes me in the ZFS direction.
- Grant
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [gentoo-user] {OT} Need a new server
2013-09-17 13:13 ` Grant
@ 2013-09-17 16:46 ` Alan McKinnon
0 siblings, 0 replies; 49+ messages in thread
From: Alan McKinnon @ 2013-09-17 16:46 UTC (permalink / raw
To: gentoo-user
On 17/09/2013 15:13, Grant wrote:
>>> Is multi-mirroring (3-disk RAID1) support without RAID0 common in
>>> hardware RAID cards?
>>
>> Nope. Not at my pay grade, anyway. The only ones I know of are the
>> Hewlett-Packard MSA/EVA, but they don't call it plain RAID1. They allow
>> you to create virtual disk groups, though, so you can mirror a mirror to
>> achieve the same effect.
>>
>> The only other place I've seen it in real life is Linux's mdraid.
>
> Thanks Michael. This really pushes me in the ZFS direction.
If you need another gentle push: ZFS checksums everything it writes as
it writes it, so it catches data corruption that almost all other
systems can't detect. And it doesn't have a write hole either.
A very good analogy, I find, is Google, and why Google took the
software/hardware route they did (it simplifies down to scalability).
Hardware will break, and at their scale it will do so three times a day;
Google detects and works around this in software.
ZFS's approach to storing data on disk in a filesystem is similar to
Google's approach to storing search data across the world, with the same
benefit - take the uber-expensive hardware and chuck it, use regular
stuff instead, and use it smart.
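You can even tell it to re-verify everything on demand; illustrative,
with a made-up pool name:

  zpool scrub tank    # re-read every block and verify its checksum
  zpool status tank   # shows scrub progress, repairs, and CKSUM counts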
--
Alan McKinnon
alan.mckinnon@gmail.com
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [gentoo-user] {OT} Need a new server
2013-09-17 13:11 ` Grant
@ 2013-09-17 16:49 ` Alan McKinnon
0 siblings, 0 replies; 49+ messages in thread
From: Alan McKinnon @ 2013-09-17 16:49 UTC (permalink / raw
To: gentoo-user
On 17/09/2013 15:11, Grant wrote:
>>>>>>> Is the Gentoo Software RAID + LVM guide the best place for RAID
>>>>>>> install info if I'm not using LVM and I'll have a hardware RAID
>>>>>>> controller?
>>>>>>
>>>>>> Not ready to take the ZFS plunge? That would greatly reduce the complexity
>>>>>> of RAID+LVM, since ZFS best practice is to set your hardware raid controller
>>>>>> to JBOD mode and let ZFS take care of the RAID - and no LVM required (ZFS
>>>>>> has mucho better tools). That is my next big project for when I switch to my
>>>>>> next new server.
>>>>>>
>>>>>> I'm just hoping I can get comfortable with a process for getting ZFS
>>>>>> compiled into the kernel that is workable/tenable for ongoing kernel updates
>>>>>> (with minimal fear of breaking things due to a complex/fragile
>>>>>> methodology)...
>>>>>
>>>>> Can't you just emerge zfs-kmod? Or maybe you're trying to do it
>>>>> without module support?
>>>>
>>>> @tanstaafl's kernels have no module support.
>>>
>>> OK, but why exclude module support?
>>
>> Noooooooooo, please for the love of god and all that's holy, let's not
>> go there again :-)
>
> Oopsie!
>
>> taanstafl has his reasons for using fully monolithic kernels without
>> module support. This works for him and nothing will dissuade him from
>> this strategy - we tried, we really did. He won.
>
> It must be for security.
Essentially, yes. He once explained his position to me nicely - he
knows exactly what hardware he has and what he needs, it never changes,
and he never needs to tweak it on the fly. Once a driver is in the
running kernel, it stays there till reboot. Modules would benefit him,
but he doesn't need them.
That was the point where I realised I didn't have an answer in his world.
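(In .config terms his whole strategy is one line plus building every
driver in - an illustrative fragment, not his actual config:

  # CONFIG_MODULES is not set
  CONFIG_SATA_AHCI=y    # =y rather than =m for everything he needs

)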
--
Alan McKinnon
alan.mckinnon@gmail.com
^ permalink raw reply [flat|nested] 49+ messages in thread
end of thread, other threads:[~2013-09-17 16:54 UTC | newest]
Thread overview: 49+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2013-09-13 20:00 [gentoo-user] {OT} Need a new server Grant
2013-09-13 20:39 ` Alan McKinnon
2013-09-13 21:39 ` Grant
2013-09-14 8:14 ` Alan McKinnon
2013-09-14 8:54 ` Grant
2013-09-14 9:02 ` Alan McKinnon
2013-09-14 9:14 ` Grant
2013-09-13 20:44 ` Michael Orlitzky
2013-09-13 21:47 ` Grant
2013-09-13 22:47 ` Peter Humphrey
2013-09-13 22:54 ` Daniel Frey
2013-09-14 8:50 ` Grant
2013-09-14 11:32 ` Tanstaafl
2013-09-15 11:15 ` Grant
2013-09-16 9:54 ` Tanstaafl
2013-09-14 14:37 ` Michael Orlitzky
2013-09-16 6:49 ` Grant
2013-09-16 13:10 ` Michael Orlitzky
2013-09-17 6:43 ` Grant
2013-09-17 12:30 ` Michael Orlitzky
2013-09-17 13:13 ` Grant
2013-09-17 16:46 ` Alan McKinnon
2013-09-13 23:17 ` Michael Orlitzky
2013-09-14 8:52 ` Grant
2013-09-14 11:35 ` Tanstaafl
2013-09-13 21:58 ` thegeezer
2013-09-13 22:14 ` Grant
2013-09-14 8:59 ` [gentoo-user] " Grant
2013-09-14 9:10 ` Alan McKinnon
2013-09-14 9:29 ` Grant
2013-09-14 11:07 ` Michael Hampicke
2013-09-15 11:07 ` Grant
2013-09-14 14:36 ` Alan McKinnon
2013-09-14 11:34 ` Tanstaafl
2013-09-14 14:42 ` Alan McKinnon
2013-09-14 11:04 ` thegeezer
2013-09-15 11:05 ` Grant
2013-09-14 11:18 ` [gentoo-user] " Tanstaafl
2013-09-15 11:10 ` Grant
2013-09-17 5:36 ` Pandu Poluan
2013-09-17 6:32 ` Grant
2013-09-17 9:24 ` Neil Bothwick
2013-09-17 10:01 ` Tanstaafl
2013-09-17 7:28 ` Grant
2013-09-17 7:37 ` Pandu Poluan
2013-09-17 9:49 ` Grant
2013-09-17 10:10 ` Alan McKinnon
2013-09-17 13:11 ` Grant
2013-09-17 16:49 ` Alan McKinnon