Howdy,
My own 2c on the issue is to suggest LVM.
It looks at things in a slightly different way and lets me treat
all my disks as one large pool of storage I can carve up into volumes.
It supports multi-way mirroring, so I can choose to create a
volume for all my pictures that lives on at least 3 drives.
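For instance, something like this (volume group and LV names just made up for illustration):

  # 2 extra mirrors = 3 copies of the data in total
  lvcreate -m 2 -L 200G -n pictures vg0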
It supports volume striping (RAID0), so I can put swap and
scratch files there.
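Something along these lines (names made up again; -i is the number of stripes, -I the stripe size in KB):

  # stripe across 2 disks with a 64K stripe size
  lvcreate -i 2 -I 64 -L 20G -n scratch vg0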
It does support other RAID levels, but I can't find where the scrub
option is.
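If memory serves, for the newer LVM raid LV types something like this kicks off a scrub, but do check the lvchange man page before trusting me:

  lvchange --syncaction check vg0/somelv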
It supports volume concatenation, so I can keep growing my
MythTV recordings volume just by adding another disk.
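Growing it is roughly this (device and names made up; the last step assumes an ext3/ext4 filesystem):

  pvcreate /dev/sdf1
  vgextend vg0 /dev/sdf1
  lvextend -L +1T vg0/mythtv
  resize2fs /dev/vg0/mythtv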
It supports encrypted volumes, so I can put all my guarded stuff in
there.
It supports (with some magic) nested volumes, so I can have an
encrypted volume sitting inside a mirrored volume and my secrets
are protected even if a disk dies.
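The nesting is basically LUKS sitting on top of a mirrored LV, something along these lines (names made up):

  lvcreate -m 1 -L 10G -n secrets vg0
  cryptsetup luksFormat /dev/vg0/secrets
  cryptsetup luksOpen /dev/vg0/secrets secrets_plain
  mkfs.ext4 /dev/mapper/secrets_plain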
I can partition my drives into 3 parts, so that I can create
fast, medium and slow volume groups based on where on the disk
each partition sits (outer tracks ~150 MB/sec, inner tracks
~60 MB/sec; numbers sort of remembered, sort of made up).
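Roughly like this (device names made up; first partition on the outer/fast tracks, third on the inner/slow ones):

  pvcreate /dev/sda1 /dev/sdb1 /dev/sda3 /dev/sdb3
  vgcreate vg_fast /dev/sda1 /dev/sdb1
  vgcreate vg_slow /dev/sda3 /dev/sdb3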
I can have a bunch of disks for long-term storage and use hdparm
to keep them spun down most of the time.
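e.g. (drive letter made up; check the -S table in man hdparm, I think 242 works out to an hour):

  hdparm -S 242 /dev/sde   # spin down after ~1 hour idle
  hdparm -y /dev/sde       # or put it into standby right now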
Live movement (even of a root volume) also means that I can keep
migrating data onto the storage drives, or decide to use a fast disk
as a storage disk and have it spin down too.
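The live move is just pvmove, e.g. to empty a fast disk onto a storage disk (devices made up):

  pvmove /dev/sdb2 /dev/sdd1   # migrates the extents while the volumes stay mounted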
I think the crucial aspect is to also consider what you wish to
put on the drives.
If it is just pr0n, do you really care if it gets lost?
If it is just scratch areas that need to be fast, ditto.
Where the parity RAID levels are good is that you don't lose half
of your potential storage capacity the way you would with a
mirror.
Bit rot is real; all it takes is a single stray charged
particle from that nuclear furnace in the sky to knock a single
bit out of magnetic alignment, so the array will need regular
scrubbing, maybe from a cron job.
https://wiki.archlinux.org/index.php/Software_RAID_and_LVM#Data_scrubbing
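With plain mdadm RAID the scrub is the sync_action knob, so a monthly cron entry could look something like this (md device made up):

  # /etc/cron.d/raid-scrub -- start a check of md0 at 03:00 on the 1st of each month
  0 3 1 * * root echo check > /sys/block/md0/md/sync_action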
Specifically on the bandwidth issue, I'd suggest (rough commands
for all three steps are sketched after the list):
1. Take all the drives out of RAID if you can and run a benchmark
against them individually; I like the benchmark tool in palimpsest,
but that's me.
2. Concurrently run dd if=/dev/zero of=/dev/sdX on all drives
(destructive, so only on drives holding nothing you want to keep)
and see how it compares to the individual scores; this will show
you the mainboard/chipset effect.
3. You might find this
https://raid.wiki.kernel.org/index.php/RAID_setup#Calculation
a good starting point for calculating stride and stripe-width,
and this
http://forums.gentoo.org/viewtopic-t-942794-start-0.html
shows the benefit of adjusting the numbers.
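Roughly what I mean for the three steps above, as shell (device names made up, and the dd and mkfs runs destroy whatever is on those devices):

  # 1. individual read benchmark per drive (non-destructive)
  for d in sda sdb sdc sdd; do hdparm -t /dev/$d; done

  # 2. concurrent raw write test on all drives at once (DESTROYS their contents)
  for d in sda sdb sdc sdd; do
      dd if=/dev/zero of=/dev/$d bs=1M count=4096 oflag=direct &
  done
  wait

  # 3. example stride/stripe-width: 4-disk RAID5, 64K chunk, 4K blocks
  #    stride       = chunk / block       = 64K / 4K     = 16
  #    stripe-width = stride * data disks = 16 * (4 - 1) = 48
  mkfs.ext4 -b 4096 -E stride=16,stripe-width=48 /dev/md0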
Hope this helps!
On 06/20/2013 08:10 PM, Mark Knecht wrote: