Howdy,
My own 2c on the issue is to suggest LVM.
It looks at things in a slightly different way and lets me treat all my disks as one large pool I can carve up (a sketch of a few of these commands follows the list).
It supports multi-way mirroring, so I can create a volume for all my pictures that lives on at least 3 drives.
It supports volume striping (RAID0), so I can put swap and scratch files there.
It does support other RAID levels too, but I can't find where the scrub option is (see the scrubbing note further down).
It supports volume concatenation, so I can keep growing my MythTV recordings volume just by adding another disk.
It supports encrypted volumes, so I can put all my guarded stuff in there.
It supports (with some magic) nested volumes, so I can have an encrypted volume sitting inside a mirrored volume and my secrets are protected.
I can partition my drives into 3 parts and create fast, medium and slow volume groups based on where on the disk each partition sits (outer tracks ~150MB/s, inner tracks ~60MB/s; numbers half remembered, half made up).
I can keep a bunch of disks for long-term storage and let hdparm spin them down whenever they're idle.
Live movement (pvmove), even of a root volume, also means I can keep migrating data onto the storage drives, or repurpose a fast disk as a storage disk and have it spin down too.
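
A minimal sketch of a few of those operations, assuming a hypothetical volume group vg0 on three disks, made-up sizes, and an ext filesystem for the resize:

    # Pool the disks into one volume group:
    pvcreate /dev/sdb1 /dev/sdc1 /dev/sdd1
    vgcreate vg0 /dev/sdb1 /dev/sdc1 /dev/sdd1

    # 3-way mirrored volume for the pictures (original plus two copies):
    lvcreate -m 2 -L 100G -n pictures vg0

    # Striped volume across all three disks for scratch space:
    lvcreate -i 3 -L 16G -n scratch vg0

    # Concatenation: grow the recordings volume after adding a disk:
    vgextend vg0 /dev/sde1
    lvextend -L +500G vg0/mythtv
    resize2fs /dev/vg0/mythtv

    # Live movement: empty a physical volume while everything stays mounted:
    pvmove /dev/sdb1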

I think the crucial aspect is also to consider what you wish to put on the drives.
If it is just pr0n, do you really care if it gets lost?
If it is just scratch areas that need to be fast, ditto.
Where the parity RAID levels are good is that you don't lose half of your potential storage size, as you would with a mirror (with 5 drives, RAID6 still gives you 3 drives' worth of capacity).
Bit rot is real: all it takes is a single charged particle from that nuclear furnace in the sky to knock a single bit out of magnetic alignment, so the array will need regular scrubbing, maybe from a cron job (a sketch follows). https://wiki.archlinux.org/index.php/Software_RAID_and_LVM#Data_scrubbing
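
For md arrays the scrub trigger is the sync_action file; a minimal sketch, assuming the array is /dev/md0 (newer LVM releases expose the same check via lvchange --syncaction, if your version has it):

    # Run as root, e.g. from a weekly cron job:
    echo check > /sys/block/md0/md/sync_action   # start a background scrub
    cat /sys/block/md0/md/mismatch_cnt           # inspect the result afterwards

    # Roughly the same thing for an LVM RAID volume:
    lvchange --syncaction check vg0/pictures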

Specifically on the bandwidth issue, I'd suggest:
1. Take all the drives out of the RAID if you can and run a benchmark against them individually; I like the benchmark tool in palimpsest, but that's me.
2. Concurrently run dd if=/dev/zero of=/dev/sdX bs=1M on all drives (this destroys data, so only on drives already pulled from the array) and see how it compares to the individual scores; this will show you the mainboard/chipset effect. There's a sketch after this list.
3. You might find https://raid.wiki.kernel.org/index.php/RAID_setup#Calculation a good starting point for calculating stride and stripe-width, and http://forums.gentoo.org/viewtopic-t-942794-start-0.html shows the benefit of adjusting the numbers; a worked example also follows.
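
A rough sketch of both steps, with hypothetical device names, and the stride/stripe-width arithmetic worked through for your 16k-chunk, 5-drive RAID6 (assuming a 4k-block ext4 filesystem):

    # Concurrent raw-write test -- DESTROYS DATA, only on drives already
    # pulled from the array.  bs=1M avoids dd's slow 512-byte default:
    for d in sdb sdc sdd sde sdf; do
        dd if=/dev/zero of=/dev/$d bs=1M count=4096 oflag=direct &
    done
    wait

    # stride       = chunk size / fs block size  = 16k / 4k = 4
    # stripe-width = stride * data disks (5-2=3) = 4 * 3    = 12
    mkfs.ext4 -E stride=4,stripe-width=12 /dev/md0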


hope this helps!


On 06/20/2013 08:10 PM, Mark Knecht wrote:
Hi,
   Does anyone know of info on how the starting sector number might
impact RAID performance under Gentoo? The drives are WD-500G RE3
drives shown here:

http://www.amazon.com/Western-Digital-WD5002ABYS-3-5-inch-Enterprise/dp/B001EMZPD0/ref=cm_cr_pr_product_top

   These are NOT 4k sector sized drives.

   Specifically I'm running a 5-drive RAID6 for about 1.45TB of storage. My
benchmarking seems abysmal at around 40MB/S using dd copying large
files. It's higher, around 80MB/S if the file being transferred is
coming from an SSD, but even 80MB/S seems slow to me. I see a LOT of
wait time in top. And my 'large file' copies might not be large enough
as the machine has 24GB of DRAM and I've only been copying 21GB so
it's possible some of that is cached.

   Then I looked again at how I partitioned the drives originally and
see the starting sector of partition 3 as 8594775. I started wondering if
something like 4K block sizes at the file system level might be
getting munged across 16k chunk sizes in the RAID. Maybe the blocks
are being torn apart in bad ways for performance? That led me down a
bunch of rabbit holes and I haven't found any light yet.

   Looking for some thoughtful ideas from those more experienced in this area.

Cheers,
Mark