public inbox for gentoo-amd64@lists.gentoo.org
From: Mark Knecht <markknecht@gmail.com>
To: Gentoo AMD64 <gentoo-amd64@lists.gentoo.org>
Subject: Re: [gentoo-amd64] Re: Is my RAID performance bad possibly due to starting sector value?
Date: Sun, 23 Jun 2013 08:23:13 -0700
Message-ID: <CAK2H+ee_Y5SSMwfXjT73GO+dVZpKNB71a0vLwd5Kh2f8Wb4j3g@mail.gmail.com>
In-Reply-To: <CAGfcS_kES66v37aAk7ejHUnDxHdNv+JtMTJ8id=wiMN7YodWXQ@mail.gmail.com>

On Sun, Jun 23, 2013 at 4:43 AM, Rich Freeman <rich0@gentoo.org> wrote:
> On Sat, Jun 22, 2013 at 7:04 PM, Mark Knecht <markknecht@gmail.com> wrote:
>>    I've been rereading everyone's posts as well as trying to collect
>> my own thoughts. One question I have at this point, being that you and
>> I seem to be the two non-RAID1 users (but not necessarily devotees) at
>> this time, is what chunk size, stride & stripe width you are
>> using?
>
> I'm using 512K chunks on the two RAID5s which are my LVM PVs:
> md7 : active raid5 sdc3[0] sdd3[6] sde3[7] sda4[2] sdb4[5]
>       971765760 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/5] [UUUUU]
>       bitmap: 1/2 pages [4KB], 65536KB chunk
>
> md6 : active raid5 sda3[0] sdd2[4] sdb3[3] sde2[5]
>       2197687296 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
>       bitmap: 2/6 pages [8KB], 65536KB chunk
>
> On top of this I have a few LVs with ext4 filesystems:
> tune2fs -l /dev/vg1/root  | grep RAID
> RAID stride:              128
> RAID stripe width:        384
> (this is root, bin, sbin, lib)
>
> tune2fs -l /dev/vg1/data  | grep RAID
> RAID stride:              19204
> (this is just about everything else)
>
> tune2fs -l /dev/vg1/video  | grep RAID
> RAID stride:              11047
> (this is mythtv video)
>
> Those were all the defaults picked, and with the exception of root I
> believe the array was quite different when the others were created.
> I'm pretty confident that none of these are optimized, and I'd be
> shocked if any of them are aligned unless this is automated (including
> across pvmoves, reshaping, and such).
>
> That is part of why I'd like to move to btrfs - optimizing raid with
> mdadm+lvm+mkfs.ext4 involves a lot of micromanagement as far as I'm
> aware.  Docs are very spotty at best, and it isn't at all clear that
> things get adjusted as needed when you actually take advantage of
> things like pvmove or reshaping arrays.  I suspect that having btrfs
> on bare metal will be more likely to result in something that keeps
> itself in-tune.
>
> Rich
>

Thanks Rich. I'm finding that helpful.

I completely agree with the micromanagement comment. At one level or
another, that's sort of what this whole thread is about!

On your root partition I sort of wonder about the stripe width.
Assuming I fed it the right values (5, 5, 512, 4), this little page
calculates a stride of 128 and a stripe width of 512 (4 data disks *
128, I think). Just a piece of info:

http://busybox.net/~aldot/mkfs_stride.html
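
For what it's worth, the arithmetic that page does is easy to
sanity-check by hand. A minimal sketch, assuming a 4K ext4 block size
and 4 data disks (5-disk RAID5 minus the parity):

chunk_kb=512; block_kb=4; data_disks=4
stride=$(( chunk_kb / block_kb ))         # 512K chunk / 4K block = 128
stripe_width=$(( stride * data_disks ))   # 128 * 4 data disks = 512
echo "stride=$stride stripe-width=$stripe_width"

If I have that right, those are the values mkfs.ext4 accepts via
-E stride=...,stripe-width=..., so your root's stripe width of 384
looks like it was calculated for 3 data disks rather than 4.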

Returning to the title of the thread (essentially a question about
partition location), I woke up this morning having more or less
decided to just try changing the chunk size to something larger, like
your 512K. It seems I'm out of luck, as the component size apparently
isn't evenly divisible by 512K, or by any of the other sizes I tried:

c2RAID6 ~ # mdadm --grow /dev/md3 --chunk=512 --backup-file=/backups/ChunkSizeBackup
mdadm: component size 484088160K is not a multiple of chunksize 512K
c2RAID6 ~ # mdadm --grow /dev/md3 --chunk=256 --backup-file=/backups/ChunkSizeBackup
mdadm: component size 484088160K is not a multiple of chunksize 256K
c2RAID6 ~ # mdadm --grow /dev/md3 --chunk=128 --backup-file=/backups/ChunkSizeBackup
mdadm: component size 484088160K is not a multiple of chunksize 128K
c2RAID6 ~ # mdadm --grow /dev/md3 --chunk=64 --backup-file=/backups/ChunkSizeBackup
mdadm: component size 484088160K is not a multiple of chunksize 64K
c2RAID6 ~ #
c2RAID6 ~ # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md3 : active raid6 sdb3[9] sdf3[5] sde3[6] sdd3[7] sdc3[8]
      1452264480 blocks super 1.2 level 6, 16k chunk, algorithm 2 [5/5] [UUUUU]

unused devices: <none>
c2RAID6 ~ # fdisk -l /dev/sdb

Disk /dev/sdb: 500.1 GB, 500107862016 bytes, 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x8b45be24

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *          63      112454       56196   83  Linux
/dev/sdb2          112455     8514449     4200997+  82  Linux swap / Solaris
/dev/sdb3         8594775   976773167   484089196+  fd  Linux raid autodetect
c2RAID6 ~ #
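
For reference, the check mdadm is doing there is just modulo
arithmetic on the component size it reports, so it's easy to see
which chunk sizes could ever work (a quick shell sketch using the
numbers above):

size_k=484088160                  # component size from the mdadm errors
for c in 512 256 128 64 32 16; do
    echo "chunk ${c}K: remainder $(( size_k % c ))"
done

Only 32K and the current 16K come back with a remainder of 0, which
matches what mdadm is complaining about.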

I suspect I might be much better off if all the partitions were sized
in multiples of 2048 sectors and started on 2048-sector (1MiB)
boundaries, like the newer fdisk tools enforce.
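
A quick sanity check of that, using the sdb3 numbers from the fdisk
output above:

start=8594775                             # sdb3 start sector
echo $(( start % 2048 ))                  # 1367, so not on a 1MiB boundary
sectors=$(( 976773167 - 8594775 + 1 ))    # sdb3 size in sectors
echo $(( sectors % 2048 ))                # 729, not a whole number of MiB either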

I'm thinking I won't make much headway unless I completely rebuild
the system from bare metal up. If I'm going to do that, then I first
need to get a good copy of the whole RAID onto some other drive, which
is a big, scary job, and then start over with an install disk, I guess.
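
If it comes to that, the copy step itself doesn't have to be fancy. A
minimal sketch, assuming the array's filesystem is mounted at / and a
big enough spare drive is mounted at /mnt/backup (both paths
hypothetical):

rsync -aHAX --numeric-ids --one-file-system / /mnt/backup/raid-copy/
# -aHAX keeps permissions, hard links, ACLs and xattrs;
# --one-file-system avoids recursing into /proc, /sys and other mounts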

Not sure I'm up for that just yet on a Sunday morning...

Take care,
Mark


Thread overview: 46+ messages
2013-06-20 19:10 [gentoo-amd64] Is my RAID performance bad possibly due to starting sector value? Mark Knecht
2013-06-20 19:16 ` Volker Armin Hemmann
2013-06-20 19:28   ` Mark Knecht
2013-06-20 20:45   ` Mark Knecht
2013-06-24 18:47     ` Volker Armin Hemmann
2013-06-24 19:11       ` Mark Knecht
2013-06-20 19:27 ` Rich Freeman
2013-06-20 19:31   ` Mark Knecht
2013-06-21  7:31 ` [gentoo-amd64] " Duncan
2013-06-21 10:28   ` Rich Freeman
2013-06-21 14:23     ` Bob Sanders
2013-06-21 14:27     ` Duncan
2013-06-21 15:13       ` Rich Freeman
2013-06-22 10:29         ` Duncan
2013-06-22 11:12           ` Rich Freeman
2013-06-22 15:45             ` Duncan
2013-06-22 23:04     ` Mark Knecht
2013-06-22 23:17       ` Matthew Marlowe
2013-06-23 11:43       ` Rich Freeman
2013-06-23 15:23         ` Mark Knecht [this message]
2013-06-28  0:51       ` Duncan
2013-06-28  3:18         ` Matthew Marlowe
2013-06-21 17:40   ` Mark Knecht
2013-06-21 17:56     ` Bob Sanders
2013-06-21 18:12       ` Mark Knecht
2013-06-21 17:57     ` Rich Freeman
2013-06-21 18:10       ` Gary E. Miller
2013-06-21 18:38       ` Mark Knecht
2013-06-21 18:50         ` Gary E. Miller
2013-06-21 18:57           ` Rich Freeman
2013-06-22 14:34           ` Duncan
2013-06-22 22:15             ` Gary E. Miller
2013-06-28  0:20               ` Duncan
2013-06-28  0:41                 ` Gary E. Miller
2013-06-21 18:53         ` Bob Sanders
2013-06-22 14:23     ` Duncan
2013-06-23  1:02       ` Mark Knecht
2013-06-23  1:48         ` Mark Knecht
2013-06-28  3:36           ` Duncan
2013-06-28  9:12             ` Duncan
2013-06-28 17:50               ` Gary E. Miller
2013-06-29  5:40                 ` Duncan
2013-06-30  1:04   ` Rich Freeman
2013-06-22 12:49 ` [gentoo-amd64] " B Vance
2013-06-22 13:12   ` Rich Freeman
2013-06-23 11:31 ` thegeezer
