From: Mark Knecht <markknecht@gmail.com>
To: Gentoo AMD64 <gentoo-amd64@lists.gentoo.org>
Subject: Re: [gentoo-amd64] Re: Is my RAID performance bad possibly due to starting sector value?
Date: Sat, 22 Jun 2013 18:02:58 -0700
Message-ID: <CAK2H+edpKpAx1nq+GM+PNEWfz1muDH27DPb42TCXEqMKrABgmQ@mail.gmail.com>
In-Reply-To: <pan$22d39$31ef3544$39d12fd7$3d33aa30@cox.net>

On Sat, Jun 22, 2013 at 7:23 AM, Duncan <1i5t5.duncan@cox.net> wrote:
> Mark Knecht posted on Fri, 21 Jun 2013 10:40:48 -0700 as excerpted:
>
>> On Fri, Jun 21, 2013 at 12:31 AM, Duncan <1i5t5.duncan@cox.net> wrote:
>> <SNIP>
<SNIP>
>
> ... Assuming $PWD is now on the raid.  You had the path shown too, which
> I snipped, but that doesn't tell /me/ (as opposed to you, who should know
> based on your mounts) anything about whether it's on the raid or not.
> However, the above including the drop-caches demonstrates enough care
> that I'm quite confident you'd not make /that/ mistake.
>
>> 4) As a second test I read from the RAID6 and write back to the RAID6.
>> I see MUCH lower speeds, again repeatable:
>>
>> dd if=SDDCopy of=HDDWrite
>> 97656250+0 records in
>> 97656250+0 records out
>> 50000000000 bytes (50 GB) copied, 1187.07 s, 42.1 MB/s
>
>> 5) As a final test, and just looking for problems if any, I do an SDD to
>> SDD copy which clocked in at close to 200MB/S
>>
>> dd if=random1 of=SDDCopy
>> 97656250+0 records in
>> 97656250+0 records out
>> 50000000000 bytes (50 GB) copied, 251.105 s, 199 MB/s
>
>> So, given that this RAID6 was grown yesterday from something that
>> has existed for a year or two, I'm not sure of its fragmentation, or
>> even how to determine that at this time. However, it seems my problem
>> is RAID6 reads, not RAID6 writes, at least to new and probably
>> never-used disk space.
>
> Reading all that, one question occurs to me.  If you want to test read
> and write separately, why the intermediate step of dd-ing from
> /dev/random to ssd, then from ssd to raid or ssd?
>
> Why not do direct dd if=/dev/random (or urandom, see note below)
> of=/desired/target ... for write tests, and then (after dropping caches),
> if=/desired/target of=/dev/null ... for read tests?  That way there's
> just the one block device involved, not both.
>

1) I was a bit worried about using /dev/random in a way it wasn't
intended to be used.

2) I felt that if I had a specific file then results should be
repeatable, or at least not dependent on what's in the file.
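
(For repeatability, between runs on the same file I've been dropping
caches so each timed read starts cold. Just a sketch of what I mean,
run as root, reusing the random1 file from my earlier tests:

sync
echo 3 > /proc/sys/vm/drop_caches
dd if=random1 of=/dev/null bs=4096

The bs=4096 there is an arbitrary choice of mine, not something you
asked for.)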


<SNIP>
>
> Meanwhile, dd-ing either from /dev/urandom as source, or to /dev/null as
> sink, with only the test-target block device as a real block device,
> should give you "purer" read-only and write-only tests.  In theory it
> shouldn't matter much given your method of testing, but as we all know,
> theory and reality aren't always well aligned.
>

Will try some tests this way tomorrow morning.
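
If I'm reading you right, the two pure tests would look something like
this (just a sketch -- target path borrowed from your example below,
and the size is an arbitrary choice of mine):

dd if=/dev/urandom of=/mnt/raid/target bs=4096 count=$[1000*100]

sync
echo 3 > /proc/sys/vm/drop_caches

dd if=/mnt/raid/target of=/dev/null bs=4096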

>
> Of course the next question follows on from the above.  I see a write to
> the raid, and a copy from the raid to the raid, so read/write on the
> raid, and a copy from the ssd to the ssd, read/write on it, but no test
> of from the raid read.
>
> So
>
> if=/dev/urandom of=/mnt/raid/target ... should give you raid write.
>
> drop-caches
>
> if=/mnt/raid/target of=/dev/null ... should give you raid read.
>
> *THEN* we have good numbers on both to compare the raid read/write to.
>
> What I suspect you'll find, unless fragmentation IS your problem, is that
> both read (from the raid) alone and write (to the raid) alone should be
> much faster than read/write (from/to the raid).
>
> The problem with read/write is that you're on "rotating rust" hardware
> and there's some latency as it repositions the heads from the read
> location to the write location and back.
>

If this lack of performance is truly driven by drive rotational
latency, then I completely agree.

> If I'm correct and that's what you find, a workaround specific to dd
> would be to specify a much larger block size, so it reads in far more
> data at once, then writes it out at once, with far fewer switches between
> modes.  In the above you didn't specify bs (or the separate input/output
> equivalents, ibs/obs, respectively) at all, so it's using the 512-byte
> blocksize default.
>

So help me clarify this before I do the work and find out I
misunderstood. Earlier I created a file using:

dd if=/dev/random of=random1 bs=1000 count=0 seek=$[1000*1000*50]

If what you are suggesting is more like this very short example:

mark@c2RAID6 /VirtualMachines/bonnie $ dd if=/dev/urandom of=urandom1 bs=4096 count=$[1000*100]
100000+0 records in
100000+0 records out
409600000 bytes (410 MB) copied, 25.8825 s, 15.8 MB/s
mark@c2RAID6 /VirtualMachines/bonnie $

then the results for writing this 400MB file are very slow, but I'm
sure either I don't understand what you're suggesting, or urandom is
the limiting factor here.
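
One way to check the urandom question might be to take the RAID out of
the picture entirely and time urandom against /dev/null -- this is
just my guess at the right test:

dd if=/dev/urandom of=/dev/null bs=4096 count=$[1000*100]

If that also comes in around 15 MB/s, then the number above is really
measuring urandom, not the RAID6 write speed.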

I'll look for a reply (you or anyone else that has Duncan's idea
better than I do) before I do much more.
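
In the meantime, if the suggestion really is just a much larger block
size, maybe something like this, repeating the RAID6-to-RAID6 copy
from step 4 with the same two files (bs=16M is only my guess at "much
larger" -- you didn't name a number):

dd if=SDDCopy of=HDDWrite bs=16M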

Thanks!

- Mark


Thread overview: 46+ messages
2013-06-20 19:10 [gentoo-amd64] Is my RAID performance bad possibly due to starting sector value? Mark Knecht
2013-06-20 19:16 ` Volker Armin Hemmann
2013-06-20 19:28   ` Mark Knecht
2013-06-20 20:45   ` Mark Knecht
2013-06-24 18:47     ` Volker Armin Hemmann
2013-06-24 19:11       ` Mark Knecht
2013-06-20 19:27 ` Rich Freeman
2013-06-20 19:31   ` Mark Knecht
2013-06-21  7:31 ` [gentoo-amd64] " Duncan
2013-06-21 10:28   ` Rich Freeman
2013-06-21 14:23     ` Bob Sanders
2013-06-21 14:27     ` Duncan
2013-06-21 15:13       ` Rich Freeman
2013-06-22 10:29         ` Duncan
2013-06-22 11:12           ` Rich Freeman
2013-06-22 15:45             ` Duncan
2013-06-22 23:04     ` Mark Knecht
2013-06-22 23:17       ` Matthew Marlowe
2013-06-23 11:43       ` Rich Freeman
2013-06-23 15:23         ` Mark Knecht
2013-06-28  0:51       ` Duncan
2013-06-28  3:18         ` Matthew Marlowe
2013-06-21 17:40   ` Mark Knecht
2013-06-21 17:56     ` Bob Sanders
2013-06-21 18:12       ` Mark Knecht
2013-06-21 17:57     ` Rich Freeman
2013-06-21 18:10       ` Gary E. Miller
2013-06-21 18:38       ` Mark Knecht
2013-06-21 18:50         ` Gary E. Miller
2013-06-21 18:57           ` Rich Freeman
2013-06-22 14:34           ` Duncan
2013-06-22 22:15             ` Gary E. Miller
2013-06-28  0:20               ` Duncan
2013-06-28  0:41                 ` Gary E. Miller
2013-06-21 18:53         ` Bob Sanders
2013-06-22 14:23     ` Duncan
2013-06-23  1:02       ` Mark Knecht [this message]
2013-06-23  1:48         ` Mark Knecht
2013-06-28  3:36           ` Duncan
2013-06-28  9:12             ` Duncan
2013-06-28 17:50               ` Gary E. Miller
2013-06-29  5:40                 ` Duncan
2013-06-30  1:04   ` Rich Freeman
2013-06-22 12:49 ` [gentoo-amd64] " B Vance
2013-06-22 13:12   ` Rich Freeman
2013-06-23 11:31 ` thegeezer
