From: Florian Philipp
Date: Wed, 04 May 2011 14:39:36 +0200
To: gentoo-user@lists.gentoo.org
Subject: Re: [gentoo-user] mdadm and raid4

On 04.05.2011 11:08, Evgeny Bushkov wrote:
> On 04.05.2011 11:54, Joost Roeleveld wrote:
>> On Wednesday 04 May 2011 10:07:58 Evgeny Bushkov wrote:
>>> On 04.05.2011 01:49, Florian Philipp wrote:
>>>> On 03.05.2011 19:54, Evgeny Bushkov wrote:
>>>>> Hi.
>>>>> How can I find out which is the parity disk in a RAID-4 soft
>>>>> array? I couldn't find that in the mdadm manual. I know that
>>>>> RAID-4 features a dedicated parity disk that is usually the
>>>>> bottleneck of the array, so that disk must be as fast as possible.
>>>>> It seems useful to employ a few slow disks with a relatively fast
>>>>> disk in such a RAID-4 array.
>>>>>
>>>>> Best regards,
>>>>> Bushkov E.
>>>> You are seriously considering a RAID4? You know, there is a reason
>>>> why it was superseded by RAID5.
>>>> Given the way RAID4 operates, a first guess for finding the parity
>>>> disk in a running array would be the one with the worst SMART data.
>>>> It is the parity disk that dies the soonest.
>>>>
>>>> From looking at the source code, it seems like the last specified
>>>> disk is the parity disk. Disclaimer: I'm no kernel hacker and I
>>>> have only inspected the code, not tried to understand the whole MD
>>>> subsystem.
>>>>
>>>> Regards,
>>>> Florian Philipp
>>> Thank you for answering... The reason I consider RAID-4 is a few
>>> SATA/150 drives and a pair of SATA II drives I've got. Let's look at
>>> the problem from the other side: I can create a RAID-0 (from the
>>> SATA II drives) and then add it to the RAID-4 as the parity disk. It
>>> doesn't bother me if any disk from the RAID-0 fails; that wouldn't
>>> disrupt my RAID-4 array. For example:
>>>
>>> mdadm --create /dev/md1 --level=4 -n 3 -c 128 /dev/sdb1 /dev/sdc1 missing
>>> mdadm --create /dev/md2 --level=0 -n 2 -c 128 /dev/sda1 /dev/sdd1
>>> mdadm /dev/md1 --add /dev/md2
>>>
>>> livecd ~ # cat /proc/mdstat
>>> Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
>>> md2 : active raid0 sdd1[1] sda1[0]
>>>       20969472 blocks super 1.2 128k chunks
>>>
>>> md1 : active raid4 md2[3] sdc1[1] sdb1[0]
>>>       20969216 blocks super 1.2 level 4, 128k chunk, algorithm 0 [3/2] [UU_]
>>>       [========>............]  recovery = 43.7% (4590464/10484608) finish=1.4min speed=69615K/sec
>>>
>>> That configuration works well, but I'm not sure if md2 is the parity
>>> disk here; that's why I asked. Maybe I'm wrong and RAID-5 is the
>>> only array worth using; I'm just trying to weigh all the pros and
>>> cons here.
>>>
>>> Best regards,
>>> Bushkov E.
>> I only use RAID-0 (when I want performance and don't care about the
>> data), RAID-1 (for data I can't afford to lose) and RAID-5 (data I
>> would like to keep). I have never bothered with RAID-4.
>> [...]
>
> I've run some tests with different chunk sizes. The fastest was
> RAID-10 (4 disks), with RAID-5 (3 disks) close behind. RAID-4
> (4 disks) was almost as fast as RAID-5, so I don't see any sense in
> using it.
>
> Best regards,
> Bushkov E.

When you have an array with uneven disk speeds, you might consider
using the --write-mostly option of mdadm:

  -W, --write-mostly
      Subsequent devices listed in a --build, --create, or --add
      command will be flagged as 'write-mostly'. This is valid for
      RAID1 only and means that the 'md' driver will avoid reading
      from these devices if at all possible. This can be useful if
      mirroring over a slow link.

This should help with concurrent read and write operations, because
the kernel will not dispatch read requests to a disk that is already
busy handling writes (see the first sketch below).

On another point: are you sure your disks really differ in speed?
SATA/150 vs. SATA/300 is no reliable indicator, because most HDDs
cannot saturate the SATA port anyway. dd is still the most reliable
way to measure sequential throughput.
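To make the write-mostly idea concrete, a RAID-1 with one slow member
could be created like this (a minimal sketch; /dev/md3, /dev/sdX1 and
/dev/sdY1 are placeholder names, not devices from your setup):

mdadm --create /dev/md3 --level=1 -n 2 /dev/sdX1 --write-mostly /dev/sdY1

Every device listed after --write-mostly (here /dev/sdY1) gets the
write-mostly flag, so md serves reads from /dev/sdX1 whenever possible.
In /proc/mdstat the flagged device shows up with a "(W)" marker.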
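And for the dd measurement, something along these lines gives a rough
sequential read figure (again just a sketch; replace /dev/sdX with the
disk under test; it only reads from the device, never writes):

dd if=/dev/sdX of=/dev/null bs=1M count=1024 iflag=direct

iflag=direct bypasses the page cache, so you measure the disk rather
than RAM. Repeat it for each drive and compare the MB/s figures dd
prints at the end.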
Regards,
Florian Philipp