From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 10 May 2012 13:55:41 +0200
From: napalm@squareownz.org
To: gentoo-user@lists.gentoo.org
Subject: Re: [gentoo-user] Are those "green" drives any good?
Message-ID: <20120510115541.GA20233@squareownz.org>
In-Reply-To: <4FAB04B7.8060306@gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)

On Wed, May 09, 2012 at 06:58:47PM -0500, Dale wrote:
> Mark Knecht wrote:
> > On Wed, May 9, 2012 at 3:24 PM, Dale wrote:
> >> Alan McKinnon wrote:
> >>> My thoughts these days are that
> >>> nobody really makes a bad drive anymore.
> >>> Like cars[1], they're all good and do what it says on the box. Same
> >>> with bikes[2].
> >>>
> >>> A manufacturer may have some bad luck and a product range is less than
> >>> perfect, but even that is quite rare and most stuff-ups can be fixed
> >>> with new firmware. So it's all good.
> >>
> >> That's my thoughts too. It doesn't matter what brand you go with, they
> >> all have some sort of failure at some point. They are not built to last
> >> forever and there is always the random failure, even when a week old.
> >> It's usually the loss of important data and not having a backup that
> >> makes it sooooo bad. I'm not real picky on brand as long as it is a
> >> company I have heard of.
> >
> > One thing to keep in mind is statistics. For a single drive by itself
> > it hardly matters anymore what you buy. You cannot predict the
> > failure. However, if you buy multiple identical drives at the same time,
> > then most likely you will either get all good drives or (possibly) a
> > bunch of drives that suffer from similar defects and all start failing
> > at the same point in their life cycle. For RAID arrays it's
> > measurably best to buy drives that come from different manufacturing
> > lots, better from different factories, and maybe even from different
> > companies. Then, if a drive fails, assuming the failure is really the
> > fault of the drive and not some local issue like power sources or ESD
> > events, etc., it's less likely other drives in the box will fail at
> > the same time.
> >
> > Cheers,
> > Mark
>
> You make a good point too. I had a headlight go out on my car once,
> long ago. Not thinking, I replaced them both, since the new ones were
> brighter. Guess what: when one of the bulbs blew out, the other was out
> VERY soon after. Now I replace them, but NOT at the same time.
> Keep in mind, just like a hard drive, when one headlight is on, so is
> the other one. When we turn our computers on, all the drives spin up
> together, so they are basically all getting the same wear and tear.
>
> I don't use RAID, except to kill bugs, but that is good advice. People
> who do use RAID would be wise to follow it.
>
> Dale
>
> :-) :-)

hum hum! I know that Windows does this by default (it annoys me so I
disable it), but does Linux disable or stop running the disks if they're
inactive? I'm assuming there's an option somewhere - maybe just
`umount`!
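For what it's worth, spin-down on Linux happens below the filesystem layer, so unmounting alone doesn't power a disk down; the usual tool is `hdparm`. A rough sketch (the device path `/dev/sda` is just an example, and the real commands need root, so they're left commented out):

```shell
#!/bin/sh
# Query the drive's current power state (active/idle vs. standby):
#   hdparm -C /dev/sda
# Put the drive into standby (spin it down) immediately:
#   hdparm -y /dev/sda
# Set an idle timeout. Note -S takes an encoded value, not seconds:
# values 1-240 mean (value * 5) seconds; 241-251 mean (value - 240) * 30 minutes.
#   hdparm -S 120 /dev/sda    # 120 * 5 = 600 s, i.e. spin down after 10 min idle

# Helper to decode an -S value into seconds (plain POSIX shell, runnable anywhere):
spindown_seconds() {
  v=$1
  if [ "$v" -ge 1 ] && [ "$v" -le 240 ]; then
    echo $((v * 5))
  elif [ "$v" -ge 241 ] && [ "$v" -le 251 ]; then
    echo $(( (v - 240) * 30 * 60 ))
  else
    echo 0   # 0 disables the standby timer
  fi
}

spindown_seconds 120   # 600 seconds = 10 minutes
spindown_seconds 241   # 1800 seconds = 30 minutes
```

Whether the drive honours the timeout depends on the firmware; some "green" drives park heads on their own schedule regardless of what `hdparm` asks for.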