From mboxrd@z Thu Jan  1 00:00:00 1970
Received: from pigeon.gentoo.org ([208.92.234.80] helo=lists.gentoo.org)
	by finch.gentoo.org with esmtp (Exim 4.60)
	(envelope-from <gentoo-user+bounces-138214-garchives=archives.gentoo.org@lists.gentoo.org>)
	id 1ST8la-0006i6-6D
	for garchives@archives.gentoo.org; Sat, 12 May 2012 09:37:35 +0000
Received: from pigeon.gentoo.org (localhost [127.0.0.1])
	by pigeon.gentoo.org (Postfix) with SMTP id 29652E0955;
	Sat, 12 May 2012 09:37:01 +0000 (UTC)
Received: from mail-we0-f181.google.com (mail-we0-f181.google.com [74.125.82.181])
	by pigeon.gentoo.org (Postfix) with ESMTP id 78B56E02CE
	for <gentoo-user@lists.gentoo.org>; Sat, 12 May 2012 09:34:46 +0000 (UTC)
Received: by werj55 with SMTP id j55so1444945wer.40
        for <gentoo-user@lists.gentoo.org>; Sat, 12 May 2012 02:34:45 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20120113;
        h=from:reply-to:to:subject:date:user-agent:references:in-reply-to
         :mime-version:content-type:content-transfer-encoding:message-id;
        bh=GPRUNwLSv8J1yOHaxoyrQJZ2/cVtdAO2Ak7uu/n/5TQ=;
        b=ChJNpPt7PVA6DQiKufmzDAXW6sAyhWIAUNw8LGw61xgDEnPhaCP2OZ4Cgx/e2eRVNV
         de+HhB3eqEaLOOo8nkDhWuSLKIvjay1fGFKjwlwEMtrIgslefjo0G3bfu5i1rpQgT6pD
         zdm8XMM2hOHTGSk3NHhdUfXPbBwDwvbcLysc8sYkjq9JfbnR/eQbjVcDu8f0mW8tSbTh
         j8orsJxS98MCV7a4sb5T8mx5GFBJDaOaV5Jmjr7sPUmKKxbOwKL6+FAf+UZr3QEGgMdS
         wz+qpg08udRwAjofO0brZJF41or8mIE+gJxaPthdlvAhNv0f9mOQPRoNlP4ZIH8hYc13
         loyQ==
Received: by 10.180.99.195 with SMTP id es3mr3066897wib.12.1336815285567;
        Sat, 12 May 2012 02:34:45 -0700 (PDT)
Received: from dell_xps.localnet (230.3.169.217.in-addr.arpa. [217.169.3.230])
        by mx.google.com with ESMTPS id ex2sm27632012wib.8.2012.05.12.02.34.42
        (version=SSLv3 cipher=OTHER);
        Sat, 12 May 2012 02:34:44 -0700 (PDT)
From: Mick <michaelkintzios@gmail.com>
To: gentoo-user@lists.gentoo.org
Subject: Re: [gentoo-user] Are those "green" drives any good?
Date: Sat, 12 May 2012 10:34:12 +0100
User-Agent: KMail/1.13.7 (Linux/3.2.12-gentoo; KDE/4.8.1; x86_64; ; )
References: <4FAA2F0D.8080900@gmail.com> <CAJoTvCs6RhW8T4F9EAjBaS+KCdvzDiZaYLvTwDkx5s_7Tf2vUA@mail.gmail.com> <CAK2H+ed1DdB8q3fAU0Rw5sxHny2Rm9nTD4_W6x=6-9RwcXyXyA@mail.gmail.com>
In-Reply-To: <CAK2H+ed1DdB8q3fAU0Rw5sxHny2Rm9nTD4_W6x=6-9RwcXyXyA@mail.gmail.com>
Precedence: bulk
List-Post: <mailto:gentoo-user@lists.gentoo.org>
List-Help: <mailto:gentoo-user+help@lists.gentoo.org>
List-Unsubscribe: <mailto:gentoo-user+unsubscribe@lists.gentoo.org>
List-Subscribe: <mailto:gentoo-user+subscribe@lists.gentoo.org>
List-Id: Gentoo Linux mail <gentoo-user.gentoo.org>
X-BeenThere: gentoo-user@lists.gentoo.org
Reply-to: gentoo-user@lists.gentoo.org
MIME-Version: 1.0
Content-Type: multipart/signed;
  boundary="nextPart1524927.dMXJFQMox9";
  protocol="application/pgp-signature";
  micalg=pgp-sha1
Content-Transfer-Encoding: 7bit
Message-Id: <201205121035.02106.michaelkintzios@gmail.com>
X-Archives-Salt: 567f5640-30e0-43d5-86fe-951d8cbd25d8
X-Archives-Hash: c4045e3a5e6e17d0f24d324d1b6c99be

--nextPart1524927.dMXJFQMox9
Content-Type: Text/Plain;
  charset="utf-8"
Content-Transfer-Encoding: 8bit

On Thursday 10 May 2012 19:51:14 Mark Knecht wrote:
> On Thu, May 10, 2012 at 11:13 AM, Norman Invasion
> 
> <invasivenorman@gmail.com> wrote:
> > On 10 May 2012 14:01, Mark Knecht <markknecht@gmail.com> wrote:
> >> On Thu, May 10, 2012 at 9:20 AM, Norman Invasion
> >> 
> >> <invasivenorman@gmail.com> wrote:
> >>> On 9 May 2012 04:47, Dale <rdalek1967@gmail.com> wrote:
> >>>> Hi,
> >>>> 
> >>>> As some know, I'm planning to buy me a LARGE hard drive to put all my
> >>>> videos on, eventually.  The prices are coming down now.  I keep seeing
> >>>> these "green" drives that are made by just about every company
> >>>> nowadays. When comparing them to a non "green" drive, do they hold up
> >>>> as good? Are they as dependable as a plain drive?  I guess they are
> >>>> more efficient and I get that but do they break quicker, more often
> >>>> or no difference?
> >>>> 
> >>>> I have noticed that they tend to spin slower and are cheaper.  That
> >>>> much I have figured out.  Other than that, I can't see any other
> >>>> difference. Data speeds seem to be about the same.
> >>> 
> >>> They have an ugly tendency to nod off at 6 second intervals.
> >>> This runs up "193 Load_Cycle_Count" unacceptably: as many
> >>> as a few hundred thousand in a year & a million cycles is
> >>> getting close to the lifetime limit on most hard drives.  I end
> >>> up running some iteration of
> >>> # hdparm -B 255 /dev/sda
> >>> every boot.
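To save re-running that by hand, one option is a boot hook. A minimal sketch, assuming OpenRC's /etc/local.d mechanism and that the affected disk is /dev/sda (the file name here is illustrative; adjust both to taste):

```shell
#!/bin/sh
# /etc/local.d/apm.start -- run once at boot by OpenRC (hypothetical file name).
# -B 255 disables APM entirely so the heads stop parking every few seconds.
# Note: some drives reject 255; on those, -B 254 is the least aggressive
# setting they will accept.
hdparm -B 255 /dev/sda
```

The script has to be executable (chmod +x) or local.d will skip it.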
> >>
> >> Very true about the 193 count. Here's a drive in a system that was
> >> built in Jan., 2010 so it's a bit over 2 years old at this point. It's
> >> on 24/7 and not rebooted except for more major updates, etc. My tests
> >> say the drive spins down and starts back up every 2 minutes and has
> >> been doing so for about 28 months. IIRC the 193 spec on this drive was
> >> something like 300000 max with the drive currently clocking in at
> >> 700488. I don't see any evidence that it's going to fail but I am
> >> trying to make sure it's backed up often. Being that it's gone >2x at
> >> this point I will swap the drive out in the early summer no matter
> >> what. This week I'll be visiting where the machine is so I'm going to
> >> put a backup drive in the box to get ready.
> > 
> > Yes, I just learned about this problem in 2009 or so, &
> > checked on my FreeBSD laptop, which turned out to be
> > at >400000.  It only made it another month or so before
> > having unrecoverable errors.
> > 
> > Now, I can't conclusively demonstrate that the 193
> > Load_Cycle_Count was somehow causative, but I
> > gots my suspicions.  Many of 'em highly suspectable.
> 
> It's part of the 'Wear Out Failure' part of the Bathtub Curve posted
> in the last few days. That said, some Toyotas go 100K miles, and
> others go 500K miles. Same car, same spec, same production line,
> different owners, different roads, different climates, etc.
> 
> It's not possible to absolutely know when any drive will fail. I
> suspect that the 300K spec is just that, a spec. They'd replace the
> drive if it failed at 299,999 and wouldn't replace it at 300,001. That
> said, they don't want to spec things too tightly, and I doubt many
> people make a purchasing decision on a spec like this, so for the vast
> majority of drives most likely they'd do far more than 300K.
> 
> At 2 minutes per count on that specific WD Green Drive, if a home
> machine is turned on for instance 5 hours a day (6PM to 11PM) then
> 300K count equates to around 6 years. To me that seems pretty generous
> for a low cost home machine. However for a 24/7 production server it's
> a pretty fast replacement schedule.
> 
> Here's data for my 500GB WD RAID Edition drives in my compute server
> here. It's powered down almost every night but doesn't suffer from the
> same firmware issues. The machine was built in April, 2010, so it's a
> bit over 2 years old.  Note that it's been powered on less than 1/2 the
> number of hours but only has a 193 count of 907 vs > 700000!
> 
> Cheers,
> Mark
> 
> 
> c2stable ~ # smartctl -a /dev/sda
> smartctl 5.42 2011-10-20 r3458 [x86_64-linux-3.2.12-gentoo] (local build)
> Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net
> 
> === START OF INFORMATION SECTION ===
> Model Family:     Western Digital RE3 Serial ATA
> Device Model:     WDC WD5002ABYS-02B1B0
> Serial Number:    WD-WCASYA846988
> LU WWN Device Id: 5 0014ee 2042c3477
> Firmware Version: 02.03B03
> User Capacity:    500,107,862,016 bytes [500 GB]
> Sector Size:      512 bytes logical/physical
> Device is:        In smartctl database [for details use: -P show]
> ATA Version is:   8
> ATA Standard is:  Exact ATA specification draft version not indicated
> Local Time is:    Thu May 10 11:45:45 2012 PDT
> SMART support is: Available - device has SMART capability.
> SMART support is: Enabled
> 
> === START OF READ SMART DATA SECTION ===
> SMART overall-health self-assessment test result: PASSED
> 
> General SMART Values:
> Offline data collection status:  (0x84) Offline data collection activity
>                                         was suspended by an interrupting
>                                         command from host.
>                                         Auto Offline Data Collection: Enabled.
> Self-test execution status:      (   0) The previous self-test routine completed
>                                         without error or no self-test has
>                                         ever been run.
> Total time to complete Offline
> data collection:                ( 9480) seconds.
> Offline data collection
> capabilities:                    (0x7b) SMART execute Offline immediate.
>                                         Auto Offline data collection on/off support.
>                                         Suspend Offline collection upon new
>                                         command.
>                                         Offline surface scan supported.
>                                         Self-test supported.
>                                         Conveyance Self-test supported.
>                                         Selective Self-test supported.
> SMART capabilities:            (0x0003) Saves SMART data before
>                                         entering power-saving mode.
>                                         Supports SMART auto save timer.
> Error logging capability:        (0x01) Error logging supported.
>                                         General Purpose Logging supported.
> Short self-test routine
> recommended polling time:        (   2) minutes.
> Extended self-test routine
> recommended polling time:        ( 112) minutes.
> Conveyance self-test routine
> recommended polling time:        (   5) minutes.
> SCT capabilities:              (0x303f) SCT Status supported.
>                                         SCT Error Recovery Control supported.
>                                         SCT Feature Control supported.
>                                         SCT Data Table supported.
> 
> SMART Attributes Data Structure revision number: 16
> Vendor Specific SMART Attributes with Thresholds:
> ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
>   1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail  Always       -       0
>   3 Spin_Up_Time            0x0027   239   235   021    Pre-fail  Always       -       1050
>   4 Start_Stop_Count        0x0032   100   100   000    Old_age   Always       -       935
>   5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       0
>   7 Seek_Error_Rate         0x002e   200   200   000    Old_age   Always       -       0
>   9 Power_On_Hours          0x0032   091   091   000    Old_age   Always       -       7281
>  10 Spin_Retry_Count        0x0032   100   100   000    Old_age   Always       -       0
>  11 Calibration_Retry_Count 0x0032   100   100   000    Old_age   Always       -       0
>  12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       933
> 192 Power-Off_Retract_Count 0x0032   200   200   000    Old_age   Always       -       27
> 193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -       907
> 194 Temperature_Celsius     0x0022   106   086   000    Old_age   Always       -       41
> 196 Reallocated_Event_Count 0x0032   200   200   000    Old_age   Always       -       0
> 197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -       0
> 198 Offline_Uncorrectable   0x0030   200   200   000    Old_age   Offline      -       0
> 199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age   Always       -       0
> 200 Multi_Zone_Error_Rate   0x0008   200   200   000    Old_age   Offline      -       0
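Mark's 6-year figure above is easy to sanity-check; the numbers below are taken from his post, not from any drive spec sheet:

```python
# Sanity check of the estimate above: a 300K Load_Cycle_Count budget,
# one load cycle every 2 minutes, on a machine powered up 5 hours a day.
SPEC_CYCLES = 300_000       # quoted 193-attribute spec for the WD Green drive
MINUTES_PER_CYCLE = 2       # observed spin-down interval
HOURS_PER_DAY = 5           # 6PM to 11PM home-machine duty cycle

total_hours = SPEC_CYCLES * MINUTES_PER_CYCLE / 60   # powered-on hours to hit spec
years = total_hours / (HOURS_PER_DAY * 365)
print(f"{total_hours:.0f} hours ~ {years:.1f} years")  # 10000 hours ~ 5.5 years
```

That lands at about five and a half years, the same ballpark as the "around 6 years" above.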

Is this 193 Load_Cycle_Count an issue only on the green drives?

I have a very old Compaq laptop here that shows:

# smartctl -A /dev/sda | egrep "Power_On|Load_Cycle"
  9 Power_On_Hours          0x0012   055   055   000    Old_age   Always       -       19830
193 Load_Cycle_Count        0x0012   001   001   000    Old_age   Always       -       1739734
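Dividing those two raw values gives a quick feel for how aggressive the head parking is; the figures are copied from the output above and from Mark's RE3 numbers earlier in the thread:

```python
# Load cycles per powered-on hour: the laptop drive above versus the WD RE3.
laptop_rate = 1_739_734 / 19830   # Hitachi: Load_Cycle_Count / Power_On_Hours
re3_rate = 907 / 7281             # WD RE3 figures quoted earlier
print(f"laptop: {laptop_rate:.0f} cycles/hour, RE3: {re3_rate:.2f} cycles/hour")
```

Roughly one load cycle every 40 seconds on the laptop, against about one every eight hours on the RE3.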

Admittedly, there are some 60 errors on it (having been used extensively on
bouncy trains, buses, aeroplanes, etc.) but it is still refusing to die ... O_O

It is a Hitachi 20GB:

=== START OF INFORMATION SECTION ===
Model Family:     Hitachi Travelstar 80GN
Device Model:     IC25N020ATMR04-0
Serial Number:    MRX107K1DS623H
Firmware Version: MO1OAD5A
User Capacity:    20,003,880,960 bytes [20.0 GB]
Sector Size:      512 bytes logical/physical
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   6
ATA Standard is:  ATA/ATAPI-6 T13 1410D revision 3a
Local Time is:    Sat May 12 10:30:13 2012 BST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===

-- 
Regards,
Mick

--nextPart1524927.dMXJFQMox9
Content-Type: application/pgp-signature; name=signature.asc 
Content-Description: This is a digitally signed message part.

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2.0.17 (GNU/Linux)

iEYEABECAAYFAk+uLsUACgkQVTDTR3kpaLbbZACgnu7jSh7WSdOhLnkmW9QhgomN
gWkAn136Efj5dsbizmHLo6qAmQmMmfae
=Z08j
-----END PGP SIGNATURE-----

--nextPart1524927.dMXJFQMox9--