From: "Sid Spry" <sid@aeam.us>
To: gentoo-user@lists.gentoo.org
Subject: Re: [gentoo-user] Testing a used hard drive to make SURE it is good.
Date: Tue, 23 Jun 2020 13:44:14 -0500	[thread overview]
Message-ID: <8be8c914-dccd-4ccb-b6f2-31f1585765a9@www.fastmail.com> (raw)
In-Reply-To: <CAGfcS_=Z6Ntag61_4UZTq6V59wiQV7TTMq5uR22hti2MYKCE5Q@mail.gmail.com>

On Tue, Jun 23, 2020, at 12:20 PM, Rich Freeman wrote:
> On Tue, Jun 23, 2020 at 12:14 PM Sid Spry <sid@aeam.us> wrote:
> >
> > So if I'm understanding properly most drive firmware won't let you
> > operate the device in an append-only mode?
> 
> So, there are several types of SMR drives.
> 
> There are host-managed, drive-managed, and then hybrid devices that
> default to drive-managed for compatibility reasons but the host can
> send them a command to take full control so that it is the same as
> host-managed.
> 
> A host-managed drive just does what the host tells it to.  If the host
> tells it to do a write that obliterates some other data on the disk,
> the drive just does it, and it is the job of the host
> OS/filesystem/application to make sure that they protect any data they
> care about.  At the drive level these perform identically to CMR
> because they just seek and write like any other drive.  At the
> application level these could perform differently since the
> application might end up having to work around the drive.  However,
> these drives are generally chosen for applications where this is not a
> big problem or where the problems can be efficiently mitigated.
> 
> A drive-managed drive just looks like a regular drive to the host, and
> it ends up having to do a lot of read-before-rewrite operations
> because the host is treating it like it is CMR but the drive has to
> guarantee that nothing gets lost.  A drive-managed disk has no way to
> operate in "append-only" mode.  I'm not an expert in ATA but I believe
> disks are just given an LBA and a set of data to write.  Without
> support for TRIM the drive has no way to know if it is safe to
> overwrite nearby cylinders, which means it has to preserve them.
> 
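Rich's read-before-rewrite point can be shown with a toy model (purely
illustrative, not real firmware behavior; the band size is made up):

```python
# Toy model: why drive-managed SMR overwrites are expensive. Shingled
# tracks overlap, so rewriting one sector clobbers its neighbours; the
# drive must read the rest of the band and write it all back.
BAND = 256  # sectors per shingled band (invented size for illustration)

def rmw_cost(dirty_sectors):
    """Sectors physically rewritten to update `dirty_sectors` random
    sectors, assuming each lands in a different band (worst case)."""
    return dirty_sectors * BAND  # whole band rewritten per dirty sector

# A single 1-sector overwrite costs a full band: 256x write amplification.
assert rmw_cost(1) == 256
```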

Yeah, this is what I was wondering. It looks like there are devices whose
internal management keeps you from using them at their full
performance.
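One way to check which kind a given disk is, at least on Linux, is the
kernel's zoned sysfs attribute (a minimal sketch; note that a
drive-managed SMR disk reports "none" because it hides its shingling
behind a normal block interface):

```python
# Sketch: classify a drive's zoned model from /sys/block/<dev>/queue/zoned,
# which Linux reports as "none", "host-aware", or "host-managed".
from pathlib import Path

def zoned_mode(dev: str) -> str:
    """Return the kernel's zoned model for a block device, e.g. 'sda'.
    Falls back to 'none' if the attribute (or device) is absent."""
    path = Path("/sys/block") / dev / "queue" / "zoned"
    return path.read_text().strip() if path.exists() else "none"

def describe(mode: str) -> str:
    return {
        "none": "CMR, or drive-managed SMR hiding behind a normal interface",
        "host-aware": "hybrid: drive-managed by default, host can take over",
        "host-managed": "host-managed: host must write zones sequentially",
    }.get(mode, "unknown")
```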

> The biggest problem is that the vendors were trying to conceal the
> nature of the drives.  If they advertised TRIM support it would be
> pretty obvious they were SMR.
> 

It looks like I was right then. Maybe the market will settle soon and I
will be able to buy properly marked parts. It's a good thing I stumbled
into this; I was going to be buying more storage shortly.

> > If any do I suspect
> > NILFS (https://en.wikipedia.org/wiki/NILFS) may be a good choice:
> >
> > "NILFS is a log-structured filesystem, in that the storage medium is treated
> > like a circular buffer and new blocks are always written to the end. [...]"
> >
> 
> On a host-managed disk this would perform the same as on a CMR disk,
> with the possible exception of any fixed metadata (I haven't read the
> gory details on the filesystem).  If it has no fixed metadata (without
> surrounding unused tracks) then it would have no issues at all on SMR.
> F2FS takes a similar approach for SSDs, though it didn't really take
> off because ext4's support is good enough and I suspect that modern
> SSDs are fast enough at erasing.
> 

There is not really a lot to NILFS' structure save the fact that it doesn't
delete. It ends up being fairly similar to f2fs. On an SMR drive with TRIM
this would imply little or no penalty for write operations, as all writes
are actually just appends. I'm not sure what impact it would have on
seek/read time with a normal workload, but some people report slight
improvements on SMR drives, especially if they are helium filled, as the
denser packing and lighter gas lead to increased read speeds.
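The append-only idea is easy to sketch (a toy illustration of a
log-structured store, not NILFS code):

```python
# Toy log-structured store: every update, including an "overwrite", is
# an append at the log tail. The newest record for a key wins; stale
# versions stay in the log until a cleaner reclaims them.
class Log:
    def __init__(self):
        self.entries = []  # append-only log of (key, value) records

    def write(self, key, value):
        self.entries.append((key, value))  # overwrite == append, no seek-back

    def read(self, key):
        for k, v in reversed(self.entries):  # newest record wins
            if k == key:
                return v
        return None

log = Log()
log.write("a", 1)
log.write("a", 2)              # "overwrite" is just another append
assert log.read("a") == 2      # reads see the latest version
assert len(log.entries) == 2   # the old version still occupies log space
```

On SMR this is exactly the write pattern the drive wants: the log tail
only ever moves forward, so no shingled band is ever rewritten in place.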

Actually, f2fs may be the better choice; I had almost forgotten about it.
It has a bigger rollout and more testing. I would need to check its feature
set in more detail to make a choice.
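For the feature check, the kernel exposes one directly (a small sketch
assuming the /sys/fs/f2fs/features sysfs layout that modern kernels use;
it returns an empty list where f2fs isn't available):

```python
# Sketch: list the f2fs features the running kernel supports, one file
# per feature under /sys/fs/f2fs/features (e.g. "extra_attr").
from pathlib import Path

def f2fs_features():
    feat_dir = Path("/sys/fs/f2fs/features")
    if not feat_dir.is_dir():
        return []  # f2fs not built into this kernel, or sysfs not mounted
    return sorted(p.name for p in feat_dir.iterdir())
```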

---

Weirdly, benchmarking tends to show f2fs as inferior to ext4 in most
cases.

For a while I've been interested in f2fs (or now NILFS) as a backing block
layer for a real filesystem. It seems to have a better data model than some
tree-based filesystems, but I think we are seeing filesystems absorb
features like snapshotting and logging as must-haves, instead of the
older LVM/RAID-based "storage pipelines."

But then, this is just reimplementing a smart storage controller on your
CPU, though that may be the best place for it.

