From mboxrd@z Thu Jan  1 00:00:00 1970
Subject: Re: [gentoo-user] Seagate ST8000NM0065 PMR or SMR plus NAS SAS SATA
 question
To: gentoo-user@lists.gentoo.org
References: <659f766d-a697-08fc-baeb-9e4356c0a58e@gmail.com>
 <CAGfcS_=UfE7HGA3iGnzt0FucCF60G5mr+3TOoa8qtj-cGTMT2w@mail.gmail.com>
 <9bafdc79-a77f-1b57-6372-b611176164f4@youngman.org.uk>
 <1756899.CQOukoFCf9@lenovo.localdomain>
 <CAGfcS_kWxr_o40dbsNnmzi4F2Cz2yZSvMN8z2vtdhoNdTgWV+Q@mail.gmail.com>
 <9c8682f3-a6a8-e56b-2ac4-999ffb809bc7@youngman.org.uk>
 <CAGfcS_=gTSCBwv-+mjcvhzB6m0gSmdzcPCFdrWbYy=6L-8M7Ng@mail.gmail.com>
From: antlists <antlists@youngman.org.uk>
Message-ID: <c25d83fa-b023-8866-8d19-6045c21f62c1@youngman.org.uk>
Date: Fri, 22 May 2020 19:08:44 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <CAGfcS_=gTSCBwv-+mjcvhzB6m0gSmdzcPCFdrWbYy=6L-8M7Ng@mail.gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

On 22/05/2020 18:20, Rich Freeman wrote:
> On Fri, May 22, 2020 at 12:47 PM antlists <antlists@youngman.org.uk> wrote:
>>
>> What puzzles me (or rather, it doesn't, it's just cost cutting), is why
>> you need a *dedicated* cache zone anyway.
>>
>> Stick a left-shift register between the LBA track and the hard drive,
>> and by switching this on you write to tracks 2,4,6,8,10... and it's a
>> CMR zone. Switch the register off and it's an SMR zone writing to all
>> tracks.
> 
> Disclaimer: I'm not a filesystem/DB design expert.
> 
> Well, I'm sure the zones aren't just 2 tracks wide, but that is worked
> around easily enough.  I don't see what this gets you though.  If
> you're doing sequential writes you can do them anywhere as long as
> you're doing them sequentially within any particular SMR zone.  If
> you're overwriting data then it doesn't matter how you've mapped them
> with a static mapping like this, you're still going to end up with
> writes landing in the middle of an SMR zone.

Let's assume each shingled track overwrites half of the previous track, 
and that a shingled zone is 2GB in size. My method converts that into a 
1GB CMR zone, because we're only writing to every second track.
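
Something like this toy C snippet is the mapping I'm picturing - made-up 
track counts, obviously nothing like real firmware, just the shape of 
the "shift register" idea:

/* Toy illustration of the track mapping.  In CMR mode the left shift
 * means only every second track gets written, so nothing overlaps; in
 * SMR mode the mapping is the identity and every track is shingled. */
#include <stdio.h>
#include <stdbool.h>

static unsigned physical_track(unsigned logical, bool cmr_mode)
{
    return cmr_mode ? (logical << 1)  /* tracks 0,2,4,6,... : conventional */
                    : logical;        /* tracks 0,1,2,3,... : shingled     */
}

int main(void)
{
    for (unsigned t = 0; t < 5; t++)
        printf("logical track %u -> CMR %u, SMR %u\n",
               t, physical_track(t, true), physical_track(t, false));
    return 0;
}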

I don't know how these drives cache their writes before re-organising, 
but this means that ANY disk zone can be used as cache, rather than 
having a (too small?) dedicated zone...

So what you could do is allocate one zone of CMR to every four or five 
zones of SMR, and just reshingle each SMR zone as the CMR fills up. The 
important point is that zones can switch roles: from CMR cache, to an 
SMR zone filling up, to a full SMR zone decaying as its contents are 
re-written.
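
Roughly, the life-cycle I'm imagining looks like this (a toy C sketch - 
the state names, zone count and one-cache-per-four ratio are all just 
illustrative):

/* No zone is permanently "cache" or "data"; each one cycles through
 * states, so the drive can always mint a fresh CMR cache zone instead
 * of exhausting a fixed one. */
#include <stdio.h>

enum zone_state {
    ZONE_FREE,         /* empty; can become either kind */
    ZONE_CMR_CACHE,    /* every-other-track mode, takes random writes */
    ZONE_SMR_FILLING,  /* being streamed to sequentially, shingled */
    ZONE_SMR_FULL,     /* full; "decays" as its blocks get superseded */
};

#define NZONES        10
#define SMR_PER_CACHE 4    /* roughly one cache zone per 4-5 data zones */

static const char *name[] = { "free", "cmr-cache", "smr-filling", "smr-full" };

int main(void)
{
    enum zone_state zone[NZONES] = { ZONE_FREE };  /* all start free */

    /* start: one cache zone serving a handful of shingled data zones */
    zone[0] = ZONE_CMR_CACHE;
    for (int i = 1; i <= SMR_PER_CACHE; i++)
        zone[i] = ZONE_SMR_FULL;

    /* cache zone 0 fills up: stream its live data into a free zone,
     * hand zone 0 back to the free pool, and promote another free zone
     * to be the next cache - nothing is permanently "the" cache area. */
    zone[5] = ZONE_SMR_FILLING;   /* reshingle target */
    zone[0] = ZONE_FREE;
    zone[6] = ZONE_CMR_CACHE;     /* fresh cache */

    for (int i = 0; i < NZONES; i++)
        printf("zone %d: %s\n", i, name[zone[i]]);
    return 0;
}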
> 
>> The other thing is, why can't you just stream writes to a SMR zone,
>> especially if we try and localise writes so let's say all LBAs in Gig 1
>> go to the same zone ... okay - if we run out of zones to re-shingle to,
>> then the drive is going to grind to a halt, but it will be much less
>> likely to crash into that barrier in the first place.
> 
> I'm not 100% following you, but if you're suggesting remapping all
> blocks so that all writes are always sequential, like some kind of
> log-based filesystem, your biggest problem here is going to be
> metadata.  Blocks logically are only 512 bytes, so there are a LOT of
> them.  You can't just freely remap them all because then you're going
> to end up with more metadata than data.
> 
> I'm sure they are doing something like that within the cache area,
> which is fine for short bursts of writes, but at some point you need
> to restructure that data so that blocks are contiguous or otherwise
> following some kind of pattern so that you don't have to literally
> remap every single block. 

Which is why I'd break it down into maybe 2GB zones. If each zone is 
written as a stream while it fills, and then re-organised and re-written 
properly when time permits, you've not got overly large chunks of 
metadata. You need a btree to work out where each zone is stored, then 
each zone has a btree to say where its blocks are stored. Oh - and these 
drives are probably 4K blocks only - most new drives are.
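
As a toy model of that two-level lookup (plain arrays standing in for 
the btrees, and demo-sized numbers so it actually runs - a real 2GB zone 
of 4K blocks would have ~512k entries per zone):

#include <stdio.h>
#include <stdint.h>

#define NZONES          4   /* demo numbers only */
#define BLOCKS_PER_ZONE 8

struct zone_map {
    uint32_t physical_zone;             /* level 1: where the zone lives   */
    uint32_t block[BLOCKS_PER_ZONE];    /* level 2: logical -> physical 4K */
};

static void lookup(const struct zone_map *z, uint32_t lba,
                   uint32_t *pzone, uint32_t *pblock)
{
    uint32_t zi = lba / BLOCKS_PER_ZONE;    /* which zone */
    uint32_t bi = lba % BLOCKS_PER_ZONE;    /* block inside it */
    *pzone  = z[zi].physical_zone;
    *pblock = z[zi].block[bi];
}

int main(void)
{
    struct zone_map zones[NZONES];

    for (uint32_t zi = 0; zi < NZONES; zi++) {      /* identity to start */
        zones[zi].physical_zone = zi;
        for (uint32_t bi = 0; bi < BLOCKS_PER_ZONE; bi++)
            zones[zi].block[bi] = bi;
    }

    zones[1].physical_zone = 3;  /* pretend zone 1 got reshingled elsewhere */

    uint32_t pz, pb;
    lookup(zones, 11, &pz, &pb); /* logical block 11 sits in logical zone 1 */
    printf("lba 11 -> physical zone %u, block %u\n", pz, pb);
    return 0;
}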

> Now, they could still reside in different
> locations, so maybe some sequential group of blocks are remapped, but
> if you have a write to one block in the middle of a group you need to
> still read/rewrite all those blocks somewhere.  Maybe you could use a
> COW-like mechanism like zfs to reduce this somewhat, but you still
> need to manage blocks in larger groups so that you don't have a ton of
> metadata.

The problem with drives at the moment is that they run out of CMR cache, 
so they have to rewrite all those blocks WHILE THE USER IS STILL WRITING. 
The point of my idea is that they can repurpose disk zones as SMR or CMR 
as required, so they don't run out of cache at the wrong time ...

Yes, metadata may balloon under pressure, but give the drive a break and 
it can grab a new zone, stream the data out in SMR order, and shrink the 
metadata back down.
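
Something like this is the tidy-up pass I mean (toy C again, invented 
sizes): stream one bloomed zone back out in logical order and its 
per-block map collapses to a single extent's worth of metadata.

#include <stdio.h>
#include <stdbool.h>
#include <stdint.h>

#define BLOCKS_PER_ZONE 8   /* demo size only */

/* True if the map is the identity, i.e. describable as one extent. */
static bool is_one_extent(const uint32_t map[BLOCKS_PER_ZONE])
{
    for (uint32_t i = 0; i < BLOCKS_PER_ZONE; i++)
        if (map[i] != i)
            return false;
    return true;
}

/* Rewrite the zone's blocks in logical order into a fresh shingled zone.
 * data[] stands in for the physical 4K blocks; afterwards the map is the
 * identity and the old physical zone can be freed. */
static void reshingle_in_order(uint32_t map[BLOCKS_PER_ZONE],
                               int data[BLOCKS_PER_ZONE])
{
    int fresh[BLOCKS_PER_ZONE];
    for (uint32_t lb = 0; lb < BLOCKS_PER_ZONE; lb++)
        fresh[lb] = data[map[lb]];    /* sequential stream, logical order */
    for (uint32_t lb = 0; lb < BLOCKS_PER_ZONE; lb++) {
        data[lb] = fresh[lb];
        map[lb]  = lb;                /* metadata shrinks to "identity" */
    }
}

int main(void)
{
    /* a zone whose map has bloomed under random-write pressure */
    uint32_t map[BLOCKS_PER_ZONE] = { 5, 2, 7, 0, 1, 6, 3, 4 };
    int data[BLOCKS_PER_ZONE]     = { 30, 40, 10, 60, 70, 0, 50, 20 };

    reshingle_in_order(map, data);
    printf("one extent after reshingle: %s\n",
           is_one_extent(map) ? "yes" : "no");
    printf("logical block 0 now holds %d\n", data[0]);   /* 0, as before */
    return 0;
}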
> 
> With host-managed SMR this is much less of a problem because the host
> can use extents/etc to reduce the metadata, because the host already
> needs to map all this stuff into larger structures like
> files/records/etc.  The host is already trying to avoid having to
> track individual blocks, so it is counterproductive to re-introduce
> that problem at the block layer.
> 
> Really the simplest host-managed SMR solution is something like f2fs
> or some other log-based filesystem that ensures all writes to the disk
> are sequential.  Downside to flash-based filesystems is that they can
> disregard fragmentation on flash, but you can't disregard that for an
> SMR drive because random disk performance is terrible.

Which is why you have small(ish) zones, so logically close writes are 
hopefully physically close as well ...
> 
>> Even better, if we have two independent heads, we could presumably
>> stream updates using one head, and re-shingle with the other. But that's
>> more cost ...
> 
> Well, sure, or if you're doing things host-managed then you stick the
> journal on an SSD and then do the writes to the SMR drive
> opportunistically.  You're basically describing a system where you
> have independent drives for the journal and the data areas.  Adding an
> extra head on a disk (or just having two disks) greatly improves
> performance, especially if you're alternating between two regions
> constantly.
> 
Except I'm describing a system where the journal and data areas are 
interchangeable :-)

Cheers,
Wol