From: William Kenworthy <billk@iinet.net.au>
To: gentoo-user@lists.gentoo.org
Subject: Re: [gentoo-user] [OT] SMR drives (WAS: cryptsetup close and device in use when it is not)
Date: Sun, 1 Aug 2021 11:05:29 +0800
Message-ID: <6ca83a12-24b7-57bd-2dd9-1b1d46209d69@iinet.net.au>
In-Reply-To: <CAGfcS_kziu6DUkRzU+iPK=S3KtKsMujvaaiGxZ2QYotLCne0kQ@mail.gmail.com>
On 31/7/21 9:30 pm, Rich Freeman wrote:
> On Sat, Jul 31, 2021 at 8:59 AM William Kenworthy <billk@iinet.net.au> wrote:
>> I tried using moosefs with an rpi3B in the
>> mix and it didn't go well once I started adding data - rpi4s were not
>> available when I set it up.
> Pi2/3s only have USB2 as far as I'm aware, and they stick the ethernet
> port on that USB bus besides. So, they're terrible for anything that
> involves IO of any kind.
>
> The Pi4 moves the ethernet off of USB, upgrades it to gigabit, and has
> two USB3 hosts, so this is just all-around a massive improvement.
> Obviously it isn't going to outclass some server-grade system with a
> gazillion PCIe v4 lanes but it is very good for an SBC and the price.
>
> I'd love server-grade ARM hardware but it is just so expensive unless
> there is some source out there I'm not aware of. It is crazy that you
> can't get more than 4-8GiB of RAM on an affordable arm system.
Check out the Odroid range. Same or only slightly more $$$ for a much
better unit than a Pi (except for the availability of 8GB RAM on the
Pi4). None of the Pis I have had have come close, though I do not have
a Pi4, and from what I have read that one looks much closer in
performance. The Odroid site includes comparison charts of Odroid
against the RPi, and those also show the gap narrowing. There are a
few other companies out there too. I am hoping the popularity of the
8GB Pi will push others to match it. I found the supplied 4.9 and 4.14
kernels problematic with random crashes, especially if USB was
involved. I am currently using the 5.12 tobetter kernels with aarch64
or arm 32-bit Gentoo userlands.
>
>> I think that SMR disks will work quite well
>> on moosefs or lizardfs - I don't see long continuous writes to one disk
>> but a random distribution of writes across the cluster with gaps between
>> on each disk (1G network).
> So, the distributed filesystems divide all IO (including writes)
> across all the drives in the cluster. When you have a number of
> drives that obviously increases the total amount of IO you can handle
> before the SMR drives start hitting the wall. Writing 25GB of data to
> a single SMR drive will probably overrun its CMR cache, but if you
> split it across 10 drives and write 2.5GB each, there is a decent
> chance they'll all have room in the cache, take the write quickly, and
> then as long as your writes aren't sustained they can clear the
> buffer.
Not strictly what I am seeing. You request a file from MFS and the
first free chunkserver with the data replies. Writing is similar in
that (depending on the creation arguments) a chunk is written wherever
responds fastest and is then replicated. Replication is done under the
control of an algorithm that replicates a set number of chunks at a
time, streamed between a limited number of chunkservers, depending on
replication status. So I am seeing individual disk activity that is
busy for a few seconds and then nothing for a short period - this
pattern has become more pronounced as I added chunkservers, and it
would seem to match the SMR requirements. If I replace/rebuild
(resilver) a chunkserver, that one is a lot busier, but still not at
100% continuous write or read. MooseFS uses a throttled replication
methodology - see the sketch below. This is with 7 chunkservers and 9
disks - more is definitely better for performance.
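
The throttling is controlled from mfsmaster.cfg. A minimal sketch of
the knobs involved - option names are from the MooseFS 3.x man pages
as I remember them, and the values are illustrative rather than my
production settings, so check mfsmaster.cfg(5) for your version:

    # mfsmaster.cfg - replication throttling (illustrative values)
    CHUNKS_WRITE_REP_LIMIT = 2    # max chunks replicated *to* a
                                  # chunkserver per loop
    CHUNKS_READ_REP_LIMIT = 10    # max chunks replicated *from* a
                                  # chunkserver per loop
    CHUNKS_LOOP_MIN_TIME = 300    # minimum seconds for one full
                                  # chunk-check loop

Keeping the write limit low is what produces the bursty
few-seconds-busy-then-idle pattern on each disk.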
> I think you're still going to have an issue in a rebalancing scenario
> unless you're adding many drives at once so that the network becomes
> rate-limiting instead of the disks. Having unreplicated data sitting
> around for days or weeks due to slow replication performance is
> setting yourself up for multiple failures. So, I'd still stay away
> from them.
I think at some point I am going to have to add an SMR disk and see
what happens - can't do it now though.
>
> If you have 10GbE then your ability to overrun those disks goes way
> up. Ditto if you're running something like Ceph which can achieve
> higher performance. I'm just doing bulk storage where I care a lot
> more about capacity than performance. If I were trying to run a k8s
> cluster or something I'd be on Ceph on SSD or whatever.
Tried Ceph - ran away fast :) I have a lot of nearly static data but
also a number of LXC instances (running on an Odroid N2), with both
the LXC instance and its data stored on the cluster. These include
email, calendaring, DNS, webservers etc., and they all work well. The
online borgbackup repos are also stored on it. The limitations of
community MooseFS are the single point of failure that is the master,
plus the memory resource requirements on the master. I improved
performance and master memory requirements considerably by pushing the
larger data sets (e.g., GiB of mail files) into a container file
stored on MFS and loop mounted onto the mailserver LXC instance - see
the sketch below. Convoluted, but I'm very happy with the improvement
it's made.
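
A minimal sketch of the container-file trick - paths and sizes here
are made up for illustration, and it assumes mfsmount already has the
MFS root at /mnt/mfs:

    # create a sparse 20GiB container file on the MFS mount
    truncate -s 20G /mnt/mfs/containers/mail.img
    # put an ordinary filesystem inside it (-F because it is a
    # regular file, not a block device)
    mkfs.ext4 -F /mnt/mfs/containers/mail.img
    # loop mount it where the mailserver instance expects its data
    mount -o loop /mnt/mfs/containers/mail.img /srv/lxc/mail/data

The win is that the master keeps metadata in RAM per file/chunk, so
one big container file costs it far less memory than millions of small
mail files stored on MFS directly.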
>> With a good adaptor, USB3 is great ... otherwise it's been quite
>> frustrating :( I do suspect linux and its pedantic correctness trying
>> to deal with hardware that isn't truly standardised (as in the
>> manufacturer probably supplies a windows driver that covers it up) is
>> part of the problem. These adaptors are quite common and I needed to
>> apply the ATA command filter and turn off UAS using the usb tweaks
>> mechanism to stop the crashes and data corruption. The comments in the
>> kernel driver code for these adaptors are illuminating!
> Sometimes I wonder. I occasionally get errors in dmesg about
> unaligned writes when using zfs. Others have seen these. The zfs
> developers seem convinced that the issue isn't with zfs but it simply
> is reporting the issue, or maybe it happens under loads that you're
> more likely to get with zfs scrubbing (which IMO performs far worse
> than btrfs scrubbing - I'm guessing it isn't optimized to scan physically
> sequentially on each disk but may be doing it in a more logical order
> and synchronously between mirror pairs). Sometimes I wonder if there
> is just some sort of bug in the HBA drivers, or maybe the hardware on
> the motherboard. Consumer PC hardware (like all PC hardware) is
> basically a black box unless you have pretty sophisticated testing
> equipment and knowledge, so if your SATA host is messing things up how
> would you know?
>
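For reference, the tweaks I mentioned above go through the usb-storage
quirks mechanism. A minimal sketch - the VID:PID pair is
adapter-specific (152d:0578 below is just an example JMicron ID), and
per the kernel parameter documentation 't' filters the ATA
pass-through commands while 'u' disables UAS:

    # on the kernel command line:
    usb-storage.quirks=152d:0578:t,u

    # or at runtime, before plugging the adaptor in:
    echo 152d:0578:t,u > /sys/module/usb_storage/parameters/quirks

If the quirk is picked up, dmesg should log a "Quirks match for
vid ... pid ..." line for the device.
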
BillK