From: Michael <confabulate@kintzios.com>
To: gentoo-user@lists.gentoo.org
Subject: Re: [gentoo-user] m.2 nvme stick not what I was expecting at all.
Date: Wed, 26 Feb 2025 16:38:34 +0000	[thread overview]
Message-ID: <3865089.kQq0lBPeGt@rogueboard> (raw)
In-Reply-To: <f14b8557-8bb6-e0f6-abe0-ff6b6dd82891@gmail.com>

On Wednesday, 26 February 2025 14:43:41 Greenwich Mean Time Dale wrote:
> Rich Freeman wrote:
> > On Tue, Feb 25, 2025 at 12:26 PM Dale <rdalek1967@gmail.com> wrote:
> >> I'm pretty sure you mentioned this once before in one of my older
> >> threads.  I can't find it tho.  I use PCIe x1 cards to connect my SATA
> >> drives for my video collection and such.  You mentioned once what the
> >> bandwidth was for that setup and how many drives it would take to pretty
> >> much max it out.  Right now, I have one card for two sets of LVs.  One
> >> LV has four drives and the other has three.  What would be the limiting
> >> factor on that, the drives, the PCIe bus or something else?
> > 
> > It depends on the PCIe revision, and of course whether the controller
> > actually maxes it out.
> > 
> > A PCIe v3 x1 link can do 0.985 GB/s total.  That's about 5 HDDs if
> > they're running sequentially, and again assumes that your controller
> > can actually handle all that data.  Each PCIe generation forward
> > doubles that rate; each generation back halves it.  The interface
> > runs at the highest PCIe version supported by both the
> > motherboard+CPU and the adapter card.
> > 
> > If you're talking about HDDs, in practice the HDDs are probably still
> > the bottleneck.  If these were SATA SSDs, then odds are the PCIe
> > lane is limiting things, because I doubt this is an all-v5 setup.
> > 
> > https://en.wikipedia.org/wiki/PCI_Express#History_and_revisions
> > 
> > The big advantage of NVMe isn't so much the bandwidth as the IOPS,
> > though both benefit.  Those run at full PCIe x4 interface speed per
> > drive, but of course you need 4 lanes per drive for that, which is
> > hard to obtain on consumer motherboards at any scale.
> 
> This I think is what I needed.  As it is, I'm most likely not maxing
> anything out, yet.  The drives for Data, torrent stuff, stay pretty
> busy.  Mostly reading.  My other set of drives, videos, isn't too busy
> most of the time.  A few MBs/sec or something, playing-videos type
> reading.  Still, next time I power down, I may stick that second card in
> and divide things up a bit.  Might benefit if those cards aren't too great.
> 
> I did copy this info and stuck it in a text file so I don't have to dig
> for it again, or ask again.  ;-)
> 
> Thanks.
> 
> Dale
> 
> :-)  :-) 
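
To put Rich's numbers in perspective, here's a quick back-of-envelope sketch 
in Python (the ~200 MB/s sequential figure per HDD is my own assumption; 
real drives vary):

  # Rough usable throughput of a single PCIe lane (GB/s), doubling
  # with each generation as Rich described.
  pcie_x1_gbs = {1: 0.25, 2: 0.5, 3: 0.985, 4: 1.969, 5: 3.938}

  hdd_seq_gbs = 0.2  # assumed ~200 MB/s sequential per HDD

  for gen, bw in pcie_x1_gbs.items():
      print(f"PCIe v{gen} x1: {bw:.3f} GB/s ~ {bw / hdd_seq_gbs:.1f} HDDs flat out")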

The other thing to straighten out, already hinted at by Rich et al., is that 
an NVMe M.2 drive in a USB 3 enclosure won't be able to reach its full SSD 
transfer rates.  To do that it needs a Thunderbolt connector and a 
corresponding Thunderbolt port on the PC, which connects it internally to 
the computer's PCIe bus rather than via USB/SATA.  Hence a previous comment 
questioning the perceived value of paying for an NVMe SSD in an M.2 form 
factor inside a USB enclosure.  It won't derive much, if any, performance 
benefit compared to a *good* quality USB 3 flash drive (UFD), which can be 
sourced at a much lower price point.
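
For a rough sense of the ceilings involved, a sketch of the nominal link 
rates (signalling rates only; real payload throughput comes out lower once 
encoding and protocol overhead are taken off):

  # Nominal signalling rate of each interface (Gbit/s).  Payload
  # throughput is lower after encoding/protocol overhead.
  link_gbit = {
      "USB 3.0 (Gen 1)":   5,
      "USB 3.2 Gen 2":     10,
      "Thunderbolt 3/4":   40,
      "PCIe v3 x4 (NVMe)": 32,
  }

  for name, gbit in link_gbit.items():
      print(f"{name:18} ~{gbit / 8:5.2f} GB/s raw ceiling")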

The external storage medium, transfer protocol, device controller, PC bus 
and cables/connectors/ports all have to be of aligned generations of 
technology and standards if you expect to get close to their advertised 
transfer speeds.  Otherwise some lower-performance component in the chain 
will bottleneck your aspirations.  ;-)
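
As a toy illustration of that chain, with numbers invented for the example 
(a fast NVMe drive behind a 10 Gbps bridge plugged into an older 5 Gbps 
port), the slowest hop sets the pace:

  # Effective speed of an external drive = slowest hop in the chain
  # (all figures GB/s, invented for illustration).
  chain_gbs = {
      "NVMe SSD media":     3.5,
      "bridge controller":  1.0,
      "USB 3.2 Gen 2 link": 1.0,
      "host USB 3.0 port":  0.5,
  }

  weakest = min(chain_gbs, key=chain_gbs.get)
  print(f"effective ceiling ~{chain_gbs[weakest]} GB/s, set by the {weakest}")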

Some time ago I bought a SanDisk 1TB Extreme Portable SSD, which has a USB-C 
connector and a USB 3.2 Gen 2 controller, as a replacement for a flaky USB 
3.0 stick.  It is slightly bigger than a small UFD and more expensive than 
the cheaper UFD offerings, but the faster speeds more than compensate for 
that.  At the time I bought it, external NVMe M.2 drives were too expensive 
and I only had USB 3.0 ports anyway, so in my use case it offered the best 
bang for my buck.


