public inbox for gentoo-user@lists.gentoo.org
From: Dale <rdalek1967@gmail.com>
To: gentoo-user@lists.gentoo.org
Subject: Re: [gentoo-user] New hard drive. Is this normal? It looks like a connect problem.
Date: Fri, 30 May 2025 16:06:03 -0500	[thread overview]
Message-ID: <e322c280-50fb-e3fa-d840-668488ad00ec@gmail.com> (raw)
In-Reply-To: <6056060.MhkbZ0Pkbq@rogueboard>

Michael wrote:
>
> You can transfer some data from a tmpfs and measure the speed.  If it gets 
> anywhere near 4.8 Gbit/s (600 MB/s) its a SATA 3.  The delay in the kernel 
> picking it up at boot may be related to its size, but I have no experience of 
> such large drives to be able to confirm this.
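
For the record, the tmpfs-style test Michael suggests can be sketched with dd.  This is a rough version, not exact: the data comes from /dev/zero (RAM-speed, so the source can't be the bottleneck), and TARGET is a made-up variable for wherever the drive under test is mounted.

```shell
# Rough sequential write test.  conv=fsync makes dd's reported MB/s
# include flushing to the disk, not just filling the page cache.
# TARGET defaults to /tmp here; point it at the test drive's mount point.
TARGET="${TARGET:-/tmp}"
dd if=/dev/zero of="$TARGET/speedtest.bin" bs=1M count=256 conv=fsync
rm -f "$TARGET/speedtest.bin"
```

A read near 600 MB/s suggests a SATA 3 link; spinning drives will top out well below that regardless of link speed.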


When I started reading your reply, I realized I had that drive in the
older external enclosure connected to my main rig.  I'm not sure if
that enclosure is SATA 3 capable or not; I need to research its specs.
It works fine for running self-tests tho.  Anyway, I connected the
drive to my NAS box.  There it reads at speeds that make me think it is
SATA 3, around 250 MB/s with hdparm -t, which is normal on almost all
my rigs.  I have several drives of various ages, but most of the ones
on the NAS box are fairly new.  At the moment I'm transferring data
from one set of 4 drives to a set of 3 drives.  The original set of 4 is
a mix of spare drives I had laying around that holds the backup of my
large directory.  I came up with a fancy command, for me at least, to
get each drive's size and SATA link speed.  This is what I get.  The
first is the OS drive.  The last one is the new 20TB drive.


root@nas ~ # smartctl -i /dev/sda | egrep 'User Capacity|SATA Version'
User Capacity:    500,107,862,016 bytes [500 GB]
SATA Version is:  SATA 3.3, 6.0 Gb/s (current: 3.0 Gb/s)
root@nas ~ # smartctl -i /dev/sdb | egrep 'User Capacity|SATA Version'
User Capacity:    10,000,831,348,736 bytes [10.0 TB]
SATA Version is:  SATA 3.2, 6.0 Gb/s (current: 6.0 Gb/s)
root@nas ~ # smartctl -i /dev/sdc | egrep 'User Capacity|SATA Version'
User Capacity:    14,000,519,643,136 bytes [14.0 TB]
SATA Version is:  SATA 3.3, 6.0 Gb/s (current: 6.0 Gb/s)
root@nas ~ # smartctl -i /dev/sdd | egrep 'User Capacity|SATA Version'
User Capacity:    8,001,563,222,016 bytes [8.00 TB]
SATA Version is:  SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
root@nas ~ # smartctl -i /dev/sde | egrep 'User Capacity|SATA Version'
User Capacity:    14,000,519,643,136 bytes [14.0 TB]
SATA Version is:  SATA 3.3, 6.0 Gb/s (current: 6.0 Gb/s)
root@nas ~ # smartctl -i /dev/sdf | egrep 'User Capacity|SATA Version'
User Capacity:    16,000,900,661,248 bytes [16.0 TB]
SATA Version is:  SATA 3.3, 6.0 Gb/s (current: 3.0 Gb/s)
root@nas ~ # smartctl -i /dev/sdg | egrep 'User Capacity|SATA Version'
User Capacity:    16,000,900,661,248 bytes [16.0 TB]
SATA Version is:  SATA 3.3, 6.0 Gb/s (current: 3.0 Gb/s)
root@nas ~ # smartctl -i /dev/sdh | egrep 'User Capacity|SATA Version'
User Capacity:    16,000,900,661,248 bytes [16.0 TB]
SATA Version is:  SATA 3.3, 6.0 Gb/s (current: 3.0 Gb/s)
root@nas ~ # smartctl -i /dev/sdi | egrep 'User Capacity|SATA Version'
User Capacity:    20,000,588,955,648 bytes [20.0 TB]
SATA Version is:  SATA 3.3, 6.0 Gb/s (current: 3.0 Gb/s)
root@nas ~ #
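
By the way, the same check can be done in one loop instead of typing it once per drive.  A sketch, assuming smartctl from sys-apps/smartmontools is installed:

```shell
# One pass over whatever /dev/sd? drives are present; the -b test
# skips names that don't exist as block devices.
for d in /dev/sd[a-z]; do
    [ -b "$d" ] || continue
    echo "== $d =="
    smartctl -i "$d" | egrep 'User Capacity|SATA Version'
done
```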


These are the PVs, sorted by VG name.


root@nas ~ # pvs -O vg_name
  PV         VG        Fmt  Attr PSize  PFree
  /dev/sdc1  backup-vg lvm2 a--  12.73t    0
  /dev/sdb1  backup-vg lvm2 a--  <9.10t    0
  /dev/sde1  backup-vg lvm2 a--  12.73t    0
  /dev/sdd1  backup-vg lvm2 a--  <7.28t    0
  /dev/sdf1  backup2   lvm2 a--  14.55t    0
  /dev/sdg1  backup2   lvm2 a--  14.55t    0
  /dev/sdh1  backup2   lvm2 a--  14.55t    0
root@nas ~ #


I'm surprised by the speed of the OS drive tho.  It's an SSD.
Anyway, as one can see, some drives are connected at SATA 2 and some at
SATA 3.  I noticed the 4-drive set is connected to a PCIe card, while
the 3-drive set and the OS drive are connected to the mobo itself.  That
made me wonder: are the mobo ports only SATA 2?  I went and looked at
the manual, which has a block diagram showing what is what.  Sure
enough, the mobo ports are SATA 2, i.e. 3.0 Gb/s.  Well, that explains
that.  It also explains a lot of other things.  I thought encryption was
slowing things down since the CPU doesn't have built-in AES support.
Turns out, the mobo SATA ports are slower than expected.
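
The numbers line up with the SATA math, too.  SATA uses 8b/10b encoding, 10 bits on the wire per data byte, so the payload ceiling is roughly the line rate divided by 10:

```shell
# SATA line rate to rough payload ceiling: 8b/10b coding means 10 wire
# bits per data byte, so MB/s ~= (line rate in Gb/s) * 1000 / 10.
echo "SATA 2: $(( 3 * 1000 / 10 )) MB/s max payload"   # 300
echo "SATA 3: $(( 6 * 1000 / 10 )) MB/s max payload"   # 600
```

So a spinning drive doing 250 MB/s is already close to a SATA 2 port's ~300 MB/s ceiling, and an SSD on that port is throttled hard.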

I wanted to test two theories.  I left the power connected to the newest
20TB drive, the one that is usually slow to respond, disconnected the
data cable from the mobo port, and plugged it into the PCIe card.  Guess
what: no slow-to-respond message, and it connects at SATA 3, 6.0 Gb/s,
as it should.  So the new 20TB drive does take a little longer to power
up, since it connects fine when power is left applied while moving the
data cable.  Also, the mobo is SATA 2 (3.0 Gb/s); the PCIe card is
SATA 3 (6.0 Gb/s).
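
The kernel also logs the negotiated speed for each port when it probes the drives, so dmesg gives a second opinion without smartctl:

```shell
# Lines look like "ata5: SATA link up 6.0 Gbps (SStatus 133 SControl 300)";
# match the ataN numbers to drive names further down in the same log.
# May need root to read the kernel log; prints nothing on a box
# with no SATA links.
dmesg 2>/dev/null | grep -i 'sata link up' || true
```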

That explains a LOT.  It explains why my backup is slower than
expected.  It explains why drives are not connecting at the faster speed
as well.  It certainly explains why the SSD is connecting at SATA 2.  I
may as well go back to the old spinning rust drive, since the mobo is
the bottleneck here.  Oh, this is my little command again, with the
newest 20TB drive connected to the PCIe card.



root@nas ~ # smartctl -i /dev/sdj | egrep 'User Capacity|SATA Version'
User Capacity:    20,000,588,955,648 bytes [20.0 TB]
SATA Version is:  SATA 3.3, 6.0 Gb/s (current: 6.0 Gb/s)
root@nas ~ #



So when connected to the PCIe card, the drive's speed is fine.  What I
learned from all this: I need to connect all my hard drives, the data
ones not the OS, to the PCIe card.  I can also switch back to the old
spinning rust drive for the OS.  No wonder I didn't notice any
improvement in boot-up time.
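
Before re-cabling, it helps to know which disk hangs off which controller.  Sysfs shows that without pulling any cables; the grep here is just fishing the last PCI address (domain:bus:dev.fn) out of each device's path, so mobo ports and the add-in card show up as different addresses:

```shell
# Map each disk to the PCI address of its controller; compare the
# addresses against lspci output to tell mobo ports from the card.
for d in /sys/block/sd*; do
    [ -e "$d" ] || continue
    pci=$(readlink -f "$d" | \
          grep -o '[0-9a-f]\{4\}:[0-9a-f]\{2\}:[0-9a-f]\{2\}\.[0-9]' | tail -n 1)
    echo "${d##*/} -> ${pci:-unknown}"
done
```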

I need to find a mobo/CPU/memory combo that supports SATA 3 and has a
few PCIe slots for a NAS box.  This older system is getting a bit
dated, and slower than I'd like too.

I can't believe that mobo is SATA 2 tho.  I could have sworn it was SATA
3. 

Dale

:-)  :-) 

