public inbox for gentoo-user@lists.gentoo.org
* [gentoo-user] Finally got a SSD drive to put my OS on
@ 2023-04-15 22:47 Dale
  2023-04-15 23:24 ` Mark Knecht
                   ` (2 more replies)
  0 siblings, 3 replies; 67+ messages in thread
From: Dale @ 2023-04-15 22:47 UTC (permalink / raw
  To: gentoo-user

Howdy,

I finally broke down and bought an SSD.  It's a Samsung V-NAND 870 EVO
500GB.  My current OS sits on a 160GB drive, so 500GB should be plenty.  I
plan to even add a boot image for the Gentoo LiveGUI thingy, maybe Knoppix
or something, plus my usual OS.  By the way, I caught one for sale for
$40.00.  It has a production date of 5/2021. 

My question is this.  Do I need anything special in the kernel or
special fstab options for this thing?  I know at one point there were
folks having problems with certain settings.  I did some googling and it
seems to be worked out, but I want to be sure I don't blow this thing up
or something. 

Anything else that makes these special?  Any tips or tricks? 

Dale

:-)  :-) 

P. S.  I'm hoping this will make my system a little more responsive. 
Maybe.  Either way, that 160GB drive is getting a little full. 


^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [gentoo-user] Finally got a SSD drive to put my OS on
  2023-04-15 22:47 [gentoo-user] Finally got a SSD drive to put my OS on Dale
@ 2023-04-15 23:24 ` Mark Knecht
  2023-04-15 23:44   ` thelma
  2023-04-16  1:47 ` William Kenworthy
  2023-04-18 14:52 ` [gentoo-user] " Nikos Chantziaras
  2 siblings, 1 reply; 67+ messages in thread
From: Mark Knecht @ 2023-04-15 23:24 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 1613 bytes --]

On Sat, Apr 15, 2023 at 3:47 PM Dale <rdalek1967@gmail.com> wrote:
>
> Howdy,
>
> I finally broke down and bought a SSD.  It's a Samsung V-Nand 870 EVO
> 500GB.  My current OS sits on a 160GB drive so should be plenty.  I plan
> to even add a boot image for the Gentoo LiveGUI thingy, maybe Knoppix or
> something plus my usual OS.  By the way, caught one for sale for
> $40.00.  It has a production date of 5/2021.
>
> My question is this.  Do I need anything special in the kernel or
> special fstab options for this thing?  I know at one point there was
> folks having problems with certain settings.  I did some googling and it
> seems to be worked out but I want to be sure I don't blow this thing up
> or something.
>
> Anything else that makes these special?  Any tips or tricks?
>
> Dale
>
> :-)  :-)
>
> P. S.  I'm hoping this will make my system a little more responsive.
> Maybe.  Either way, that 160GB drive is getting a little full.
>

Dale,
   I have 500GB SSDs and 1TB M.2 drives in all of my machines. No
machine boots from a spinning drive anymore. Never had any
problems.

   The only thing I've done differently is the errors=remount-ro item
below. Other than that, if whatever OS you install sets up boot, and
the machine boots, then it's just a drive, in my experience.

Best wishes,
Mark

# / was on /dev/nvme1n1p3 during installation
UUID=3fe6798f-653f-42e8-8e96-7ba0d490bfdf /               ext4    errors=remount-ro 0       1
# /boot/efi was on /dev/nvme0n1p1 during installation
UUID=60DF-9F56  /boot/efi       vfat    umask=0077      0       1

[-- Attachment #2: Type: text/html, Size: 2028 bytes --]

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [gentoo-user] Finally got a SSD drive to put my OS on
  2023-04-15 23:24 ` Mark Knecht
@ 2023-04-15 23:44   ` thelma
  0 siblings, 0 replies; 67+ messages in thread
From: thelma @ 2023-04-15 23:44 UTC (permalink / raw
  To: gentoo-user

On 4/15/23 17:24, Mark Knecht wrote:
> 
> 
> On Sat, Apr 15, 2023 at 3:47 PM Dale <rdalek1967@gmail.com <mailto:rdalek1967@gmail.com>> wrote:
>  >
>  > Howdy,
>  >
>  > I finally broke down and bought a SSD.  It's a Samsung V-Nand 870 EVO
>  > 500GB.  My current OS sits on a 160GB drive so should be plenty.  I plan
>  > to even add a boot image for the Gentoo LiveGUI thingy, maybe Knoppix or
>  > something plus my usual OS.  By the way, caught one for sale for
>  > $40.00.  It has a production date of 5/2021.
>  >
>  > My question is this.  Do I need anything special in the kernel or
>  > special fstab options for this thing?  I know at one point there was
>  > folks having problems with certain settings.  I did some googling and it
>  > seems to be worked out but I want to be sure I don't blow this thing up
>  > or something.
>  >
>  > Anything else that makes these special?  Any tips or tricks?
>  >
>  > Dale
>  >
>  > :-)  :-)
>  >
>  > P. S.  I'm hoping this will make my system a little more responsive.
>  > Maybe.  Either way, that 160GB drive is getting a little full.
>  >
> 
> Dale,
>     I have 500GB SSDs and 1TB M.2 drives in all of my machines. No
> machine boots from a spinning drive anyhmore. Never had any
> problems.
> 
>     The only thing I've done differently is the errors=remount=ro item
> below. Other than that if whatever OS you install sets up boot, and
> the machine boots, then it's just a drive in my experience
> 
> Best wishes,
> Mark
> 
> # / was on /dev/nvme1n1p3 during installation
> UUID=3fe6798f-653f-42e8-8e96-7ba0d490bfdf /               ext4    errors=remount-ro 0       1
> # /boot/efi was on /dev/nvme0n1p1 during installation
> UUID=60DF-9F56  /boot/efi       vfat    umask=0077      0       1

My 5-year-old small box, running a 500GB Intel SSDSC2BF48 SSD with an Atom processor, is on 24/7 running Asterisk and HylaFAX.
Never had a problem with it.
But it is recommended to run fstrim via cron:
30 18 * * 2  /sbin/fstrim -v /



^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [gentoo-user] Finally got a SSD drive to put my OS on
  2023-04-15 22:47 [gentoo-user] Finally got a SSD drive to put my OS on Dale
  2023-04-15 23:24 ` Mark Knecht
@ 2023-04-16  1:47 ` William Kenworthy
  2023-04-16  7:18   ` Peter Humphrey
  2023-04-18 14:52 ` [gentoo-user] " Nikos Chantziaras
  2 siblings, 1 reply; 67+ messages in thread
From: William Kenworthy @ 2023-04-16  1:47 UTC (permalink / raw
  To: gentoo-user


On 16/4/23 06:47, Dale wrote:
> Howdy,
>
> I finally broke down and bought a SSD.  It's a Samsung V-Nand 870 EVO
> 500GB.  My current OS sits on a 160GB drive so should be plenty.  I plan
> to even add a boot image for the Gentoo LiveGUI thingy, maybe Knoppix or
> something plus my usual OS.  By the way, caught one for sale for
> $40.00.  It has a production date of 5/2021.
>
> My question is this.  Do I need anything special in the kernel or
> special fstab options for this thing?  I know at one point there was
> folks having problems with certain settings.  I did some googling and it
> seems to be worked out but I want to be sure I don't blow this thing up
> or something.
>
> Anything else that makes these special?  Any tips or tricks?
>
> Dale
>
> :-)  :-)
>
> P. S.  I'm hoping this will make my system a little more responsive.
> Maybe.  Either way, that 160GB drive is getting a little full.
>
>
>
Look into mount options for SSDs (the discard option) and "fstrim" for 
maintenance.  Read up on trimming: the OS deletes files but does not 
erase the underlying blocks, because erasing is time-consuming, so 
erasure is an OS-controlled operation.  Doing a manual trim before the 
drive reaches full allocation, or enabling auto-trimming (which can 
cause serious pauses at awkward times), can prevent serious performance 
degradation, since otherwise the drive has to erase before writing.  I 
am not sure of the current status, but in the early days of SSDs this 
was a serious concern.

BillK






^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [gentoo-user] Finally got a SSD drive to put my OS on
  2023-04-16  1:47 ` William Kenworthy
@ 2023-04-16  7:18   ` Peter Humphrey
  2023-04-16  8:43     ` William Kenworthy
  0 siblings, 1 reply; 67+ messages in thread
From: Peter Humphrey @ 2023-04-16  7:18 UTC (permalink / raw
  To: gentoo-user

On Sunday, 16 April 2023 02:47:00 BST William Kenworthy wrote:

> look into mount options for SSD's (discard option) and "fstrim" for
> maintenance. (read up on trimmimg - doing a manual trim before the drive
> reaches full allocation (they delete files, but do not erase them
> because erasing is time consuming so its an OS controlled operation) or
> auto trimming (which can cause serious pauses at awkward times) can
> prevent serious performance degradation as it has to erase before
> writing.  I am not sure of the current status but in the early days of
> SSD's, this was serious concern.

In short, see https://wiki.gentoo.org/wiki/SSD .  :)

-- 
Regards,
Peter.





^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [gentoo-user] Finally got a SSD drive to put my OS on
  2023-04-16  7:18   ` Peter Humphrey
@ 2023-04-16  8:43     ` William Kenworthy
  2023-04-16 15:08       ` Mark Knecht
  0 siblings, 1 reply; 67+ messages in thread
From: William Kenworthy @ 2023-04-16  8:43 UTC (permalink / raw
  To: gentoo-user


On 16/4/23 15:18, Peter Humphrey wrote:
> On Sunday, 16 April 2023 02:47:00 BST William Kenworthy wrote:
>
>> look into mount options for SSD's (discard option) and "fstrim" for
>> maintenance. (read up on trimmimg - doing a manual trim before the drive
>> reaches full allocation (they delete files, but do not erase them
>> because erasing is time consuming so its an OS controlled operation) or
>> auto trimming (which can cause serious pauses at awkward times) can
>> prevent serious performance degradation as it has to erase before
>> writing.  I am not sure of the current status but in the early days of
>> SSD's, this was serious concern.
> In short, see https://wiki.gentoo.org/wiki/SSD .  :)
>
Excellent, condenses it nicely.

BillK




^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [gentoo-user] Finally got a SSD drive to put my OS on
  2023-04-16  8:43     ` William Kenworthy
@ 2023-04-16 15:08       ` Mark Knecht
  2023-04-16 15:29         ` Dale
                           ` (2 more replies)
  0 siblings, 3 replies; 67+ messages in thread
From: Mark Knecht @ 2023-04-16 15:08 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 1765 bytes --]

On Sun, Apr 16, 2023 at 1:44 AM William Kenworthy <billk@iinet.net.au>
wrote:
>
>
> On 16/4/23 15:18, Peter Humphrey wrote:
> > On Sunday, 16 April 2023 02:47:00 BST William Kenworthy wrote:
> >
> >> look into mount options for SSD's (discard option) and "fstrim" for
> >> maintenance. (read up on trimmimg - doing a manual trim before the drive
> >> reaches full allocation (they delete files, but do not erase them
> >> because erasing is time consuming so its an OS controlled operation) or
> >> auto trimming (which can cause serious pauses at awkward times) can
> >> prevent serious performance degradation as it has to erase before
> >> writing.  I am not sure of the current status but in the early days of
> >> SSD's, this was serious concern.
> > In short, see https://wiki.gentoo.org/wiki/SSD .  :)
> >
> Excellent, condenses it nicely.
>
> BillK
>
>

OK Dale, I'm completely wrong, but also 'slightly' right.

If you have an SSD or nvme drive installed then fstrim should be
installed and run on a regular basis. However it's not 'required'.

Your system will still work, but after all blocks on the drive have
been used for file storage and later deleted, if they are not
written back to zeros then the next time you go to use that
block the write will be slower as the write must first write
zeros and then your data.

fstrim does the write to zeros so that during normal operation
you don't wait.

I've become so completely used to Kubuntu that I had to read
that this is all set up automatically when the system finds an
SSD or nvme. In Gentoo land you have to do this yourself.

Sorry for any confusion. Time to unsubscribe from this list
I guess and leave you all to your beloved distro.

Bye,
Mark

[-- Attachment #2: Type: text/html, Size: 2352 bytes --]

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [gentoo-user] Finally got a SSD drive to put my OS on
  2023-04-16 15:08       ` Mark Knecht
@ 2023-04-16 15:29         ` Dale
  2023-04-16 16:10           ` Mark Knecht
  2023-04-16 17:46         ` Jorge Almeida
  2023-04-16 18:07         ` Frank Steinmetzger
  2 siblings, 1 reply; 67+ messages in thread
From: Dale @ 2023-04-16 15:29 UTC (permalink / raw
  To: gentoo-user; +Cc: Mark Knecht

[-- Attachment #1: Type: text/plain, Size: 3636 bytes --]

Mark Knecht wrote:
>
>
> On Sun, Apr 16, 2023 at 1:44 AM William Kenworthy <billk@iinet.net.au
> <mailto:billk@iinet.net.au>> wrote:
> >
> >
> > On 16/4/23 15:18, Peter Humphrey wrote:
> > > On Sunday, 16 April 2023 02:47:00 BST William Kenworthy wrote:
> > >
> > >> look into mount options for SSD's (discard option) and "fstrim" for
> > >> maintenance. (read up on trimmimg - doing a manual trim before
> the drive
> > >> reaches full allocation (they delete files, but do not erase them
> > >> because erasing is time consuming so its an OS controlled
> operation) or
> > >> auto trimming (which can cause serious pauses at awkward times) can
> > >> prevent serious performance degradation as it has to erase before
> > >> writing.  I am not sure of the current status but in the early
> days of
> > >> SSD's, this was serious concern.
> > > In short, see https://wiki.gentoo.org/wiki/SSD .  :)
> > >
> > Excellent, condenses it nicely.
> >
> > BillK
> >
> >
>
> OK Dale, I'm completely wrong, but also 'slightly' right.
>
> If you have an SSD or nvme drive installed then fstrim should be 
> installed and run on a regular basis. However it's not 'required'.
>
> Your system will still work, but after all blocks on the drive have
> been used for file storage and later deleted, if they are not
> written back to zeros then the next time you go to use that
> block the write will be slower as the write must first write
> zeros and then your data.
>
> fstrim does the write to zeros so that during normal operation
> you don't wait.
>
> I've become so completely used to Kubuntu that I had to read
> that this is all set up automatically when the system finds an
> SSD or nvme. In Gentoo land you have to do this yourself.
>
> Sorry for any confusion. Time to unsubscribe from this list
> I guess and leave you all to your beloved distro.
>
> Bye,
> Mark


Oh, please, don't go anywhere.  <begging>  We already lost Alan, a
long-time regular.  BTW, I checked on him a while back.  He's still OK.
It's been a while tho. 

I read during a google search that some distros handle this sort of
thing automatically, some sort of firmware thing or something.  I
figured Gentoo didn't, it rarely does since that is the point of
Gentoo.  So, no harm.  Heck, I just now applied power to the thing.  I
don't even have it partitioned or anything yet.  Just rebooted after
rearranging all the cables, adding power splitter etc etc.

I do have one gripe.  Why can't drive makers pick a screw size and stick
to it on ALL drives?  It took some digging to find a screw that would
fit.  Some I bought that are supposed to work on SSDs were too short.  They
would likely work on a metal adapter but not a thicker plastic one. 
Luckily, I found 4 screws.  No clue where they came from.  Just in my
junk box.  Before this week, I'd never laid eyes on an SSD.  Anyone
know the thread size and count on those things?  I want to order a few,
just in case. 

Is running fstrim once a week too often?  I update my OS once a week, but
given the amount of extra space, I'd think once a month would be often
enough.  After all, it is 500GB and I'll likely only use less than half
of that.  Most of the extra space will be extra boot options like
Knoppix or something.  I'm just thinking it would give it a longer
life.  Maybe my thinking is wrong???

Now to play with this thing.  I've got to remember what all has to be
copied over so I can boot the new thing.  :/  Been ages since I moved an
OS to another hard drive.  Maybe a reinstall would work better.  :-\

Thanks to all. 

Dale

:-)  :-) 

P. S.  CCing Mark just in case. 

[-- Attachment #2: Type: text/html, Size: 5595 bytes --]

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [gentoo-user] Finally got a SSD drive to put my OS on
  2023-04-16 15:29         ` Dale
@ 2023-04-16 16:10           ` Mark Knecht
  2023-04-16 16:54             ` Dale
  0 siblings, 1 reply; 67+ messages in thread
From: Mark Knecht @ 2023-04-16 16:10 UTC (permalink / raw
  To: Dale; +Cc: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 3484 bytes --]

<SNIP>
>
>
> Oh, please, don't go anywhere.  <begging>  We already lost the long term
Alan.  BTW, I checked on him a while back.  He's still OK.  It's been a
while tho.
>

Dale - I'm touched - truly. I'm happy to stick around but I no longer
run Gentoo and don't want to cause problems or piss off the
folks who need to be here.

> I read during a google search that some distros handle this sort of thing
> automatically, some sort of firmware thing or something.  I figured Gentoo
> didn't, it rarely does since that is the point of Gentoo.  So, no harm.
> Heck, I just now applied power to the thing.  I don't even have it
> partitioned or anything yet.  Just rebooted after rearranging all the
> cables, adding power splitter etc etc.

I don't know of any distros that do any of this in firmware. Maybe someone
else can address that. Many distros now install fstrim by default and
update crontab. The Ubuntu family does, which I didn't know before this
thread. However it is on both my Kubuntu machine and my Ubuntu
Server machine.

BTW - Windows does this in Disk Defragmenter but you have to
schedule it yourself according to Bard and ChatGPT. I'll be in
Windows later today to record some new music and plan to look into
that then.

Answering the question from below - weekly is what was set up by
default here. If your drive isn't near full and you're not writing a
lot of new data on it each week then weekly would seem reasonable to me.
However, being that you are running Gentoo and hence compiling
lots and lots and lots of stuff every week, it's possible that you
might _possibly_ want to run fstrim more often if your intermediate
files are going to this drive.


>
> I do have one gripe.  Why can't drive makers pick a screw size and stick
> to it on ALL drives?  It took some digging to find a screw that would fit.
> Some I bought that are supposed to work on SSDs were to short.  It would
> likely work on a metal adapter but not a thicker plastic one.  Luckily, I
> found 4 screws.  No clue where they came from.  Just in my junk box.
> Before this week, never laid eyes on a SSD before.  Anyone know the thread
> size and count on those things?  I want to order a few, just in case.

I second your gripe. I've purchased a couple of PC builder screw
sets from Amazon.

>
> Is running fstrim once a week to often?  I update my OS once a week but
> given the amount of extra space, I'd think once a month would be often
> enough.  After all, it is 500GB and I'll likely only use less than half of
> that.  Most of the extra space will be extra boot options like Knoppix or
> something.  I'm just thinking it would give it a longer life.  Maybe my
> thinking is wrong???
>
> Now to play with this thing.  I got to remember what all has to be copied
> over so I can boot the new thing.  :/  Been ages since I moved a OS to
> another hard drive.  Maybe a reinstall would work better.  :-\
>

I think you have at least 3 options to play with the drive:

1) It's Gentoo so install from scratch. You'll feel great
if it works. It will only take you a day or two.

2) Possibly dd the old drive to the SSD. If the new
SSD boots as the same /dev/sdX device it should
work, maybe, maybe not.

3) If you have another SATA port then dual boot,
either with Gentoo on both or something simple
like Kubuntu. A base Kubuntu install takes about
15 minutes and will probably give you its own
dual boot grub config. When you're sick of Kubuntu
you can once again install Gentoo.
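On option 2, here's a file-based sketch of the dd idea, using image
files as stand-ins for the real devices (with hardware the if=/of=
arguments would be block devices such as /dev/sdX - those names are
placeholders, and dd will overwrite the target without asking):

```shell
# Create a small "old drive" image with some data at the start.
truncate -s 16M old.img
printf 'bootloader+os' | dd of=old.img conv=notrunc 2>/dev/null

# Create a larger "new SSD" image.
truncate -s 64M new.img

# The clone: copy everything with a large block size, flush at the end.
dd if=old.img of=new.img bs=4M conv=notrunc,fsync 2>/dev/null

# The first 16M of the target now match the source byte for byte; the
# rest of the larger target is untouched and can be partitioned later.
cmp -n $((16 * 1024 * 1024)) old.img new.img && echo "clone matches"
```

After a real clone you'd still want to grow the partition and
filesystem to claim the SSD's extra space.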

Good luck no matter what path you take.

Mark

[-- Attachment #2: Type: text/html, Size: 4298 bytes --]

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [gentoo-user] Finally got a SSD drive to put my OS on
  2023-04-16 16:10           ` Mark Knecht
@ 2023-04-16 16:54             ` Dale
  2023-04-16 18:14               ` Mark Knecht
  0 siblings, 1 reply; 67+ messages in thread
From: Dale @ 2023-04-16 16:54 UTC (permalink / raw
  To: Gentoo User

Mark Knecht wrote:
> <SNIP>
> >
> >
> > Oh, please, don't go anywhere.  <begging>  We already lost the long
> term Alan.  BTW, I checked on him a while back.  He's still OK.  It's
> been a while tho.
> >
>
> Dale - I touched - truly. I'm happy to stick around but I no longer 
> run Gentoo and don't want to cause problems or piss off the 
> folks who need to be here. 
>
> > I read during a google search that some distros handle this sort of
> thing automatically, some sort of firmware thing or something.  I
> figured Gentoo didn't, it rarely does since that is the point of
> Gentoo.  So, no harm.  Heck, I just now applied power to the thing.  I
> don't even have it partitioned or anything yet.  Just rebooted after
> rearranging all the cables, adding power splitter etc etc.
>
> I don't know of any distros that do any of this in firmware. Maybe someone
> else can address that. Many distros now install fstrim by default and 
> update crontab. The Ubuntu family does, which I didn't know before this
> thread. However it is on both my Kubuntu machine and my Ubuntu
> Server machine.
>

I think what I read is that it is done automatically.  They could have
meant a cron job.  I don't think they said how, just that it is already
set up to do it.  Firmware was mentioned in the thread somewhere so I
thought maybe that was it.  Either way, fstrim is installed here.  It's
part of util-linux and that is pulled in by several packages.  I doubt
it will be going away here anytime soon given the long list of packages
that need it. Just to set up a cron job for it.  Remembering the steps
for that will take time tho.  o_O
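For what it's worth, the crontab side is a one-liner; a sketch, assuming
root's crontab and a weekly run (the day and time here are arbitrary):

```
# run "crontab -e" as root and add:
# min hour day-of-month month day-of-week  command
  30   3        *         *        0       /sbin/fstrim -v /
```

On systemd installs, enabling the fstrim.timer unit that ships with
util-linux accomplishes the same thing.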


> BTW - Windows does this in Disk Defragmenter but you have to 
> schedule it yourself according to Bard and ChatGPT. I'll be in 
> Windows later today to record some new music and plan to look into
> that then.
>
> Answering the question from below - weekly is what was set up by 
> default here. If your drive isn't near full and you're not writing a
> lot of new
> data on it each week then weekly would seem reasonable to me. 
> However being that you are running Gentoo and hence compiling
> lots and lots and lots of stuff every week it's possible that you 
> might _possibly_ want to run fstrim more often if your intermediate 
> files are going to this drive. 
>
>
> >
> > I do have one gripe.  Why can't drive makers pick a screw size and
> stick to it on ALL drives?  It took some digging to find a screw that
> would fit.  Some I bought that are supposed to work on SSDs were to
> short.  It would likely work on a metal adapter but not a thicker
> plastic one.  Luckily, I found 4 screws.  No clue where they came
> from.  Just in my junk box.  Before this week, never laid eyes on a
> SSD before.  Anyone know the thread size and count on those things?  I
> want to order a few, just in case.
>
> I second your gripe. I've purchased a couple of PC builder screw 
> sets from Amazon.
>  
> >
> > Is running fstrim once a week to often?  I update my OS once a week
> but given the amount of extra space, I'd think once a month would be
> often enough.  After all, it is 500GB and I'll likely only use less
> than half of that.  Most of the extra space will be extra boot options
> like Knoppix or something.  I'm just thinking it would give it a
> longer life.  Maybe my thinking is wrong???
> >
> > Now to play with this thing.  I got to remember what all has to be
> copied over so I can boot the new thing.  :/  Been ages since I moved
> a OS to another hard drive.  Maybe a reinstall would work better.  :-\
> >
>
> I think you have at least 3 options to play with the drive:
>
> 1) It's Gentoo so install from scratch. You'll feel great 
> if it works. It will only take you a day or two.
>
> 2) Possibly dd the old drive to the SSD. If the new 
> SSD boots as the same /dev/sdX device it should 
> work, maybe, maybe not.
>
> 3) If you have another SATA port then dual boot, 
> either with Gentoo on both or something simple 
> like Kubuntu. A base Kubuntu install takes about
> 15 minutes and will probably give you its own
> dual boot grub config. When you're sick of Kubuntu
> you can once again install Gentoo.
>
> Good luck no matter what path you take.
>
> Mark


I've thought of a few options myself.  I sort of have an OS copy/backup
already.  I currently do the compiling in a chroot on a separate drive. 
I then copy the compiled packages and use the -k option to update the
live OS.  I'll continue to do that when I start booting from the SSD. 
That should limit writes and such to the SSD.  I also got to rearrange
things so I can put swap on that spare drive I compile on.  I don't want
swap on a SSD.  I wish this thing would stop using swap completely.  I
have swappiness set to 1 already and it still uses swap.
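(As an aside, swappiness only biases reclaim between page cache and
swap; even at 1 the kernel may still swap under memory pressure.  A
sketch of making the setting persistent, assuming an /etc/sysctl.d
drop-in is used - the filename is arbitrary:)

```
# /etc/sysctl.d/99-swappiness.conf
# 1 = swap only under real memory pressure (the default is usually 60)
vm.swappiness = 1
```

It takes effect after `sysctl --system` or a reboot.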

Right now, I'm debating the size of /boot.  Knoppix is pretty large. 
The Gentoo LiveGUI thingy is too.  So, it will have to be larger than
the few hundred megabytes my current one is.  I'm thinking 10GBs or so. 
Maybe 12GBs to make sure I'm good to go for the foreseeable future. 
They may limit them to DVD size right now but one day they could pass
that limit by.  Software isn't getting smaller.  Besides, USB is the
thing now.

Lots of options.

Dale

:-)  :-) 


^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [gentoo-user] Finally got a SSD drive to put my OS on
  2023-04-16 15:08       ` Mark Knecht
  2023-04-16 15:29         ` Dale
@ 2023-04-16 17:46         ` Jorge Almeida
  2023-04-16 18:07         ` Frank Steinmetzger
  2 siblings, 0 replies; 67+ messages in thread
From: Jorge Almeida @ 2023-04-16 17:46 UTC (permalink / raw
  To: gentoo-user

On Sun, Apr 16, 2023 at 4:09 PM Mark Knecht <markknecht@gmail.com> wrote:

> Sorry for any confusion. Time to unsubscribe from this list
> I guess and leave you all to your beloved distro.
>
Please don't.
I doubt anyone is pissed off at you.

Jorge Almeida

> Bye,
See above.
> Mark


^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [gentoo-user] Finally got a SSD drive to put my OS on
  2023-04-16 15:08       ` Mark Knecht
  2023-04-16 15:29         ` Dale
  2023-04-16 17:46         ` Jorge Almeida
@ 2023-04-16 18:07         ` Frank Steinmetzger
  2023-04-16 20:22           ` Mark Knecht
  2 siblings, 1 reply; 67+ messages in thread
From: Frank Steinmetzger @ 2023-04-16 18:07 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 2666 bytes --]

Am Sun, Apr 16, 2023 at 08:08:59AM -0700 schrieb Mark Knecht:

> If you have an SSD or nvme drive installed then fstrim should be
> installed and run on a regular basis. However it's not 'required'.
> 
> Your system will still work, but after all blocks on the drive have
> been used for file storage and later deleted, if they are not
> written back to zeros then the next time you go to use that
> block the write will be slower as the write must first write
> zeros and then your data.
> 
> fstrim does the write to zeros so that during normal operation
> you don't wait.

That is not quite correct. Trimming is about the opposite of what you say, 
namely to *not* rewrite areas. Flash memory can only be written to in 
relatively large blocks. So if your file system wants to write 4 KiB, the 
drive needs to read all the many KiB around it (several hundred at least, 
perhaps even MiBs, I’m not certain), change the small part in question and 
write the whole block back. This is called write amplification. This also 
occurs on hard drives, for example when you run a database which uses 4 KiB 
data-file chunks, but on a file system with larger sectors. Then the file 
system is the cause of the write amplification.
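As a back-of-the-envelope illustration of that read-modify-write cost
(the 2 MiB erase-block size is an assumption for the sake of the
example; real sizes vary by drive and are rarely published):

```shell
# Worst case: a 4 KiB write lands in an erase block the controller
# believes is in use, forcing a read-modify-write of the whole block.
write_kib=4
erase_block_kib=$((2 * 1024))   # assumed 2 MiB erase block
amplification=$((erase_block_kib / write_kib))
echo "worst-case write amplification: ${amplification}x"
# A trimmed (known-free) block needs no read-back, so the 4 KiB can be
# programmed directly.
```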

If the SSD knew beforehand that the area is unused, it does not need to read 
it all in and then write it back. The SSD controller has no knowledge of 
file systems. And this is where trim comes in: it does know file systems, 
detects the unused areas and translates that info for the drive controller. 
Also, only trimmed areas (i.e. areas the controller knows are unused) can be 
used for wear leveling.

I even think that if you read from a trimmed area, the controller does not 
actually read the flash device, but simply returns zeroes. This is basically 
what a quick erase does; it trims the entire drive, which takes only a few 
seconds, and then all the data has become inaccessible (unless you address 
the memory chips directly). It is similar to deleting a file: you erase its 
entry in the directory, but not the actual payload bytes.

AFAIK, SMR HDDs also support trim these days, so they don’t need to do their 
SMR reshuffling. I have a WD Passport Ultra external 2.5″ HDD with 5 TB, and 
it supports trim. However, a WD Elements 2.5″ 4 TB does not. Perhaps because 
it is a cheaper series. Every laptop HDD of 2 (or even 1) TB is SMR.

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

“It is hard to be a conquering hero when it is not in your nature.”
  – Captain Hans Geering, ’Allo ’Allo

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [gentoo-user] Finally got a SSD drive to put my OS on
  2023-04-16 16:54             ` Dale
@ 2023-04-16 18:14               ` Mark Knecht
  2023-04-16 18:53                 ` Dale
  0 siblings, 1 reply; 67+ messages in thread
From: Mark Knecht @ 2023-04-16 18:14 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 2156 bytes --]

On Sun, Apr 16, 2023 at 9:55 AM Dale <rdalek1967@gmail.com> wrote:
<SNIP>
>
> I think what I read is that it is done automatically.  They could have
> meant a cron job.  I don't think they said how, just that it is already
> set up to do it.  Firmware was mentioned in the thread somewhere so I
> thought maybe that was it.  Either way, fstrim is installed here.  It's
> part of util-linux and that is pulled in by several packages.  I doubt
> it will be going away here anytime soon given the long list of packages
> that need it. Just to set up a cron job for it.  Remembering the steps
> for that will take time tho.  o_O
>

Hey - it's the Internet. I thought we could trust everything we read...

<SNIP>
>
>
> I've thought of a few options myself.  I sort of have a OS copy/backup
> already.  I currently do the compiling in a chroot on a separate drive.
> I then copy the compiled packages and use the -k option to update the
> live OS.  I'll continue to do that when I start booting from the SSD.
> That should limit writes and such to the SSD.  I also got to rearrange
> things so I can put swap on that spare drive I compile on.  I don't want
> swap on a SSD.  I wish this thing would stop using swap completely.  I
> have swappiness set to 1 already and it still uses swap.
>

Well, that sounds like a solution to do your emerge work although
you're limited to the speed of that hard drive. If it's done in the
background then maybe you don't care.

> Right now, I'm debating the size of /boot.  Knoppix is pretty large.
> The Gentoo LiveGUI thingy is too.  So, it will have to be larger than
> the few hundred megabytes my current one is.  I'm thinking 10GBs or so.
> Maybe 12GBs to make sure I'm good to go for the foreseeable future.
> They may limit them to DVD size right now but one day they could pass
> that limit by.  Software isn't getting smaller.  Besides, USB is the
> thing now.
>

I guess I don't understand why you would put Knoppix in the boot
partition vs somewhere else. Is this for some sort of recovery
process you're comfortable with vs recovering from a bootable DVD?

- Mark

[-- Attachment #2: Type: text/html, Size: 2632 bytes --]

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [gentoo-user] Finally got a SSD drive to put my OS on
  2023-04-16 18:14               ` Mark Knecht
@ 2023-04-16 18:53                 ` Dale
  2023-04-16 19:30                   ` Mark Knecht
  0 siblings, 1 reply; 67+ messages in thread
From: Dale @ 2023-04-16 18:53 UTC (permalink / raw
  To: Gentoo User

Mark Knecht wrote:
> <<< SNIP >>>
> I guess I don't understand why you would put Knoppix in the boot 
> partition vs somewhere else. Is this for some sort of recovery 
> process you're comfortable with vs recovering from a bootable DVD?
>
> - Mark

I'm wanting to be able to boot something from the hard drive in the
event the OS itself won't boot.  The other day I had to dig around and
find a bootable USB stick and also found a DVD.  Ended up with the DVD
working best.  I already have memtest on /boot.  Thing is, I very rarely
use it.  ;-)

Dale

:-)  :-) 


^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [gentoo-user] Finally got a SSD drive to put my OS on
  2023-04-16 18:53                 ` Dale
@ 2023-04-16 19:30                   ` Mark Knecht
  2023-04-16 22:26                     ` Dale
  0 siblings, 1 reply; 67+ messages in thread
From: Mark Knecht @ 2023-04-16 19:30 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 967 bytes --]

On Sun, Apr 16, 2023 at 11:54 AM Dale <rdalek1967@gmail.com> wrote:
>
> Mark Knecht wrote:
> > <<< SNIP >>>
> > I guess I don't understand why you would put Knoppix in the boot
> > partition vs somewhere else. Is this for some sort of recovery
> > process you're comfortable with vs recovering from a bootable DVD?
> >
> > - Mark
>
> I'm wanting to be able to boot something from the hard drive in the
> event the OS itself won't boot.  The other day I had to dig around and
> find a bootable USB stick and also found a DVD.  Ended up with the DVD
> working best.  I already have memtest on /boot.  Thing is, I very rarely
> use it.  ;-)

So in the scenario you are suggesting, is grub working, giving you a
boot choice screen, and your new Gentoo install is not working so
you want to choose Knoppix to repair whatever is wrong with
Gentoo?

If that's the case why shoehorn Knoppix into the boot partition
vs just give it its own partition?

[-- Attachment #2: Type: text/html, Size: 1299 bytes --]

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [gentoo-user] Finally got a SSD drive to put my OS on
  2023-04-16 18:07         ` Frank Steinmetzger
@ 2023-04-16 20:22           ` Mark Knecht
  2023-04-16 22:17             ` Frank Steinmetzger
  0 siblings, 1 reply; 67+ messages in thread
From: Mark Knecht @ 2023-04-16 20:22 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 3088 bytes --]

Frank,
   Thank you for the in-depth explanation.

   I need to do some study before commenting further other than to say
so far I'm finding different comments depending on whether it's
an SSD or an M.2 drive.

Much appreciated,
Mark

On Sun, Apr 16, 2023 at 11:08 AM Frank Steinmetzger <Warp_7@gmx.de> wrote:

> Am Sun, Apr 16, 2023 at 08:08:59AM -0700 schrieb Mark Knecht:
>
> > If you have an SSD or nvme drive installed then fstrim should be
> > installed and run on a regular basis. However it's not 'required'.
> >
> > Your system will still work, but after all blocks on the drive have
> > been used for file storage and later deleted, if they are not
> > written back to zeros then the next time you go to use that
> > block the write will be slower as the write must first write
> > zeros and then your data.
> >
> > fstrim does the write to zeros so that during normal operation
> > you don't wait.
>
> That is not quite correct. Trimming is about the opposite of what you say,
> namely to *not* rewrite areas. Flash memory can only be written to in
> relatively large blocks. So if your file system wants to write 4 KiB, the
> drive needs to read all the many KiB around it (several hundred at least,
> perhaps even MiBs, I’m not certain), change the small part in question and
> write the whole block back. This is called write amplification. This also
> occurs on hard drives, for example when you run a database which uses
> 4 KiB datafile chunks, but on a file system with larger sectors. Then the
> file system is the cause of the write amplification.
>
> If the SSD knew beforehand that the area is unused, it would not need to
> read it all in and then write it back. The SSD controller has no knowledge
> of file systems. And this is where trim comes in: it does know file
> systems, detects the unused areas and translates that info for the drive
> controller. Also, only trimmed areas (i.e. areas the controller knows are
> unused) can be used for wear leveling.
>
> I even think that if you read from a trimmed area, the controller does not
> actually read the flash device, but simply returns zeroes. This is
> basically what a quick erase does; it trims the entire drive, which takes
> only a few seconds, and then all the data becomes inaccessible (unless you
> address the memory chips directly). It is similar to deleting a file: you
> erase its entry in the directory, but not the actual payload bytes.
>
> AFAIK, SMR HDDs also support trim these days, so they don’t need to do
> their SMR reshuffling. I have a WD Passport Ultra external 2.5″ HDD with
> 5 TB, and it supports trim. However, a WD Elements 2.5″ 4 TB does not.
> Perhaps because it is a cheaper series. Every laptop HDD of 2 (or even 1)
> TB is SMR.
>
> --
> Grüße | Greetings | Salut | Qapla’
> Please do not share anything from, with or about me on any social network.
>
> “It is hard to be a conquering hero when it is not in your nature.”
>   – Captain Hans Geering, ’Allo ’Allo
>

[-- Attachment #2: Type: text/html, Size: 3609 bytes --]

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [gentoo-user] Finally got a SSD drive to put my OS on
  2023-04-16 20:22           ` Mark Knecht
@ 2023-04-16 22:17             ` Frank Steinmetzger
  2023-04-17  0:34               ` Mark Knecht
  0 siblings, 1 reply; 67+ messages in thread
From: Frank Steinmetzger @ 2023-04-16 22:17 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 954 bytes --]

Am Sun, Apr 16, 2023 at 01:22:32PM -0700 schrieb Mark Knecht:
> Frank,
>    Thank you for the in-depth explanation.
> 
>    I need to do some study before commenting further other than to say
> so far I'm finding different comments depending on whether it's
> an SSD or an M.2 drive.

Uhm, I think you mix up some terms here. An M.2 drive *is* an SSD 
(literally, as the name says, a solid state drive). By “SSD”, did you mean 
the classic laptop form factor for SATA HDDs and SSDs?

Because M.2 is also only a physical form factor. It supports both NVMe and 
SATA. While NVMe is more modern and better suited for solid state media and 
their properties, in the end it is still only a data protocol to transfer 
data to and fro.

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

„He who prefers security to freedom deserves to be a slave.“ – Aristotle

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [gentoo-user] Finally got a SSD drive to put my OS on
  2023-04-16 19:30                   ` Mark Knecht
@ 2023-04-16 22:26                     ` Dale
  2023-04-16 23:16                       ` Frank Steinmetzger
  0 siblings, 1 reply; 67+ messages in thread
From: Dale @ 2023-04-16 22:26 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 1707 bytes --]

Mark Knecht wrote:
>
>
> On Sun, Apr 16, 2023 at 11:54 AM Dale <rdalek1967@gmail.com
> <mailto:rdalek1967@gmail.com>> wrote:
> >
> > Mark Knecht wrote:
> > > <<< SNIP >>>
> > > I guess I don't understand why you would put Knoppix in the boot
> > > partition vs somewhere else. Is this for some sort of recovery
> > > process you're comfortable with vs recovering from a bootable DVD?
> > >
> > > - Mark
> >
> > I'm wanting to be able to boot something from the hard drive in the
> > event the OS itself won't boot.  The other day I had to dig around and
> > find a bootable USB stick and also found a DVD.  Ended up with the DVD
> > working best.  I already have memtest on /boot.  Thing is, I very rarely
> > use it.  ;-)
>
> So in the scenario you are suggesting, is grub working, giving you a
> boot choice screen, and your new Gentoo install is not working so
> you want to choose Knoppix to repair whatever is wrong with 
> Gentoo? 
>
> If that's the case why shoehorn Knoppix into the boot partition
> vs just give it its own partition? 
>
>


Dang Mark, I hadn't thought of that.  <slaps forehead>  See, that's why
you need to stick around here.  ROFL  I wonder, can I do that with the
Gentoo LiveGUI too??  Anyone know?  I may need help getting Grub to see
those tho.  I let it do its thing with my kernels but I have no idea on
pointing it to something else. 

Given I have a 500GB drive, I got plenty of space.  Heck, a 10GB
partition each is more than enough for either Knoppix or LiveGUI.  I
could even store info on there about drive partitions and scripts that I
use a lot.  Jeez, that's an idea. 
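[Editor's sketch, not from the thread: one common way to get GRUB to boot an ISO that lives on a disk partition is a loopback menuentry in /etc/grub.d/40_custom, followed by a grub-mkconfig run. The device, ISO path, and especially the kernel/initrd paths inside the ISO are placeholders; they differ per ISO release, so check the ISO's own /boot directory.]

```
# /etc/grub.d/40_custom — sketch only.  (hd0,gpt5)-style devices, the ISO
# path, and the in-ISO kernel/initrd paths are placeholders to adjust.
menuentry "Rescue ISO (loopback)" {
    insmod part_gpt
    insmod loopback
    insmod iso9660
    set isofile="/isos/rescue.iso"
    search --no-floppy --file --set=root $isofile
    loopback loop ($root)$isofile
    linux (loop)/boot/vmlinuz findiso=$isofile
    initrd (loop)/boot/initrd
}
```

The boot parameter that tells the live system where its ISO lives also varies (Knoppix uses findiso=, other distros use iso-scan/filename= or similar), so that line is an assumption to verify against the ISO's documentation.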

Thanks.

Dale

:-)  :-)  :-)  :-) 

P. S. Extra happy there.  ;-)

[-- Attachment #2: Type: text/html, Size: 2986 bytes --]

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [gentoo-user] Finally got a SSD drive to put my OS on
  2023-04-16 22:26                     ` Dale
@ 2023-04-16 23:16                       ` Frank Steinmetzger
  2023-04-17  1:14                         ` Dale
  2023-10-07  7:22                         ` Dale
  0 siblings, 2 replies; 67+ messages in thread
From: Frank Steinmetzger @ 2023-04-16 23:16 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 1805 bytes --]

Am Sun, Apr 16, 2023 at 05:26:15PM -0500 schrieb Dale:

> > > I'm wanting to be able to boot something from the hard drive in the
> > > event the OS itself won't boot.  The other day I had to dig around and
> > > find a bootable USB stick and also found a DVD.  Ended up with the DVD
> > > working best.  I already have memtest on /boot.  Thing is, I very rarely
> > > use it.  ;-)
> >
> > So in the scenario you are suggesting, is grub working, giving you a
> > boot choice screen, and your new Gentoo install is not working so
> > you want to choose Knoppix to repair whatever is wrong with 
> > Gentoo? 
> 
> Given I have a 500GB drive, I got plenty of space.  Heck, a 10GB
> partition each is more than enough for either Knoppix or LiveGUI.  I
> could even store info on there about drive partitions and scripts that I
> use a lot.  Jeez, that's a idea. 

Back in the day, I was annoyed that whenever I needed $LIVE_SYSTEM, I had to 
reformat an entire USB stick for that. In times when you don’t even get 
sticks below 8 GB anymore, I found it a waste of material and useful storage 
space.

And then I found ventoy: https://www.ventoy.net/

It is a mini-Bootloader which you install once to a USB device, kind-of a 
live system of its own. But when booting it, it dynamically scans the 
content of its device and creates a new boot menu from it. So you can put 
many ISOs on one device as simple files, delete them, upgrade them, 
whatever, and then you can select one to boot from. Plus, the rest of the 
stick remains usable as storage, unlike sticks that were dd’ed with an ISO.
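[Editor's note: for the record, getting started with ventoy is a couple of commands. The device name and mount point below are placeholders, and the install step wipes the stick.]

```shell
# Sketch: install ventoy once to a USB stick (DESTROYS its contents),
# then copy ISO files onto the stick's data partition as plain files.
sh Ventoy2Disk.sh -i /dev/sdX            # /dev/sdX is a placeholder device
cp knoppix.iso livegui.iso /mnt/ventoy/  # mount point is an assumption
```

After that, booting the stick presents a menu of whatever ISOs are on it; adding or removing an ISO is just a file copy or delete.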

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

The four elements: earth, air and firewater.

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [gentoo-user] Finally got a SSD drive to put my OS on
  2023-04-16 22:17             ` Frank Steinmetzger
@ 2023-04-17  0:34               ` Mark Knecht
  0 siblings, 0 replies; 67+ messages in thread
From: Mark Knecht @ 2023-04-17  0:34 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 2729 bytes --]

On Sun, Apr 16, 2023 at 3:18 PM Frank Steinmetzger <Warp_7@gmx.de> wrote:
>
> Am Sun, Apr 16, 2023 at 01:22:32PM -0700 schrieb Mark Knecht:
> > Frank,
> >    Thank you for the in-depth explanation.
> >
> >    I need to do some study before commenting further other than to say
> > so far I'm finding different comments depending on whether it's
> > an SSD or an M.2 drive.
>
> Uhm, I think you mix up some terms here. An M.2 drive *is* an SSD
> (literally, as the name says, a solid state drive). By “SSD”, did you mean
> the classic laptop form factor for SATA HDDs and SSDs?
>
> Because M.2 is also only a physical form factor. It supports both NVMe and
> SATA. While NVMe is more modern and better suited for solid state media and
> their properties, in the end it is still only a data protocol to transfer
> data to and fro.
>

No, I don't believe I've mixed them up, but if you see something I'm wrong
about, let me know.

When I speak of SSDs I do mean devices that are marketed as SSDs &
probably use SATA.

When I speak of M.2 I mean what you and I both call M.2.

While SSD & M.2 are both Flash devices they don't provide the same info
when queried by smartctl which makes a direct comparison more
difficult.

Depending on the manufacturer & the foundry they build the chips in the
technologies in these devices can be quite different independent of whether
they are M.2 or SSD.

1) Dale's Samsung 870 EVO - V-NAND TLC (3 bits/cell) and 300TB written
2) My Crucial 1TB M.2 is QLC (4 bits/cell) and 450TB written
3) My Sabrent 1TB M.2 is TLC (3 bits/cell) and 700TB written
4) My Crucial 250GB is unknown because Crucial sells 5 versions
that come from different fabs and have different specs.

All 4 drives are warranted for 5 years or hitting the TB written value.

All 4 drives have 16K page sizes.

That said, I've been using the Crucial on my Kubuntu dual boot for over a
year and only have 28TB written so I'm a long way from the 450TB spec and
likely won't come close in 5 years. (If I'm even still using this machine.)

On the Windows side which I use far less I've only written about 2TB total.

In my case the workloads are generally fairly large files. They are
either 24MB photo files for astrophotography or audio recording files
which are typically 50-100K. Neither of them is 'modified' and needs to
be rewritten. They are either saved or deleted.

Whether the write amplification makes a difference or not in real life
I don't know. I'm sure for some work loads it does but the 'percent
used' value that smartctl returns is 2% for the Crucial and 0% for
the Sabrent so both appear to have a lot of life left in them.

Mark

[-- Attachment #2: Type: text/html, Size: 3505 bytes --]

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [gentoo-user] Finally got a SSD drive to put my OS on
  2023-04-16 23:16                       ` Frank Steinmetzger
@ 2023-04-17  1:14                         ` Dale
  2023-04-17  9:40                           ` Wols Lists
  2023-10-07  7:22                         ` Dale
  1 sibling, 1 reply; 67+ messages in thread
From: Dale @ 2023-04-17  1:14 UTC (permalink / raw
  To: gentoo-user

Frank Steinmetzger wrote:
> Am Sun, Apr 16, 2023 at 05:26:15PM -0500 schrieb Dale:
>
>>>> I'm wanting to be able to boot something from the hard drive in the
>>>> event the OS itself won't boot.  The other day I had to dig around and
>>>> find a bootable USB stick and also found a DVD.  Ended up with the DVD
>>>> working best.  I already have memtest on /boot.  Thing is, I very rarely
>>>> use it.  ;-)
>>> So in the scenario you are suggesting, is grub working, giving you a
>>> boot choice screen, and your new Gentoo install is not working so
>>> you want to choose Knoppix to repair whatever is wrong with 
>>> Gentoo? 
>> Given I have a 500GB drive, I got plenty of space.  Heck, a 10GB
>> partition each is more than enough for either Knoppix or LiveGUI.  I
>> could even store info on there about drive partitions and scripts that I
>> use a lot.  Jeez, that's a idea. 
> Back in the day, I was annoyed that whenever I needed $LIVE_SYSTEM, I had to 
> reformat an entire USB stick for that. In times when you don’t even get 
> sticks below 8 GB anymore, I found it a waste of material and useful storage 
> space.
>
> And then I found ventoy: https://www.ventoy.net/
>
> It is a mini-Bootloader which you install once to a USB device, kind-of a 
> live system of its own. But when booting it, it dynamically scans the 
> content of its device and creates a new boot menu from it. So you can put 
> many ISOs on one device as simple files, delete them, upgrade them, 
> whatever, and then you can select one to boot from. Plus, the rest of the 
> stick remains usable as storage, unlike sticks that were dd’ed with an ISO.
>

My current install is over a decade old.  My /boot partition is about
375MBs.  I should have made it larger but at the time, I booted CD/DVD
media when needed.  I didn't have USB sticks at the time.  This time, I
plan to make some changes.  If I put Knoppix and/or Gentoo LiveGUI in
/boot, it will be larger.  Much larger.  Mark's idea is best tho.  If I
can get Grub to work and boot it. 

I'll look into your link more.  It sounds interesting but I can't figure
out exactly how it works.  May check youtube for a video.  Should clear
up the muddy water. 

Dale

:-)  :-) 


^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [gentoo-user] Finally got a SSD drive to put my OS on
  2023-04-17  1:14                         ` Dale
@ 2023-04-17  9:40                           ` Wols Lists
  2023-04-17 17:45                             ` Mark Knecht
  0 siblings, 1 reply; 67+ messages in thread
From: Wols Lists @ 2023-04-17  9:40 UTC (permalink / raw
  To: gentoo-user

On 17/04/2023 02:14, Dale wrote:
> My current install is over a decade old.  My /boot partition is about
> 375MBs.  I should have made it larger but at the time, I booted CD/DVD
> media when needed.  I didn't have USB sticks at the time.  This time, I
> plan to make some changes.  If I put Knoppix and/or Gentoo LiveGUI in
> /boot, it will be larger.  Much larger.  Mark's idea is best tho.  If I
> can get Grub to work and boot it.

If you dd your boot partition across, you can copy it into a larger 
partition on the new drive, and then just expand the filesystem.

So changing partition sizes isn't a problem if you want to just copy 
your system drive onto a new disk.

Cheers,
Wol


^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [gentoo-user] Finally got a SSD drive to put my OS on
  2023-04-17  9:40                           ` Wols Lists
@ 2023-04-17 17:45                             ` Mark Knecht
  2023-04-18  0:35                               ` Dale
  2023-04-18  8:03                               ` Frank Steinmetzger
  0 siblings, 2 replies; 67+ messages in thread
From: Mark Knecht @ 2023-04-17 17:45 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 1473 bytes --]

On Mon, Apr 17, 2023 at 2:41 AM Wols Lists <antlists@youngman.org.uk> wrote:
>
> On 17/04/2023 02:14, Dale wrote:
> > My current install is over a decade old.  My /boot partition is about
> > 375MBs.  I should have made it larger but at the time, I booted CD/DVD
> > media when needed.  I didn't have USB sticks at the time.  This time, I
> > plan to make some changes.  If I put Knoppix and/or Gentoo LiveGUI in
> > /boot, it will be larger.  Much larger.  Mark's idea is best tho.  If I
> > can get Grub to work and boot it.
>
> If you dd your boot partition across, you can copy it into a larger
> partition on the new drive, and then just expand the filesystem.
>
> So changing partition sizes isn't a problem if you want to just copy
> your system drive onto a new disk.
>
> Cheers,
> Wol

I'm not sure I'd use dd in this case. If he's moving from an HDD with
a 4K block size and a 4K file system block size to an SSD with a 16K
physical block size he might want to consider changing the filesystem
block size to 16K which should help on the write amplification side.

Maybe dd can do that but I wouldn't think so.

And I don't know that formatting ext4 or some other FS to 16K
really helps the write amplification issue but it makes sense to
me to match the file system blocks to the underlying flash
block size. Real speed testing would be required to ensure reading
16K blocks doesn't slow him down though.

Just a thought,
Mark

[-- Attachment #2: Type: text/html, Size: 1868 bytes --]

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [gentoo-user] Finally got a SSD drive to put my OS on
  2023-04-17 17:45                             ` Mark Knecht
@ 2023-04-18  0:35                               ` Dale
  2023-04-18  8:03                               ` Frank Steinmetzger
  1 sibling, 0 replies; 67+ messages in thread
From: Dale @ 2023-04-18  0:35 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 2039 bytes --]

Mark Knecht wrote:
>
>
> On Mon, Apr 17, 2023 at 2:41 AM Wols Lists <antlists@youngman.org.uk
> <mailto:antlists@youngman.org.uk>> wrote:
> >
> > On 17/04/2023 02:14, Dale wrote:
> > > My current install is over a decade old.  My /boot partition is about
> > > 375MBs.  I should have made it larger but at the time, I booted CD/DVD
> > > media when needed.  I didn't have USB sticks at the time.  This time, I
> > > plan to make some changes.  If I put Knoppix and/or Gentoo LiveGUI in
> > > /boot, it will be larger.  Much larger.  Mark's idea is best tho.  If I
> > > can get Grub to work and boot it.
> >
> > If you dd your boot partition across, you can copy it into a larger
> > partition on the new drive, and then just expand the filesystem.
> >
> > So changing partition sizes isn't a problem if you want to just copy
> > your system drive onto a new disk.
> >
> > Cheers,
> > Wol
>
> I'm not sure I'd use dd in this case. If he's moving from an HDD with
> a 4K block size and a 4K file system block size to an SSD with a 16K 
> physical block size he might want to consider changing the filesystem 
> block size to 16K which should help on the write amplification side.
>
> Maybe dd can do that but I wouldn't think so.
>
> And I don't know that formatting ext4 or some other FS to 16K 
> really helps the write amplification issue but it makes sense to
> me to match the file system blocks to the underlying flash
> block size. Real speed testing would be required to ensure reading
> 16K blocks doesn't slow him down though.
>
> Just a thought,
> Mark


I still haven't got around to partitioning the drive or anything so I'm
glad you mentioned the block size.  I need to try and remember that.  It
may detect it itself but may not.  I'd rather fix it now than wish I did
later on.  I assume that setting is in the man page. 
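[Editor's sketch, not from the thread: the setting in question is mkfs.ext4's -b option. One caveat worth knowing: on x86, ext4 has traditionally only mounted filesystems whose block size is at most the kernel's 4 KiB page size, so a 16K ext4 block size generally isn't usable; the usual approach is 4 KiB blocks plus the default 1 MiB partition alignment. The device name below is a placeholder.]

```shell
# Sketch — /dev/sdb2 is a placeholder.  -b sets the filesystem block size;
# on x86, ext4 only mounts block sizes up to the 4 KiB page size, so 4096
# plus 1 MiB partition alignment (parted's default) is the usual choice.
mkfs.ext4 -b 4096 -L gentoo-root /dev/sdb2
```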

Thanks for that tidbit.  Now to remember it.  :/

Dale

:-)  :-) 

P. s.  It's garden time for folks around here.  I been busy the past few
days.  Tractor and tiller too.

[-- Attachment #2: Type: text/html, Size: 3439 bytes --]

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [gentoo-user] Finally got a SSD drive to put my OS on
  2023-04-17 17:45                             ` Mark Knecht
  2023-04-18  0:35                               ` Dale
@ 2023-04-18  8:03                               ` Frank Steinmetzger
  1 sibling, 0 replies; 67+ messages in thread
From: Frank Steinmetzger @ 2023-04-18  8:03 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 3123 bytes --]

Am Mon, Apr 17, 2023 at 10:45:46AM -0700 schrieb Mark Knecht:

> And I don't know that formatting ext4 or some other FS to 16K
> really helps the write amplification issue but it makes sense to
> me to match the file system blocks to the underlying flash
> block size.

The problem is finding out the write block size. This 7-year-old post says 
it’s reached 16 K: https://superuser.com/questions/976257/page-sizes-ssd

So I would say don’t bother. If everything is trimmed, there is no 
amplification. And if the disk becomes full and you get WA when writing 
itsy-bitsy 4 K files, you probably still won’t notice much difference, as 
random 4 K writes are slow anyways and how often do you write thousands of
4 K files outside of portage?

Erase block sizes probably go into the megabytes these days:
https://news.ycombinator.com/item?id=29165202

Some more detailed explanation:
https://spdk.io/doc/ssd_internals.html
  “For each erase block, each bit may be written to (i.e. have its bit 
  flipped from 0 to 1) with bit-granularity once. In order to write to the 
  erase block a second time, the entire block must be erased (i.e. all bits 
  in the block are flipped back to 0).”

This sounds like my initial statement was partially wrong – trimming does 
cause writing zeroes, because that’s what an erase does. But it still 
prevents write amplification (and one extra erase cycle) because 
neighbouring blocks don’t need to be read and written back.

> Real speed testing would be required to ensure reading
> 16K blocks doesn't slow him down though.

Here are some numbers and a conclusion gathered from a read test:
https://superuser.com/questions/728858/how-to-determine-ssds-nand-erase-block-size

Unless I positively need the speed for high-performance computing, I’d 
rather keep the smaller granularity for more capacity at low file sizes.

A problem is what some call “parts lottery” these days: manufacturers 
promise some performance on the data sheet (“up to xxx”), but do not say 
with which parts they achieve it (types of flash chips, TLC/QLC, controller, 
DRAM and so on). Meaning during the lifetime of a product, its internals may 
change and as a consequence those specs are not in the data sheet:

https://unix.stackexchange.com/questions/334804/is-there-a-way-to-find-out-ssd-page-size-on-linux-unix-what-is-physical-block
  “There is no standard way for a SSD to report its page size or erase block 
  size. Few if any manufacturers report them in the datasheets. (Because 
  they may change during the lifetime of a SKU, for example because of 
  changing suppliers.)
  For practical use just align all your data structures (partitions, payload 
  of LUKS containers, LVM logical volumes) to 1 or 2 MiB boundaries. It's an 
  SSD after all--it is designed to cope with usual filesystems, such as NTFS 
  (which uses 4 KiB allocation units).”
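[Editor's sketch: the 1 MiB alignment advice quoted above is easy to sanity-check for an existing partition. The sector values below are illustrative, not from the thread.]

```python
def is_aligned(start_sector, sector_bytes=512, boundary=1 << 20):
    """True if a partition's first byte sits on the given boundary (1 MiB)."""
    return (start_sector * sector_bytes) % boundary == 0

# Typical modern first-partition start: sector 2048 with 512-byte sectors,
# i.e. exactly 1 MiB into the disk.
print(is_aligned(2048))  # prints True
print(is_aligned(63))    # old DOS-style start at sector 63: prints False
```

The start sector itself can be read from `fdisk -l` or `/sys/block/sdX/sdXN/start`.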

-- 
Grüße | Greetings | Qapla’
Please do not share anything from, with or about me on any social network.

The worst disease is indifference. So what?

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 67+ messages in thread

* [gentoo-user] Re: Finally got a SSD drive to put my OS on
  2023-04-15 22:47 [gentoo-user] Finally got a SSD drive to put my OS on Dale
  2023-04-15 23:24 ` Mark Knecht
  2023-04-16  1:47 ` William Kenworthy
@ 2023-04-18 14:52 ` Nikos Chantziaras
  2023-04-18 15:05   ` Dale
  2023-04-18 17:52   ` Mark Knecht
  2 siblings, 2 replies; 67+ messages in thread
From: Nikos Chantziaras @ 2023-04-18 14:52 UTC (permalink / raw
  To: gentoo-user

On 16/04/2023 01:47, Dale wrote:
> Anything else that makes these special?  Any tips or tricks?

Only three things.

1. Make sure the fstrim service is active (should run every week by 
default, at least with systemd, "systemctl enable fstrim.timer".)

2. Don't use the "discard" mount option.

3. Use smartctl to keep track of TBW.

People are always mentioning performance, but it's not the important 
factor for me. The more important factor is longevity. You want your 
storage device to last as long as possible, and fstrim helps, discard hurts.
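[Editor's sketch of tip 2: the device, filesystem, and mount point below are placeholders, not from the thread.]

```
# /etc/fstab — sketch; device, mountpoint and fs type are placeholders.
# No "discard" option: continuous TRIM is left to the periodic fstrim run.
/dev/sda2   /   ext4   defaults,noatime   0 1
```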

With "smartctl -x /dev/sda" (or whatever device your SSD is in /dev) pay 
attention to the "Data Units Written" field. Your 500GB 870 Evo has a 
TBW of 300TBW. That's "terabytes written". This is the manufacturer's 
"guarantee" that the device won't fail prior to writing that many 
terabytes to it. When you reach that, it doesn't mean it will fail, but 
it does mean you might want to start thinking of replacing it with a new 
one just in case, and then keep using it as a secondary drive.

If you use KDE, you can also view that SMART data in the "SMART Status" 
UI (just type "SMART status" in the KDE application launcher.)



^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [gentoo-user] Re: Finally got a SSD drive to put my OS on
  2023-04-18 14:52 ` [gentoo-user] " Nikos Chantziaras
@ 2023-04-18 15:05   ` Dale
  2023-04-18 15:36     ` Nikos Chantziaras
  2023-04-18 22:18     ` Frank Steinmetzger
  2023-04-18 17:52   ` Mark Knecht
  1 sibling, 2 replies; 67+ messages in thread
From: Dale @ 2023-04-18 15:05 UTC (permalink / raw
  To: gentoo-user

Nikos Chantziaras wrote:
> On 16/04/2023 01:47, Dale wrote:
>> Anything else that makes these special?  Any tips or tricks?
>
> Only three things.
>
> 1. Make sure the fstrim service is active (should run every week by
> default, at least with systemd, "systemctl enable fstrim.timer".)
>
> 2. Don't use the "discard" mount option.
>
> 3. Use smartctl to keep track of TBW.
>
> People are always mentioning performance, but it's not the important
> factor for me. The more important factor is longevity. You want your
> storage device to last as long as possible, and fstrim helps, discard
> hurts.
>
> With "smartctl -x /dev/sda" (or whatever device your SSD is in /dev)
> pay attention to the "Data Units Written" field. Your 500GB 870 Evo
> has a TBW of 300TBW. That's "terabytes written". This is the
> manufacturer's "guarantee" that the device won't fail prior to writing
> that many terabytes to it. When you reach that, it doesn't mean it
> will fail, but it does mean you might want to start thinking of
> replacing it with a new one just in case, and then keep using it as a
> secondary drive.
>
> If you use KDE, you can also view that SMART data in the "SMART
> Status" UI (just type "SMART status" in the KDE application launcher.)
>
>
>


I'm on openrc here but someone posted a link to make a cron job for
fstrim.  When I get around to doing something with the drive, it's on my
todo list.  I may go a month tho.  I only update my OS once a week, here
lately, every other week, and given the large amount of unused space, I
doubt it will run short of any space.  I'm still thinking on that. 
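[Editor's sketch: on OpenRC the cron setup being considered can be a single system-crontab line. The schedule and the fstrim path are assumptions; fstrim itself comes from util-linux.]

```
# /etc/crontab — sketch; day/time and the fstrim path are assumptions.
# Trims all mounted filesystems that support discard, weekly at 03:00.
0 3 * * 0   root   /sbin/fstrim --all
```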

I've read about discard.  Gonna avoid that.  ;-) 

Given how I plan to use this drive, that should last a long time.  I'm
just putting the OS stuff on the drive and I compile on a spinning rust
drive and use -k to install the built packages on the live system.  That
should help minimize the writes.  Since I still need a spinning rust
drive for swap and such, I thought about putting /var on spinning rust. 
After all, when running software, activity on /var is minimal. Thing is,
I got a larger drive so I got plenty of space.  It could make it a
little faster.  Maybe. 

I read about that bytes written.  With the way you explained it, it
confirms what I was thinking it meant.  That's a lot of data.  I
currently have around 100TBs of drives lurking about, either in my rig
or for backups.  I'd have to write three times that amount of data on
that little drive.  That's a LOT of data for a 500GB drive. 
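[Editor's note: on many SATA SSDs, Samsung's included, smartctl reports lifetime writes as SMART attribute 241 "Total_LBAs_Written" counted in 512-byte sectors, rather than an NVMe-style "Data Units Written" field. Converting the raw value to terabytes is simple arithmetic; the example value below is illustrative.]

```python
def lbas_to_tb(lbas, sector_bytes=512):
    """Convert a Total_LBAs_Written raw value to decimal terabytes."""
    return lbas * sector_bytes / 1e12

# Illustrative raw value: 586,000,000 sectors of 512 bytes is ~0.3 TB.
print(round(lbas_to_tb(586_000_000), 3))  # prints 0.3
```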

All good info and really helpful.  Thanks. 

Dale

:-)  :-) 


^ permalink raw reply	[flat|nested] 67+ messages in thread

* [gentoo-user] Re: Finally got a SSD drive to put my OS on
  2023-04-18 15:05   ` Dale
@ 2023-04-18 15:36     ` Nikos Chantziaras
  2023-04-18 20:01       ` Dale
  2023-04-18 22:18     ` Frank Steinmetzger
  1 sibling, 1 reply; 67+ messages in thread
From: Nikos Chantziaras @ 2023-04-18 15:36 UTC (permalink / raw
  To: gentoo-user

On 18/04/2023 18:05, Dale wrote:
> I compile on a spinning rust
> drive and use -k to install the built packages on the live system.  That
> should help minimize the writes.  

I just use tmpfs for /var/tmp/portage (16GB, I'm on 32GB RAM.) When I 
keep binary packages around, those I have on my HDD, as well as the 
distfiles:

DISTDIR="/mnt/Data/gentoo/distfiles"
PKGDIR="/mnt/Data/gentoo/binpkgs"
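
The tmpfs mount itself is a single fstab line.  A sketch — the 16G size
matches what I mentioned above; the uid/gid/mode values are common
choices for Portage, not something from this thread:

```shell
# /etc/fstab -- build Portage packages in RAM instead of on disk
tmpfs  /var/tmp/portage  tmpfs  size=16G,uid=portage,gid=portage,mode=775,noatime  0 0
```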


> Since I still need a spinning rust
> drive for swap and such, I thought about putting /var on spinning rust.

Nah. The data written there is absolutely minuscule. Firefox writes like 
10 times more just while running it without even any web page loaded... 
And for actual browsing, it becomes more like 1000 times more (mostly 
the Firefox cache.)

I wouldn't worry too much about it. I've been using my current SSD since 
2020, and I'm at 7TBW right now (out of 200 the drive is rated for) and 
I dual boot Windows and install/uninstall large games on it quite often. 
So with an average of 3TBW per year, I'd need over 80 years to reach 
200TBW :-P But I mentioned it in case your use case is different (like 
large video files or recording and whatnot.)



^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [gentoo-user] Re: Finally got a SSD drive to put my OS on
  2023-04-18 14:52 ` [gentoo-user] " Nikos Chantziaras
  2023-04-18 15:05   ` Dale
@ 2023-04-18 17:52   ` Mark Knecht
  1 sibling, 0 replies; 67+ messages in thread
From: Mark Knecht @ 2023-04-18 17:52 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 2065 bytes --]

On Tue, Apr 18, 2023 at 7:53 AM Nikos Chantziaras <realnc@gmail.com> wrote:
>
> On 16/04/2023 01:47, Dale wrote:
> > Anything else that makes these special?  Any tips or tricks?
>
> Only three things.
>
> 1. Make sure the fstrim service is active (should run every week by
> default, at least with systemd, "systemctl enable fstrim.timer".)
>
> 2. Don't use the "discard" mount option.
>
> 3. Use smartctl to keep track of TBW.
>
> People are always mentioning performance, but it's not the important
> factor for me. The more important factor is longevity. You want your
> storage device to last as long as possible, and fstrim helps, discard
hurts.
>
> With "smartctl -x /dev/sda" (or whatever device your SSD is in /dev) pay
> attention to the "Data Units Written" field. Your 500GB 870 Evo has a
> TBW of 300TBW. That's "terabytes written". This is the manufacturer's
> "guarantee" that the device won't fail prior to writing that many
> terabytes to it. When you reach that, it doesn't mean it will fail, but
> it does mean you might want to start thinking of replacing it with a new
> one just in case, and then keep using it as a secondary drive.
>
> If you use KDE, you can also view that SMART data in the "SMART Status"
> UI (just type "SMART status" in the KDE application launcher.)
>

Add to that list that Samsung only warranties the drive for 5 years
no matter how much or how little you use it. Again, it doesn't mean
it will die in 5 years just as it doesn't mean it will die if it has had
more than 300TBW. However it _might_ mean that data written
to the drive and never touched again may be gone in 5 years.

Non-volatile memory doesn't hold its charge forever, just as
magnetic disk drives and magnetic tape will eventually lose their
data.

On all of my systems here at home, looking at the TBW values, my
drives will go out of warranty at 5 years long before I'll get anywhere
near the TBW spec. However I run stable, long term distros that don't
update often and mostly use larger data files.

[-- Attachment #2: Type: text/html, Size: 2638 bytes --]

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [gentoo-user] Re: Finally got a SSD drive to put my OS on
  2023-04-18 15:36     ` Nikos Chantziaras
@ 2023-04-18 20:01       ` Dale
  2023-04-18 20:53         ` Wol
  2023-04-18 20:57         ` Mark Knecht
  0 siblings, 2 replies; 67+ messages in thread
From: Dale @ 2023-04-18 20:01 UTC (permalink / raw
  To: gentoo-user

Nikos Chantziaras wrote:
> On 18/04/2023 18:05, Dale wrote:
>> I compile on a spinning rust
>> drive and use -k to install the built packages on the live system.  That
>> should help minimize the writes.  
>
> I just use tmpfs for /var/tmp/portage (16GB, I'm on 32GB RAM.) When I
> keep binary packages around, those I have on my HDD, as well as the
> distfiles:
>
> DISTDIR="/mnt/Data/gentoo/distfiles"
> PKGDIR="/mnt/Data/gentoo/binpkgs"
>
>

Most of mine is in tmpfs too, except for the larger packages, such as
Firefox, LO and a couple others.  Thing is, those few large ones would
rack up a lot of writes themselves since they are so large.  That said,
it would be faster.  ;-) 


>> Since I still need a spinning rust
>> drive for swap and such, I thought about putting /var on spinning rust.
>
> Nah. The data written there is absolutely minuscule. Firefox writes
> like 10 times more just while running it without even any web page
> loaded... And for actual browsing, it becomes more like 1000 times
> more (mostly the Firefox cache.)
>
> I wouldn't worry too much about it. I've been using my current SSD
> since 2020, and I'm at 7TBW right now (out of 200 the drive is rated
> for) and I dual boot Windows and install/uninstall large games on it
> quite often. So with an average of 3TBW per year, I'd need over 80
> years to reach 200TBW :-P But I mentioned it in case your use case is
> different (like large video files or recording and whatnot.)
>
>
> .
>


That's kinda my thinking on one side of the coin.  Having it on a
spinning rust drive just wouldn't make much difference.  Most things
there like log files and such are just files being added to not
completely rewritten.  I don't think it would make much difference to
the life span of the drive. 

Someone mentioned 16K block size.  I've yet to find out how to do that. 
The man page talks about the option, -b I think, but google searches
seem to say it isn't supported.  Anyone actually set that option? 
Recall the options that were used? 

I did so much the past few days, I'm worthless today.  Parts of me are
pretty angry, joints and such.  Still, I'm glad I got done what I did. 
It's that busy time of year. 

Thanks.

Dale

:-)  :-) 


^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [gentoo-user] Re: Finally got a SSD drive to put my OS on
  2023-04-18 20:01       ` Dale
@ 2023-04-18 20:53         ` Wol
  2023-04-18 22:13           ` Frank Steinmetzger
  2023-04-18 20:57         ` Mark Knecht
  1 sibling, 1 reply; 67+ messages in thread
From: Wol @ 2023-04-18 20:53 UTC (permalink / raw
  To: gentoo-user

On 18/04/2023 21:01, Dale wrote:
>> I just use tmpfs for /var/tmp/portage (16GB, I'm on 32GB RAM.) When I
>> keep binary packages around, those I have on my HDD, as well as the
>> distfiles:
>>
>> DISTDIR="/mnt/Data/gentoo/distfiles"
>> PKGDIR="/mnt/Data/gentoo/binpkgs"
>>
>>
> Most of mine is in tmpfs to except for the larger packages, such as
> Firefox, LOo and a couple others.  Thing is, those few large ones would
> rack up a lot of writes themselves since they are so large.  That said,
> it would be faster.  😉
> 
Not sure if it's set up on my current system, but I always configured 
/var/tmp/portage on tmpfs. And on every disk I allocate a swap partition 
equal to twice the mobo's max memory. Three drives times 64GB times two 
is a helluva lot of swap.

So here I would just allocate /var/tmp/portage maybe 64 - 128 GB of 
space. If the emerge fits in my current 32GB ram, then fine. If not, it 
spills over into swap. I don't have to worry about allocating extra 
space for memory hogs like Firefox, LO, Rust etc.

And seeing as my smallest drive is 3TB, losing 128GB per drive to swap 
isn't actually that much.

Although, as was pointed out to me, if I did suffer a denial-of-service 
attack that tried to fill memory, that amount of swap would knacker my 
system for a LONG time.

Cheers,
Wol


^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [gentoo-user] Re: Finally got a SSD drive to put my OS on
  2023-04-18 20:01       ` Dale
  2023-04-18 20:53         ` Wol
@ 2023-04-18 20:57         ` Mark Knecht
  2023-04-18 21:15           ` Dale
  1 sibling, 1 reply; 67+ messages in thread
From: Mark Knecht @ 2023-04-18 20:57 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 452 bytes --]

On Tue, Apr 18, 2023 at 1:02 PM Dale <rdalek1967@gmail.com> wrote:
<SNIP>
>
> Someone mentioned 16K block size.
<SNIP>

I mentioned it but I'm NOT suggesting it.

It would be the -b option if you were to do it for ext4.

I'm using the default block size (4k) on all my SSDs and M.2's and
as I've said a couple of times, I'm going to blast past the 5 year
warranty time long before I write too many terabytes.

Keep it simple.

- Mark

[-- Attachment #2: Type: text/html, Size: 714 bytes --]

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [gentoo-user] Re: Finally got a SSD drive to put my OS on
  2023-04-18 20:57         ` Mark Knecht
@ 2023-04-18 21:15           ` Dale
  2023-04-18 21:25             ` Mark Knecht
  0 siblings, 1 reply; 67+ messages in thread
From: Dale @ 2023-04-18 21:15 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 871 bytes --]

Mark Knecht wrote:
>
>
> On Tue, Apr 18, 2023 at 1:02 PM Dale <rdalek1967@gmail.com
> <mailto:rdalek1967@gmail.com>> wrote:
> <SNIP>
> >
> > Someone mentioned 16K block size.
> <SNIP>
>
> I mentioned it but I'm NOT suggesting it.
>
> It would be the -b option if you were to do it for ext4.
>
> I'm using the default block size (4k) on all my SSDs and M.2's and
> as I've said a couple of time, I'm going to blast past the 5 year
> warranty time long before I write too many terabytes.
>
> Keep it simple. 
>
> - Mark


One reason I ask: some info I found claimed it isn't even supported.  It
actually spits out an error message and doesn't create the file system. 
I wasn't sure if that info was outdated or what, so I thought I'd ask.  I
think I'll skip that part.  Just let it do its thing. 

Dale

:-)  :-) 

P. S.  Kudos to whoever came up with Tylenol. 

[-- Attachment #2: Type: text/html, Size: 1912 bytes --]

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [gentoo-user] Re: Finally got a SSD drive to put my OS on
  2023-04-18 21:15           ` Dale
@ 2023-04-18 21:25             ` Mark Knecht
  2023-04-19  1:36               ` Dale
  0 siblings, 1 reply; 67+ messages in thread
From: Mark Knecht @ 2023-04-18 21:25 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 1358 bytes --]

On Tue, Apr 18, 2023 at 2:15 PM Dale <rdalek1967@gmail.com> wrote:
>
> Mark Knecht wrote:
>
>
>
> On Tue, Apr 18, 2023 at 1:02 PM Dale <rdalek1967@gmail.com> wrote:
> <SNIP>
> >
> > Someone mentioned 16K block size.
> <SNIP>
>
> I mentioned it but I'm NOT suggesting it.
>
> It would be the -b option if you were to do it for ext4.
>
> I'm using the default block size (4k) on all my SSDs and M.2's and
> as I've said a couple of time, I'm going to blast past the 5 year
> warranty time long before I write too many terabytes.
>
> Keep it simple.
>
> - Mark
>
> One reason I ask, some info I found claimed it isn't even supported.  It
actually spits out a error message and doesn't create the file system.  I
wasn't sure if that info was outdated or what so I thought I'd ask.  I
think I'll skip that part.  Just let it do its thing.
>
> Dale
<SNIP>

I'd start with something like

mkfs.ext4 -b 16384 /dev/sdX

and see where it leads. It's *possible* that the SSD might fight
back, sending the OS a response that says it doesn't want to
do that.

It could also be a partition alignment issue, although if you
started your partition at the default starting address I'd doubt
that one.

Anyway, I just wanted to be clear that I'm not worried about
write amplification based on my system data.

Cheers,
Mark

[-- Attachment #2: Type: text/html, Size: 1866 bytes --]

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [gentoo-user] Re: Finally got a SSD drive to put my OS on
  2023-04-18 20:53         ` Wol
@ 2023-04-18 22:13           ` Frank Steinmetzger
  2023-04-18 23:08             ` Wols Lists
  0 siblings, 1 reply; 67+ messages in thread
From: Frank Steinmetzger @ 2023-04-18 22:13 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 1223 bytes --]

Am Tue, Apr 18, 2023 at 09:53:18PM +0100 schrieb Wol:

> On 18/04/2023 21:01, Dale wrote:
> > > I just use tmpfs for /var/tmp/portage (16GB, I'm on 32GB RAM.)

Same.

> /var/tmp/portage on tmpfs. And on every disk I allocate a swap partition
> equal to twice the mobo's max memory. Three drives times 64GB times two is a
> helluva lot of swap.

Uhm … why? The moniker of swap = 2×RAM comes from times when RAM was scarce. 
What do you need so much swap for, especially with 32 GB RAM to begin with?
And if you really do have use cases which cause regular swapping, it’d be 
less painful if you just added some more RAM.

I never used swap, even on my 3 GB laptop 15 years ago, except for extreme 
circumstances for which I specifically activated it (though I never compiled 
huge packages like Firefox or LO myself). These days I run a few zswap 
devices, which act as swap, but technically are compressed RAM disks. So 
when RAM gets full, I get a visible spike in the taskbar’s swap meter before 
the system grinds to a halt.
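
For reference, a compressed-RAM swap device of the kind described
(usually a zram device) can be set up roughly like this.  A sketch, run
as root; the 4G size is purely illustrative:

```shell
# Sketch: one compressed RAM-backed swap device (zram).
modprobe zram num_devices=1          # load the module with one device
echo 4G > /sys/block/zram0/disksize  # illustrative size, not a recommendation
mkswap /dev/zram0                    # format the device as swap
swapon -p 100 /dev/zram0             # higher priority than any disk swap
```

With a higher priority than disk swap, the kernel fills the compressed
device first and only spills to disk after it is exhausted.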

-- 
Grüße | Greetings | Qapla’
Please do not share anything from, with or about me on any social network.

Night is so dark only so one can see it better.

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [gentoo-user] Re: Finally got a SSD drive to put my OS on
  2023-04-18 15:05   ` Dale
  2023-04-18 15:36     ` Nikos Chantziaras
@ 2023-04-18 22:18     ` Frank Steinmetzger
  2023-04-18 22:41       ` Frank Steinmetzger
  2023-04-19  1:45       ` Dale
  1 sibling, 2 replies; 67+ messages in thread
From: Frank Steinmetzger @ 2023-04-18 22:18 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 1388 bytes --]

Am Tue, Apr 18, 2023 at 10:05:27AM -0500 schrieb Dale:

> Given how I plan to use this drive, that should last a long time.  I'm
> just putting the OS stuff on the drive and I compile on a spinning rust
> drive and use -k to install the built packages on the live system.  That
> should help minimize the writes.

Well, 300 TB over 5 years is 60 TB per year, or 165 GB per day. Every day. 
I’d say don’t worry. Besides: endurance tests showed that SSDs were able to 
withstand multiples of their guaranteed TBW until they actually failed (of 
course there are always exceptions to the rule).

> I read about that bytes written.  With the way you explained it, it
> confirms what I was thinking it meant.  That's a lot of data.  I
> currently have around 100TBs of drives lurking about, either in my rig
> or for backups.  I'd have to write three times that amount of data on
> that little drive.  That's a LOT of data for a 500GB drive. 

If you use ext4, run `dumpe2fs -h /dev/your-root-partition | grep Lifetime` 
to see how much data has been written to that partition since you formatted 
it. Just to get an idea of what you are looking at on your setup.

-- 
Grüße | Greetings | Qapla’
Please do not share anything from, with or about me on any social network.

What woman is looking for a man who is looking for a woman looking for a man?

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [gentoo-user] Re: Finally got a SSD drive to put my OS on
  2023-04-18 22:18     ` Frank Steinmetzger
@ 2023-04-18 22:41       ` Frank Steinmetzger
  2023-04-19  1:45       ` Dale
  1 sibling, 0 replies; 67+ messages in thread
From: Frank Steinmetzger @ 2023-04-18 22:41 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 1508 bytes --]

Am Wed, Apr 19, 2023 at 12:18:14AM +0200 schrieb Frank Steinmetzger:

> If you use ext4, run `dumpe2fs -h /dev/your-root-partition | grep Lifetime` 
> to see how much data has been written to that partition since you formatted 
> it. Just to get an idea of what you are looking at on your setup.

For comparison:

I’m writing from my Surface Go 1 right now. It’s running Arch linux with KDE 
and I don’t use it very often (meaning, I don’t update it as often as my 
main rig). But updates in Arch linux can be volume-intensive, especially 
because there are frequent kernel updates (I’ve had over 50 since June 2020, 
each accounting for over 300 MB of writes), and other updates of big 
packages if a dependency like python changes. In Gentoo you do revdep-rebuild,
binary distros ship new versions of all affected packages, like libreoffice, 
or Qt, or texlive.

Anyways, the root partition measures 22 G and has a lifetime write of 571 GB 
in almost three years. The home partition (97 GB in size) is at 877 GB. That 
seems actually a lot, because I don’t really do that much high-volume stuff 
there. My media archive with all the photos and music and such sits on a 
separate data partition, which is not synced to the Surface due to its small 
SSD of only 128 GB.

-- 
Grüße | Greetings | Qapla’
Please do not share anything from, with or about me on any social network.

We shall be landing shortly.
Please return your stewardess to the upright position.

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [gentoo-user] Re: Finally got a SSD drive to put my OS on
  2023-04-18 22:13           ` Frank Steinmetzger
@ 2023-04-18 23:08             ` Wols Lists
  2023-04-19  1:15               ` Dale
  0 siblings, 1 reply; 67+ messages in thread
From: Wols Lists @ 2023-04-18 23:08 UTC (permalink / raw
  To: gentoo-user

On 18/04/2023 23:13, Frank Steinmetzger wrote:
>> /var/tmp/portage on tmpfs. And on every disk I allocate a swap partition
>> equal to twice the mobo's max memory. Three drives times 64GB times two is a
>> helluva lot of swap.

> Uhm … why? The moniker of swap = 2×RAM comes from times when RAM was scarce.
> What do you need so much swap for, especially with 32 GB RAM to begin with?
> And if you really do have use cases which cause regular swapping, it’d be
> less painful if you just added some more RAM.

Actually, if you know your history, it does NOT come from "times when 
RAM was scarce". It comes from the original Unix swap algorithm which 
NEEDED twice ram.

I've searched (unsuccessfully) on LWN for the story, but at some point 
(I think round about kernel 2.4.10) Linus ripped out all the ugly 
"optimisation" code, and anybody who ran the vanilla kernel with "swap 
but less than twice ram" found it crashed the instant the system touched 
swap. Linus was not sympathetic to people who hadn't read the release 
notes ...

Andrea Arcangeli and someone else (I've forgotten who) wrote two 
competing memory managers in classic "Linus managerial style" as he 
played them off against each other.

I've always allocated swap like that pretty much ever since. Maybe the 
new algorithm hasn't got the old wanting twice ram, maybe it has, I 
never found out, but I've not changed that habit.

(NB This system is pretty recent; my previous system had iirc 8GB (and a 
maxed-out value of 16GB), not enough for a lot of the bigger programs.)

Before that point, I gather it actually made a difference to the 
efficiency of the system as the optimisations kicked in, but everybody 
believed it was an old wives tale - until Linus did that ...

Cheers,
Wol


^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [gentoo-user] Re: Finally got a SSD drive to put my OS on
  2023-04-18 23:08             ` Wols Lists
@ 2023-04-19  1:15               ` Dale
  0 siblings, 0 replies; 67+ messages in thread
From: Dale @ 2023-04-19  1:15 UTC (permalink / raw
  To: gentoo-user

Wols Lists wrote:
> On 18/04/2023 23:13, Frank Steinmetzger wrote:
>>> /var/tmp/portage on tmpfs. And on every disk I allocate a swap
>>> partition
>>> equal to twice the mobo's max memory. Three drives times 64GB times
>>> two is a
>>> helluva lot of swap.
>
>> Uhm … why? The moniker of swap = 2×RAM comes from times when RAM was
>> scarce.
>> What do you need so much swap for, especially with 32 GB RAM to begin
>> with?
>> And if you really do have use cases which cause regular swapping,
>> it’d be
>> less painful if you just added some more RAM.
>
> Actually, if you know your history, it does NOT come from "times when
> RAM was scarce". It comes from the original Unix swap algorithm which
> NEEDED twice ram.
>
> I've searched (unsuccessfully) on LWN for the story, but at some point
> (I think round about kernel 2.4.10) Linus ripped out all the ugly
> "optimisation" code, and anybody who ran the vanilla kernel with "swap
> but less than twice ram" found it crashed the instant the system
> touched swap. Linus was not sympathetic to people who hadn't read the
> release notes ...
>
> Andrea Arcangeli and someone else (I've forgotten who) wrote two
> competing memory managers in classic "Linus managerial style" as he
> played them off against each other.
>
> I've always allocated swap like that pretty much ever since. Maybe the
> new algorithm hasn't got the old wanting twice ram, maybe it has, I
> never found out, but I've not changed that habit.
>
> (NB This system is pretty recent, my previous system had iirc 8GB (and
> a maxed out value of 16GB), not enough for a lot of the bigger programs.
>
> Before that point, I gather it actually made a difference to the
> efficiency of the system as the optimisations kicked in, but everybody
> believed it was an old wives tale - until Linus did that ...
>
> Cheers,
> Wol
>
>


I've always had some swap but never twice the ram.  Even back on my
single core rig with 4GBs of ram, I only had a couple GBs or so of
swap.  Heck, I have swappiness set to 1 here on current rig.  I don't
want swap used unless it is to prevent an out-of-memory crash.  Given the
amount of memory available today, unless you know you still don't have
enough memory, swap really isn't needed.  Adding more memory is a much
better solution. 
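
That setting can be made persistent with a sysctl drop-in.  A sketch —
the file name is arbitrary, only the key matters:

```shell
# /etc/sysctl.d/99-swappiness.conf -- hypothetical file name.
# vm.swappiness=1: touch swap only to avoid out-of-memory, as above.
vm.swappiness = 1
```

It can be applied without rebooting via
`sysctl -p /etc/sysctl.d/99-swappiness.conf`.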

I've actually considered disabling mine completely but sometimes Firefox
gets a wild hair and starts consuming memory.  When my TV stops playing,
I know it's at it again.  Hasn't happened in a while tho.  That's the
thing about swap: it usually slows a system to a crawl.  I can usually
tell when even a few MBs of swap are in use just from how slowly the
system responds to something as simple as switching from one desktop to
another. 
The need for swap in most cases isn't what it used to be. 

Dale

:-)  :-) 


^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [gentoo-user] Re: Finally got a SSD drive to put my OS on
  2023-04-18 21:25             ` Mark Knecht
@ 2023-04-19  1:36               ` Dale
  0 siblings, 0 replies; 67+ messages in thread
From: Dale @ 2023-04-19  1:36 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 1939 bytes --]

Mark Knecht wrote:
>
>
> On Tue, Apr 18, 2023 at 2:15 PM Dale <rdalek1967@gmail.com
> <mailto:rdalek1967@gmail.com>> wrote:
> >
> > Mark Knecht wrote:
> >
> >
> >
> > On Tue, Apr 18, 2023 at 1:02 PM Dale <rdalek1967@gmail.com
> <mailto:rdalek1967@gmail.com>> wrote:
> > <SNIP>
> > >
> > > Someone mentioned 16K block size.
> > <SNIP>
> >
> > I mentioned it but I'm NOT suggesting it.
> >
> > It would be the -b option if you were to do it for ext4.
> >
> > I'm using the default block size (4k) on all my SSDs and M.2's and
> > as I've said a couple of time, I'm going to blast past the 5 year
> > warranty time long before I write too many terabytes.
> >
> > Keep it simple.
> >
> > - Mark
> >
> > One reason I ask, some info I found claimed it isn't even
> supported.  It actually spits out a error message and doesn't create
> the file system.  I wasn't sure if that info was outdated or what so I
> thought I'd ask.  I think I'll skip that part.  Just let it do its thing.
> >
> > Dale
> <SNIP>
>
> I'd start with something like
>
> mkfs.ext4 -b 16384 /dev/sdX
>
> and see where it leads. It's *possible* that the SSD might fight 
> back, sending the OS a response that says it doesn't want to 
> do that.
>
> It could also be a partition alignment issue, although if you
> started your partition at the default starting address I'd doubt 
> that one.
>
> Anyway, I just wanted to be clear that I'm not worried about
> write amplification based on my system data.
>
> Cheers,
> Mark


I found where it was claimed it doesn't work.  This is the link.

https://askubuntu.com/questions/1007716/formatting-an-ext4-partition-with-a-16kb-block-possible

That is a few years old and things may have changed.  I also saw similar
info elsewhere.  I may try it just to see if I get the same output.  If
it works, fine.  If not, then we know.

Odd it would have the option but not allow you to use it tho.  :/
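
For what it's worth, the usual explanation is that ext4 can be
*formatted* with blocks up to 64K, but the kernel can only *mount* a
filesystem whose block size does not exceed the CPU page size, which is
4K on x86.  A quick check (a sketch; 4096 is the typical x86-64 value):

```shell
# The page size caps the largest mountable ext4 block size,
# which is why a 16K-block filesystem is rejected on x86 machines.
getconf PAGE_SIZE   # typically prints 4096 on x86-64
```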

Dale

:-)  :-) 

[-- Attachment #2: Type: text/html, Size: 3647 bytes --]

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [gentoo-user] Re: Finally got a SSD drive to put my OS on
  2023-04-18 22:18     ` Frank Steinmetzger
  2023-04-18 22:41       ` Frank Steinmetzger
@ 2023-04-19  1:45       ` Dale
  2023-04-19  8:00         ` Nikos Chantziaras
  1 sibling, 1 reply; 67+ messages in thread
From: Dale @ 2023-04-19  1:45 UTC (permalink / raw
  To: gentoo-user

Frank Steinmetzger wrote:
> Am Tue, Apr 18, 2023 at 10:05:27AM -0500 schrieb Dale:
>
>> Given how I plan to use this drive, that should last a long time.  I'm
>> just putting the OS stuff on the drive and I compile on a spinning rust
>> drive and use -k to install the built packages on the live system.  That
>> should help minimize the writes.
> Well, 300 TB over 5 years is 60 TB per year, or 165 GB per day. Every day. 
> I’d say don’t worry. Besides: endurance tests showed that SSDs were able to 
> withstand multiples of their guaranteed TBW until they actually failed (of 
> course there are always exceptions to the rule).
>
>> I read about that bytes written.  With the way you explained it, it
>> confirms what I was thinking it meant.  That's a lot of data.  I
>> currently have around 100TBs of drives lurking about, either in my rig
>> or for backups.  I'd have to write three times that amount of data on
>> that little drive.  That's a LOT of data for a 500GB drive. 
> If you use ext4, run `dumpe2fs -h /dev/your-root-partition | grep Lifetime` 
> to see how much data has been written to that partition since you formatted 
> it. Just to get an idea of what you are looking at on your setup.
>


I skipped the grep part and looked at the whole output.  I don't recall
ever seeing that command before so I wanted to see what it did.  Dang,
lots of info. 

Filesystem created:       Sun Apr 15 03:24:56 2012
Lifetime writes:          993 GB

That's for the main / partition.  I have /usr on its own partition tho. 

Filesystem created:       Sun Apr 15 03:25:48 2012
Lifetime writes:          1063 GB

I'd think that / and /usr would be the most changed parts of the OS. 
After all, /bin and /sbin are on / too as is /lib*.  If that is even
remotely correct, both would only be around 2TBs.  That dang thing may
outlive me even if I don't try to minimize writes.  ROFLMBO

Now that says a lot.  Really nice info. 

Thanks.

Dale

:-)  :-) 


^ permalink raw reply	[flat|nested] 67+ messages in thread

* [gentoo-user] Re: Finally got a SSD drive to put my OS on
  2023-04-19  1:45       ` Dale
@ 2023-04-19  8:00         ` Nikos Chantziaras
  2023-04-19  9:42           ` Dale
  2023-04-19 10:34           ` Peter Humphrey
  0 siblings, 2 replies; 67+ messages in thread
From: Nikos Chantziaras @ 2023-04-19  8:00 UTC (permalink / raw
  To: gentoo-user

On 19/04/2023 04:45, Dale wrote:
> Filesystem created:       Sun Apr 15 03:24:56 2012
> Lifetime writes:          993 GB
> 
> That's for the main / partition.  I have /usr on it's own partition tho.
> 
> Filesystem created:       Sun Apr 15 03:25:48 2012
> Lifetime writes:          1063 GB
> 
> I'd think that / and /usr would be the most changed parts of the OS.
> After all, /bin and /sbin are on / too as is /lib*.  If that is even
> remotely correct, both would only be around 2TBs.  That dang thing may
> outlive me even if I don't try to minimize writes.  ROFLMBO

I believe this only shows the lifetime writes to that particular 
filesystem since it's been created?

You can use smartctl here too. At least on my HDD, the HDD's firmware 
keeps tracks of the lifetime logical sectors written. Logical sectors 
are 512 bytes (physical are 4096). The logical sector size is also shown 
by smartctl.

With my HDD:

   # smartctl -x /dev/sda | grep -i 'sector size'
   Sector Sizes:     512 bytes logical, 4096 bytes physical

Then to get the total logical sectors written:

   # smartctl -x /dev/sda | grep -i 'sectors written'
   0x01  0x018  6     37989289142  ---  Logical Sectors Written

Converting that to terabytes written with "bc -l":

   37988855446 * 512 / 1024^4
   17.68993933033198118209

Almost 18TB.
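
The whole conversion can also be done in one line with awk instead of
bc (a sketch, using the sector count from the grep output above; the
field layout of the smartctl log page may differ between drives):

```shell
# TiB written = logical sectors * 512 bytes / 2^40
awk 'BEGIN { printf "%.2f TiB\n", 37989289142 * 512 / 1024^4 }'
# prints: 17.69 TiB
```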



^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [gentoo-user] Re: Finally got a SSD drive to put my OS on
  2023-04-19  8:00         ` Nikos Chantziaras
@ 2023-04-19  9:42           ` Dale
  2023-04-19 10:34           ` Peter Humphrey
  1 sibling, 0 replies; 67+ messages in thread
From: Dale @ 2023-04-19  9:42 UTC (permalink / raw
  To: gentoo-user

Nikos Chantziaras wrote:
> On 19/04/2023 04:45, Dale wrote:
>> Filesystem created:       Sun Apr 15 03:24:56 2012
>> Lifetime writes:          993 GB
>>
>> That's for the main / partition.  I have /usr on it's own partition tho.
>>
>> Filesystem created:       Sun Apr 15 03:25:48 2012
>> Lifetime writes:          1063 GB
>>
>> I'd think that / and /usr would be the most changed parts of the OS.
>> After all, /bin and /sbin are on / too as is /lib*.  If that is even
>> remotely correct, both would only be around 2TBs.  That dang thing may
>> outlive me even if I don't try to minimize writes.  ROFLMBO
>
> I believe this only shows the lifetime writes to that particular
> filesystem since it's been created?
>
> You can use smartctl here too. At least on my HDD, the HDD's firmware
> keeps tracks of the lifetime logical sectors written. Logical sectors
> are 512 bytes (physical are 4096). The logical sector size is also
> shown by smartctl.
>
> With my HDD:
>
>   # smartctl -x /dev/sda | grep -i 'sector size'
>   Sector Sizes:     512 bytes logical, 4096 bytes physical
>
> Then to get the total logical sectors written:
>
>   # smartctl -x /dev/sda | grep -i 'sectors written'
>   0x01  0x018  6     37989289142  ---  Logical Sectors Written
>
> Converting that to terabytes written with "bc -l":
>
>   37988855446 * 512 / 1024^4
>   17.68993933033198118209
>
> Almost 18TB.
>
>
>


I'm sure it is since the file system was created.  Look at the year
tho.  It's about 11 years ago when I first built this rig.  If I've only
written that amount of data to my current drive over the last 11 years,
the SSD drive should last for many, MANY, years, decades even.  At this
point, I should worry more about something besides it running out of
write cycles.  LOL  I'd think technology changes will bring it to its
end of life rather than write cycles. 

Eventually, I'll have time to put it to use.  Too much going on right now
tho. 

Dale

:-)  :-) 


^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [gentoo-user] Re: Finally got a SSD drive to put my OS on
  2023-04-19  8:00         ` Nikos Chantziaras
  2023-04-19  9:42           ` Dale
@ 2023-04-19 10:34           ` Peter Humphrey
  2023-04-19 17:14             ` Mark Knecht
  2023-04-19 17:59             ` Dale
  1 sibling, 2 replies; 67+ messages in thread
From: Peter Humphrey @ 2023-04-19 10:34 UTC (permalink / raw
  To: gentoo-user

On Wednesday, 19 April 2023 09:00:33 BST Nikos Chantziaras wrote:

> With my HDD:
> 
>    # smartctl -x /dev/sda | grep -i 'sector size'
>    Sector Sizes:     512 bytes logical, 4096 bytes physical

Or, with an NVMe drive:

# smartctl -x /dev/nvme1n1 | grep -A2 'Supported LBA Sizes'
Supported LBA Sizes (NSID 0x1)
Id Fmt  Data  Metadt  Rel_Perf
 0 +     512       0         0
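For NVMe namespaces that expose more than one LBA format, nvme-cli (assumed installed) decodes the same table and can also switch formats. A sketch, with the device name as an assumption:

```shell
# List supported LBA formats with human-readable decoding.
nvme id-ns -H /dev/nvme1n1 | grep 'LBA Format'
# Switching to another format erases the namespace, so it is shown
# only as a comment:
# nvme format /dev/nvme1n1 --lbaf=1
```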

:)

-- 
Regards,
Peter.





^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [gentoo-user] Re: Finally got a SSD drive to put my OS on
  2023-04-19 10:34           ` Peter Humphrey
@ 2023-04-19 17:14             ` Mark Knecht
  2023-04-19 17:59             ` Dale
  1 sibling, 0 replies; 67+ messages in thread
From: Mark Knecht @ 2023-04-19 17:14 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 3424 bytes --]

On Wed, Apr 19, 2023 at 3:35 AM Peter Humphrey <peter@prh.myzen.co.uk>
wrote:
>
> On Wednesday, 19 April 2023 09:00:33 BST Nikos Chantziaras wrote:
>
> > With my HDD:
> >
> >    # smartctl -x /dev/sda | grep -i 'sector size'
> >    Sector Sizes:     512 bytes logical, 4096 bytes physical
>
> Or, with an NVMe drive:
>
> # smartctl -x /dev/nvme1n1 | grep -A2 'Supported LBA Sizes'
> Supported LBA Sizes (NSID 0x1)
> Id Fmt  Data  Metadt  Rel_Perf
>  0 +     512       0         0
>

That command, on my system anyway, does pick up all the
LBA sizes:

1) Windows - 1TB Sabrent:

Supported LBA Sizes (NSID 0x1)
Id Fmt  Data  Metadt  Rel_Perf
0 +     512       0         2
1 -    4096       0         1

Data Units Read:                    8,907,599 [4.56 TB]
Data Units Written:                 4,132,726 [2.11 TB]
Host Read Commands:                 78,849,158
Host Write Commands:                55,570,509

Error Information (NVMe Log 0x01, 16 of 63 entries)
Num   ErrCount  SQId   CmdId  Status  PELoc          LBA  NSID    VS
 0       1406     0  0x600b  0x4004  0x028            0     0     -

2) Kubuntu - 1TB Crucial

Supported LBA Sizes (NSID 0x1)
Id Fmt  Data  Metadt  Rel_Perf
0 +     512       0         1
1 -    4096       0         0

Data Units Read:                    28,823,498 [14.7 TB]
Data Units Written:                 28,560,888 [14.6 TB]
Host Read Commands:                 137,865,594
Host Write Commands:                209,406,594

Error Information (NVMe Log 0x01, 16 of 16 entries)
Num   ErrCount  SQId   CmdId  Status  PELoc          LBA  NSID    VS
 0       1735     0  0x100c  0x4005  0x028            0     0     -

3) Scratch pad - 128GB SSSTC (No name) M.2 chip mounted on Joylifeboard
PCIe card

Supported LBA Sizes (NSID 0x1)
Id Fmt  Data  Metadt  Rel_Perf
0 +     512       0         0

Data Units Read:                    363,470 [186 GB]
Data Units Written:                 454,447 [232 GB]
Host Read Commands:                 2,832,367
Host Write Commands:                2,833,717

Error Information (NVMe Log 0x01, 16 of 64 entries)
No Errors Logged

NOTE: When I first got interested in M.2 I bought a PCI Express
card and an M.2 chip just to use for a while with astrophotography
files, which tend to be 24MB coming out of my camera but can grow
to 1GB or so as processing occurs. Total cost was about
$30, and it might be a possible solution for Gentoo users who
want a faster scratch pad for system updates. Even this
second-rate hardware has been reliable and is pretty fast:

https://www.amazon.com/gp/product/B09K4YXN33
https://www.amazon.com/gp/product/B08ZB6YVPW

mark@science2:~$ sudo hdparm -tT /dev/nvme2n1
/dev/nvme2n1:
Timing cached reads:   48164 MB in  1.99 seconds = 24144.06 MB/sec
Timing buffered disk reads: 1210 MB in  3.00 seconds = 403.08 MB/sec
mark@science2:~$

Although not as fast as the M.2 slots on the motherboard, where the
Sabrent M.2 blows away the Crucial M.2:

mark@science2:~$ sudo hdparm -tT /dev/nvme0n1

/dev/nvme0n1:
Timing cached reads:   47660 MB in  1.99 seconds = 23890.55 MB/sec
Timing buffered disk reads: 5452 MB in  3.00 seconds = 1817.10 MB/sec
mark@science2:~$ sudo hdparm -tT /dev/nvme1n1

/dev/nvme1n1:
Timing cached reads:   47310 MB in  1.99 seconds = 23714.77 MB/sec
Timing buffered disk reads: 1932 MB in  3.00 seconds = 643.49 MB/sec
mark@science2:~$

[-- Attachment #2: Type: text/html, Size: 4169 bytes --]

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [gentoo-user] Re: Finally got a SSD drive to put my OS on
  2023-04-19 10:34           ` Peter Humphrey
  2023-04-19 17:14             ` Mark Knecht
@ 2023-04-19 17:59             ` Dale
  2023-04-19 18:13               ` Mark Knecht
  2023-04-20  9:46               ` Peter Humphrey
  1 sibling, 2 replies; 67+ messages in thread
From: Dale @ 2023-04-19 17:59 UTC (permalink / raw
  To: gentoo-user

Peter Humphrey wrote:
> On Wednesday, 19 April 2023 09:00:33 BST Nikos Chantziaras wrote:
>
>> With my HDD:
>>
>>    # smartctl -x /dev/sda | grep -i 'sector size'
>>    Sector Sizes:     512 bytes logical, 4096 bytes physical
> Or, with an NVMe drive:
>
> # smartctl -x /dev/nvme1n1 | grep -A2 'Supported LBA Sizes'
> Supported LBA Sizes (NSID 0x1)
> Id Fmt  Data  Metadt  Rel_Perf
>  0 +     512       0         0
>
> :)
>

When I run that command (sdd is my SSD drive, ironic I know), it
doesn't show block sizes.  It returns nothing.

root@fireball / # smartctl -x /dev/sdd  | grep -A2 'Supported LBA Sizes'
root@fireball / #

This is the FULL output, in case it is hidden somewhere and grep and I
can't find it.  Keep in mind, this is a blank drive with no partitions
or anything. 

root@fireball / # smartctl -x /dev/sdd
smartctl 7.3 2022-02-28 r5338 [x86_64-linux-5.14.15-gentoo] (local build)
Copyright (C) 2002-22, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Samsung based SSDs
Device Model:     Samsung SSD 870 EVO 500GB
Serial Number:    S6PWNXXXXXXXXXXX
LU WWN Device Id: 5 002538 XXXXXXXXXX
Firmware Version: SVT01B6Q
User Capacity:    500,107,862,016 bytes [500 GB]
Sector Size:      512 bytes logical/physical
Rotation Rate:    Solid State Device
Form Factor:      2.5 inches
TRIM Command:     Available, deterministic, zeroed
Device is:        In smartctl database 7.3/5440
ATA Version is:   ACS-4 T13/BSR INCITS 529 revision 5
SATA Version is:  SATA 3.3, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Wed Apr 19 12:57:03 2023 CDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
AAM feature is:   Unavailable
APM feature is:   Unavailable
Rd look-ahead is: Enabled
Write cache is:   Enabled
DSN feature is:   Unavailable
ATA Security is:  Disabled, frozen [SEC2]
Wt Cache Reorder: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x80) Offline data collection activity
                                        was never started.
                                        Auto Offline Data Collection: Enabled.
Self-test execution status:      (   0) The previous self-test routine completed
                                        without error or no self-test has ever
                                        been run.
Total time to complete Offline
data collection:                (    0) seconds.
Offline data collection
capabilities:                    (0x53) SMART execute Offline immediate.
                                        Auto Offline data collection on/off support.
                                        Suspend Offline collection upon new
                                        command.
                                        No Offline surface scan supported.
                                        Self-test supported.
                                        No Conveyance Self-test supported.
                                        Selective Self-test supported.
SMART capabilities:            (0x0003) Saves SMART data before entering
                                        power-saving mode.
                                        Supports SMART auto save timer.
Error logging capability:        (0x01) Error logging supported.
                                        General Purpose Logging supported.
Short self-test routine
recommended polling time:        (   2) minutes.
Extended self-test routine
recommended polling time:        (  85) minutes.
SCT capabilities:              (0x003d) SCT Status supported.
                                        SCT Error Recovery Control supported.
                                        SCT Feature Control supported.
                                        SCT Data Table supported.

SMART Attributes Data Structure revision number: 1
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAGS    VALUE WORST THRESH FAIL RAW_VALUE
  5 Reallocated_Sector_Ct   PO--CK   100   100   010    -    0
  9 Power_On_Hours          -O--CK   099   099   000    -    75
 12 Power_Cycle_Count       -O--CK   099   099   000    -    3
177 Wear_Leveling_Count     PO--C-   100   100   000    -    0
179 Used_Rsvd_Blk_Cnt_Tot   PO--C-   100   100   010    -    0
181 Program_Fail_Cnt_Total  -O--CK   100   100   010    -    0
182 Erase_Fail_Count_Total  -O--CK   100   100   010    -    0
183 Runtime_Bad_Block       PO--C-   100   100   010    -    0
187 Uncorrectable_Error_Cnt -O--CK   100   100   000    -    0
190 Airflow_Temperature_Cel -O--CK   077   069   000    -    23
195 ECC_Error_Rate          -O-RC-   200   200   000    -    0
199 CRC_Error_Count         -OSRCK   100   100   000    -    0
235 POR_Recovery_Count      -O--C-   099   099   000    -    1
241 Total_LBAs_Written      -O--CK   100   100   000    -    0
                            ||||||_ K auto-keep
                            |||||__ C event count
                            ||||___ R error rate
                            |||____ S speed/performance
                            ||_____ O updated online
                            |______ P prefailure warning

General Purpose Log Directory Version 1
SMART           Log Directory Version 1 [multi-sector log support]
Address    Access  R/W   Size  Description
0x00       GPL,SL  R/O      1  Log Directory
0x01           SL  R/O      1  Summary SMART error log
0x02           SL  R/O      1  Comprehensive SMART error log
0x03       GPL     R/O      1  Ext. Comprehensive SMART error log
0x04       GPL,SL  R/O      8  Device Statistics log
0x06           SL  R/O      1  SMART self-test log
0x07       GPL     R/O      1  Extended self-test log
0x09           SL  R/W      1  Selective self-test log
0x10       GPL     R/O      1  NCQ Command Error log
0x11       GPL     R/O      1  SATA Phy Event Counters log
0x13       GPL     R/O      1  SATA NCQ Send and Receive log
0x30       GPL,SL  R/O      9  IDENTIFY DEVICE data log
0x80-0x9f  GPL,SL  R/W     16  Host vendor specific log
0xa1           SL  VS      16  Device vendor specific log
0xa5           SL  VS      16  Device vendor specific log
0xce           SL  VS      16  Device vendor specific log
0xe0       GPL,SL  R/W      1  SCT Command/Status
0xe1       GPL,SL  R/W      1  SCT Data Transfer

SMART Extended Comprehensive Error Log Version: 1 (1 sectors)
No Errors Logged

SMART Extended Self-test Log Version: 1 (1 sectors)
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short offline       Completed without error       00%             74         -
# 2  Short offline       Completed without error       00%             50         -
# 3  Short offline       Completed without error       00%             26         -
# 4  Short offline       Completed without error       00%              2         -

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
  256        0    65535  Read_scanning was never started
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.

SCT Status Version:                  3
SCT Version (vendor specific):       256 (0x0100)
Device State:                        Active (0)
Current Temperature:                    23 Celsius
Power Cycle Min/Max Temperature:     20/40 Celsius
Lifetime    Min/Max Temperature:     20/40 Celsius
Specified Max Operating Temperature:    70 Celsius
Under/Over Temperature Limit Count:   0/0
SMART Status:                        0xc24f (PASSED)

SCT Temperature History Version:     2
Temperature Sampling Period:         10 minutes
Temperature Logging Interval:        10 minutes
Min/Max recommended Temperature:      0/70 Celsius
Min/Max Temperature Limit:            0/70 Celsius
Temperature History Size (Index):    128 (80)

Index    Estimated Time   Temperature Celsius
  81    2023-04-18 15:40    23  ****
 ...    ..(  2 skipped).    ..  ****
  84    2023-04-18 16:10    23  ****
  85    2023-04-18 16:20    24  *****
  86    2023-04-18 16:30    24  *****
  87    2023-04-18 16:40    24  *****
  88    2023-04-18 16:50    23  ****
  89    2023-04-18 17:00    23  ****
  90    2023-04-18 17:10    24  *****
 ...    ..(  2 skipped).    ..  *****
  93    2023-04-18 17:40    24  *****
  94    2023-04-18 17:50    23  ****
  95    2023-04-18 18:00    24  *****
  96    2023-04-18 18:10    24  *****
  97    2023-04-18 18:20    24  *****
  98    2023-04-18 18:30    23  ****
  99    2023-04-18 18:40    24  *****
 100    2023-04-18 18:50    24  *****
 101    2023-04-18 19:00    24  *****
 102    2023-04-18 19:10    23  ****
 103    2023-04-18 19:20    24  *****
 104    2023-04-18 19:30    23  ****
 105    2023-04-18 19:40    24  *****
 ...    ..( 15 skipped).    ..  *****
 121    2023-04-18 22:20    24  *****
 122    2023-04-18 22:30    23  ****
 ...    ..(  5 skipped).    ..  ****
   0    2023-04-18 23:30    23  ****
   1    2023-04-18 23:40    22  ***
   2    2023-04-18 23:50    22  ***
   3    2023-04-19 00:00    23  ****
   4    2023-04-19 00:10    22  ***
   5    2023-04-19 00:20    23  ****
 ...    ..( 22 skipped).    ..  ****
  28    2023-04-19 04:10    23  ****
  29    2023-04-19 04:20    22  ***
 ...    ..( 30 skipped).    ..  ***
  60    2023-04-19 09:30    22  ***
  61    2023-04-19 09:40    21  **
  62    2023-04-19 09:50    21  **
  63    2023-04-19 10:00    22  ***
 ...    ..(  7 skipped).    ..  ***
  71    2023-04-19 11:20    22  ***
  72    2023-04-19 11:30    23  ****
 ...    ..(  2 skipped).    ..  ****
  75    2023-04-19 12:00    23  ****
  76    2023-04-19 12:10    25  ******
  77    2023-04-19 12:20    23  ****
 ...    ..(  2 skipped).    ..  ****
  80    2023-04-19 12:50    23  ****

SCT Error Recovery Control:
           Read: Disabled
          Write: Disabled

Device Statistics (GP Log 0x04)
Page  Offset Size        Value Flags Description
0x01  =====  =               =  ===  == General Statistics (rev 1) ==
0x01  0x008  4               3  ---  Lifetime Power-On Resets
0x01  0x010  4              75  ---  Power-on Hours
0x01  0x018  6               0  ---  Logical Sectors Written
0x01  0x020  6               0  ---  Number of Write Commands
0x01  0x028  6           22176  ---  Logical Sectors Read
0x01  0x030  6             450  ---  Number of Read Commands
0x01  0x038  6         1679000  ---  Date and Time TimeStamp
0x04  =====  =               =  ===  == General Errors Statistics (rev 1) ==
0x04  0x008  4               0  ---  Number of Reported Uncorrectable Errors
0x04  0x010  4               0  ---  Resets Between Cmd Acceptance and Completion
0x05  =====  =               =  ===  == Temperature Statistics (rev 1) ==
0x05  0x008  1              23  ---  Current Temperature
0x05  0x020  1              40  ---  Highest Temperature
0x05  0x028  1              20  ---  Lowest Temperature
0x05  0x058  1              70  ---  Specified Maximum Operating Temperature
0x06  =====  =               =  ===  == Transport Statistics (rev 1) ==
0x06  0x008  4               4  ---  Number of Hardware Resets
0x06  0x010  4               0  ---  Number of ASR Events
0x06  0x018  4               0  ---  Number of Interface CRC Errors
0x07  =====  =               =  ===  == Solid State Device Statistics (rev 1) ==
0x07  0x008  1               0  N--  Percentage Used Endurance Indicator
                                |||_ C monitored condition met
                                ||__ D supports DSN
                                |___ N normalized value

Pending Defects log (GP Log 0x0c) not supported

SATA Phy Event Counters (GP Log 0x11)
ID      Size     Value  Description
0x0001  2            0  Command failed due to ICRC error
0x0002  2            0  R_ERR response for data FIS
0x0003  2            0  R_ERR response for device-to-host data FIS
0x0004  2            0  R_ERR response for host-to-device data FIS
0x0005  2            0  R_ERR response for non-data FIS
0x0006  2            0  R_ERR response for device-to-host non-data FIS
0x0007  2            0  R_ERR response for host-to-device non-data FIS
0x0008  2            0  Device-to-host non-data FIS retries
0x0009  2            9  Transition from drive PhyRdy to drive PhyNRdy
0x000a  2            4  Device-to-host register FISes sent due to a COMRESET
0x000b  2            0  CRC errors within host-to-device FIS
0x000d  2            0  Non-CRC errors within host-to-device FIS
0x000f  2            0  R_ERR response for host-to-device data FIS, CRC
0x0010  2            0  R_ERR response for host-to-device data FIS, non-CRC
0x0012  2            0  R_ERR response for host-to-device non-data FIS, CRC
0x0013  2            0  R_ERR response for host-to-device non-data FIS,
non-CRC

root@fireball / #


Do you see any clues in there?  I'm thinking about just leaving it at the
default tho.  It seems to work for others.  Surely mine isn't that
unique.  lol 

Dale

:-)  :-) 

P. S.  I edited the serial number parts.  ;-) 


^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [gentoo-user] Re: Finally got a SSD drive to put my OS on
  2023-04-19 17:59             ` Dale
@ 2023-04-19 18:13               ` Mark Knecht
  2023-04-19 19:26                 ` Dale
  2023-04-20  9:46               ` Peter Humphrey
  1 sibling, 1 reply; 67+ messages in thread
From: Mark Knecht @ 2023-04-19 18:13 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 1968 bytes --]

On Wed, Apr 19, 2023 at 10:59 AM Dale <rdalek1967@gmail.com> wrote:
>
> Peter Humphrey wrote:
> > On Wednesday, 19 April 2023 09:00:33 BST Nikos Chantziaras wrote:
> >
> >> With my HDD:
> >>
> >>    # smartctl -x /dev/sda | grep -i 'sector size'
> >>    Sector Sizes:     512 bytes logical, 4096 bytes physical
> > Or, with an NVMe drive:
> >
> > # smartctl -x /dev/nvme1n1 | grep -A2 'Supported LBA Sizes'
> > Supported LBA Sizes (NSID 0x1)
> > Id Fmt  Data  Metadt  Rel_Perf
> >  0 +     512       0         0
> >
> > :)
> >
>
> When I run that command, sdd is my SSD drive, ironic I know.  Anyway, it
> doesn't show block sizes.  It returns nothing.
>
> root@fireball / # smartctl -x /dev/sdd  | grep -A2 'Supported LBA Sizes'
> root@fireball / #

Note that all of these technologies (HDD, SSD, M.2) report different things
and don't always report them the same way. This is an SSD in my
Plex backup server:

mark@science:~$ sudo smartctl -x /dev/sdb
[sudo] password for mark:
smartctl 7.2 2020-12-30 r5155 [x86_64-linux-5.15.0-69-generic] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Crucial/Micron Client SSDs
Device Model:     CT250MX500SSD1
Serial Number:    1905E1E79C72
LU WWN Device Id: 5 00a075 1e1e79c72
Firmware Version: M3CR023
User Capacity:    250,059,350,016 bytes [250 GB]
Sector Sizes:     512 bytes logical, 4096 bytes physical

In my case the physical block is 4096 bytes but
addressable in 512-byte blocks. It appears that
yours uses 512-byte physical blocks.
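Independent of smartctl, the kernel's view of both sector sizes can be read with blockdev from util-linux; the device name here is an assumption:

```shell
# Logical then physical sector size, in bytes (needs root).
blockdev --getss --getpbsz /dev/sdd
# On SATA drives smartctl prints the same information in the
# INFORMATION SECTION rather than in an LBA-format table:
smartctl -i /dev/sdd | grep -i 'sector size'
```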

[QUOTE]
=== START OF INFORMATION SECTION ===
Model Family:     Samsung based SSDs
Device Model:     Samsung SSD 870 EVO 500GB
Serial Number:    S6PWNXXXXXXXXXXX
LU WWN Device Id: 5 002538 XXXXXXXXXX
Firmware Version: SVT01B6Q
User Capacity:    500,107,862,016 bytes [500 GB]
Sector Size:      512 bytes logical/physical
[/QUOTE]

[-- Attachment #2: Type: text/html, Size: 2520 bytes --]

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [gentoo-user] Re: Finally got a SSD drive to put my OS on
  2023-04-19 18:13               ` Mark Knecht
@ 2023-04-19 19:26                 ` Dale
  2023-04-19 19:38                   ` Nikos Chantziaras
  0 siblings, 1 reply; 67+ messages in thread
From: Dale @ 2023-04-19 19:26 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 2482 bytes --]

Mark Knecht wrote:
>
>
> On Wed, Apr 19, 2023 at 10:59 AM Dale <rdalek1967@gmail.com
> <mailto:rdalek1967@gmail.com>> wrote:
> >
> > Peter Humphrey wrote:
> > > On Wednesday, 19 April 2023 09:00:33 BST Nikos Chantziaras wrote:
> > >
> > >> With my HDD:
> > >>
> > >>    # smartctl -x /dev/sda | grep -i 'sector size'
> > >>    Sector Sizes:     512 bytes logical, 4096 bytes physical
> > > Or, with an NVMe drive:
> > >
> > > # smartctl -x /dev/nvme1n1 | grep -A2 'Supported LBA Sizes'
> > > Supported LBA Sizes (NSID 0x1)
> > > Id Fmt  Data  Metadt  Rel_Perf
> > >  0 +     512       0         0
> > >
> > > :)
> > >
> >
> > When I run that command, sdd is my SSD drive, ironic I know.  Anyway, it
> > doesn't show block sizes.  It returns nothing.
> >
> > root@fireball / # smartctl -x /dev/sdd  | grep -A2 'Supported LBA Sizes'
> > root@fireball / #
>
> Note that all of these technologies, HDD, SDD, M.2, report different
> things
> and don't always report them the same way. This is an SDD in my 
> Plex backup server:
>
> mark@science:~$ sudo smartctl -x /dev/sdb
> [sudo] password for mark:  
> smartctl 7.2 2020-12-30 r5155 [x86_64-linux-5.15.0-69-generic] (local
> build)
> Copyright (C) 2002-20, Bruce Allen, Christian Franke,
> www.smartmontools.org <http://www.smartmontools.org>
>
> === START OF INFORMATION SECTION ===
> Model Family:     Crucial/Micron Client SSDs
> Device Model:     CT250MX500SSD1
> Serial Number:    1905E1E79C72
> LU WWN Device Id: 5 00a075 1e1e79c72
> Firmware Version: M3CR023
> User Capacity:    250,059,350,016 bytes [250 GB]
> Sector Sizes:     512 bytes logical, 4096 bytes physical
>
> In my case the physical block is 4096 bytes but 
> addressable in 512 byte blocks. It appears that
> yours is 512 byte physical blocks.
>
> [QUOTE]
> === START OF INFORMATION SECTION ===
> Model Family:     Samsung based SSDs
> Device Model:     Samsung SSD 870 EVO 500GB
> Serial Number:    S6PWNXXXXXXXXXXX
> LU WWN Device Id: 5 002538 XXXXXXXXXX
> Firmware Version: SVT01B6Q
> User Capacity:    500,107,862,016 bytes [500 GB]
> Sector Size:      512 bytes logical/physica
> [QUOTE]


So for future reference, let it format with the default?  I'm also
curious whether, when it creates the file system, it will notice this and
adjust automatically.  It might.  Maybe?

Dale

:-)  :-) 

P. S. Dang squirrels got in my greenhouse and dug up my seedlings. 
Squirrel hunting is next on my agenda.  :-@

[-- Attachment #2: Type: text/html, Size: 4153 bytes --]

^ permalink raw reply	[flat|nested] 67+ messages in thread

* [gentoo-user] Re: Finally got a SSD drive to put my OS on
  2023-04-19 19:26                 ` Dale
@ 2023-04-19 19:38                   ` Nikos Chantziaras
  2023-04-19 20:00                     ` Mark Knecht
  0 siblings, 1 reply; 67+ messages in thread
From: Nikos Chantziaras @ 2023-04-19 19:38 UTC (permalink / raw
  To: gentoo-user

On 19/04/2023 22:26, Dale wrote:
> So for future reference, let it format with the default?  I'm also 
> curious if when it creates the file system it will notice this and 
> adjust automatically. It might.  Maybe?

AFAIK, SSDs will internally convert to 4096 in their firmware even if 
they report a physical sector size of 512 through SMART. Just a 
compatibility thing. So formatting with 4096 is fine and gets rid of the 
internal conversion.

I believe Windows always uses 4096 by default and thus it's reasonable 
to assume that most SSDs are aware of that.
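On ext4 that simply means keeping (or forcing) the default 4096-byte block size. A sketch with an assumed partition name; mkfs destroys the partition's contents:

```shell
# mke2fs picks a 4096-byte block on its own for anything non-tiny,
# but it can be forced explicitly:
mkfs.ext4 -b 4096 /dev/sdd1
# Confirm afterwards:
tune2fs -l /dev/sdd1 | grep 'Block size'
```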



^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [gentoo-user] Re: Finally got a SSD drive to put my OS on
  2023-04-19 19:38                   ` Nikos Chantziaras
@ 2023-04-19 20:00                     ` Mark Knecht
  2023-04-19 22:13                       ` Frank Steinmetzger
  0 siblings, 1 reply; 67+ messages in thread
From: Mark Knecht @ 2023-04-19 20:00 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 2532 bytes --]

On Wed, Apr 19, 2023 at 12:39 PM Nikos Chantziaras <realnc@gmail.com> wrote:
>
> On 19/04/2023 22:26, Dale wrote:
> > So for future reference, let it format with the default?  I'm also
> > curious if when it creates the file system it will notice this and
> > adjust automatically. It might.  Maybe?
>
> AFAIK, SSDs will internally convert to 4096 in their firmware even if
> they report a physical sector size of 512 through SMART. Just a
> compatibility thing. So formatting with 4096 is fine and gets rid of the
> internal conversion.

I suspect this is right, or has been mostly right in the past.

I think technically they default to the physical block size internally
and the earlier ones, attempting to be more compatible with HDDs,
had 4K blocks. Some of the newer chips now have 16K blocks but
still support 512B Logical Block Addressing.

All of these devices are essentially small computers. They have internal
controllers and DRAM caches, usually in the 1-2GB sort of range but getting
larger. The bus speeds they quote are achievable because data is moving for
the most part in and out of the cache in the drive.

In Dale's case, if he has a 4K file system block size then it's going to
send 4K to the drive, and the drive will perform eight 512-byte writes to
put it in flash.

If I have the same 4K file system block size, I send 4K to the drive, but
my physical block size is 4K, so it's a single write cycle to get it
into flash.

What I *think* is true is that any time your file system block size is
smaller than the physical block size on the storage element then
simplistically you have the risk of write amplification.
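Partition alignment feeds into the same risk: if a partition does not start on a physical-block boundary, every 4K filesystem block straddles two physical blocks. A quick sketch of a check, with device and partition names as assumptions:

```shell
# Start LBA of the partition, from sysfs.
start=$(cat /sys/block/sdd/sdd1/start)
# How many logical sectors make up one physical sector.
ratio=$(( $(blockdev --getpbsz /dev/sdd) / $(blockdev --getss /dev/sdd) ))
# Aligned if the start LBA is a multiple of that ratio.
if [ $(( start % ratio )) -eq 0 ]; then echo aligned; else echo misaligned; fi
```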

What I know I'm not sure about is how inodes factor into this.

For instance:

mark@science2:~$ ls -i
35790149  000_NOT_BACKED_UP
33320794  All_Files.txt
33337840  All_Sizes_2.txt
33337952  All_Sizes.txt
33329818  All_Sorted.txt
33306743  ardour_deps_install.sh
33309917  ardour_deps_remove.sh
33557560  Arena_Chess
33423859  Astro_Data
33560973  Astronomy
33423886  Astro_science
33307443 'Backup codes - Login.gov.pdf'
33329080  basic-install.sh
33558634  bin
33561132  biosim4_functions.txt
33316157  Boot_Config.txt
33560975  Builder
33338822  CFL_88_F_Bright_Syn.xsc

If the inodes are on the disk then how are they
stored? Does a single inode occupy a physical
block? A 512 byte LBA? Something else?

I have no clue.

>
> I believe Windows always uses 4096 by default and thus it's reasonable
> to assume that most SSDs are aware of that.
>

[-- Attachment #2: Type: text/html, Size: 3128 bytes --]

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [gentoo-user] Re: Finally got a SSD drive to put my OS on
  2023-04-19 20:00                     ` Mark Knecht
@ 2023-04-19 22:13                       ` Frank Steinmetzger
  2023-04-19 23:32                         ` Dale
  0 siblings, 1 reply; 67+ messages in thread
From: Frank Steinmetzger @ 2023-04-19 22:13 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 3708 bytes --]

Am Wed, Apr 19, 2023 at 01:00:33PM -0700 schrieb Mark Knecht:


> I think technically they default to the physical block size internally
> and the earlier ones, attempting to be more compatible with HDDs,
> had 4K blocks. Some of the newer chips now have 16K blocks but
> still support 512B Logical Block Addressing.
> 
> All of these devices are essentially small computers. They have internal
> controllers, DRAM caches usually in the 1-2GB sort of range but getting
> larger.

Actually, cheap(er) SSDs don’t have their own DRAM, but rely on the host’s 
for this. There is an ongoing debate in tech forums whether that is a bad thing 
or not. A RAM cache can help optimise writes by caching many small writes 
and aggregating them into larger blocks.

> The bus speeds they quote is because data is moving for the most
> part in and out of cache in the drive.

Are you talking about the pseudo SLC cache? Because AFAIK the DRAM cache has 
no influence on read performance.

> What I know I'm not sure about is how inodes factor into this.
> 
> For instance:
> 
> mark@science2:~$ ls -i
> 35790149  000_NOT_BACKED_UP
> 33320794  All_Files.txt
> 33337840  All_Sizes_2.txt
> 33337952  All_Sizes.txt
> 33329818  All_Sorted.txt
> 33306743  ardour_deps_install.sh
> 33309917  ardour_deps_remove.sh
> 33557560  Arena_Chess
> 33423859  Astro_Data
> 33560973  Astronomy
> 33423886  Astro_science
> 33307443 'Backup codes - Login.gov.pdf'
> 33329080  basic-install.sh
> 33558634  bin
> 33561132  biosim4_functions.txt
> 33316157  Boot_Config.txt
> 33560975  Builder
> 33338822  CFL_88_F_Bright_Syn.xsc
> 
> If the inodes are on the disk then how are they
> stored? Does a single inode occupy a physical
> block? A 512 byte LBA? Something else?

man mkfs.ext4 says:
[…] the default inode size is 256 bytes for most file systems, except for 
small file systems where the inode size will be 128 bytes. […]

And if a file is small enough, it can actually fit inside the inode itself, 
saving the expense of another FS sector.


When formatting file systems, I usually lower the number of inodes from the 
default value to gain storage space. The default is one inode per 16 kB of 
FS size, which gives you 60 million inodes per TB. In practice, even one 
million per TB would be overkill in a use case like Dale’s media storage.¹ 
Removing those 59 million extra inodes saves 59 million × 256 bytes ≈ 15 GB of 
net space for each TB, not counting extra control metadata and ext4 redundancies.

The defaults are set in /etc/mke2fs.conf. It also contains some alternative 
values of bytes-per-inode for certain usage types. The type largefile 
allocates one inode per 1 MB, giving you 1 million inodes per TB of space. 
Since ext4 is much more efficient with inodes than ext3, it is even content 
with 4 MB per inode (type largefile4), giving you 250 k inodes per TB.
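As a concrete sketch of those knobs (partition name assumed; mkfs erases its contents):

```shell
# One inode per 1 MiB, i.e. roughly one million inodes per TB:
mkfs.ext4 -i 1048576 /dev/sdd1
# Equivalent via the usage types defined in /etc/mke2fs.conf:
#   mkfs.ext4 -T largefile /dev/sdd1
# Check what was actually allocated:
tune2fs -l /dev/sdd1 | grep -i 'inode count'
```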

For root partitions, I tend to allocate 1 million inodes, maybe some more 
for a full Gentoo-based desktop due to the portage tree’s sheer number of 
small files. My Surface Go’s root (Arch linux, KDE and some texlive) uses 
500 k right now.


¹ Assuming one inode equals one directory or unfragmented file on ext4.
I’m not sure what the allocation size limit for one inode is, but it is 
*very* large. Ext3 had a rather low limit, which is why it was so slow with 
big files. But that was one of the big improvements in ext4’s extended 
inodes, at the cost of double inode size to house the required metadata.

-- 
Grüße | Greetings | Qapla’
Please do not share anything from, with or about me on any social network.

FINE: Tax for doing wrong.  TAX: Fine for doing fine.

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [gentoo-user] Re: Finally got a SSD drive to put my OS on
  2023-04-19 22:13                       ` Frank Steinmetzger
@ 2023-04-19 23:32                         ` Dale
  2023-04-20  1:09                           ` Mark Knecht
  2023-04-20  8:52                           ` Frank Steinmetzger
  0 siblings, 2 replies; 67+ messages in thread
From: Dale @ 2023-04-19 23:32 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 4131 bytes --]

Frank Steinmetzger wrote:
> <<<SNIP>>>
>
> When formatting file systems, I usually lower the number of inodes from the 
> default value to gain storage space. The default is one inode per 16 kB of 
> FS size, which gives you 60 million inodes per TB. In practice, even one 
> million per TB would be overkill in a use case like Dale’s media storage.¹ 
> Removing 59 million inodes × 256 bytes ≈ 15 GB of net space for each TB, not 
> counting extra control metadata and ext4 redundancies.
>
> The defaults are set in /etc/mke2fs.conf. It also contains some alternative 
> values of bytes-per-inode for certain usage types. The type largefile 
> allocates one inode per 1 MB, giving you 1 million inodes per TB of space. 
> Since ext4 is much more efficient with inodes than ext3, it is even content 
> with 4 MB per inode (type largefile4), giving you 250 k inodes per TB.
>
> For root partitions, I tend to allocate 1 million inodes, maybe some more 
> for a full Gentoo-based desktop due to the portage tree’s sheer number of 
> small files. My Surface Go’s root (Arch linux, KDE and some texlive) uses 
> 500 k right now.
>
>
> ¹ Assuming one inode equals one directory or unfragmented file on ext4.
> I’m not sure what the allocation size limit for one inode is, but it is 
> *very* large. Ext3 had a rather low limit, which is why it was so slow with 
> big files. But that was one of the big improvements in ext4’s extended 
> inodes, at the cost of double inode size to house the required metadata.
>


This is interesting.  I have been buying 16TB drives recently.  After
all, with this fiber connection and me using torrents, I can fill up a
drive pretty fast, but I am slowing down as I'm no longer needing to
find more stuff to download.  Even 10GB per TB can add up.  For a 16TB
drive, that's 160GBs at least.  That's quite a few videos.  I didn't
realize it added up that fast.  Percentage wise it isn't a lot but given
the size of the drives, it does add up quick.  If I ever rearrange my
drives again and can change the file system, I may reduce the inodes at
least on the ones I only have large files on.  Still tho, given I use
LVM and all, maybe that isn't a great idea.  As I add drives with LVM, I
assume it increases the inodes as well.  If so, then reducing inodes
should be OK.  If not, I might keep adding drives until the file system
runs out of inodes even with only large files on it.  I suspect it adds
inodes when I expand the file system tho, so I can adjust without
worrying about it.  I just have to set it when I first create the file
system, I guess.

This is my current drive setup. 


root@fireball / # pvs -O vg_name
  PV         VG     Fmt  Attr PSize    PFree
  /dev/sda7  OS     lvm2 a--  <124.46g 21.39g
  /dev/sdf1  backup lvm2 a--   698.63g     0
  /dev/sde1  crypt  lvm2 a--    14.55t     0
  /dev/sdb1  crypt  lvm2 a--    14.55t     0
  /dev/sdh1  datavg lvm2 a--    12.73t     0
  /dev/sdc1  datavg lvm2 a--    <9.10t     0
  /dev/sdi1  home   lvm2 a--    <7.28t     0
root@fireball / #


The one marked crypt is the one that is mostly large video files.  The
one marked datavg is where I store torrents.  Let's not delve too deep
into that tho.  ;-)  As you can see, crypt has two 16TB drives now and
I'm about 90% full.  I plan to expand next month if possible.  It'll be
another 16TB drive when I do.  So, that will be three 16TB drives. 
About 43TBs.  Little math, 430GB of space for inodes.  That added up
quick. 

I wonder.  Is there a way to find out the smallest size file in a
directory or sub directory, largest files, then maybe an average file
size???  I thought about du but given the number of files I have here,
it would be a really HUGE list of files.  Could take hours or more too. 
This is what KDE properties shows.

26.1 TiB (28,700,020,905,777)

55,619 files, 1,145 sub-folders

Little math. Average file size is about 516 MB. So, I wonder what all could be
changed and not risk anything??? I wonder if that is accurate enough???
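
The shell can do that math directly with the byte count KDE reports above:

```shell
# Average file size from the totals above: bytes / files.
total=28700020905777
files=55619
echo "average: $((total / files)) bytes"   # about 516 MB (492 MiB)
```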

Interesting info.

Dale

:-) :-)


[-- Attachment #2: Type: text/html, Size: 5583 bytes --]


* Re: [gentoo-user] Re: Finally got a SSD drive to put my OS on
  2023-04-19 23:32                         ` Dale
@ 2023-04-20  1:09                           ` Mark Knecht
  2023-04-20  4:23                             ` Dale
  2023-04-20  8:55                             ` Frank Steinmetzger
  2023-04-20  8:52                           ` Frank Steinmetzger
  1 sibling, 2 replies; 67+ messages in thread
From: Mark Knecht @ 2023-04-20  1:09 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 718 bytes --]

> I wonder.  Is there a way to find out the smallest size file in a
directory or sub directory, largest files, then maybe a average file
size???  I thought about du but given the number of files I have here, it
would be a really HUGE list of files.  Could take hours or more too.  This
is what KDE properties shows.

I'm sure there are more accurate ways but

sudo ls -R / | wc

gives you the number of lines returned from the ls command. It's not perfect
as there are blank lines in the ls output, but it's a start.

My desktop machine has about 2.2M files.

Again, there are going to be folks who can tell you how to remove blank
lines and other cruft but it's a start.

Only takes a minute to run on my Ryzen 9 5950X. YMMV.

[-- Attachment #2: Type: text/html, Size: 959 bytes --]


* Re: [gentoo-user] Re: Finally got a SSD drive to put my OS on
  2023-04-20  1:09                           ` Mark Knecht
@ 2023-04-20  4:23                             ` Dale
  2023-04-20  4:41                               ` eric
  2023-04-20 23:02                               ` Wol
  2023-04-20  8:55                             ` Frank Steinmetzger
  1 sibling, 2 replies; 67+ messages in thread
From: Dale @ 2023-04-20  4:23 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 1452 bytes --]

Mark Knecht wrote:
>
> > I wonder.  Is there a way to find out the smallest size file in a
> directory or sub directory, largest files, then maybe a average file
> size???  I thought about du but given the number of files I have here,
> it would be a really HUGE list of files.  Could take hours or more
> too.  This is what KDE properties shows.
>
> I'm sure there are more accurate ways but 
>
> sudo ls -R / | wc
>
> give you the number of lines returned from the ls command. It's not
> perfect as there are blank lines in the ls but it's a start.
>
> My desktop machine has about 2.2M files.
>
> Again, there are going to be folks who can tell you how to remove
> blank lines and other cruft but it's a start.
>
> Only takes a minute to run on my Ryzen 9 5950X. YMMV.
>

I did a right click on the directory in Dolphin and selected
properties.  It told me there is a little over 55,000 files.  Some 1,100
directories, not sure if directories use inodes or not.  Basically,
there is a little over 56,000 somethings on that file system.  I was
curious what the smallest file is and the largest.  No idea how to find
that really.  Even du separates by directory not individual files
regardless of directory.  At least the way I use it anyway. 

If I ever have to move things around again, I'll likely start a thread
just for figuring out the setting for inodes.  I'll likely know more
about the number of files too. 

Dale

:-)  :-) 

[-- Attachment #2: Type: text/html, Size: 2595 bytes --]


* Re: [gentoo-user] Re: Finally got a SSD drive to put my OS on
  2023-04-20  4:23                             ` Dale
@ 2023-04-20  4:41                               ` eric
  2023-04-20  9:48                                 ` Dale
  2023-04-20 23:02                               ` Wol
  1 sibling, 1 reply; 67+ messages in thread
From: eric @ 2023-04-20  4:41 UTC (permalink / raw
  To: gentoo-user

On 4/19/23 21:23, Dale wrote:
> Mark Knecht wrote:
>>
>> > I wonder.  Is there a way to find out the smallest size file in a 
>> directory or sub directory, largest files, then maybe a average file 
>> size???  I thought about du but given the number of files I have here, 
>> it would be a really HUGE list of files. Could take hours or more 
>> too.  This is what KDE properties shows.
>>
>> I'm sure there are more accurate ways but
>>
>> sudo ls -R / | wc
>>
>> give you the number of lines returned from the ls command. It's not 
>> perfect as there are blank lines in the ls but it's a start.
>>
>> My desktop machine has about 2.2M files.
>>
>> Again, there are going to be folks who can tell you how to remove 
>> blank lines and other cruft but it's a start.
>>
>> Only takes a minute to run on my Ryzen 9 5950X. YMMV.
>>
> 
> I did a right click on the directory in Dolphin and selected 
> properties.  It told me there is a little over 55,000 files.  Some 1,100 
> directories, not sure if directories use inodes or not. Basically, there 
> is a little over 56,000 somethings on that file system.  I was curious 
> what the smallest file is and the largest. No idea how to find that 
> really.  Even du separates by directory not individual files regardless 
> of directory.  At least the way I use it anyway.
> 
> If I ever have to move things around again, I'll likely start a thread 
> just for figuring out the setting for inodes.  I'll likely know more 
> about the number of files too.
> 
> Dale
> 
> :-)  :-)

If you do not mind using graphical solutions, Filelight can help you 
easily visualize where your largest directories and files are residing.

https://packages.gentoo.org/packages/kde-apps/filelight

> Visualise disk usage with interactive map of concentric, segmented rings 

Eric



* Re: [gentoo-user] Re: Finally got a SSD drive to put my OS on
  2023-04-19 23:32                         ` Dale
  2023-04-20  1:09                           ` Mark Knecht
@ 2023-04-20  8:52                           ` Frank Steinmetzger
  2023-04-20  9:29                             ` Dale
  1 sibling, 1 reply; 67+ messages in thread
From: Frank Steinmetzger @ 2023-04-20  8:52 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 2273 bytes --]

Am Wed, Apr 19, 2023 at 06:32:45PM -0500 schrieb Dale:
> Frank Steinmetzger wrote:
> > <<<SNIP>>>
> >
> > When formatting file systems, I usually lower the number of inodes from the 
> > default value to gain storage space. The default is one inode per 16 kB of 
> > FS size, which gives you 60 million inodes per TB. In practice, even one 
> > million per TB would be overkill in a use case like Dale’s media storage.¹ 
> > Removing 59 million inodes × 256 bytes ≈ 15 GB of net space for each TB, not 
> > counting extra control metadata and ext4 redundancies.
> 
> If I ever rearrange my
> drives again and can change the file system, I may reduce the inodes at
> least on the ones I only have large files on.  Still tho, given I use
> LVM and all, maybe that isn't a great idea.  As I add drives with LVM, I
> assume it increases the inodes as well.

I remember from yesterday that the manpage says that inodes are added 
according to the bytes-per-inode value.

> I wonder.  Is there a way to find out the smallest size file in a
> directory or sub directory, largest files, then maybe a average file
> size???

The 20 smallest:
`find -type f -print0 | xargs -0 stat -c '%s %n' | sort -n | head -n 20`

The 20 largest: either use tail instead of head or reverse sorting with -r.
You can also first pipe the output of stat into a file so you can sort and 
analyse the list more efficiently, including calculating averages.
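
For the averages, awk can fold the sizes up as they stream by (this assumes
GNU stat; on BSD it would be `stat -f '%z'`):

```shell
# Count files under the current directory and compute the average size:
find . -type f -print0 \
  | xargs -0 stat -c '%s' \
  | awk '{ sum += $1; n++ } END { if (n) printf "%d files, average %.1f MB\n", n, sum/n/1e6 }'
```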

> I thought about du but given the number of files I have here,
> it would be a really HUGE list of files.  Could take hours or more too. 

I use a “cache” of text files with file listings of all my external drives. 
This allows me to glance over my entire data storage without having to plug 
in any drive. It uses tree underneath to get the list:

`tree -afx -DFins --dirsfirst --du --timefmt "%Y-%m-%d %T"`

This gives me a list of all directories and files, with their full path, 
date and size information and accumulated directory size in a concise 
format. Add -pug to also include permissions.

-- 
Grüße | Greetings | Qapla’
Please do not share anything from, with or about me on any social network.

Computers are the most congenial product of human laziness to-date.

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]


* Re: [gentoo-user] Re: Finally got a SSD drive to put my OS on
  2023-04-20  1:09                           ` Mark Knecht
  2023-04-20  4:23                             ` Dale
@ 2023-04-20  8:55                             ` Frank Steinmetzger
  1 sibling, 0 replies; 67+ messages in thread
From: Frank Steinmetzger @ 2023-04-20  8:55 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 1378 bytes --]

Am Wed, Apr 19, 2023 at 06:09:15PM -0700 schrieb Mark Knecht:
> > I wonder.  Is there a way to find out the smallest size file in a
> directory or sub directory, largest files, then maybe a average file
> size???  I thought about du but given the number of files I have here, it
> would be a really HUGE list of files.  Could take hours or more too.  This
> is what KDE properties shows.
> 
> I'm sure there are more accurate ways but
> 
> sudo ls -R / | wc

Number of directories (not accounting for symlinks):
find -type d | wc -l

Number of files (not accounting for symlinks):
find -type f | wc -l
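
Both counts can also come from a single pass over the tree (GNU find’s
-printf is assumed here):

```shell
# One traversal, tallied by type: 'd' = directories, 'f' = regular files.
find . \( -type f -o -type d \) -printf '%y\n' | sort | uniq -c
```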

> give you the number of lines returned from the ls command. It's not perfect
> as there are blank lines in the ls but it's a start.
> 
> My desktop machine has about 2.2M files.
> 
> Again, there are going to be folks who can tell you how to remove blank
> lines and other cruft but it's a start.

Or not produce them in the first place. ;-)

> Only takes a minute to run on my Ryzen 9 5950X. YMMV.

It’s not a question of the processor, but of the storage device, and of your 
cache: the second run will probably not use the device at all.

-- 
Grüße | Greetings | Qapla’
Please do not share anything from, with or about me on any social network.

Bosses are like timpani: the more hollow they are, the louder they sound.

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]


* Re: [gentoo-user] Re: Finally got a SSD drive to put my OS on
  2023-04-20  8:52                           ` Frank Steinmetzger
@ 2023-04-20  9:29                             ` Dale
  2023-04-20 10:08                               ` Peter Humphrey
  2023-04-20 12:23                               ` Frank Steinmetzger
  0 siblings, 2 replies; 67+ messages in thread
From: Dale @ 2023-04-20  9:29 UTC (permalink / raw
  To: gentoo-user

Frank Steinmetzger wrote:
> Am Wed, Apr 19, 2023 at 06:32:45PM -0500 schrieb Dale:
>> Frank Steinmetzger wrote:
>>> <<<SNIP>>>
>>>
>>> When formatting file systems, I usually lower the number of inodes from the 
>>> default value to gain storage space. The default is one inode per 16 kB of 
>>> FS size, which gives you 60 million inodes per TB. In practice, even one 
>>> million per TB would be overkill in a use case like Dale’s media storage.¹ 
>>> Removing 59 million inodes × 256 bytes ≈ 15 GB of net space for each TB, not 
>>> counting extra control metadata and ext4 redundancies.
>> If I ever rearrange my
>> drives again and can change the file system, I may reduce the inodes at
>> least on the ones I only have large files on.  Still tho, given I use
>> LVM and all, maybe that isn't a great idea.  As I add drives with LVM, I
>> assume it increases the inodes as well.
> I remember from yesterday that the manpage says that inodes are added 
> according to the bytes-per-inode value.
>
>> I wonder.  Is there a way to find out the smallest size file in a
>> directory or sub directory, largest files, then maybe a average file
>> size???
> The 20 smallest:
> `find -type f -print0 | xargs -0 stat -c '%s %n' | sort -n | head -n 20`
>
> The 20 largest: either use tail instead of head or reverse sorting with -r.
> You can also first pipe the output of stat into a file so you can sort and 
> analyse the list more efficiently, including calculating averages.

When I first run this while in / itself, it occurred to me that it
doesn't specify what directory.  I thought maybe changing to the
directory I want it to look at would work but get this: 


root@fireball /home/dale/Desktop/Crypt # `find -type f -print0 | xargs
-0 stat -c '%s %n' | sort -n | head -n 20`
-bash: 2: command not found
root@fireball /home/dale/Desktop/Crypt #


It works if I'm in the / directory but not when I'm cd'd to the
directory I want to know about.  I don't see a spot to change it.  Ideas.

>> I thought about du but given the number of files I have here,
>> it would be a really HUGE list of files.  Could take hours or more too. 
> I use a “cache” of text files with file listings of all my external drives. 
> This allows me to glance over my entire data storage without having to plug 
> in any drive. It uses tree underneath to get the list:
>
> `tree -afx -DFins --dirsfirst --du --timefmt "%Y-%m-%d %T"`
>
> This gives me a list of all directories and files, with their full path, 
> date and size information and accumulated directory size in a concise 
> format. Add -pug to also include permissions.
>

Save this for later use.  ;-)

Dale

:-)  :-) 



* Re: [gentoo-user] Re: Finally got a SSD drive to put my OS on
  2023-04-19 17:59             ` Dale
  2023-04-19 18:13               ` Mark Knecht
@ 2023-04-20  9:46               ` Peter Humphrey
  2023-04-20  9:49                 ` Dale
  1 sibling, 1 reply; 67+ messages in thread
From: Peter Humphrey @ 2023-04-20  9:46 UTC (permalink / raw
  To: gentoo-user

On Wednesday, 19 April 2023 18:59:26 BST Dale wrote:
> Peter Humphrey wrote:
> > On Wednesday, 19 April 2023 09:00:33 BST Nikos Chantziaras wrote:
> >> With my HDD:
> >>    # smartctl -x /dev/sda | grep -i 'sector size'
> >>    Sector Sizes:     512 bytes logical, 4096 bytes physical
> > 
> > Or, with an NVMe drive:
> > 
> > # smartctl -x /dev/nvme1n1 | grep -A2 'Supported LBA Sizes'
> > Supported LBA Sizes (NSID 0x1)
> > Id Fmt  Data  Metadt  Rel_Perf
> > 
> >  0 +     512       0         0
> >  
> > :)
> 
> When I run that command, sdd is my SDD drive, ironic I know.  Anyway, it
> doesn't show block sizes.  It returns nothing.

I did say it was for an NVMe drive, Dale. If your drive was one of those, the 
kernel would have named it /dev/nvme0n1 or similar.

-- 
Regards,
Peter.






* Re: [gentoo-user] Re: Finally got a SSD drive to put my OS on
  2023-04-20  4:41                               ` eric
@ 2023-04-20  9:48                                 ` Dale
  0 siblings, 0 replies; 67+ messages in thread
From: Dale @ 2023-04-20  9:48 UTC (permalink / raw
  To: gentoo-user

eric wrote:
> On 4/19/23 21:23, Dale wrote:
>> Mark Knecht wrote:
>>>
>>> > I wonder.  Is there a way to find out the smallest size file in a
>>> directory or sub directory, largest files, then maybe a average file
>>> size???  I thought about du but given the number of files I have
>>> here, it would be a really HUGE list of files. Could take hours or
>>> more too.  This is what KDE properties shows.
>>>
>>> I'm sure there are more accurate ways but
>>>
>>> sudo ls -R / | wc
>>>
>>> give you the number of lines returned from the ls command. It's not
>>> perfect as there are blank lines in the ls but it's a start.
>>>
>>> My desktop machine has about 2.2M files.
>>>
>>> Again, there are going to be folks who can tell you how to remove
>>> blank lines and other cruft but it's a start.
>>>
>>> Only takes a minute to run on my Ryzen 9 5950X. YMMV.
>>>
>>
>> I did a right click on the directory in Dolphin and selected
>> properties.  It told me there is a little over 55,000 files.  Some
>> 1,100 directories, not sure if directories use inodes or not.
>> Basically, there is a little over 56,000 somethings on that file
>> system.  I was curious what the smallest file is and the largest. No
>> idea how to find that really.  Even du separates by directory not
>> individual files regardless of directory.  At least the way I use it
>> anyway.
>>
>> If I ever have to move things around again, I'll likely start a
>> thread just for figuring out the setting for inodes.  I'll likely
>> know more about the number of files too.
>>
>> Dale
>>
>> :-)  :-)
>
> If you do not mind using graphical solutions, Filelight can help you
> easily visualize where your largest directories and files are residing.
>
> https://packages.gentoo.org/packages/kde-apps/filelight
>
>> Visualise disk usage with interactive map of concentric, segmented rings 
>
> Eric
>

There used to be a KDE app that worked a bit like this.  I liked it but
I think it died.  I haven't seen it in ages, not long after the switch
from KDE3 to KDE4 I think.  Given the volume of files and the size of
the data, I wish I could zoom in sometimes.  Those little ones disappear. 

Thanks for that info. Nifty. 

Dale

:-)  :-) 



* Re: [gentoo-user] Re: Finally got a SSD drive to put my OS on
  2023-04-20  9:46               ` Peter Humphrey
@ 2023-04-20  9:49                 ` Dale
  0 siblings, 0 replies; 67+ messages in thread
From: Dale @ 2023-04-20  9:49 UTC (permalink / raw
  To: gentoo-user

Peter Humphrey wrote:
> On Wednesday, 19 April 2023 18:59:26 BST Dale wrote:
>> Peter Humphrey wrote:
>>> On Wednesday, 19 April 2023 09:00:33 BST Nikos Chantziaras wrote:
>>>> With my HDD:
>>>>    # smartctl -x /dev/sda | grep -i 'sector size'
>>>>    Sector Sizes:     512 bytes logical, 4096 bytes physical
>>> Or, with an NVMe drive:
>>>
>>> # smartctl -x /dev/nvme1n1 | grep -A2 'Supported LBA Sizes'
>>> Supported LBA Sizes (NSID 0x1)
>>> Id Fmt  Data  Metadt  Rel_Perf
>>>
>>>  0 +     512       0         0
>>>  
>>> :)
>> When I run that command, sdd is my SDD drive, ironic I know.  Anyway, it
>> doesn't show block sizes.  It returns nothing.
> I did say it was for an NVMe drive, Dale. If your drive was one of those, the 
> kernel would have named it /dev/nvme0n1 or similar.
>

Well, I was hoping it would work on all SSD type drives.  ;-) 

Dale

:-)  :-)



* Re: [gentoo-user] Re: Finally got a SSD drive to put my OS on
  2023-04-20  9:29                             ` Dale
@ 2023-04-20 10:08                               ` Peter Humphrey
  2023-04-20 10:59                                 ` Dale
  2023-04-20 12:23                               ` Frank Steinmetzger
  1 sibling, 1 reply; 67+ messages in thread
From: Peter Humphrey @ 2023-04-20 10:08 UTC (permalink / raw
  To: gentoo-user

On Thursday, 20 April 2023 10:29:59 BST Dale wrote:
> Frank Steinmetzger wrote:
> > Am Wed, Apr 19, 2023 at 06:32:45PM -0500 schrieb Dale:
> >> Frank Steinmetzger wrote:
> >>> <<<SNIP>>>
> >>> 
> >>> When formatting file systems, I usually lower the number of inodes from
> >>> the
> >>> default value to gain storage space. The default is one inode per 16 kB
> >>> of
> >>> FS size, which gives you 60 million inodes per TB. In practice, even one
> >>> million per TB would be overkill in a use case like Dale’s media
> >>> storage.¹
> >>> Removing 59 million inodes × 256 bytes ≈ 15 GB of net space for each TB,
> >>> not counting extra control metadata and ext4 redundancies.
> >> 
> >> If I ever rearrange my
> >> drives again and can change the file system, I may reduce the inodes at
> >> least on the ones I only have large files on.  Still tho, given I use
> >> LVM and all, maybe that isn't a great idea.  As I add drives with LVM, I
> >> assume it increases the inodes as well.
> > 
> > I remember from yesterday that the manpage says that inodes are added
> > according to the bytes-per-inode value.
> > 
> >> I wonder.  Is there a way to find out the smallest size file in a
> >> directory or sub directory, largest files, then maybe a average file
> >> size???
> > 
> > The 20 smallest:
> > `find -type f -print0 | xargs -0 stat -c '%s %n' | sort -n | head -n 20`
> > 
> > The 20 largest: either use tail instead of head or reverse sorting with
> > -r.
> > You can also first pipe the output of stat into a file so you can sort and
> > analyse the list more efficiently, including calculating averages.
> 
> When I first run this while in / itself, it occurred to me that it
> doesn't specify what directory.  I thought maybe changing to the
> directory I want it to look at would work but get this: 
> 
> 
> root@fireball /home/dale/Desktop/Crypt # `find -type f -print0 | xargs
> -0 stat -c '%s %n' | sort -n | head -n 20`
> -bash: 2: command not found
> root@fireball /home/dale/Desktop/Crypt #
> 
> 
> It works if I'm in the / directory but not when I'm cd'd to the
> directory I want to know about.  I don't see a spot to change it.  Ideas.

In place of "find -type..." say "find / -type..."

-- 
Regards,
Peter.






* Re: [gentoo-user] Re: Finally got a SSD drive to put my OS on
  2023-04-20 10:08                               ` Peter Humphrey
@ 2023-04-20 10:59                                 ` Dale
  2023-04-20 13:23                                   ` Nikos Chantziaras
  0 siblings, 1 reply; 67+ messages in thread
From: Dale @ 2023-04-20 10:59 UTC (permalink / raw
  To: gentoo-user

Peter Humphrey wrote:
> On Thursday, 20 April 2023 10:29:59 BST Dale wrote:
>> Frank Steinmetzger wrote:
>>> Am Wed, Apr 19, 2023 at 06:32:45PM -0500 schrieb Dale:
>>>> Frank Steinmetzger wrote:
>>>>> <<<SNIP>>>
>>>>>
>>>>> When formatting file systems, I usually lower the number of inodes from
>>>>> the
>>>>> default value to gain storage space. The default is one inode per 16 kB
>>>>> of
>>>>> FS size, which gives you 60 million inodes per TB. In practice, even one
>>>>> million per TB would be overkill in a use case like Dale’s media
>>>>> storage.¹
>>>>> Removing 59 million inodes × 256 bytes ≈ 15 GB of net space for each TB,
>>>>> not counting extra control metadata and ext4 redundancies.
>>>> If I ever rearrange my
>>>> drives again and can change the file system, I may reduce the inodes at
>>>> least on the ones I only have large files on.  Still tho, given I use
>>>> LVM and all, maybe that isn't a great idea.  As I add drives with LVM, I
>>>> assume it increases the inodes as well.
>>> I remember from yesterday that the manpage says that inodes are added
>>> according to the bytes-per-inode value.
>>>
>>>> I wonder.  Is there a way to find out the smallest size file in a
>>>> directory or sub directory, largest files, then maybe a average file
>>>> size???
>>> The 20 smallest:
>>> `find -type f -print0 | xargs -0 stat -c '%s %n' | sort -n | head -n 20`
>>>
>>> The 20 largest: either use tail instead of head or reverse sorting with
>>> -r.
>>> You can also first pipe the output of stat into a file so you can sort and
>>> analyse the list more efficiently, including calculating averages.
>> When I first run this while in / itself, it occurred to me that it
>> doesn't specify what directory.  I thought maybe changing to the
>> directory I want it to look at would work but get this: 
>>
>>
>> root@fireball /home/dale/Desktop/Crypt # `find -type f -print0 | xargs
>> -0 stat -c '%s %n' | sort -n | head -n 20`
>> -bash: 2: command not found
>> root@fireball /home/dale/Desktop/Crypt #
>>
>>
>> It works if I'm in the / directory but not when I'm cd'd to the
>> directory I want to know about.  I don't see a spot to change it.  Ideas.
> In place of "find -type..." say "find / -type..."
>


Ahhh, that worked.  I also realized I need to leave off the ` at the
beginning and end.  I thought I left those out.  I copy and paste a
lot.  lol 

It only took a couple dozen files to start getting up to some size. 
Most of the few small files are text files with little notes about a
video.  For example, if building something I will create a text file
that lists what is needed to build what is in the video.  Other than a
few of those, file size reaches a few 100MBs pretty quick.  So, the
number of small files is pretty small.  That is good to know. 

Thanks for the command.  I never was good with xargs, sed and such.  It
took me a while to get used to grep.  ROFL 

Dale

:-)  :-) 



* Re: [gentoo-user] Re: Finally got a SSD drive to put my OS on
  2023-04-20  9:29                             ` Dale
  2023-04-20 10:08                               ` Peter Humphrey
@ 2023-04-20 12:23                               ` Frank Steinmetzger
  1 sibling, 0 replies; 67+ messages in thread
From: Frank Steinmetzger @ 2023-04-20 12:23 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 2890 bytes --]

Am Thu, Apr 20, 2023 at 04:29:59AM -0500 schrieb Dale:

> >> I wonder.  Is there a way to find out the smallest size file in a
> >> directory or sub directory, largest files, then maybe a average file
> >> size???
> > The 20 smallest:
> > `find -type f -print0 | xargs -0 stat -c '%s %n' | sort -n | head -n 20`
> >
> > The 20 largest: either use tail instead of head or reverse sorting with -r.
> > You can also first pipe the output of stat into a file so you can sort and 
> > analyse the list more efficiently, including calculating averages.
> 
> When I first run this while in / itself, it occurred to me that it
> doesn't specify what directory.  I thought maybe changing to the
> directory I want it to look at would work but get this: 

Yeah, either cd into the directory first, or pass it to find. But it’s like 
tar: I can never remember in which order I need to feed stuff to find. One 
relevant addition could be -xdev, to have find halt at file system 
boundaries. So:

find /path/to/dir -xdev -type f ! -type l …

> root@fireball /home/dale/Desktop/Crypt # `find -type f -print0 | xargs
> -0 stat -c '%s %n' | sort -n | head -n 20`
> -bash: 2: command not found
> root@fireball /home/dale/Desktop/Crypt #

I used the `` in the mail text as a kind of hint: “everything between is a 
command”. So when you paste that into the terminal, it is executed, and the 
result of it is substituted. Meaning: the command’s output is taken as the 
new input and executed. And since the first word of the output was “2”, you 
get that error message. Sorry about the confusion.

> >> I thought about du but given the number of files I have here,
> >> it would be a really HUGE list of files.  Could take hours or more too. 
> > I use a “cache” of text files with file listings of all my external drives. 
> > This allows me to glance over my entire data storage without having to plug 
> > in any drive. It uses tree underneath to get the list:
> >
> > `tree -afx -DFins --dirsfirst --du --timefmt "%Y-%m-%d %T"`
> >
> > This gives me a list of all directories and files, with their full path, 
> > date and size information and accumulated directory size in a concise 
> > format. Add -pug to also include permissions.
> >
> 
> Save this for later use.  ;-)

I built a wrapper script around it, to which I pass the directory I want to 
read (usually the root of a removable media). The script creates a new text 
file, with the current date and the directory in its name, and compresses it 
at the end. This allows me to diff those files in vim and see what changed 
over time. It also updates a symlink to the current version for quick access 
via bash alias.
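For illustration, a minimal sketch of such a wrapper — all names, paths and the find fallback are my assumptions, not Frank's actual script:

```shell
#!/bin/sh
# drivelist: dump a dated, compressed tree listing of a directory and keep
# a "latest" symlink to it. Storage location is configurable via
# DRIVELIST_STORE (hypothetical; defaults to ~/drive-lists).
drivelist() {
    dir=$1
    name=$(basename "$dir")
    store="${DRIVELIST_STORE:-$HOME/drive-lists}"
    out="$store/$(date +%Y-%m-%d)_$name.txt"
    mkdir -p "$store"
    if command -v tree >/dev/null 2>&1; then
        # Frank's tree invocation, with the target directory appended
        tree -afx -DFins --dirsfirst --du --timefmt "%Y-%m-%d %T" "$dir" > "$out"
    else
        find "$dir" -xdev > "$out"    # crude fallback when tree is absent
    fi
    gzip -f "$out"                               # compress the listing
    ln -sfn "$out.gz" "$store/latest_$name.gz"   # symlink to newest version
}
```

Old and new listings can then be compared with e.g. `vimdiff <(gzip -dc old.gz) <(gzip -dc new.gz)`.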

-- 
Grüße | Greetings | Qapla’
Please do not share anything from, with or about me on any social network.

...llaw eht no rorrim ,rorriM

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 67+ messages in thread

* [gentoo-user] Re: Finally got a SSD drive to put my OS on
  2023-04-20 10:59                                 ` Dale
@ 2023-04-20 13:23                                   ` Nikos Chantziaras
  0 siblings, 0 replies; 67+ messages in thread
From: Nikos Chantziaras @ 2023-04-20 13:23 UTC (permalink / raw
  To: gentoo-user

On 20/04/2023 13:59, Dale wrote:
>> In place of "find -type..." say "find / -type..."
> 
> Ahhh, that worked.  I also realized I need to leave off the ' at the
> beginning and end.  I thought I left those out.  I copy and paste a
> lot.  lol

Btw, if you only want to do this for the root filesystem and exclude all 
other mounted filesystems, also use the -xdev option:

   find / -xdev -type ...



^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [gentoo-user] Re: Finally got a SSD drive to put my OS on
  2023-04-20  4:23                             ` Dale
  2023-04-20  4:41                               ` eric
@ 2023-04-20 23:02                               ` Wol
  1 sibling, 0 replies; 67+ messages in thread
From: Wol @ 2023-04-20 23:02 UTC (permalink / raw
  To: gentoo-user

On 20/04/2023 05:23, Dale wrote:
> Some 1,100 directories, not sure if directories use inodes or not.

"Everything is a file".

A directory is just a data file with a certain structure that maps names 
to inodes.
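A quick way to see that directories occupy inodes just like regular files (illustrative; assumes GNU stat):

```shell
# Print inode number, file type and name for a directory and a file in it.
dir=$(mktemp -d)
touch "$dir/file"
stat -c '%i %F %n' "$dir" "$dir/file"
rm -r "$dir"
```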

It might still be there somewhere - I can't imagine it's been deleted, 
just forgotten - but I believe some editors (emacs probably) would let 
you open that file, so you could rename files by editing the line that 
defined them, you could unlink a file by deleting the line, etc etc.

Obviously a very dangerous mode, but Unix was always happy about handing 
out powerful footguns willy nilly.

Cheers,
Wol


^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [gentoo-user] Finally got a SSD drive to put my OS on
  2023-04-16 23:16                       ` Frank Steinmetzger
  2023-04-17  1:14                         ` Dale
@ 2023-10-07  7:22                         ` Dale
  1 sibling, 0 replies; 67+ messages in thread
From: Dale @ 2023-10-07  7:22 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 3585 bytes --]

Frank Steinmetzger wrote:
> Am Sun, Apr 16, 2023 at 05:26:15PM -0500 schrieb Dale:
>
>>>> I'm wanting to be able to boot something from the hard drive in the
>>>> event the OS itself won't boot.  The other day I had to dig around and
>>>> find a bootable USB stick and also found a DVD.  Ended up with the DVD
>>>> working best.  I already have memtest on /boot.  Thing is, I very rarely
>>>> use it.  ;-)
>>> So in the scenario you are suggesting, is grub working, giving you a
>>> boot choice screen, and your new Gentoo install is not working so
>>> you want to choose Knoppix to repair whatever is wrong with 
>>> Gentoo? 
>> Given I have a 500GB drive, I got plenty of space.  Heck, a 10GB
>> partition each is more than enough for either Knoppix or LiveGUI.  I
>> could even store info on there about drive partitions and scripts that I
>> use a lot.  Jeez, that's a idea. 
> Back in the day, I was annoyed that whenever I needed $LIVE_SYSTEM, I had to 
> reformat an entire USB stick for that. In times when you don’t even get 
> sticks below 8 GB anymore, I found it a waste of material and useful storage 
> space.
>
> And then I found ventoy: https://www.ventoy.net/
>
> It is a mini-Bootloader which you install once to a USB device, kind-of a 
> live system of its own. But when booting it, it dynamically scans the 
> content of its device and creates a new boot menu from it. So you can put 
> many ISOs on one device as simple files, delete them, upgrade them, 
> whatever, and then you can select one to boot from. Plus, the rest of the 
> stick remains usable as storage, unlike sticks that were dd’ed with an ISO.
>
> -- 
> Grüße | Greetings | Salut | Qapla’
> Please do not share anything from, with or about me on any social network.
> The four elements: earth, air and firewater.

I made a USB stick with Ventoy on it but hadn't had a chance to boot
anything with it until a few days ago. I currently have the following on
ONE USB stick.


CUSTOMRESCUECD-x86_64-0.12.8.iso
KNOPPIX_V9.1DVD-2021-01-25-EN.iso
livegui-amd64-20230402T170151Z.iso
memtest86.iso
systemrescuecd-x86-5.3.2.iso


I'm having trouble with the Custom Rescue one but it could be bad since
all the others work.  CRC does try to boot but then fails.  If one were
to buy a 64GB stick, one could put a LOT of images on there.  Mine is
just a 32GB and look what I got on there. 

Anyone who boots using USB on occasion should really try this thing
out.  It is as easy as Frank says.  You install the Ventoy thing on it
and then just drop the files on there.  After that, it just works.  This
thing is awesome.  Whoever came up with this thing had a really good
slide rule. This one stick replaces five doing it the old way. 

Seriously, try this thing.  Thanks to Frank for posting about it.  I'd
never heard of it. 

Dale

:-)  :-) 

P. S. I put my old Gigabyte 770T mobo back together.  My old video card
died so I had to order one of those.  It also needed a new button
battery.  Now I'm waiting on a pack of hard drives to come in.  I
ordered a pack of five 320GB drives.  For an OS, plenty big enough.  I
also ordered a pair of USB keyboards since all I have laying around is
PS/2 types now.  On my main rig, I put in another 18TB hard drive.  I'm
replacing a 16TB on one VG and plan to use the 16TB on another VG to
replace an 8TB.  That will increase the size of both VGs.  Hopefully next
month I can order the case for the new rig.  Then I get to save up for
CPU, mobo and memory.  Just a little update for those following my puter
escapades.  LOL 

[-- Attachment #2: Type: text/html, Size: 4909 bytes --]

^ permalink raw reply	[flat|nested] 67+ messages in thread

end of thread, other threads:[~2023-10-07  7:22 UTC | newest]

Thread overview: 67+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-04-15 22:47 [gentoo-user] Finally got a SSD drive to put my OS on Dale
2023-04-15 23:24 ` Mark Knecht
2023-04-15 23:44   ` thelma
2023-04-16  1:47 ` William Kenworthy
2023-04-16  7:18   ` Peter Humphrey
2023-04-16  8:43     ` William Kenworthy
2023-04-16 15:08       ` Mark Knecht
2023-04-16 15:29         ` Dale
2023-04-16 16:10           ` Mark Knecht
2023-04-16 16:54             ` Dale
2023-04-16 18:14               ` Mark Knecht
2023-04-16 18:53                 ` Dale
2023-04-16 19:30                   ` Mark Knecht
2023-04-16 22:26                     ` Dale
2023-04-16 23:16                       ` Frank Steinmetzger
2023-04-17  1:14                         ` Dale
2023-04-17  9:40                           ` Wols Lists
2023-04-17 17:45                             ` Mark Knecht
2023-04-18  0:35                               ` Dale
2023-04-18  8:03                               ` Frank Steinmetzger
2023-10-07  7:22                         ` Dale
2023-04-16 17:46         ` Jorge Almeida
2023-04-16 18:07         ` Frank Steinmetzger
2023-04-16 20:22           ` Mark Knecht
2023-04-16 22:17             ` Frank Steinmetzger
2023-04-17  0:34               ` Mark Knecht
2023-04-18 14:52 ` [gentoo-user] " Nikos Chantziaras
2023-04-18 15:05   ` Dale
2023-04-18 15:36     ` Nikos Chantziaras
2023-04-18 20:01       ` Dale
2023-04-18 20:53         ` Wol
2023-04-18 22:13           ` Frank Steinmetzger
2023-04-18 23:08             ` Wols Lists
2023-04-19  1:15               ` Dale
2023-04-18 20:57         ` Mark Knecht
2023-04-18 21:15           ` Dale
2023-04-18 21:25             ` Mark Knecht
2023-04-19  1:36               ` Dale
2023-04-18 22:18     ` Frank Steinmetzger
2023-04-18 22:41       ` Frank Steinmetzger
2023-04-19  1:45       ` Dale
2023-04-19  8:00         ` Nikos Chantziaras
2023-04-19  9:42           ` Dale
2023-04-19 10:34           ` Peter Humphrey
2023-04-19 17:14             ` Mark Knecht
2023-04-19 17:59             ` Dale
2023-04-19 18:13               ` Mark Knecht
2023-04-19 19:26                 ` Dale
2023-04-19 19:38                   ` Nikos Chantziaras
2023-04-19 20:00                     ` Mark Knecht
2023-04-19 22:13                       ` Frank Steinmetzger
2023-04-19 23:32                         ` Dale
2023-04-20  1:09                           ` Mark Knecht
2023-04-20  4:23                             ` Dale
2023-04-20  4:41                               ` eric
2023-04-20  9:48                                 ` Dale
2023-04-20 23:02                               ` Wol
2023-04-20  8:55                             ` Frank Steinmetzger
2023-04-20  8:52                           ` Frank Steinmetzger
2023-04-20  9:29                             ` Dale
2023-04-20 10:08                               ` Peter Humphrey
2023-04-20 10:59                                 ` Dale
2023-04-20 13:23                                   ` Nikos Chantziaras
2023-04-20 12:23                               ` Frank Steinmetzger
2023-04-20  9:46               ` Peter Humphrey
2023-04-20  9:49                 ` Dale
2023-04-18 17:52   ` Mark Knecht

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox