Frank Steinmetzger wrote:
<<<SNIP>>>

When formatting file systems, I usually lower the number of inodes from the 
default value to gain storage space. The default is one inode per 16 kB of 
FS size, which gives you 60 million inodes per TB. In practice, even one 
million per TB would be overkill in a use case like Dale’s media storage.¹ 
Removing those 59 million surplus inodes saves 59 million × 256 bytes ≈ 15 GB of 
net space for each TB, not counting extra control metadata and ext4 redundancies.
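
For example, to get roughly one inode per MiB when creating the file system 
(the device name is just a placeholder, of course):

  mkfs.ext4 -i 1048576 /dev/sdX1

The -i option sets the bytes-per-inode ratio; it can only be chosen at mkfs 
time and not changed afterwards.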

The defaults are set in /etc/mke2fs.conf. It also contains some alternative 
values of bytes-per-inode for certain usage types. The type largefile 
allocates one inode per 1 MB, giving you 1 million inodes per TB of space. 
Since ext4 is much more efficient with inodes than ext3, it is even content 
with 4 MB per inode (type largefile4), giving you 250 k inodes per TB.
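
So instead of passing a raw ratio, you can simply name one of those types at 
creation time (again, substitute your real device):

  mkfs.ext4 -T largefile4 /dev/sdX1

which pulls the matching inode_ratio (4 MiB per inode in the stock config) out 
of /etc/mke2fs.conf.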

For root partitions, I tend to allocate 1 million inodes, maybe some more 
for a full Gentoo-based desktop due to the portage tree’s sheer number of 
small files. My Surface Go’s root (Arch Linux, KDE and some texlive) uses 
500 k right now.
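
To pin an absolute inode count rather than a ratio, mke2fs also accepts -N 
(device again a placeholder):

  mkfs.ext4 -N 1000000 /dev/sdX2

and later on, df -i shows how many of those inodes are actually in use.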


¹ Assuming one inode equals one directory or unfragmented file on ext4.
I’m not sure what the allocation size limit for one inode is, but it is 
*very* large. Ext3 had a rather low limit, which is why it was so slow with 
big files. But that was one of the big improvements in ext4’s extended 
inodes, at the cost of double inode size to house the required metadata.



This is interesting.  I have been buying 16TB drives recently.  With this fiber connection and me using torrents, I can fill up a drive pretty fast, but I am slowing down since I no longer need to find more stuff to download.  Even 10GB per TB can add up: for a 16TB drive, that's at least 160GBs.  That's quite a few videos.  I didn't realize it added up that fast.  Percentage-wise it isn't a lot, but given the size of the drives, it adds up quick.

If I ever rearrange my drives again and can change the file system, I may reduce the inodes, at least on the ones that only hold large files.  My one hesitation is LVM.  As I add drives and grow a file system, I assume the inode count grows along with it.  If so, then setting a lower inode ratio when I first create the file system should be OK, since the ratio would carry over as it grows.  If not, I could eventually add enough large files to a bigger volume that it runs out of inodes.  I suspect it does add inodes when I expand the file system tho, so I just have to set it when I first create the file system and not worry about it after that.
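
One way to check that assumption would be to note the inode count before and after growing a file system, something like this (the LV name and mount point below are just examples, not my actual names):

  tune2fs -l /dev/mapper/crypt-videos | grep -i 'inode count'
  df -i /mnt/videos

If the "Inode count" goes up after an lvextend/resize2fs run, then whatever ratio I set at mkfs time carries over as the volume grows.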

This is my current drive setup. 


root@fireball / # pvs -O vg_name
  PV         VG     Fmt  Attr PSize    PFree
  /dev/sda7  OS     lvm2 a--  <124.46g 21.39g
  /dev/sdf1  backup lvm2 a--   698.63g     0
  /dev/sde1  crypt  lvm2 a--    14.55t     0
  /dev/sdb1  crypt  lvm2 a--    14.55t     0
  /dev/sdh1  datavg lvm2 a--    12.73t     0
  /dev/sdc1  datavg lvm2 a--    <9.10t     0
  /dev/sdi1  home   lvm2 a--    <7.28t     0
root@fireball / #


The one marked crypt is the one that holds mostly large video files.  The one marked datavg is where I store torrents.  Let's not delve too deep into that tho.  ;-)  As you can see, crypt has two 16TB drives now and it's about 90% full.  I plan to expand next month if possible; it'll be another 16TB drive when I do.  So that will be three 16TB drives, about 43TBs.  A little math: at roughly 10GB per TB, that's around 430GB of space going to inodes.  That added up quick.

I wonder.  Is there a way to find out the smallest file in a directory or sub directory, the largest, and maybe an average file size???  I thought about du, but given the number of files I have here, it would produce a really HUGE list of files and could take hours or more.  This is what KDE properties shows.

26.1 TiB (28,700,020,905,777 bytes)

55,619 files, 1,145 sub-folders

Little math: 28,700,020,905,777 bytes divided by 55,619 files works out to an average of a bit over 500MB per file.  So, I wonder how far I could cut the inode count without risking anything???  And I wonder if that average is accurate enough to go by???
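
Actually, maybe something like this would answer my own question without generating a huge list — find only prints one size per file and awk keeps a running min/max/total (the path is a placeholder, and -printf needs GNU find):

  find /where/the/videos/live -type f -printf '%s\n' | awk '
      NR==1 { min = $1; max = $1 }
      { sum += $1; if ($1 < min) min = $1; if ($1 > max) max = $1 }
      END { if (NR) printf "files=%d  min=%d  max=%d  avg=%.0f bytes\n", NR, min, max, sum/NR }'

That should be a lot lighter than wading through a full du listing.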

Interesting info.

Dale

:-) :-)