From: Dale <rdalek1967@gmail.com>
To: gentoo-user@lists.gentoo.org
Subject: Re: [gentoo-user] Re: Finally got a SSD drive to put my OS on
Date: Wed, 19 Apr 2023 18:32:45 -0500
Message-ID: <9781e250-591d-d468-11f2-7e5c94ac7db3@gmail.com>
In-Reply-To: <ZEBnoszh_TfFL2mV@moby>
Frank Steinmetzger wrote:
> <<<SNIP>>>
>
> When formatting file systems, I usually lower the number of inodes from the
> default value to gain storage space. The default is one inode per 16 kB of
> FS size, which gives you 60 million inodes per TB. In practice, even one
> million per TB would be overkill in a use case like Dale’s media storage.¹
> Removing 59 million inodes × 256 bytes ≈ 15 GB of net space for each TB, not
> counting extra control metadata and ext4 redundancies.
>
> The defaults are set in /etc/mke2fs.conf. It also contains some alternative
> values of bytes-per-inode for certain usage types. The type largefile
> allocates one inode per 1 MB, giving you 1 million inodes per TB of space.
> Since ext4 is much more efficient with inodes than ext3, it is even content
> with 4 MB per inode (type largefile4), giving you 250 k inodes per TB.
>
> For root partitions, I tend to allocate 1 million inodes, maybe some more
> for a full Gentoo-based desktop due to the portage tree’s sheer number of
> small files. My Surface Go’s root (Arch linux, KDE and some texlive) uses
> 500 k right now.
>
>
> ¹ Assuming one inode equals one directory or unfragmented file on ext4.
> I’m not sure what the allocation size limit for one inode is, but it is
> *very* large. Ext3 had a rather low limit, which is why it was so slow with
> big files. But that was one of the big improvements in ext4’s extended
> inodes, at the cost of double inode size to house the required metadata.
>
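For concreteness, a sketch of how those presets are applied at mkfs time. The device name is hypothetical and the mkfs lines are commented out (they would destroy data); the arithmetic just redoes Frank's numbers for a 16 TB filesystem:

```shell
# "largefile" = one inode per 1 MiB (preset from /etc/mke2fs.conf):
#   mkfs.ext4 -T largefile /dev/crypt/media
# Same ratio, set explicitly as bytes-per-inode:
#   mkfs.ext4 -i 1048576 /dev/crypt/media

# Back-of-envelope savings on a 16 TB filesystem with 256-byte inodes:
fs_bytes=16000000000000
default_inodes=$(( fs_bytes / 16384 ))       # 1 inode per 16 KiB (default)
largefile_inodes=$(( fs_bytes / 1048576 ))   # 1 inode per 1 MiB
saved_gb=$(( (default_inodes - largefile_inodes) * 256 / 1000000000 ))
echo "$default_inodes -> $largefile_inodes inodes, saving ~${saved_gb} GB"
```

At the default ratio a 16 TB filesystem carries roughly 976 million inodes; at one per MiB it is about 15 million, freeing roughly 246 GB of inode-table space.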
This is interesting. I have been buying 16TB drives recently. After
all, with this fiber connection and me using torrents, I can fill up a
drive pretty fast, though I'm slowing down since I no longer need to
find more stuff to download. Even 10GB per TB can add up. For a 16TB
drive, that's at least 160GB, which is quite a few videos. I didn't
realize it added up that fast. Percentage-wise it isn't a lot, but given
the size of the drives, it adds up quickly. If I ever rearrange my
drives again and can change the file system, I may reduce the inodes, at
least on the volumes that only hold large files. Still, given that I use
LVM, maybe that isn't a great idea. I assume that as I add drives to a
volume group and grow the file system, the inode count grows as well. If
so, then reducing the inode ratio should be OK. If not, I might keep
adding drives until a volume holding only large files still runs out of
inodes. I suspect it does add inodes when I expand the file system,
though, so I can adjust without worrying about it. I just have to set
the ratio when I first create the file system, I guess.
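For what it's worth, that suspicion matches how ext4 works: the inodes-per-group ratio is fixed at mkfs time, and growing the filesystem with resize2fs adds new block groups complete with their own inode tables, so the inode count scales up with the size. The current numbers are easy to check (the mount point and device name here are just examples):

```shell
# Inode capacity and usage for a mounted filesystem; runs unprivileged.
df -i /

# Full geometry, including inode count and inodes per group
# (needs root; device name is an example):
#   tune2fs -l /dev/crypt/media | grep -i inode
```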
This is my current drive setup.
root@fireball / # pvs -O vg_name
  PV         VG     Fmt  Attr PSize    PFree
  /dev/sda7  OS     lvm2 a--  <124.46g 21.39g
  /dev/sdf1  backup lvm2 a--  698.63g       0
  /dev/sde1  crypt  lvm2 a--  14.55t        0
  /dev/sdb1  crypt  lvm2 a--  14.55t        0
  /dev/sdh1  datavg lvm2 a--  12.73t        0
  /dev/sdc1  datavg lvm2 a--  <9.10t        0
  /dev/sdi1  home   lvm2 a--  <7.28t        0
root@fireball / #
The one marked crypt is the one that is mostly large video files. The
one marked datavg is where I store torrents. Let's not delve too deep
into that, tho. ;-) As you can see, crypt has two 16TB drives now and
I'm about 90% full. I plan to expand next month if possible. It'll be
another 16TB drive when I do. So, that will be three 16TB drives,
about 43TB total. A little math: at 10GB per TB, that's roughly 430GB
of space for inodes. That added up quick.
I wonder: is there a way to find the smallest file in a directory or
subdirectory, the largest file, and maybe the average file size? I
thought about du, but given the number of files I have here, it would
produce a really HUGE list of files. It could take hours or more, too.
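One way to get min, max, and average without dumping a per-file list is a find/awk pipeline. This is just a sketch; it assumes GNU find (for -printf), and the path is a placeholder:

```shell
# Stream file sizes straight into awk; nothing is listed or stored.
find /path/to/dir -type f -printf '%s\n' | awk '
    NR == 1 { min = max = $1 }
    { sum += $1; if ($1 < min) min = $1; if ($1 > max) max = $1 }
    END { if (NR) printf "files=%d min=%d max=%d avg=%.0f\n", NR, min, max, sum/NR }'
```

Since it only keeps running totals, it should handle millions of files in whatever time find takes to walk the tree, with no huge output to scroll through.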
This is what KDE properties shows.
26.1 TiB (28,700,020,905,777)
55,619 files, 1,145 sub-folders
A little math: average file size is about 516MB. So I wonder what all
could be changed without risking anything, and whether that number is
accurate enough.
Interesting info.
Dale
:-) :-)