* [gentoo-user] Hard drive and maximum data percentage.
  From: Dale @ 2025-01-26 18:15 UTC
  To: gentoo-user

Howdy,

As most know, I store a lot of data here. These are the two main file systems.

  %USED   USED    AVAILABLE  TOTAL   MOUNTED ON
  87.5%   36.3T   5.2T       41.5T   /home/dale/Desktop/Data
  75.9%   35.8T   11.3T      47.1T   /home/dale/Desktop/Crypt

The top one has a drive on the way; it ships Monday and should be here Wednesday or Thursday, then finish its tests and be online by Friday or so. I try not to go over 90%, and I start saving money when it hits 80% or so. I've read that 80% is the point where expansion is needed, and I notice that df and dfc show anything above 70% in red. I have gone above 90% on occasion, usually when moving data around to balance out the volume. Thing is, most info I find is about much smaller file systems. As you can see above, there are several TBs of space for the file system to do any housekeeping such as defragmenting. Still, given the large file systems in use, where should I draw the line and remain safe data-wise? Can I go to 90% if needed? 95%? Is that too much despite the large amount of space remaining? Does the percentage really matter? Is it more about having enough space for the file system to handle its internal needs? Is, for example, 1 or 2TB of free space enough for the file system to work just fine?

By the way, Data up there is about to be 4 hard drives. Crypt is still 3 but . . . you know how it is. I might add, my backup drive set is 4 drives, each smaller; it will likely be 5 before long. I split up some files in Data, and some go onto a single drive. I have six drives for backups in total. I'm trying to replace smaller drives with either 14TB or 16TB drives: fewer drives, plus I can reuse the smaller ones for backups and such. I rarely buy a drive now smaller than 12TB.

Thoughts on when it is time to start expanding so as to keep data safe? One of these days, I may have to ask about the limits on file system size. O_O

Thanks.

Dale

:-) :-)
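A cron-able check can take the guesswork out of watching these percentages instead of eyeballing dfc. A minimal sketch, assuming GNU df; the mount points come from the output above and the 90% threshold is only an example:

  #!/bin/sh
  # warn when any listed mount point crosses the chosen fill threshold
  for m in /home/dale/Desktop/Data /home/dale/Desktop/Crypt; do
      pct=$(df --output=pcent "$m" | tail -n 1 | tr -dc '0-9')
      [ "$pct" -ge 90 ] && echo "WARNING: $m is at ${pct}% used"
  done

Dropped into /etc/cron.daily/, it only produces output (and hence a cron mail) when a threshold is crossed.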
* Re: [gentoo-user] Hard drive and maximum data percentage.
  From: Dale @ 2025-02-01 21:15 UTC
  To: gentoo-user

Dale wrote:
> Howdy,
>
> As most know, I store a lot of data here. These are the two main file systems.
>
>   %USED   USED    AVAILABLE  TOTAL   MOUNTED ON
>   87.5%   36.3T   5.2T       41.5T   /home/dale/Desktop/Data
>   75.9%   35.8T   11.3T      47.1T   /home/dale/Desktop/Crypt
>
> The top one has a drive on the way; it ships Monday and should be here Wednesday or Thursday, then finish its tests and be online by Friday or so. I try not to go over 90%, and I start saving money when it hits 80% or so. I've read that 80% is the point where expansion is needed, and I notice that df and dfc show anything above 70% in red. I have gone above 90% on occasion, usually when moving data around to balance out the volume. Thing is, most info I find is about much smaller file systems. As you can see above, there are several TBs of space for the file system to do any housekeeping such as defragmenting. Still, given the large file systems in use, where should I draw the line and remain safe data-wise? Can I go to 90% if needed? 95%? Is that too much despite the large amount of space remaining? Does the percentage really matter? Is it more about having enough space for the file system to handle its internal needs? Is, for example, 1 or 2TB of free space enough for the file system to work just fine?
>
> By the way, Data up there is about to be 4 hard drives. Crypt is still 3 but . . . you know how it is. I might add, my backup drive set is 4 drives, each smaller; it will likely be 5 before long. I split up some files in Data, and some go onto a single drive. I have six drives for backups in total. I'm trying to replace smaller drives with either 14TB or 16TB drives: fewer drives, plus I can reuse the smaller ones for backups and such. I rarely buy a drive now smaller than 12TB.
>
> Thoughts on when it is time to start expanding so as to keep data safe? One of these days, I may have to ask about the limits on file system size. O_O
>
> Thanks.
>
> Dale
>
> :-) :-)

Hard to believe no one has more up-to-date info on what is safe, given how large drives are now and the improvements in file systems. I'd think having a TB or two free would be plenty, regardless of percentage, but I'm not really sure. I don't want to risk data testing the theory either.

Update: The new drive came in. It passed all the tests and is online. dfc looks like this now for Data.

  %USED   USED    AVAILABLE  TOTAL   MOUNTED ON
  66.4%   37.1T   18.8T      55.9T   /home/dale/Desktop/Data

May have to start adding two drives at a time. o_O 18TB and larger drives are too pricey still.

Dale

:-) :-)
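The thread never says how the drives are pooled, but adding a disk and growing one mounted volume suggests LVM with ext4 on top. Under that assumption only, the usual growth sequence looks roughly like the sketch below; the device and volume names are placeholders, not the actual layout:

  pvcreate /dev/sdX                            # prepare the newly tested drive for LVM
  vgextend vg_data /dev/sdX                    # add it to the existing volume group
  lvextend -l +100%FREE -r /dev/vg_data/data   # grow the LV; -r runs resize2fs on the ext4 inside

Because the -r flag resizes the filesystem online in the same step, the volume can go from 87.5% to 66.4% used without an unmount.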
* Re: [gentoo-user] Hard drive and maximum data percentage.
  From: Mark Knecht @ 2025-02-01 22:10 UTC
  To: gentoo-user

On Sat, Feb 1, 2025 at 2:16 PM Dale <rdalek1967@gmail.com> wrote:
>
> <SNIP>
>
> Hard to believe no one has more up-to-date info on what is safe, given how large drives are now and the improvements in file systems. I'd think having a TB or two free would be plenty, regardless of percentage, but I'm not really sure. I don't want to risk data testing the theory either.
>
> Update: The new drive came in. It passed all the tests and is online. dfc looks like this now for Data.
>

OK, I hate to even try to answer this, and first, I have no storage design experience, but I suspect it depends a lot on YOUR usage. I see Rich provided an answer while I was writing this, so you'll want to follow any advice he might have given. He's smart. I'm not.

My guess is that while you 'store' a lot of data, you don't actually 'change' a lot of data. For instance, in the past you seemed to be downloading YouTube videos. If you've saved them, never to watch them or change them, then other than protecting yourself from losing them, they go onto the disk and never move. If that's your usage model, I don't know why you can't go right up to 100% minus just a little (say 10x the size of your average file). After all, you could always remove a few files to temp storage, optimize the disk and then re-add the files.

On the other hand, if you're deleting files in the middle of the drive, I could see cases where new files get fragmented and stuff you put on late in life gets strewn around the drive, which doesn't sound great.

In a completely different usage case, say you're running a bunch of databases that are filling your drive, removing old records and adding new records all the time; then, depending on how your disk optimizations run, you might need a huge amount of space to gather the databases back together. However, even in that case you could move a complete database to a temp location, optimize the drive and then re-add the database.

So, as is often the case, in my mind... IT DEPENDS! ;-)

Best wishes,
Mark
* Re: [gentoo-user] Hard drive and maximum data percentage.
  From: Dale @ 2025-02-01 23:51 UTC
  To: gentoo-user

Mark Knecht wrote:
> On Sat, Feb 1, 2025 at 2:16 PM Dale <rdalek1967@gmail.com> wrote:
> >
> > <SNIP>
> >
> > Hard to believe no one has more up-to-date info on what is safe, given how large drives are now and the improvements in file systems. I'd think having a TB or two free would be plenty, regardless of percentage, but I'm not really sure. I don't want to risk data testing the theory either.
> >
> > Update: The new drive came in. It passed all the tests and is online. dfc looks like this now for Data.
> >
>
> OK, I hate to even try to answer this, and first, I have no storage design experience, but I suspect it depends a lot on YOUR usage. I see Rich provided an answer while I was writing this, so you'll want to follow any advice he might have given. He's smart. I'm not.
>
> My guess is that while you 'store' a lot of data, you don't actually 'change' a lot of data. For instance, in the past you seemed to be downloading YouTube videos. If you've saved them, never to watch them or change them, then other than protecting yourself from losing them, they go onto the disk and never move. If that's your usage model, I don't know why you can't go right up to 100% minus just a little (say 10x the size of your average file). After all, you could always remove a few files to temp storage, optimize the disk and then re-add the files.
>
> On the other hand, if you're deleting files in the middle of the drive, I could see cases where new files get fragmented and stuff you put on late in life gets strewn around the drive, which doesn't sound great.
>
> In a completely different usage case, say you're running a bunch of databases that are filling your drive, removing old records and adding new records all the time; then, depending on how your disk optimizations run, you might need a huge amount of space to gather the databases back together. However, even in that case you could move a complete database to a temp location, optimize the drive and then re-add the database.
>
> So, as is often the case, in my mind... IT DEPENDS! ;-)
>
> Best wishes,
> Mark

This is sort of my thinking as well. I do update/delete/move files on occasion, but it is usually done in small chunks and mostly by hand. I use ext4, and given the slow pace of this, I'm sure the file system has more than enough time to rearrange things. I have in the past run the ext4 defrag tool. It usually reports back a low score, usually 0. The files that are fragmented are really small, and there are usually only 4 or 5 of them.

I still plan to expand before reaching 90%. Thing is, something could happen that makes me have to wait. I was just curious as to how long I could wait. If going past a certain point could cause data problems, I wanted to know what that limit was.

Now to see what Rich thinks. I bet he has some ideas. ;-)

Dale

:-) :-)
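The "ext4 defrag tool" here is presumably e4defrag from e2fsprogs. If so, the score Dale mentions can be read without changing anything on disk by using the check-only mode, and filefrag gives a per-file view; the file name below is just an example:

  e4defrag -c /home/dale/Desktop/Data                # report-only: prints a fragmentation score (lower is better)
  filefrag -v /home/dale/Desktop/Data/somefile.mkv   # shows how many extents one file occupies

In these modes both tools only read metadata, so they are safe to run on a live filesystem.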
* Re: [gentoo-user] Hard drive and maximum data percentage.
  From: Rich Freeman @ 2025-02-01 21:55 UTC
  To: gentoo-user

On Sun, Jan 26, 2025 at 1:15 PM Dale <rdalek1967@gmail.com> wrote:
> Still, given the large file systems in use, where should I draw the line and remain safe data-wise? Can I go to 90% if needed? 95%? Is that too much despite the large amount of space remaining? Does the percentage really matter? Is it more about having enough space for the file system to handle its internal needs? Is, for example, 1 or 2TB of free space enough for the file system to work just fine?

It really depends on the filesystem. For ext4 you can go all the way to zero bytes free with no issues, other than application issues from being unable to create files. (If you're talking about the root filesystem, then the "application" includes the OS, so that isn't ideal.)

Some filesystems don't handle running out of space well. These are usually filesystems that handle redundancy internally, but you really need to look into the specifics. Are you running something other than ext4 here?

The space reported as free almost always takes the filesystem overhead into account. The issue is generally whether the filesystem can actually free up space once it runs out completely (a COW filesystem might want to allocate space just to delete things, due to the need to not overwrite metadata in place).

--
Rich
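For anyone wanting to answer Rich's question about the filesystem type and see the overhead he mentions, a quick check might look like this; the device path is a placeholder for whatever actually backs the Data mount:

  df -Th /home/dale/Desktop/Data                                  # prints the filesystem type alongside the usage figures
  tune2fs -l /dev/mapper/data | grep -iE 'block count|reserved'   # ext4 only: total blocks vs. reserved blocks

On ext4 the "Reserved block count" line is exactly the -m reserve that comes up in the next message.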
* Re: [gentoo-user] Hard drive and maximum data percentage.
  From: Dale @ 2025-02-02 0:15 UTC
  To: gentoo-user

Rich Freeman wrote:
> On Sun, Jan 26, 2025 at 1:15 PM Dale <rdalek1967@gmail.com> wrote:
>> Still, given the large file systems in use, where should I draw the line and remain safe data-wise? Can I go to 90% if needed? 95%? Is that too much despite the large amount of space remaining? Does the percentage really matter? Is it more about having enough space for the file system to handle its internal needs? Is, for example, 1 or 2TB of free space enough for the file system to work just fine?
>
> It really depends on the filesystem. For ext4 you can go all the way to zero bytes free with no issues, other than application issues from being unable to create files. (If you're talking about the root filesystem, then the "application" includes the OS, so that isn't ideal.)
>
> Some filesystems don't handle running out of space well. These are usually filesystems that handle redundancy internally, but you really need to look into the specifics. Are you running something other than ext4 here?
>
> The space reported as free almost always takes the filesystem overhead into account. The issue is generally whether the filesystem can actually free up space once it runs out completely (a COW filesystem might want to allocate space just to delete things, due to the need to not overwrite metadata in place).
>
> --
> Rich

On the root file system, I have some space reserved for root (for the admin, if you will), and the same for /var, I think. I always leave that at the default for anything OS-related. I think I left a little for /home itself, but I'm not sure. However, for Data, Crypt and some others, I set the reserve to 0. Nothing on those is used by root or any OS-related data. It's the -m option for ext4, I think.

My thinking is that even if I went to 95%, it should be OK given my usage. It might even be OK at 99%. Thing is, I know at some point something is going to happen. I've just been wondering what that point is and what it will do. Oh, I do use ext4.

I might add, when something goes weird and the messages file is getting written to a lot and fills up /var, even using all the reserve, the system still works, but the file can't be updated with new data. It doesn't crash my system or anything bad, but it can't be good to try to write to a file when there is no space left. I know it would be different for Data and Crypt, but still, when they are full, something has to happen. Most modern file systems, I think, can handle this a lot better than back when drives were smaller and file systems were not as robust. Heck, ext4, btrfs, zfs and others are designed nowadays to handle some pretty bad situations, but I still don't want to push my luck too much.

Thanks for the info. I can't believe I didn't mention before that I was using ext4. o_O

Dale

:-) :-)
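The -m option Dale half-remembers is real, and it can be changed after the fact with tune2fs, so no reformat is needed. A minimal sketch, with placeholder device names:

  tune2fs -m 0 /dev/mapper/data      # pure data filesystem: hand the 5% root reserve back to the user
  tune2fs -m 5 /dev/mapper/root      # keep the default reserve on / and /var so root can still clean up when they fill
  tune2fs -l /dev/mapper/data | grep -i 'reserved block count'   # confirm the new setting

On a 41.5T filesystem the default 5% reserve would be roughly 2T, which is why zeroing it on data-only volumes is a common choice.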
* Re: [gentoo-user] Hard drive and maximum data percentage.
  From: Rich Freeman @ 2025-02-02 0:29 UTC
  To: gentoo-user

On Sat, Feb 1, 2025 at 7:15 PM Dale <rdalek1967@gmail.com> wrote:
>
> My thinking is that even if I went to 95%, it should be OK given my usage. It might even be OK at 99%. Thing is, I know at some point something is going to happen. I've just been wondering what that point is and what it will do. Oh, I do use ext4.

If you're using ext4 and this is a dedicated filesystem that no critical applications need to be able to write to, then there really are no bad consequences for filling the entire disk. With ext4, if you want to get the space back you just need to rm some file. Really the only downside for you in this use case is not being able to cram something onto it when you want to.

Now, if you were running btrfs or cephfs or some other exotic filesystem, then it would be a whole different matter, as those struggle to recover space when they get too full. Something like ceph also trades free space for failover space if you lose a disk, so if you want the cluster to self-heal you need free space for it to work with (and you want it to still be no more than about 85% full even after losing a disk).

--
Rich
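Rich's ext4 claim is easy to verify without touching the real array by filling a throwaway loopback image; a rough sketch, run as root, with paths that are only examples:

  truncate -s 1G /tmp/ext4-test.img
  mkfs.ext4 -q -F -m 0 /tmp/ext4-test.img
  mkdir -p /mnt/ext4-test && mount -o loop /tmp/ext4-test.img /mnt/ext4-test
  dd if=/dev/zero of=/mnt/ext4-test/filler bs=1M || true   # runs until "No space left on device"
  df -h /mnt/ext4-test                                     # shows 100% used
  rm /mnt/ext4-test/filler && df -h /mnt/ext4-test         # the space comes straight back
  umount /mnt/ext4-test && rm /tmp/ext4-test.img

Repeating the same experiment on a small btrfs image is one way to see the difference Rich describes, since deleting there needs a little free metadata space.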
* Re: [gentoo-user] Hard drive and maximum data percentage.
  From: Dale @ 2025-02-02 1:40 UTC
  To: gentoo-user

Rich Freeman wrote:
> On Sat, Feb 1, 2025 at 7:15 PM Dale <rdalek1967@gmail.com> wrote:
>> My thinking is that even if I went to 95%, it should be OK given my usage. It might even be OK at 99%. Thing is, I know at some point something is going to happen. I've just been wondering what that point is and what it will do. Oh, I do use ext4.
>
> If you're using ext4 and this is a dedicated filesystem that no critical applications need to be able to write to, then there really are no bad consequences for filling the entire disk. With ext4, if you want to get the space back you just need to rm some file. Really the only downside for you in this use case is not being able to cram something onto it when you want to.
>
> Now, if you were running btrfs or cephfs or some other exotic filesystem, then it would be a whole different matter, as those struggle to recover space when they get too full. Something like ceph also trades free space for failover space if you lose a disk, so if you want the cluster to self-heal you need free space for it to work with (and you want it to still be no more than about 85% full even after losing a disk).
>

Sounds like ext4 is the best file system for what I'm doing then. It's well maintained, plus it can handle being full. I'm surprised to hear that other file systems, which I consider newer and better, aren't able to handle that sort of thing, though. I wasn't expecting that. I could see some RAID systems having issues, but not some of the more advanced file systems that are designed to handle large amounts of data.

Thanks again. At least I have a much better idea of where I stand and, if needed, how far I can push things. I still plan to avoid going above 90%, but if life happens, I can go longer.

Dale

:-) :-)
* Re: [gentoo-user] Hard drive and maximum data percentage.
  From: Rich Freeman @ 2025-02-02 2:07 UTC
  To: gentoo-user

On Sat, Feb 1, 2025 at 8:40 PM Dale <rdalek1967@gmail.com> wrote:
>
> Rich Freeman wrote:
> >
> > Now, if you were running btrfs or cephfs or some other exotic filesystem, then it would be a whole different matter,
>
> I could see some RAID systems having issues, but not some of the more advanced file systems that are designed to handle large amounts of data.

Those are "RAID-like" systems, which is part of why they struggle when full. Unlike traditional RAID they also don't require identical drives for replication, which can make it tricky when they start to get full and finding blocks that meet the replication requirements is difficult.

With a COW approach like btrfs you also have the issue that altering the metadata requires free space. To delete a file you first write new metadata that deallocates the space for the file, then you update the pointers to make it part of the disk metadata. Since the metadata is stored in a tree, updating a leaf node requires modifying all of its parents up to the root, which requires making new copies of them. It isn't until the entire branch of the tree is copied that you can delete the old version of it. The advantage of this approach is that it is very safe, and it accomplishes the equivalent of full data journaling without actually having to write things more than once. If the operation is aborted, the tree just points at the old metadata and the in-progress copies sit inside free space, ignored by the filesystem, so they simply get overwritten the next time the operation is attempted.

For something like ceph this isn't really much of a downside, since it is intended to be professionally managed. For something like btrfs it seems like more of an issue, as it was intended to be a general-purpose filesystem for desktops/etc., so it would be desirable to make it less likely to break when it runs low on space. However, that's just one of many ways to break btrfs, so... :)

In any case, running out of space is one of those things that becomes more of an issue the more complicated the metadata gets. For something simple like ext4, which just overwrites things in place by default, it isn't a big deal at all.

--
Rich
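On btrfs the practical counterpart to this is watching unallocated space rather than the df percentage, and handing mostly-empty chunks back before things get tight. Assuming a btrfs mount at /mnt/pool (purely illustrative, since Dale is on ext4):

  btrfs filesystem usage /mnt/pool            # shows unallocated space, and data vs. metadata allocation separately
  btrfs balance start -dusage=10 /mnt/pool    # compact data chunks that are <=10% used, returning them to unallocated

The metadata figures are the ones that matter for the delete-needs-space problem Rich describes.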
* Re: [gentoo-user] Hard drive and maximum data percentage.
  From: Michael @ 2025-02-02 11:00 UTC
  To: gentoo-user

On Sunday 2 February 2025 02:07:07 Greenwich Mean Time Rich Freeman wrote:
> On Sat, Feb 1, 2025 at 8:40 PM Dale <rdalek1967@gmail.com> wrote:
> > Rich Freeman wrote:
> > > Now, if you were running btrfs or cephfs or some other exotic filesystem, then it would be a whole different matter,
> >
> > I could see some RAID systems having issues, but not some of the more advanced file systems that are designed to handle large amounts of data.
>
> Those are "RAID-like" systems, which is part of why they struggle when full. Unlike traditional RAID they also don't require identical drives for replication, which can make it tricky when they start to get full and finding blocks that meet the replication requirements is difficult.
>
> With a COW approach like btrfs you also have the issue that altering the metadata requires free space. To delete a file you first write new metadata that deallocates the space for the file, then you update the pointers to make it part of the disk metadata. Since the metadata is stored in a tree, updating a leaf node requires modifying all of its parents up to the root, which requires making new copies of them. It isn't until the entire branch of the tree is copied that you can delete the old version of it. The advantage of this approach is that it is very safe, and it accomplishes the equivalent of full data journaling without actually having to write things more than once. If the operation is aborted, the tree just points at the old metadata and the in-progress copies sit inside free space, ignored by the filesystem, so they simply get overwritten the next time the operation is attempted.
>
> For something like ceph this isn't really much of a downside, since it is intended to be professionally managed. For something like btrfs it seems like more of an issue, as it was intended to be a general-purpose filesystem for desktops/etc., so it would be desirable to make it less likely to break when it runs low on space. However, that's just one of many ways to break btrfs, so... :)
>
> In any case, running out of space is one of those things that becomes more of an issue the more complicated the metadata gets. For something simple like ext4, which just overwrites things in place by default, it isn't a big deal at all.

I've had /var/cache/distfiles on ext4 filling up more than a dozen times, because I forgot to run eclean-dist and didn't get a chance to tweak partitions to accommodate a larger fs in time. Similarly, I've also had / on ext4 filling up on a number of occasions over the years. Both of the ext4 filesystems mentioned above were created with default options, so -m, the reserved blocks percentage for the OS, would have been 5%. I cannot recall ever losing data or ending up with a corrupted fs. Removing some file(s) to create empty space allowed the file which didn't fit before to be written successfully, and that was that. Resuming whatever process was stopped (typically emerge) allowed it to complete.

I also had smaller single btrfs partitions binding up a couple of times. I didn't lose any data, but then again these were stand-alone filesystems, not part of some ill-advised buggy btrfs RAID5 configuration.

I don't deal with data volumes of the size Dale is playing with, so I can't comment on the suitability of different filesystems for such a use case.
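For the distfiles case specifically, the cleanup Michael mentions forgetting is a one-liner from gentoolkit; the flags are as I recall them, so treat them as approximate and check the eclean man page:

  du -sh /var/cache/distfiles          # see how large the source-tarball cache has grown
  eclean-dist --deep --time-limit=2w   # drop tarballs no installed package needs, keeping anything newer than two weeks

Running it from a monthly cron job is a common way to keep / or /var from creeping toward full in the first place.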
* Re: [gentoo-user] Hard drive and maximum data percentage.
  From: Dale @ 2025-02-02 18:08 UTC
  To: gentoo-user

Michael wrote:
> On Sunday 2 February 2025 02:07:07 Greenwich Mean Time Rich Freeman wrote:
>> On Sat, Feb 1, 2025 at 8:40 PM Dale <rdalek1967@gmail.com> wrote:
>>> Rich Freeman wrote:
>>>> Now, if you were running btrfs or cephfs or some other exotic filesystem, then it would be a whole different matter,
>>>
>>> I could see some RAID systems having issues, but not some of the more advanced file systems that are designed to handle large amounts of data.
>>
>> Those are "RAID-like" systems, which is part of why they struggle when full. Unlike traditional RAID they also don't require identical drives for replication, which can make it tricky when they start to get full and finding blocks that meet the replication requirements is difficult.
>>
>> With a COW approach like btrfs you also have the issue that altering the metadata requires free space. To delete a file you first write new metadata that deallocates the space for the file, then you update the pointers to make it part of the disk metadata. Since the metadata is stored in a tree, updating a leaf node requires modifying all of its parents up to the root, which requires making new copies of them. It isn't until the entire branch of the tree is copied that you can delete the old version of it. The advantage of this approach is that it is very safe, and it accomplishes the equivalent of full data journaling without actually having to write things more than once. If the operation is aborted, the tree just points at the old metadata and the in-progress copies sit inside free space, ignored by the filesystem, so they simply get overwritten the next time the operation is attempted.
>>
>> For something like ceph this isn't really much of a downside, since it is intended to be professionally managed. For something like btrfs it seems like more of an issue, as it was intended to be a general-purpose filesystem for desktops/etc., so it would be desirable to make it less likely to break when it runs low on space. However, that's just one of many ways to break btrfs, so... :)
>>
>> In any case, running out of space is one of those things that becomes more of an issue the more complicated the metadata gets. For something simple like ext4, which just overwrites things in place by default, it isn't a big deal at all.
>
> I've had /var/cache/distfiles on ext4 filling up more than a dozen times, because I forgot to run eclean-dist and didn't get a chance to tweak partitions to accommodate a larger fs in time. Similarly, I've also had / on ext4 filling up on a number of occasions over the years. Both of the ext4 filesystems mentioned above were created with default options, so -m, the reserved blocks percentage for the OS, would have been 5%. I cannot recall ever losing data or ending up with a corrupted fs. Removing some file(s) to create empty space allowed the file which didn't fit before to be written successfully, and that was that. Resuming whatever process was stopped (typically emerge) allowed it to complete.
>
> I also had smaller single btrfs partitions binding up a couple of times. I didn't lose any data, but then again these were stand-alone filesystems, not part of some ill-advised buggy btrfs RAID5 configuration.
>
> I don't deal with data volumes of the size Dale is playing with, so I can't comment on the suitability of different filesystems for such a use case.

This is all good info. It's funny, but I think using ext4 was the best choice for this situation. It works well for storing all the files I have, plus I can fill it up pretty full if I have to. Thing is, I could stop the download of some videos if needed, at least until I can get a new drive to expand with. I'll be getting another drive for Crypt next; then I should be good for a while.

Dale

:-) :-)