* [gentoo-user] USB crucial file recovery
@ 2016-08-28 18:49 Grant
2016-08-28 18:57 ` Neil Bothwick
` (2 more replies)
0 siblings, 3 replies; 70+ messages in thread
From: Grant @ 2016-08-28 18:49 UTC
To: Gentoo mailing list
I have a USB stick with a crucial file on it (and only an old backup
elsewhere). It's formatted NTFS because I wanted to be able to open
the file on various Gentoo systems and my research indicated that NTFS
was the best solution.
I decided to copy a 10GB file from a USB hard disk directly to the USB
stick this morning and I ran into errors so I canceled the operation
and now the file manager (thunar) has been stuck for well over an hour
and I'm getting errors like these over and over:
[ 2794.535814] Buffer I/O error on dev sdc1, logical block 2134893,
lost async page write
[ 2794.535819] Buffer I/O error on dev sdc1, logical block 2134894,
lost async page write
[ 2794.535822] Buffer I/O error on dev sdc1, logical block 2134895,
lost async page write
[ 2794.535824] Buffer I/O error on dev sdc1, logical block 2134896,
lost async page write
[ 2794.535826] Buffer I/O error on dev sdc1, logical block 2134897,
lost async page write
[ 2794.535828] Buffer I/O error on dev sdc1, logical block 2134898,
lost async page write
[ 2794.535830] Buffer I/O error on dev sdc1, logical block 2134899,
lost async page write
[ 2794.535832] Buffer I/O error on dev sdc1, logical block 2134900,
lost async page write
[ 2794.535835] Buffer I/O error on dev sdc1, logical block 2134901,
lost async page write
[ 2794.535837] Buffer I/O error on dev sdc1, logical block 2134902,
lost async page write
[ 2842.568843] sd 9:0:0:0: [sdc] tag#0 FAILED Result:
hostbyte=DID_ERROR driverbyte=DRIVER_SENSE
[ 2842.568849] sd 9:0:0:0: [sdc] tag#0 Sense Key : Hardware Error [current]
[ 2842.568852] sd 9:0:0:0: [sdc] tag#0 Add. Sense: No additional sense
information
[ 2842.568857] sd 9:0:0:0: [sdc] tag#0 CDB: Write(10) 2a 00 01 04 a4
58 00 00 f0 00
[ 2842.568859] blk_update_request: I/O error, dev sdc, sector 17081432
[ 2842.568862] buffer_io_error: 20 callbacks suppressed
nmon says sdc is 100% busy but doesn't show any reading or writing. I
once pulled the USB stick in a situation like this and I ended up
having to reformat it which I need to avoid this time since the file
is crucial. What should I do?
- Grant
* Re: [gentoo-user] USB crucial file recovery
2016-08-28 18:49 [gentoo-user] USB crucial file recovery Grant
@ 2016-08-28 18:57 ` Neil Bothwick
2016-08-28 19:12 ` Mick
2016-08-30 19:31 ` [gentoo-user] " R0b0t1
2 siblings, 0 replies; 70+ messages in thread
From: Neil Bothwick @ 2016-08-28 18:57 UTC
To: gentoo-user
On Sun, 28 Aug 2016 11:49:44 -0700, Grant wrote:
> I have a USB stick with a crucial file on it (and only an old backup
> elsewhere). It's formatted NTFS because I wanted to be able to open
> the file on various Gentoo systems and my research indicated that NTFS
> was the best solution.
If it's only to be used on Gentoo (or Linux) systems, why not ext2?
> [ 2794.535837] Buffer I/O error on dev sdc1, logical block 2134902,
> lost async page write
> [ 2842.568843] sd 9:0:0:0: [sdc] tag#0 FAILED Result:
> hostbyte=DID_ERROR driverbyte=DRIVER_SENSE
> [ 2842.568849] sd 9:0:0:0: [sdc] tag#0 Sense Key : Hardware Error
> [current] [ 2842.568852] sd 9:0:0:0: [sdc] tag#0 Add. Sense: No
> additional sense information
> [ 2842.568857] sd 9:0:0:0: [sdc] tag#0 CDB: Write(10) 2a 00 01 04 a4
> 58 00 00 f0 00
> [ 2842.568859] blk_update_request: I/O error, dev sdc, sector 17081432
> [ 2842.568862] buffer_io_error: 20 callbacks suppressed
>
> nmon says sdc is 100% busy but doesn't show any reading or writing. I
> once pulled the USB stick in a situation like this and I ended up
> having to reformat it which I need to avoid this time since the file
> is crucial. What should I do?
That looks horribly like a hardware failure. The first step should be to
see if you can dd the stick to an image file (without mounting it). Then
you can work on the image file, or a copy of it, and try things like
testdisk.
If dd also gives hardware errors, ddrescue may be able to recover some of
it, hopefully including the important bits, but I'm not holding my breath
for you :(
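Something like this, as a rough sketch (device name assumed, check dmesg
for yours, and make sure nothing has it mounted):

# dd if=/dev/sdc of=stick.img bs=64k conv=noerror,sync
# testdisk stick.img

conv=noerror,sync makes dd carry on past read errors and pad the
unreadable blocks with zeros, so the offsets in the image stay correct.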
--
Neil Bothwick
When there's a will, I want to be in it.
* Re: [gentoo-user] USB crucial file recovery
2016-08-28 18:49 [gentoo-user] USB crucial file recovery Grant
2016-08-28 18:57 ` Neil Bothwick
@ 2016-08-28 19:12 ` Mick
2016-08-29 1:24 ` Grant
2016-08-30 19:31 ` [gentoo-user] " R0b0t1
2 siblings, 1 reply; 70+ messages in thread
From: Mick @ 2016-08-28 19:12 UTC
To: gentoo-user
On Sunday 28 Aug 2016 11:49:44 Grant wrote:
> I have a USB stick with a crucial file on it (and only an old backup
> elsewhere). It's formatted NTFS because I wanted to be able to open
> the file on various Gentoo systems and my research indicated that NTFS
> was the best solution.
>
> I decided to copy a 10GB file from a USB hard disk directly to the USB
> stick this morning and I ran into errors so I canceled the operation
> and now the file manager (thunar) has been stuck for well over an hour
> and I'm getting errors like these over and over:
>
> [ 2794.535814] Buffer I/O error on dev sdc1, logical block 2134893,
> lost async page write
> [ 2794.535819] Buffer I/O error on dev sdc1, logical block 2134894,
> lost async page write
> [ 2794.535822] Buffer I/O error on dev sdc1, logical block 2134895,
> lost async page write
> [ 2794.535824] Buffer I/O error on dev sdc1, logical block 2134896,
> lost async page write
> [ 2794.535826] Buffer I/O error on dev sdc1, logical block 2134897,
> lost async page write
> [ 2794.535828] Buffer I/O error on dev sdc1, logical block 2134898,
> lost async page write
> [ 2794.535830] Buffer I/O error on dev sdc1, logical block 2134899,
> lost async page write
> [ 2794.535832] Buffer I/O error on dev sdc1, logical block 2134900,
> lost async page write
> [ 2794.535835] Buffer I/O error on dev sdc1, logical block 2134901,
> lost async page write
> [ 2794.535837] Buffer I/O error on dev sdc1, logical block 2134902,
> lost async page write
> [ 2842.568843] sd 9:0:0:0: [sdc] tag#0 FAILED Result:
> hostbyte=DID_ERROR driverbyte=DRIVER_SENSE
> [ 2842.568849] sd 9:0:0:0: [sdc] tag#0 Sense Key : Hardware Error [current]
> [ 2842.568852] sd 9:0:0:0: [sdc] tag#0 Add. Sense: No additional sense
> information
> [ 2842.568857] sd 9:0:0:0: [sdc] tag#0 CDB: Write(10) 2a 00 01 04 a4
> 58 00 00 f0 00
> [ 2842.568859] blk_update_request: I/O error, dev sdc, sector 17081432
> [ 2842.568862] buffer_io_error: 20 callbacks suppressed
>
> nmon says sdc is 100% busy but doesn't show any reading or writing. I
> once pulled the USB stick in a situation like this and I ended up
> having to reformat it which I need to avoid this time since the file
> is crucial. What should I do?
>
> - Grant
Whatever you do, do NOT try to unplug the USB you were writing to unless you
first manage to successfully unmount it. If you do pull the USB stick
regardless of its current I/O state you will likely corrupt whatever file you
were writing onto, or potentially more.
You could well have a hardware failure here, which manifested itself during
your copying operation. Things you could try:
Run lsof to find out which process is trying to access the USB fs and kill it.
Then see if you can remount it read-only.
Run dd, dcfldd, ddrescue to make an image of the complete USB stick on your
hard drive and then try to recover any lost files with testdisk.
Use any low level recovery tools the manufacturer may offer - they will likely
require MSWindows.
Shut down your OS and disconnect the USB stick, then reboot and try again to
access it, although I would first create an image of the device using any of
the above tools.
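For the lsof/remount step, something along these lines (mount point
assumed, and <PID> is whatever the first command reports):

# lsof /mnt/usbstick
# kill <PID>
# mount -o remount,ro /mnt/usbstick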
Good luck.
--
Regards,
Mick
* Re: [gentoo-user] USB crucial file recovery
2016-08-28 19:12 ` Mick
@ 2016-08-29 1:24 ` Grant
2016-08-29 3:28 ` J. Roeleveld
0 siblings, 1 reply; 70+ messages in thread
From: Grant @ 2016-08-29 1:24 UTC
To: Gentoo mailing list
>> I have a USB stick with a crucial file on it (and only an old backup
>> elsewhere). It's formatted NTFS because I wanted to be able to open
>> the file on various Gentoo systems and my research indicated that NTFS
>> was the best solution.
>>
>> I decided to copy a 10GB file from a USB hard disk directly to the USB
>> stick this morning and I ran into errors so I canceled the operation
>> and now the file manager (thunar) has been stuck for well over an hour
>> and I'm getting errors like these over and over:
>>
>> [ 2794.535814] Buffer I/O error on dev sdc1, logical block 2134893,
>> lost async page write
>> [ 2794.535819] Buffer I/O error on dev sdc1, logical block 2134894,
>> lost async page write
>> [ 2794.535822] Buffer I/O error on dev sdc1, logical block 2134895,
>> lost async page write
>> [ 2794.535824] Buffer I/O error on dev sdc1, logical block 2134896,
>> lost async page write
>> [ 2794.535826] Buffer I/O error on dev sdc1, logical block 2134897,
>> lost async page write
>> [ 2794.535828] Buffer I/O error on dev sdc1, logical block 2134898,
>> lost async page write
>> [ 2794.535830] Buffer I/O error on dev sdc1, logical block 2134899,
>> lost async page write
>> [ 2794.535832] Buffer I/O error on dev sdc1, logical block 2134900,
>> lost async page write
>> [ 2794.535835] Buffer I/O error on dev sdc1, logical block 2134901,
>> lost async page write
>> [ 2794.535837] Buffer I/O error on dev sdc1, logical block 2134902,
>> lost async page write
>> [ 2842.568843] sd 9:0:0:0: [sdc] tag#0 FAILED Result:
>> hostbyte=DID_ERROR driverbyte=DRIVER_SENSE
>> [ 2842.568849] sd 9:0:0:0: [sdc] tag#0 Sense Key : Hardware Error [current]
>> [ 2842.568852] sd 9:0:0:0: [sdc] tag#0 Add. Sense: No additional sense
>> information
>> [ 2842.568857] sd 9:0:0:0: [sdc] tag#0 CDB: Write(10) 2a 00 01 04 a4
>> 58 00 00 f0 00
>> [ 2842.568859] blk_update_request: I/O error, dev sdc, sector 17081432
>> [ 2842.568862] buffer_io_error: 20 callbacks suppressed
>>
>> nmon says sdc is 100% busy but doesn't show any reading or writing. I
>> once pulled the USB stick in a situation like this and I ended up
>> having to reformat it which I need to avoid this time since the file
>> is crucial. What should I do?
>>
>> - Grant
>
> Whatever you do, do NOT try to unplug the USB you were writing to unless you
> first manage to successfully unmount it. If you do pull the USB stick
> regardless of its current I/O state you will likely corrupt whatever file you
> were writing onto, or potentially more.
>
> You could well have a hardware failure here, which manifested itself during
> your copying operation. Things you could try:
>
> Run lsof to find out which process is trying to access the USB fs and kill it.
> Then see if you can remount it read-only.
>
> Run dd, dcfldd, ddrescue to make an image of the complete USB stick on your
> hard drive and then try to recover any lost files with testdisk.
>
> Use any low level recovery tools the manufacturer may offer - they will likely
> require MSWindows.
>
> Shut down your OS and disconnect the USB stick, then reboot and try again to
> access it, although I would first create an image of the device using any of
> the above tools.
dd errored and lsof returned no output at all so I tried to reboot but
even that hung after the first 2 lines of output so I did a hard reset
after waiting an hour or so. Once it came back up I tried to do 'dd
if=/dev/sdb1 of=usbstick' but I got this after awhile:
dd: error reading ‘/dev/sdb1’: Input/output error
16937216+0 records in
16937216+0 records out
8671854592 bytes (8.7 GB) copied, 230.107 s, 37.7 MB/s
It's a 64GB USB stick. dmesg says:
[ 744.729873] sd 8:0:0:0: [sdb] tag#0 FAILED Result: hostbyte=DID_OK
driverbyte=DRIVER_SENSE
[ 744.729879] sd 8:0:0:0: [sdb] tag#0 Sense Key : Medium Error [current]
[ 744.729883] sd 8:0:0:0: [sdb] tag#0 Add. Sense: Unrecovered read error
[ 744.729886] sd 8:0:0:0: [sdb] tag#0 CDB: Read(10) 28 00 01 02 78 e0
00 00 f0 00
[ 744.729889] blk_update_request: critical medium error, dev sdb,
sector 16939232
[ 744.763468] sd 8:0:0:0: [sdb] tag#0 FAILED Result: hostbyte=DID_OK
driverbyte=DRIVER_SENSE
[ 744.763472] sd 8:0:0:0: [sdb] tag#0 Sense Key : Medium Error [current]
[ 744.763474] sd 8:0:0:0: [sdb] tag#0 Add. Sense: Unrecovered read error
[ 744.763478] sd 8:0:0:0: [sdb] tag#0 CDB: Read(10) 28 00 01 02 79 00
00 00 04 00
[ 744.763480] blk_update_request: critical medium error, dev sdb,
sector 16939264
[ 744.763482] Buffer I/O error on dev sdb1, logical block 4234304,
async page read
[ 744.786743] sd 8:0:0:0: [sdb] tag#0 FAILED Result: hostbyte=DID_OK
driverbyte=DRIVER_SENSE
[ 744.786747] sd 8:0:0:0: [sdb] tag#0 Sense Key : Medium Error [current]
[ 744.786750] sd 8:0:0:0: [sdb] tag#0 Add. Sense: Unrecovered read error
[ 744.786753] sd 8:0:0:0: [sdb] tag#0 CDB: Read(10) 28 00 01 02 79 04
00 00 04 00
[ 744.786755] blk_update_request: critical medium error, dev sdb,
sector 16939268
[ 744.786758] Buffer I/O error on dev sdb1, logical block 4234305,
async page read
I haven't tried to mount it yet. Any suggestions?
- Grant
* Re: [gentoo-user] USB crucial file recovery
2016-08-29 1:24 ` Grant
@ 2016-08-29 3:28 ` J. Roeleveld
2016-08-30 0:40 ` Grant
0 siblings, 1 reply; 70+ messages in thread
From: J. Roeleveld @ 2016-08-29 3:28 UTC
To: gentoo-user
On August 29, 2016 3:24:18 AM GMT+02:00, Grant <emailgrant@gmail.com> wrote:
>>> I have a USB stick with a crucial file on it (and only an old backup
>>> elsewhere). It's formatted NTFS because I wanted to be able to open
>>> the file on various Gentoo systems and my research indicated that NTFS
>>> was the best solution.
>>>
>>> I decided to copy a 10GB file from a USB hard disk directly to the USB
>>> stick this morning and I ran into errors so I canceled the operation
>>> and now the file manager (thunar) has been stuck for well over an hour
>>> and I'm getting errors like these over and over:
>>>
>>> [ 2794.535814] Buffer I/O error on dev sdc1, logical block 2134893,
>>> lost async page write
>>> [ 2794.535819] Buffer I/O error on dev sdc1, logical block 2134894,
>>> lost async page write
>>> [ 2794.535822] Buffer I/O error on dev sdc1, logical block 2134895,
>>> lost async page write
>>> [ 2794.535824] Buffer I/O error on dev sdc1, logical block 2134896,
>>> lost async page write
>>> [ 2794.535826] Buffer I/O error on dev sdc1, logical block 2134897,
>>> lost async page write
>>> [ 2794.535828] Buffer I/O error on dev sdc1, logical block 2134898,
>>> lost async page write
>>> [ 2794.535830] Buffer I/O error on dev sdc1, logical block 2134899,
>>> lost async page write
>>> [ 2794.535832] Buffer I/O error on dev sdc1, logical block 2134900,
>>> lost async page write
>>> [ 2794.535835] Buffer I/O error on dev sdc1, logical block 2134901,
>>> lost async page write
>>> [ 2794.535837] Buffer I/O error on dev sdc1, logical block 2134902,
>>> lost async page write
>>> [ 2842.568843] sd 9:0:0:0: [sdc] tag#0 FAILED Result:
>>> hostbyte=DID_ERROR driverbyte=DRIVER_SENSE
>>> [ 2842.568849] sd 9:0:0:0: [sdc] tag#0 Sense Key : Hardware Error [current]
>>> [ 2842.568852] sd 9:0:0:0: [sdc] tag#0 Add. Sense: No additional sense
>>> information
>>> [ 2842.568857] sd 9:0:0:0: [sdc] tag#0 CDB: Write(10) 2a 00 01 04 a4
>>> 58 00 00 f0 00
>>> [ 2842.568859] blk_update_request: I/O error, dev sdc, sector 17081432
>>> [ 2842.568862] buffer_io_error: 20 callbacks suppressed
>>>
>>> nmon says sdc is 100% busy but doesn't show any reading or writing. I
>>> once pulled the USB stick in a situation like this and I ended up
>>> having to reformat it which I need to avoid this time since the file
>>> is crucial. What should I do?
>>>
>>> - Grant
>>
>> Whatever you do, do NOT try to unplug the USB you were writing to unless you
>> first manage to successfully unmount it. If you do pull the USB stick
>> regardless of its current I/O state you will likely corrupt whatever file you
>> were writing onto, or potentially more.
>>
>> You could well have a hardware failure here, which manifested itself during
>> your copying operation. Things you could try:
>>
>> Run lsof to find out which process is trying to access the USB fs and kill it.
>> Then see if you can remount it read-only.
>>
>> Run dd, dcfldd, ddrescue to make an image of the complete USB stick on your
>> hard drive and then try to recover any lost files with testdisk.
>>
>> Use any low level recovery tools the manufacturer may offer - they will likely
>> require MSWindows.
>>
>> Shut down your OS and disconnect the USB stick, then reboot and try again to
>> access it, although I would first create an image of the device using any of
>> the above tools.
>
>
>dd errored and lsof returned no output at all so I tried to reboot but
>even that hung after the first 2 lines of output so I did a hard reset
>after waiting an hour or so. Once it came back up I tried to do 'dd
>if=/dev/sdb1 of=usbstick' but I got this after awhile:
>
>dd: error reading ‘/dev/sdb1’: Input/output error
>16937216+0 records in
>16937216+0 records out
>8671854592 bytes (8.7 GB) copied, 230.107 s, 37.7 MB/s
>
>It's a 64GB USB stick. dmesg says:
>
>[ 744.729873] sd 8:0:0:0: [sdb] tag#0 FAILED Result: hostbyte=DID_OK
>driverbyte=DRIVER_SENSE
>[ 744.729879] sd 8:0:0:0: [sdb] tag#0 Sense Key : Medium Error
>[current]
>[ 744.729883] sd 8:0:0:0: [sdb] tag#0 Add. Sense: Unrecovered read
>error
>[ 744.729886] sd 8:0:0:0: [sdb] tag#0 CDB: Read(10) 28 00 01 02 78 e0
>00 00 f0 00
>[ 744.729889] blk_update_request: critical medium error, dev sdb,
>sector 16939232
>[ 744.763468] sd 8:0:0:0: [sdb] tag#0 FAILED Result: hostbyte=DID_OK
>driverbyte=DRIVER_SENSE
>[ 744.763472] sd 8:0:0:0: [sdb] tag#0 Sense Key : Medium Error
>[current]
>[ 744.763474] sd 8:0:0:0: [sdb] tag#0 Add. Sense: Unrecovered read
>error
>[ 744.763478] sd 8:0:0:0: [sdb] tag#0 CDB: Read(10) 28 00 01 02 79 00
>00 00 04 00
>[ 744.763480] blk_update_request: critical medium error, dev sdb,
>sector 16939264
>[ 744.763482] Buffer I/O error on dev sdb1, logical block 4234304,
>async page read
>[ 744.786743] sd 8:0:0:0: [sdb] tag#0 FAILED Result: hostbyte=DID_OK
>driverbyte=DRIVER_SENSE
>[ 744.786747] sd 8:0:0:0: [sdb] tag#0 Sense Key : Medium Error
>[current]
>[ 744.786750] sd 8:0:0:0: [sdb] tag#0 Add. Sense: Unrecovered read
>error
>[ 744.786753] sd 8:0:0:0: [sdb] tag#0 CDB: Read(10) 28 00 01 02 79 04
>00 00 04 00
>[ 744.786755] blk_update_request: critical medium error, dev sdb,
>sector 16939268
>[ 744.786758] Buffer I/O error on dev sdb1, logical block 4234305,
>async page read
>
>I haven't tried to mount it yet. Any suggestions?
>
>- Grant
Try ddrescue.
It tries to skip the really bad parts.
Then try data recovery on copies of the resulting image file.
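A typical invocation would be something like this (device name assumed):

# ddrescue -n /dev/sdb usb.img usb.map
# ddrescue -d -r3 /dev/sdb usb.img usb.map

The first pass skips the hard-to-read areas, the second retries the bad
sectors with direct access. The mapfile lets you stop and resume without
re-reading what was already rescued.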
--
Joost
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.
* Re: [gentoo-user] USB crucial file recovery
2016-08-29 3:28 ` J. Roeleveld
@ 2016-08-30 0:40 ` Grant
2016-08-30 0:51 ` Grant
0 siblings, 1 reply; 70+ messages in thread
From: Grant @ 2016-08-30 0:40 UTC
To: Gentoo mailing list
>>>> I have a USB stick with a crucial file on it (and only an old backup
>>>> elsewhere). It's formatted NTFS because I wanted to be able to open
>>>> the file on various Gentoo systems and my research indicated that NTFS
>>>> was the best solution.
>>>>
>>>> I decided to copy a 10GB file from a USB hard disk directly to the USB
>>>> stick this morning and I ran into errors so I canceled the operation
>>>> and now the file manager (thunar) has been stuck for well over an hour
>>>> and I'm getting errors like these over and over:
>>>>
>>>> [ 2794.535814] Buffer I/O error on dev sdc1, logical block 2134893,
>>>> lost async page write
>>>> [ 2794.535819] Buffer I/O error on dev sdc1, logical block 2134894,
>>>> lost async page write
>>>> [ 2794.535822] Buffer I/O error on dev sdc1, logical block 2134895,
>>>> lost async page write
>>>> [ 2794.535824] Buffer I/O error on dev sdc1, logical block 2134896,
>>>> lost async page write
>>>> [ 2794.535826] Buffer I/O error on dev sdc1, logical block 2134897,
>>>> lost async page write
>>>> [ 2794.535828] Buffer I/O error on dev sdc1, logical block 2134898,
>>>> lost async page write
>>>> [ 2794.535830] Buffer I/O error on dev sdc1, logical block 2134899,
>>>> lost async page write
>>>> [ 2794.535832] Buffer I/O error on dev sdc1, logical block 2134900,
>>>> lost async page write
>>>> [ 2794.535835] Buffer I/O error on dev sdc1, logical block 2134901,
>>>> lost async page write
>>>> [ 2794.535837] Buffer I/O error on dev sdc1, logical block 2134902,
>>>> lost async page write
>>>> [ 2842.568843] sd 9:0:0:0: [sdc] tag#0 FAILED Result:
>>>> hostbyte=DID_ERROR driverbyte=DRIVER_SENSE
>>>> [ 2842.568849] sd 9:0:0:0: [sdc] tag#0 Sense Key : Hardware Error [current]
>>>> [ 2842.568852] sd 9:0:0:0: [sdc] tag#0 Add. Sense: No additional sense
>>>> information
>>>> [ 2842.568857] sd 9:0:0:0: [sdc] tag#0 CDB: Write(10) 2a 00 01 04 a4
>>>> 58 00 00 f0 00
>>>> [ 2842.568859] blk_update_request: I/O error, dev sdc, sector 17081432
>>>> [ 2842.568862] buffer_io_error: 20 callbacks suppressed
>>>>
>>>> nmon says sdc is 100% busy but doesn't show any reading or writing. I
>>>> once pulled the USB stick in a situation like this and I ended up
>>>> having to reformat it which I need to avoid this time since the file
>>>> is crucial. What should I do?
>>>>
>>>> - Grant
>>>
>>> Whatever you do, do NOT try to unplug the USB you were writing to unless you
>>> first manage to successfully unmount it. If you do pull the USB stick
>>> regardless of its current I/O state you will likely corrupt whatever file you
>>> were writing onto, or potentially more.
>>>
>>> You could well have a hardware failure here, which manifested itself during
>>> your copying operation. Things you could try:
>>>
>>> Run lsof to find out which process is trying to access the USB fs and kill it.
>>> Then see if you can remount it read-only.
>>>
>>> Run dd, dcfldd, ddrescue to make an image of the complete USB stick on your
>>> hard drive and then try to recover any lost files with testdisk.
>>>
>>> Use any low level recovery tools the manufacturer may offer - they will likely
>>> require MSWindows.
>>>
>>> Shut down your OS and disconnect the USB stick, then reboot and try again to
>>> access it, although I would first create an image of the device using any of
>>> the above tools.
>>
>>
>>dd errored and lsof returned no output at all so I tried to reboot but
>>even that hung after the first 2 lines of output so I did a hard reset
>>after waiting an hour or so. Once it came back up I tried to do 'dd
>>if=/dev/sdb1 of=usbstick' but I got this after awhile:
>>
>>dd: error reading ‘/dev/sdb1’: Input/output error
>>16937216+0 records in
>>16937216+0 records out
>>8671854592 bytes (8.7 GB) copied, 230.107 s, 37.7 MB/s
>>
>>It's a 64GB USB stick. dmesg says:
>>
>>[ 744.729873] sd 8:0:0:0: [sdb] tag#0 FAILED Result: hostbyte=DID_OK
>>driverbyte=DRIVER_SENSE
>>[ 744.729879] sd 8:0:0:0: [sdb] tag#0 Sense Key : Medium Error
>>[current]
>>[ 744.729883] sd 8:0:0:0: [sdb] tag#0 Add. Sense: Unrecovered read
>>error
>>[ 744.729886] sd 8:0:0:0: [sdb] tag#0 CDB: Read(10) 28 00 01 02 78 e0
>>00 00 f0 00
>>[ 744.729889] blk_update_request: critical medium error, dev sdb,
>>sector 16939232
>>[ 744.763468] sd 8:0:0:0: [sdb] tag#0 FAILED Result: hostbyte=DID_OK
>>driverbyte=DRIVER_SENSE
>>[ 744.763472] sd 8:0:0:0: [sdb] tag#0 Sense Key : Medium Error
>>[current]
>>[ 744.763474] sd 8:0:0:0: [sdb] tag#0 Add. Sense: Unrecovered read
>>error
>>[ 744.763478] sd 8:0:0:0: [sdb] tag#0 CDB: Read(10) 28 00 01 02 79 00
>>00 00 04 00
>>[ 744.763480] blk_update_request: critical medium error, dev sdb,
>>sector 16939264
>>[ 744.763482] Buffer I/O error on dev sdb1, logical block 4234304,
>>async page read
>>[ 744.786743] sd 8:0:0:0: [sdb] tag#0 FAILED Result: hostbyte=DID_OK
>>driverbyte=DRIVER_SENSE
>>[ 744.786747] sd 8:0:0:0: [sdb] tag#0 Sense Key : Medium Error
>>[current]
>>[ 744.786750] sd 8:0:0:0: [sdb] tag#0 Add. Sense: Unrecovered read
>>error
>>[ 744.786753] sd 8:0:0:0: [sdb] tag#0 CDB: Read(10) 28 00 01 02 79 04
>>00 00 04 00
>>[ 744.786755] blk_update_request: critical medium error, dev sdb,
>>sector 16939268
>>[ 744.786758] Buffer I/O error on dev sdb1, logical block 4234305,
>>async page read
>>
>>I haven't tried to mount it yet. Any suggestions?
>>
>>- Grant
>
> Try ddrescue.
> It tries to skip the really bad parts.
>
> Then try data recovery on copies of the resulting image file.
I did:
# ddrescue -d -r3 /dev/sdb usb.img usb.log
GNU ddrescue 1.20
Press Ctrl-C to interrupt
rescued: 62742 MB, errsize: 32768 B, current rate: 0 B/s
ipos: 8672 MB, errors: 1, average rate: 14878 kB/s
opos: 8672 MB, run time: 1h 10m 17s, remaining time: n/a
time since last successful read: 5s
# mount -o loop,ro usb.img /mnt/usbstick
mount: wrong fs type, bad option, bad superblock on /dev/loop0,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so.
# mount -o loop,ro -t ntfs usb.img /mnt/usbstick
NTFS signature is missing.
Failed to mount '/dev/loop0': Invalid argument
The device '/dev/loop0' doesn't seem to have a valid NTFS.
Maybe the wrong device is used? Or the whole disk instead of a
partition (e.g. /dev/sda, not /dev/sda1)? Or the other way around?
How else can I get my file from the ddrescue image of the USB stick?
- Grant
* Re: [gentoo-user] USB crucial file recovery
2016-08-30 0:40 ` Grant
@ 2016-08-30 0:51 ` Grant
2016-08-30 5:35 ` Azamat Hackimov
` (4 more replies)
0 siblings, 5 replies; 70+ messages in thread
From: Grant @ 2016-08-30 0:51 UTC
To: Gentoo mailing list
>>>>> I have a USB stick with a crucial file on it (and only an old backup
>>>>> elsewhere). It's formatted NTFS because I wanted to be able to open
>>>>> the file on various Gentoo systems and my research indicated that NTFS
>>>>> was the best solution.
>>>>>
>>>>> I decided to copy a 10GB file from a USB hard disk directly to the USB
>>>>> stick this morning and I ran into errors so I canceled the operation
>>>>> and now the file manager (thunar) has been stuck for well over an hour
>>>>> and I'm getting errors like these over and over:
>>>>>
>>>>> [ 2794.535814] Buffer I/O error on dev sdc1, logical block 2134893,
>>>>> lost async page write
>>>>> [ 2794.535819] Buffer I/O error on dev sdc1, logical block 2134894,
>>>>> lost async page write
>>>>> [ 2794.535822] Buffer I/O error on dev sdc1, logical block 2134895,
>>>>> lost async page write
>>>>> [ 2794.535824] Buffer I/O error on dev sdc1, logical block 2134896,
>>>>> lost async page write
>>>>> [ 2794.535826] Buffer I/O error on dev sdc1, logical block 2134897,
>>>>> lost async page write
>>>>> [ 2794.535828] Buffer I/O error on dev sdc1, logical block 2134898,
>>>>> lost async page write
>>>>> [ 2794.535830] Buffer I/O error on dev sdc1, logical block 2134899,
>>>>> lost async page write
>>>>> [ 2794.535832] Buffer I/O error on dev sdc1, logical block 2134900,
>>>>> lost async page write
>>>>> [ 2794.535835] Buffer I/O error on dev sdc1, logical block 2134901,
>>>>> lost async page write
>>>>> [ 2794.535837] Buffer I/O error on dev sdc1, logical block 2134902,
>>>>> lost async page write
>>>>> [ 2842.568843] sd 9:0:0:0: [sdc] tag#0 FAILED Result:
>>>>> hostbyte=DID_ERROR driverbyte=DRIVER_SENSE
>>>>> [ 2842.568849] sd 9:0:0:0: [sdc] tag#0 Sense Key : Hardware Error [current]
>>>>> [ 2842.568852] sd 9:0:0:0: [sdc] tag#0 Add. Sense: No additional sense
>>>>> information
>>>>> [ 2842.568857] sd 9:0:0:0: [sdc] tag#0 CDB: Write(10) 2a 00 01 04 a4
>>>>> 58 00 00 f0 00
>>>>> [ 2842.568859] blk_update_request: I/O error, dev sdc, sector 17081432
>>>>> [ 2842.568862] buffer_io_error: 20 callbacks suppressed
>>>>>
>>>>> nmon says sdc is 100% busy but doesn't show any reading or writing. I
>>>>> once pulled the USB stick in a situation like this and I ended up
>>>>> having to reformat it which I need to avoid this time since the file
>>>>> is crucial. What should I do?
>>>>>
>>>>> - Grant
>>>>
>>>> Whatever you do, do NOT try to unplug the USB you were writing to unless you
>>>> first manage to successfully unmount it. If you do pull the USB stick
>>>> regardless of its current I/O state you will likely corrupt whatever file you
>>>> were writing onto, or potentially more.
>>>>
>>>> You could well have a hardware failure here, which manifested itself during
>>>> your copying operation. Things you could try:
>>>>
>>>> Run lsof to find out which process is trying to access the USB fs and kill it.
>>>> Then see if you can remount it read-only.
>>>>
>>>> Run dd, dcfldd, ddrescue to make an image of the complete USB stick on your
>>>> hard drive and then try to recover any lost files with testdisk.
>>>>
>>>> Use any low level recovery tools the manufacturer may offer - they will likely
>>>> require MSWindows.
>>>>
>>>> Shut down your OS and disconnect the USB stick, then reboot and try again to
>>>> access it, although I would first create an image of the device using any of
>>>> the above tools.
>>>
>>>
>>>dd errored and lsof returned no output at all so I tried to reboot but
>>>even that hung after the first 2 lines of output so I did a hard reset
>>>after waiting an hour or so. Once it came back up I tried to do 'dd
>>>if=/dev/sdb1 of=usbstick' but I got this after awhile:
>>>
>>>dd: error reading ‘/dev/sdb1’: Input/output error
>>>16937216+0 records in
>>>16937216+0 records out
>>>8671854592 bytes (8.7 GB) copied, 230.107 s, 37.7 MB/s
>>>
>>>It's a 64GB USB stick. dmesg says:
>>>
>>>[ 744.729873] sd 8:0:0:0: [sdb] tag#0 FAILED Result: hostbyte=DID_OK
>>>driverbyte=DRIVER_SENSE
>>>[ 744.729879] sd 8:0:0:0: [sdb] tag#0 Sense Key : Medium Error
>>>[current]
>>>[ 744.729883] sd 8:0:0:0: [sdb] tag#0 Add. Sense: Unrecovered read
>>>error
>>>[ 744.729886] sd 8:0:0:0: [sdb] tag#0 CDB: Read(10) 28 00 01 02 78 e0
>>>00 00 f0 00
>>>[ 744.729889] blk_update_request: critical medium error, dev sdb,
>>>sector 16939232
>>>[ 744.763468] sd 8:0:0:0: [sdb] tag#0 FAILED Result: hostbyte=DID_OK
>>>driverbyte=DRIVER_SENSE
>>>[ 744.763472] sd 8:0:0:0: [sdb] tag#0 Sense Key : Medium Error
>>>[current]
>>>[ 744.763474] sd 8:0:0:0: [sdb] tag#0 Add. Sense: Unrecovered read
>>>error
>>>[ 744.763478] sd 8:0:0:0: [sdb] tag#0 CDB: Read(10) 28 00 01 02 79 00
>>>00 00 04 00
>>>[ 744.763480] blk_update_request: critical medium error, dev sdb,
>>>sector 16939264
>>>[ 744.763482] Buffer I/O error on dev sdb1, logical block 4234304,
>>>async page read
>>>[ 744.786743] sd 8:0:0:0: [sdb] tag#0 FAILED Result: hostbyte=DID_OK
>>>driverbyte=DRIVER_SENSE
>>>[ 744.786747] sd 8:0:0:0: [sdb] tag#0 Sense Key : Medium Error
>>>[current]
>>>[ 744.786750] sd 8:0:0:0: [sdb] tag#0 Add. Sense: Unrecovered read
>>>error
>>>[ 744.786753] sd 8:0:0:0: [sdb] tag#0 CDB: Read(10) 28 00 01 02 79 04
>>>00 00 04 00
>>>[ 744.786755] blk_update_request: critical medium error, dev sdb,
>>>sector 16939268
>>>[ 744.786758] Buffer I/O error on dev sdb1, logical block 4234305,
>>>async page read
>>>
>>>I haven't tried to mount it yet. Any suggestions?
>>>
>>>- Grant
>>
>> Try ddrescue.
>> It tries to skip the really bad parts.
>>
>> Then try data recovery on copies of the resulting image file.
>
>
> I did:
>
> # ddrescue -d -r3 /dev/sdb usb.img usb.log
> GNU ddrescue 1.20
> Press Ctrl-C to interrupt
> rescued: 62742 MB, errsize: 32768 B, current rate: 0 B/s
> ipos: 8672 MB, errors: 1, average rate: 14878 kB/s
> opos: 8672 MB, run time: 1h 10m 17s, remaining time: n/a
> time since last successful read: 5s
>
> # mount -o loop,ro usb.img /mnt/usbstick
> mount: wrong fs type, bad option, bad superblock on /dev/loop0,
> missing codepage or helper program, or other error
>
> In some cases useful info is found in syslog - try
> dmesg | tail or so.
>
> # mount -o loop,ro -t ntfs usb.img /mnt/usbstick
> NTFS signature is missing.
> Failed to mount '/dev/loop0': Invalid argument
> The device '/dev/loop0' doesn't seem to have a valid NTFS.
> Maybe the wrong device is used? Or the whole disk instead of a
> partition (e.g. /dev/sda, not /dev/sda1)? Or the other way around?
>
> How else can I get my file from the ddrescue image of the USB stick?
>
> - Grant
Ah, I got it, I just needed to specify the offset when mounting.
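For the archives, something like this (a start sector of 2048 is assumed
here; fdisk -l shows the real one):

# fdisk -l usb.img
# mount -o loop,ro,offset=$((2048*512)) usb.img /mnt/usbstick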
Thank you so much everyone. Many hours of work went into the file I
just recovered.
So I'm done with NTFS forever. Will ext2 somehow allow me to use the
USB stick across Gentoo systems without permission/ownership problems?
- Grant
* Re: [gentoo-user] USB crucial file recovery
2016-08-30 0:51 ` Grant
@ 2016-08-30 5:35 ` Azamat Hackimov
2016-08-30 14:09 ` Rich Freeman
2016-08-30 5:46 ` Mick
` (3 subsequent siblings)
4 siblings, 1 reply; 70+ messages in thread
From: Azamat Hackimov @ 2016-08-30 5:35 UTC
To: gentoo-user
2016-08-30 5:51 GMT+05:00 Grant <emailgrant@gmail.com>:
> Ah, I got it, I just needed to specify the offset when mounting.
> Thank you so much everyone. Many hours of work went into the file I
> just recovered.
>
> So I'm done with NTFS forever. Will ext2 somehow allow me to use the
> USB stick across Gentoo systems without permission/ownership problems?
>
> - Grant
>
>
I would recommend using the F2FS filesystem, since you have only Linux systems.
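For example (device name assumed, and the kernel needs CONFIG_F2FS_FS):

# emerge sys-fs/f2fs-tools
# mkfs.f2fs -l usbstick /dev/sdc1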
--
From Siberia with Love!
* Re: [gentoo-user] USB crucial file recovery
2016-08-30 0:51 ` Grant
2016-08-30 5:35 ` Azamat Hackimov
@ 2016-08-30 5:46 ` Mick
2016-08-30 7:38 ` Neil Bothwick
2016-08-30 7:41 ` [gentoo-user] " Neil Bothwick
` (2 subsequent siblings)
4 siblings, 1 reply; 70+ messages in thread
From: Mick @ 2016-08-30 5:46 UTC
To: gentoo-user
On Monday 29 Aug 2016 17:51:19 Grant wrote:
> >>>>> I have a USB stick with a crucial file on it (and only an old backup
> >>>>> elsewhere). It's formatted NTFS because I wanted to be able to open
> >>>>> the file on various Gentoo systems and my research indicated that NTFS
> >>>>> was the best solution.
> >>>>>
> >>>>> I decided to copy a 10GB file from a USB hard disk directly to the USB
> >>>>> stick this morning and I ran into errors so I canceled the operation
> >>>>> and now the file manager (thunar) has been stuck for well over an hour
> >>>>> and I'm getting errors like these over and over:
> >>>>>
> >>>>> [ 2794.535814] Buffer I/O error on dev sdc1, logical block 2134893,
> >>>>> lost async page write
> >>>>> [ 2794.535819] Buffer I/O error on dev sdc1, logical block 2134894,
> >>>>> lost async page write
> >>>>> [ 2794.535822] Buffer I/O error on dev sdc1, logical block 2134895,
> >>>>> lost async page write
> >>>>> [ 2794.535824] Buffer I/O error on dev sdc1, logical block 2134896,
> >>>>> lost async page write
> >>>>> [ 2794.535826] Buffer I/O error on dev sdc1, logical block 2134897,
> >>>>> lost async page write
> >>>>> [ 2794.535828] Buffer I/O error on dev sdc1, logical block 2134898,
> >>>>> lost async page write
> >>>>> [ 2794.535830] Buffer I/O error on dev sdc1, logical block 2134899,
> >>>>> lost async page write
> >>>>> [ 2794.535832] Buffer I/O error on dev sdc1, logical block 2134900,
> >>>>> lost async page write
> >>>>> [ 2794.535835] Buffer I/O error on dev sdc1, logical block 2134901,
> >>>>> lost async page write
> >>>>> [ 2794.535837] Buffer I/O error on dev sdc1, logical block 2134902,
> >>>>> lost async page write
> >>>>> [ 2842.568843] sd 9:0:0:0: [sdc] tag#0 FAILED Result:
> >>>>> hostbyte=DID_ERROR driverbyte=DRIVER_SENSE
> >>>>> [ 2842.568849] sd 9:0:0:0: [sdc] tag#0 Sense Key : Hardware Error [current]
> >>>>> [ 2842.568852] sd 9:0:0:0: [sdc] tag#0 Add. Sense: No additional sense
> >>>>> information
> >>>>> [ 2842.568857] sd 9:0:0:0: [sdc] tag#0 CDB: Write(10) 2a 00 01 04 a4
> >>>>> 58 00 00 f0 00
> >>>>> [ 2842.568859] blk_update_request: I/O error, dev sdc, sector 17081432
> >>>>> [ 2842.568862] buffer_io_error: 20 callbacks suppressed
> >>>>>
> >>>>> nmon says sdc is 100% busy but doesn't show any reading or writing. I
> >>>>> once pulled the USB stick in a situation like this and I ended up
> >>>>> having to reformat it which I need to avoid this time since the file
> >>>>> is crucial. What should I do?
> >>>>>
> >>>>> - Grant
> >>>>
> >>>> Whatever you do, do NOT try to unplug the USB you were writing to unless you
> >>>> first manage to successfully unmount it. If you do pull the USB stick
> >>>> regardless of its current I/O state you will likely corrupt whatever file you
> >>>> were writing onto, or potentially more.
> >>>>
> >>>> You could well have a hardware failure here, which manifested itself during
> >>>> your copying operation. Things you could try:
> >>>>
> >>>> Run lsof to find out which process is trying to access the USB fs and kill it.
> >>>> Then see if you can remount it read-only.
> >>>>
> >>>> Run dd, dcfldd, ddrescue to make an image of the complete USB stick on your
> >>>> hard drive and then try to recover any lost files with testdisk.
> >>>>
> >>>> Use any low level recovery tools the manufacturer may offer - they will likely
> >>>> require MSWindows.
> >>>>
> >>>> Shut down your OS and disconnect the USB stick, then reboot and try again to
> >>>> access it, although I would first create an image of the device using any of
> >>>> the above tools.
> >>>
> >>>dd errored and lsof returned no output at all so I tried to reboot but
> >>>even that hung after the first 2 lines of output so I did a hard reset
> >>>after waiting an hour or so. Once it came back up I tried to do 'dd
> >>>if=/dev/sdb1 of=usbstick' but I got this after awhile:
> >>>
> >>>dd: error reading ‘/dev/sdb1’: Input/output error
> >>>16937216+0 records in
> >>>16937216+0 records out
> >>>8671854592 bytes (8.7 GB) copied, 230.107 s, 37.7 MB/s
> >>>
> >>>It's a 64GB USB stick. dmesg says:
> >>>
> >>>[ 744.729873] sd 8:0:0:0: [sdb] tag#0 FAILED Result: hostbyte=DID_OK
> >>>driverbyte=DRIVER_SENSE
> >>>[ 744.729879] sd 8:0:0:0: [sdb] tag#0 Sense Key : Medium Error
> >>>[current]
> >>>[ 744.729883] sd 8:0:0:0: [sdb] tag#0 Add. Sense: Unrecovered read
> >>>error
> >>>[ 744.729886] sd 8:0:0:0: [sdb] tag#0 CDB: Read(10) 28 00 01 02 78 e0
> >>>00 00 f0 00
> >>>[ 744.729889] blk_update_request: critical medium error, dev sdb,
> >>>sector 16939232
> >>>[ 744.763468] sd 8:0:0:0: [sdb] tag#0 FAILED Result: hostbyte=DID_OK
> >>>driverbyte=DRIVER_SENSE
> >>>[ 744.763472] sd 8:0:0:0: [sdb] tag#0 Sense Key : Medium Error
> >>>[current]
> >>>[ 744.763474] sd 8:0:0:0: [sdb] tag#0 Add. Sense: Unrecovered read
> >>>error
> >>>[ 744.763478] sd 8:0:0:0: [sdb] tag#0 CDB: Read(10) 28 00 01 02 79 00
> >>>00 00 04 00
> >>>[ 744.763480] blk_update_request: critical medium error, dev sdb,
> >>>sector 16939264
> >>>[ 744.763482] Buffer I/O error on dev sdb1, logical block 4234304,
> >>>async page read
> >>>[ 744.786743] sd 8:0:0:0: [sdb] tag#0 FAILED Result: hostbyte=DID_OK
> >>>driverbyte=DRIVER_SENSE
> >>>[ 744.786747] sd 8:0:0:0: [sdb] tag#0 Sense Key : Medium Error
> >>>[current]
> >>>[ 744.786750] sd 8:0:0:0: [sdb] tag#0 Add. Sense: Unrecovered read
> >>>error
> >>>[ 744.786753] sd 8:0:0:0: [sdb] tag#0 CDB: Read(10) 28 00 01 02 79 04
> >>>00 00 04 00
> >>>[ 744.786755] blk_update_request: critical medium error, dev sdb,
> >>>sector 16939268
> >>>[ 744.786758] Buffer I/O error on dev sdb1, logical block 4234305,
> >>>async page read
> >>>
> >>>I haven't tried to mount it yet. Any suggestions?
> >>>
> >>>- Grant
> >>>
> >> Try ddrescue.
> >> It tries to skip the really bad parts.
> >>
> >> Then try data recovery on copies of the resulting image file.
> >
> > I did:
> >
> > # ddrescue -d -r3 /dev/sdb usb.img usb.log
> > GNU ddrescue 1.20
> > Press Ctrl-C to interrupt
> > rescued: 62742 MB, errsize: 32768 B, current rate: 0 B/s
> >
> > ipos: 8672 MB, errors: 1, average rate: 14878 kB/s
> > opos: 8672 MB, run time: 1h 10m 17s, remaining time: n/a
> >
> > time since last successful read: 5s
> >
> > # mount -o loop,ro usb.img /mnt/usbstick
> > mount: wrong fs type, bad option, bad superblock on /dev/loop0,
> >
> > missing codepage or helper program, or other error
> >
> > In some cases useful info is found in syslog - try
> > dmesg | tail or so.
> >
> > # mount -o loop,ro -t ntfs usb.img /mnt/usbstick
> > NTFS signature is missing.
> > Failed to mount '/dev/loop0': Invalid argument
> > The device '/dev/loop0' doesn't seem to have a valid NTFS.
> > Maybe the wrong device is used? Or the whole disk instead of a
> > partition (e.g. /dev/sda, not /dev/sda1)? Or the other way around?
> >
> > How else can I get my file from the ddrescue image of the USB stick?
> >
> > - Grant
>
> Ah, I got it, I just needed to specify the offset when mounting.
> Thank you so much everyone. Many hours of work went into the file I
> just recovered.
>
> So I'm done with NTFS forever. Will ext2 somehow allow me to use the
> USB stick across Gentoo systems without permission/ownership problems?
>
> - Grant
ext2 will work, but you'll have to mount it and chmod -R 0777 it, or only
root will be able to access it. There are also the vfat and exfat
filesystems; the latter can be accessed with sys-fs/fuse-exfat on Linux. I
think if you're only storing files smaller than 4GB you'd better stay with
vfat, which has a more space-efficient 12KB cluster size.
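A rough sketch of those options (device name assumed):

# mkfs.ext2 /dev/sdc1        (then chmod -R 0777 the mounted fs)
# mkfs.vfat -F 32 /dev/sdc1  (no Unix permissions to worry about)
# mkfs.exfat /dev/sdc1       (needs sys-fs/exfat-utils)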
--
Regards,
Mick
* Re: [gentoo-user] USB crucial file recovery
2016-08-30 5:46 ` Mick
@ 2016-08-30 7:38 ` Neil Bothwick
2016-09-01 21:50 ` [gentoo-user] " Kai Krakow
0 siblings, 1 reply; 70+ messages in thread
From: Neil Bothwick @ 2016-08-30 7:38 UTC
To: gentoo-user
On Tue, 30 Aug 2016 06:46:54 +0100, Mick wrote:
> > So I'm done with NTFS forever. Will ext2 somehow allow me to use the
> > USB stick across Gentoo systems without permission/ownership problems?
> >
> > - Grant
>
> ext2 will work, but you'll have to mount it or chmod -R 0777, or only
> root will be able to access it.
That's not true. Whoever owns the files and directories will be able to
access them, even if root mounted the stick, just like a hard drive. If
you have the same UID on all your systems, chown -R youruser: /mount/point
will make everything available on all systems.
--
Neil Bothwick
As long as you do not move you can still choose any direction.
* Re: [gentoo-user] USB crucial file recovery
2016-08-30 0:51 ` Grant
2016-08-30 5:35 ` Azamat Hackimov
2016-08-30 5:46 ` Mick
@ 2016-08-30 7:41 ` Neil Bothwick
2016-08-30 8:29 ` Alarig Le Lay
2016-09-01 21:48 ` [gentoo-user] " Kai Krakow
4 siblings, 0 replies; 70+ messages in thread
From: Neil Bothwick @ 2016-08-30 7:41 UTC
To: gentoo-user
On Mon, 29 Aug 2016 17:51:19 -0700, Grant wrote:
> # ddrescue -d -r3 /dev/sdb usb.img usb.log
> [...]
> Ah, I got it, I just needed to specify the offset when mounting.
That's because you ran ddrescue on the whole stick and not the partition
containing the filesystem.
> Thank you so much everyone. Many hours of work went into the file I
> just recovered.
>
> So I'm done with NTFS forever. Will ext2 somehow allow me to use the
> USB stick across Gentoo systems without permission/ownership problems?
That depends on whether you use the same UID on each system. If not,
mounting with umask=0 will make everything world writeable.
--
Neil Bothwick
Windows95: <win-doz-nin-te-fiv> n. 32 bit extensions and a graphical
shell for a 16 bit patch to an 8 bit operating system originally coded
for a 4 bit microprocessor, written by a 2 bit company, that can't stand
1 bit of competition.
* Re: [gentoo-user] USB crucial file recovery
2016-08-30 0:51 ` Grant
` (2 preceding siblings ...)
2016-08-30 7:41 ` [gentoo-user] " Neil Bothwick
@ 2016-08-30 8:29 ` Alarig Le Lay
2016-08-30 9:40 ` Neil Bothwick
2016-09-01 21:48 ` [gentoo-user] " Kai Krakow
4 siblings, 1 reply; 70+ messages in thread
From: Alarig Le Lay @ 2016-08-30 8:29 UTC
To: gentoo-user
On Mon Aug 29 17:51:19 2016, Grant wrote:
> So I'm done with NTFS forever. Will ext2 somehow allow me to use the
> USB stick across Gentoo systems without permission/ownership problems?
I always use pmount for USB and other flash devices to have them
mounted with my user permissions at all times.
Anyway, I don't recommend using a journalised filesystem on a USB
stick, as it is liable to cause write wear. You will wear the stick out
prematurely.
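For example (sys-fs/pmount; device name assumed, it gets mounted under
/media with your user's permissions, no fstab entry or root needed):

$ pmount /dev/sdc1
$ pumount /dev/sdc1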
--
alarig
* Re: [gentoo-user] USB crucial file recovery
2016-08-30 8:29 ` Alarig Le Lay
@ 2016-08-30 9:40 ` Neil Bothwick
2016-08-30 9:43 ` Alarig Le Lay
0 siblings, 1 reply; 70+ messages in thread
From: Neil Bothwick @ 2016-08-30 9:40 UTC
To: gentoo-user
On Tue, 30 Aug 2016 10:29:00 +0200, Alarig Le Lay wrote:
> > So I'm done with NTFS forever. Will ext2 somehow allow me to use the
> > USB stick across Gentoo systems without permission/ownership
> > problems?
>
> I always use pmount for USB and other flash devices to have them
> mounted with my user permissions at all times.
> Anyway, I don't recommend using a journalised filesystem on a USB
> stick, as it is liable to cause write wear. You will wear the stick out
> prematurely.
ext2 doesn't have a journal, that's why I suggested it in the first place.
--
Neil Bothwick
What Aussies lack in Humour they make up for in Beer!
* Re: [gentoo-user] USB crucial file recovery
2016-08-30 9:40 ` Neil Bothwick
@ 2016-08-30 9:43 ` Alarig Le Lay
2016-08-30 10:08 ` Alan McKinnon
2016-08-30 12:01 ` Neil Bothwick
0 siblings, 2 replies; 70+ messages in thread
From: Alarig Le Lay @ 2016-08-30 9:43 UTC
To: gentoo-user
On Tue Aug 30 10:40:01 2016, Neil Bothwick wrote:
> ext2 doesn't have a journal, that's why I suggested it in the first place.
My point was against all the journalised filesystems (that includes
NTFS), not against your advice ;)
--
alarig
* Re: [gentoo-user] USB crucial file recovery
2016-08-30 9:43 ` Alarig Le Lay
@ 2016-08-30 10:08 ` Alan McKinnon
2016-08-30 12:04 ` Neil Bothwick
2016-08-30 12:01 ` Neil Bothwick
1 sibling, 1 reply; 70+ messages in thread
From: Alan McKinnon @ 2016-08-30 10:08 UTC
To: gentoo-user
On 30/08/2016 11:43, Alarig Le Lay wrote:
> On Tue Aug 30 10:40:01 2016, Neil Bothwick wrote:
>> ext2 doesn't have a journal, that's why I suggested it in the first place.
>
> My point was against all the journalised filesystems (that includes
> NTFS), not against your advice ;)
>
OP is looking for an fs to put on a memory stick that will work everywhere:
- vfat
- exfat
--
Alan McKinnon
alan.mckinnon@gmail.com
* Re: [gentoo-user] USB crucial file recovery
2016-08-30 9:43 ` Alarig Le Lay
2016-08-30 10:08 ` Alan McKinnon
@ 2016-08-30 12:01 ` Neil Bothwick
1 sibling, 0 replies; 70+ messages in thread
From: Neil Bothwick @ 2016-08-30 12:01 UTC
To: gentoo-user
On Tue, 30 Aug 2016 11:43:13 +0200, Alarig Le Lay wrote:
> On Tue Aug 30 10:40:01 2016, Neil Bothwick wrote:
> > ext2 doesn't have a journal, that's why I suggested it in the first
> > place.
>
> My point was against all the journalised filesystems (that includes
> NTFS), not against your advice ;)
That's a good point. For compatibility, exFAT seems a good choice,
although I'm not sure how mature/stable it is on Linux. I've used it a
little with no issues.
--
Neil Bothwick
In plumbing, a straight flush is better than a full house.
* Re: [gentoo-user] USB crucial file recovery
2016-08-30 10:08 ` Alan McKinnon
@ 2016-08-30 12:04 ` Neil Bothwick
2016-08-30 18:12 ` Alan McKinnon
0 siblings, 1 reply; 70+ messages in thread
From: Neil Bothwick @ 2016-08-30 12:04 UTC
To: gentoo-user
On Tue, 30 Aug 2016 12:08:13 +0200, Alan McKinnon wrote:
> >> ext2 doesn't have a journal, that's why I suggested it in the first
> >> place.
> >
> > My point was against all the journalised filesystems (that includes
> > NTFS), not against your advice ;)
> >
>
>
> OP is looking for an fs to put on a memory stick that will work
> everywhere:
>
> - vfat
> - exfat
He asked for something that would work "across Gentoo systems".
--
Neil Bothwick
I don't suffer from insanity. I enjoy every minute of it.
* Re: [gentoo-user] USB crucial file recovery
2016-08-30 5:35 ` Azamat Hackimov
@ 2016-08-30 14:09 ` Rich Freeman
0 siblings, 0 replies; 70+ messages in thread
From: Rich Freeman @ 2016-08-30 14:09 UTC
To: gentoo-user
On Tue, Aug 30, 2016 at 1:35 AM, Azamat Hackimov
<azamat.hackimov@gmail.com> wrote:
>
> I would recommend using the F2FS filesystem, since you have only Linux systems.
>
As a user of immature filesystems, I would not recommend F2FS unless
you want to be a user of immature filesystems. Remember how he got
into this situation in the first place...
But, sure, F2FS is fairly ideally-suited to a USB drive in theory,
assuming the drive doesn't try to be overly clever with how it maps
all the writes.
--
Rich
* Re: [gentoo-user] USB crucial file recovery
2016-08-30 12:04 ` Neil Bothwick
@ 2016-08-30 18:12 ` Alan McKinnon
2016-08-30 18:58 ` Volker Armin Hemmann
2016-08-30 22:36 ` [gentoo-user] " Neil Bothwick
0 siblings, 2 replies; 70+ messages in thread
From: Alan McKinnon @ 2016-08-30 18:12 UTC
To: gentoo-user
On 30/08/2016 14:04, Neil Bothwick wrote:
> On Tue, 30 Aug 2016 12:08:13 +0200, Alan McKinnon wrote:
>
>>>> ext2 doesn't have a journal, that's why I suggested it in the first
>>>> place.
>>>
>>> My point was against all the journalised filesystems (that includes
>>> NTFS), not against your advice ;)
>>>
>>
>>
>> OP is looking for an fs to put on a memory stick that will work
>> everywhere:
>>
>> - vfat
>> - exfat
>
> He asked for something that would work "across Gentoo systems".
>
>
How does exfat not fulfil that?
* Re: [gentoo-user] USB crucial file recovery
2016-08-30 18:12 ` Alan McKinnon
@ 2016-08-30 18:58 ` Volker Armin Hemmann
2016-08-30 19:14 ` J. Roeleveld
2016-08-30 22:36 ` [gentoo-user] " Neil Bothwick
1 sibling, 1 reply; 70+ messages in thread
From: Volker Armin Hemmann @ 2016-08-30 18:58 UTC
To: gentoo-user
Am 30.08.2016 um 20:12 schrieb Alan McKinnon:
> On 30/08/2016 14:04, Neil Bothwick wrote:
>> On Tue, 30 Aug 2016 12:08:13 +0200, Alan McKinnon wrote:
>>
>>>>> ext2 doesn't have a journal, that's why I suggested it in the first
>>>>> place.
>>>>
>>>> My point was against all the journalised filesystems (that includes
>>>> NTFS), not against your advice ;)
>>>>
>>>
>>>
>>> OP is looking for an fs to put on a memory stick that will work
>>> everywhere:
>>>
>>> - vfat
>>> - exfat
>>
>> He asked for something that would work "across Gentoo systems".
>>
>>
>
> How does exfat not fulfil that?
>
>
because exfat does not work across gentoo systems. ext2 does.
* Re: [gentoo-user] USB crucial file recovery
2016-08-30 18:58 ` Volker Armin Hemmann
@ 2016-08-30 19:14 ` J. Roeleveld
2016-08-30 20:27 ` Volker Armin Hemmann
0 siblings, 1 reply; 70+ messages in thread
From: J. Roeleveld @ 2016-08-30 19:14 UTC
To: gentoo-user
On August 30, 2016 8:58:17 PM GMT+02:00, Volker Armin Hemmann <volkerarmin@googlemail.com> wrote:
>Am 30.08.2016 um 20:12 schrieb Alan McKinnon:
>> On 30/08/2016 14:04, Neil Bothwick wrote:
>>> On Tue, 30 Aug 2016 12:08:13 +0200, Alan McKinnon wrote:
>>>
>>>>>> ext2 doesn't have a journal, that's why I suggested it in the first
>>>>>> place.
>>>>>
>>>>> My point was against all the journalised filesystems (that includes
>>>>> NTFS), not against your advice ;)
>>>>>
>>>>
>>>>
>>>> OP is looking for an fs to put on a memory stick that will work
>>>> everywhere:
>>>>
>>>> - vfat
>>>> - exfat
>>>
>>> He asked for something that would work "across Gentoo systems".
>>>
>>>
>>
>> How does exfat not fulfil that?
>>
>>
>
>because exfat does not work across gentoo systems. ext2 does.
Exfat works when the drivers are installed.
Same goes for ext2.
It is possible to not have support for ext2/3 or 4 and still have a fully functional system. (Btrfs or zfs for the full system for instance)
When using UEFI boot, a vfat partition with support is required.
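For example, with the FUSE driver (a sketch; device and mount point
assumed):

# emerge sys-fs/fuse-exfat
# mount.exfat-fuse /dev/sdc1 /mnt/usbstick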
--
Joost
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.
* Re: [gentoo-user] USB crucial file recovery
2016-08-28 18:49 [gentoo-user] USB crucial file recovery Grant
2016-08-28 18:57 ` Neil Bothwick
2016-08-28 19:12 ` Mick
@ 2016-08-30 19:31 ` R0b0t1
2 siblings, 0 replies; 70+ messages in thread
From: R0b0t1 @ 2016-08-30 19:31 UTC
To: gentoo-user
On Sun, Aug 28, 2016 at 1:49 PM, Grant <emailgrant@gmail.com> wrote:
> I decided to copy a 10GB file from a USB hard disk directly to the USB
> stick this morning and I ran into errors so I canceled the operation
> and now the file manager (thunar) has been stuck for well over an hour
> and I'm getting errors like these over and over:
As mentioned try ddrescue and then recovery software on the resulting
images. As a last option it is possible to desolder the flash and
access it directly.
Good information to lead with is how much the file cost to produce.
* Re: [gentoo-user] USB crucial file recovery
2016-08-30 19:14 ` J. Roeleveld
@ 2016-08-30 20:27 ` Volker Armin Hemmann
2016-08-30 20:32 ` Grant
` (2 more replies)
0 siblings, 3 replies; 70+ messages in thread
From: Volker Armin Hemmann @ 2016-08-30 20:27 UTC
To: gentoo-user
Am 30.08.2016 um 21:14 schrieb J. Roeleveld:
> On August 30, 2016 8:58:17 PM GMT+02:00, Volker Armin Hemmann <volkerarmin@googlemail.com> wrote:
>> Am 30.08.2016 um 20:12 schrieb Alan McKinnon:
>>> On 30/08/2016 14:04, Neil Bothwick wrote:
>>>> On Tue, 30 Aug 2016 12:08:13 +0200, Alan McKinnon wrote:
>>>>
>>>>>>> ext2 doesn't have a journal, that's why I suggested it in the first
>>>>>>> place.
>>>>>> My point was against all the journalised filesystems (that includes
>>>>>> NTFS), not against your advice ;)
>>>>>>
>>>>>
>>>>> OP is looking for an fs to put on a memory stick that will work
>>>>> everywhere:
>>>>>
>>>>> - vfat
>>>>> - exfat
>>>> He asked for something that would work "across Gentoo systems".
>>>>
>>>>
>>> How does exfat not fulfil that?
>>>
>>>
>> because exfat does not work across gentoo systems. ext2 does.
> Exfat works when the drivers are installed.
> Same goes for ext2.
>
> It is possible to not have support for ext2/3 or 4 and still have a fully functional system. (Btrfs or zfs for the full system for instance)
>
> When using UEFI boot, a vfat partition with support is required.
>
> --
> Joost
ext2 is on every system, exfat not. ext2 is very stable, tested and well
aged. exfat is some fuse something crap. New, hardly tested and unstable
as it gets.
And why use exfat if you use linux? It is just not needed at all.
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [gentoo-user] USB crucial file recovery
2016-08-30 20:27 ` Volker Armin Hemmann
@ 2016-08-30 20:32 ` Grant
2016-08-30 20:43 ` Rich Freeman
` (2 more replies)
2016-08-30 20:42 ` [gentoo-user] " Grant Edwards
2016-09-01 22:05 ` Kai Krakow
2 siblings, 3 replies; 70+ messages in thread
From: Grant @ 2016-08-30 20:32 UTC (permalink / raw
To: Gentoo mailing list
>>>>>>> ext2 doesn't have a journal, that's why I suggested it in the
>>> first
>>>>>>>> place.
>>>>>>> My point was against all the journalised filesystems (that
>>> includes
>>>>>>> NTFS), not against your advice ;)
>>>>>>>
>>>>>>
>>>>>> OP is looking for an fs to put on a memory stick that will work
>>>>>> everywhere:
>>>>>>
>>>>>> - vfat
>>>>>> - exfat
>>>>> He asked for something that would work "across Gentoo systems".
>>>>>
>>>>>
>>>> How does exfat not fulfil that?
>>>>
>>>>
>>> because exfat does not work across gentoo systems. ext2 does.
>> Exfat works when the drivers are installed.
>> Same goes for ext2.
>>
>> It is possible to not have support for ext2/3 or 4 and still have a fully functional system. (Btrfs or zfs for the full system for instance)
>>
>> When using UEFI boot, a vfat partition with support is required.
>>
>> --
>> Joost
>
> ext2 is on every system, exfat not. ext2 is very stable, tested and well
> aged. exfat is some fuse something crap. New, hardly tested and unstable
> as it gets.
>
> And why use exfat if you use linux? It is just not needed at all.
If I use ext2 on the USB stick, can I mount and use it as any user on
any Gentoo system from within a file manager like thunar?
Should I consider ext3/4 with journaling disabled?
- Grant
^ permalink raw reply [flat|nested] 70+ messages in thread
* [gentoo-user] Re: USB crucial file recovery
2016-08-30 20:27 ` Volker Armin Hemmann
2016-08-30 20:32 ` Grant
@ 2016-08-30 20:42 ` Grant Edwards
2016-08-30 20:46 ` Rich Freeman
2016-08-30 22:42 ` Neil Bothwick
2016-09-01 22:05 ` Kai Krakow
2 siblings, 2 replies; 70+ messages in thread
From: Grant Edwards @ 2016-08-30 20:42 UTC (permalink / raw
To: gentoo-user
On 2016-08-30, Volker Armin Hemmann <volkerarmin@googlemail.com> wrote:
> ext2 is on every system,
Unless it isn't.
There's nothing in Gentoo that guarantees everybody has ext2 support
in their kernels. That said, I agree that ext2 (or perhaps ext3 with
journalling disabled -- I've always been a bit fuzzy on whether that's
exactly the same thing or not) is as close to a "universal" Linux
filesystem as you're going to get. I would guess that the percentage
of Gentoo systems that support ext2/3 is _way_ _way_ higher than the
percentage that support exfat.
> exfat not. ext2 is very stable, tested and well aged. exfat is some
> fuse something crap. New, hardly tested and unstable as it gets.
>
> And why use exfat if you use linux? It is just not needed at all.
I agree. If you want to transport something between Linux systems,
use ext2/3 and use "mount" options to handle the permission issues.
--
Grant Edwards grant.b.edwards Yow! JAPAN is a WONDERFUL
at planet -- I wonder if we'll
gmail.com ever reach their level of
COMPARATIVE SHOPPING ...
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [gentoo-user] USB crucial file recovery
2016-08-30 20:32 ` Grant
@ 2016-08-30 20:43 ` Rich Freeman
2016-08-30 20:53 ` Volker Armin Hemmann
2016-08-30 22:38 ` Neil Bothwick
2 siblings, 0 replies; 70+ messages in thread
From: Rich Freeman @ 2016-08-30 20:43 UTC (permalink / raw
To: gentoo-user
On Tue, Aug 30, 2016 at 4:32 PM, Grant <emailgrant@gmail.com> wrote:
>>
>> ext2 is on every system, exfat not. ext2 is very stable, tested and well
>> aged. exfat is some fuse something crap. New, hardly tested and unstable
>> as it gets.
>>
>
> If I use ext2 on the USB stick, can I mount and use it as any user on
> any Gentoo system from within a file manager like thunar?
>
Ext2 will work on any Gentoo system that has it enabled in the kernel,
the same as just about any filesystem that has been created.
Ext2 support is fairly likely to be enabled due to its ubiquity, but
you can certainly run a linux kernel without it, as has already been
pointed out. If you stick your USB drive in an Android phone, for
example, you might just find that it lacks support for the filesystem
(no doubt many/most Android systems use it for the OS, but since it
is a closed ecosystem with 100% flash they might also use f2fs, yaffs,
or even btrfs (which can be quite stable if you carefully control how
it is used, especially on read-only filesystems)).
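A quick way to check any given box, assuming the kernel config is
exposed (paths here are the usual suspects, not guaranteed):

  zgrep CONFIG_EXT2 /proc/config.gz        # needs CONFIG_IKCONFIG_PROC
  grep CONFIG_EXT2 /usr/src/linux/.config  # or look in the build tree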
--
Rich
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [gentoo-user] Re: USB crucial file recovery
2016-08-30 20:42 ` [gentoo-user] " Grant Edwards
@ 2016-08-30 20:46 ` Rich Freeman
2016-08-30 20:58 ` Volker Armin Hemmann
2016-08-30 22:42 ` Neil Bothwick
1 sibling, 1 reply; 70+ messages in thread
From: Rich Freeman @ 2016-08-30 20:46 UTC (permalink / raw
To: gentoo-user
On Tue, Aug 30, 2016 at 4:42 PM, Grant Edwards
<grant.b.edwards@gmail.com> wrote:
>
> There's nothing in Gentoo that guarantees everybody has ext2 support
> in their kernels. That said, I agree that ext2 (or perhaps ext3 with
> journalling disabled -- I've always been a bit fuzzy on whether that's
> exactly the same thing or not)
Sorry, I just wanted to chime in on one thing. While a journal
probably will cause more flash wear, it also potentially improves data
integrity.
Now consider that the original message that started this whole thread
was important files being stored on flash and being corrupted.
Getting rid of the journal might not be the best move.
Unless you have a LOT of writes to flash you're not going to wear it
out, especially with wear-leveling algorithms.
Oh, and finally, if it matters that much, have a backup...
--
Rich
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [gentoo-user] USB crucial file recovery
2016-08-30 20:32 ` Grant
2016-08-30 20:43 ` Rich Freeman
@ 2016-08-30 20:53 ` Volker Armin Hemmann
2016-08-30 22:38 ` Neil Bothwick
2 siblings, 0 replies; 70+ messages in thread
From: Volker Armin Hemmann @ 2016-08-30 20:53 UTC (permalink / raw
To: gentoo-user
Am 30.08.2016 um 22:32 schrieb Grant:
>>>>>>>> ext2 doesn't have a journal, that's why I suggested it in the
>>>> first
>>>>>>>>> place.
>>>>>>>> My point was against all the journalised filesystems (that
>>>> includes
>>>>>>>> NTFS), not against your advice ;)
>>>>>>>>
>>>>>>> OP is looking for an fs to put on a memory stick that will work
>>>>>>> everywhere:
>>>>>>>
>>>>>>> - vfat
>>>>>>> - exfat
>>>>>> He asked for something that would work "across Gentoo systems".
>>>>>>
>>>>>>
>>>>> How does exfat not fulfil that?
>>>>>
>>>>>
>>>> because exfat does not work across gentoo systems. ext2 does.
>>> Exfat works when the drivers are installed.
>>> Same goes for ext2.
>>>
>>> It is possible to not have support for ext2/3 or 4 and still have a fully functional system. (Btrfs or zfs for the full system for instance)
>>>
>>> When using UEFI boot, a vfat partition with support is required.
>>>
>>> --
>>> Joost
>> ext2 is on every system, exfat not. ext2 is very stable, tested and well
>> aged. exfat is some fuse something crap. New, hardly tested and unstable
>> as it gets.
>>
>> And why use exfat if you use linux? It is just not needed at all.
>
> If I use ext2 on the USB stick, can I mount and use it as any user on
> any Gentoo system from within a file manager like thunar?
>
> Should I consider ext3/4 with journaling disabled?
>
> - Grant
>
>
kde and lxde never had any problems on my systems.
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [gentoo-user] Re: USB crucial file recovery
2016-08-30 20:46 ` Rich Freeman
@ 2016-08-30 20:58 ` Volker Armin Hemmann
2016-08-30 21:59 ` Rich Freeman
0 siblings, 1 reply; 70+ messages in thread
From: Volker Armin Hemmann @ 2016-08-30 20:58 UTC (permalink / raw
To: gentoo-user
Am 30.08.2016 um 22:46 schrieb Rich Freeman:
> On Tue, Aug 30, 2016 at 4:42 PM, Grant Edwards
> <grant.b.edwards@gmail.com> wrote:
>> There's nothing in Gentoo that guarantees everybody has ext2 support
>> in their kernels. That said, I agree that ext2 (or perhaps ext3 with
>> journalling disabled -- I've always been a bit fuzzy on whether that's
>> exactly the same thing or not)
> Sorry, I just wanted to chime in on one thing. While a journal
> probably will cause more flash wear, it also potentially adds data
> integrity.
>
> Now consider that the original message that started this whole thread
> was important files being stored on flash and being corrupted.
> Getting rid of the journal might not be the best move.
>
> Unless you have a LOT of writes to flash you're not going to wear it
> out, especially with wear-leveling algorithms.
>
> Oh, and finally, if it matters that much, have a backup...
>
the journal does not add any data integrity benefits at all. It just
makes it more likely that the fs is in a sane state if there is a crash.
Likely. Not a guarantee. Your data? No one cares.
If you want an fs that cares about your data: zfs.
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [gentoo-user] Re: USB crucial file recovery
2016-08-30 20:58 ` Volker Armin Hemmann
@ 2016-08-30 21:59 ` Rich Freeman
2016-08-30 22:12 ` Volker Armin Hemmann
2016-09-01 22:35 ` Kai Krakow
0 siblings, 2 replies; 70+ messages in thread
From: Rich Freeman @ 2016-08-30 21:59 UTC (permalink / raw
To: gentoo-user
On Tue, Aug 30, 2016 at 4:58 PM, Volker Armin Hemmann
<volkerarmin@googlemail.com> wrote:
>
> the journal does not add any data integrity benefits at all. It just
> makes it more likely that the fs is in a sane state if there is a crash.
> Likely. Not a guarantee. Your data? No one cares.
>
That depends on the mode of operation. In journal=data I believe
everything gets written twice, which should make it fairly immune to
most forms of corruption.
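For ext3/4 that's the data=journal mount option; a minimal sketch
(device and mountpoint are illustrative):

  mount -o data=journal /dev/sdc1 /mnt/usb
  # or bake it in as a default mount option for that filesystem:
  tune2fs -o journal_data /dev/sdc1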
f2fs would also have this benefit. Data is not overwritten in-place
in a log-based filesystem; they're essentially journaled by their
design (actually, they're basically what you get if you ditch the
regular part of the filesystem and keep nothing but the journal).
> If you want an fs that cares about your data: zfs.
>
I won't argue that the COW filesystems have better data security
features. It will be nice when they're stable in the main kernel.
--
Rich
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [gentoo-user] Re: USB crucial file recovery
2016-08-30 21:59 ` Rich Freeman
@ 2016-08-30 22:12 ` Volker Armin Hemmann
2016-08-31 14:33 ` Michael Mol
2016-09-01 22:35 ` Kai Krakow
1 sibling, 1 reply; 70+ messages in thread
From: Volker Armin Hemmann @ 2016-08-30 22:12 UTC (permalink / raw
To: gentoo-user
Am 30.08.2016 um 23:59 schrieb Rich Freeman:
> On Tue, Aug 30, 2016 at 4:58 PM, Volker Armin Hemmann
> <volkerarmin@googlemail.com> wrote:
>> the journal does not add any data integrity benefits at all. It just
>> makes it more likely that the fs is in a sane state if there is a crash.
>> Likely. Not a guarantee. Your data? No one cares.
>>
> That depends on the mode of operation. In journal=data I believe
> everything gets written twice, which should make it fairly immune to
> most forms of corruption.
nope. Crash at the wrong time, data gone. FS hopefully sane.
>
> f2fs would also have this benefit. Data is not overwritten in-place
> in a log-based filesystem; they're essentially journaled by their
> design (actually, they're basically what you get if you ditch the
> regular part of the filesystem and keep nothing but the journal).
>
>> If you want an fs that cares about your data: zfs.
>>
> I won't argue that the COW filesystems have better data security
> features. It will be nice when they're stable in the main kernel.
>
it is not so much about cow, but integrity checks all the way from the
moment the cpu spends some cycles on it. Caught some silent file
corruptions that way. Switched to ECC ram and never saw them again.
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [gentoo-user] USB crucial file recovery
2016-08-30 18:12 ` Alan McKinnon
2016-08-30 18:58 ` Volker Armin Hemmann
@ 2016-08-30 22:36 ` Neil Bothwick
1 sibling, 0 replies; 70+ messages in thread
From: Neil Bothwick @ 2016-08-30 22:36 UTC (permalink / raw
To: gentoo-user
[-- Attachment #1: Type: text/plain, Size: 755 bytes --]
On Tue, 30 Aug 2016 20:12:12 +0200, Alan McKinnon wrote:
> >> OP is looking for an fs to put on a memory stick that will work
> >> everywhere:
> >>
> >> - vfat
> >> - exfat
> >
> > He asked for something that would work "across Gentoo systems".
> >
> >
>
> How does exfat not fulfil that?
It does fulfil it, but so does ext2, which you were ruling out by saying
it had to work everywhere. My main issue with exfat is its lack of
maturity. I use it for video files but I'm not sure I'd trust something
critical to it. There again, I'd rather not trust anything critical to a
USB stick regardless of filesystem.
--
Neil Bothwick
A real programmer never documents his code.
It was hard to make, it should be hard to read
[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 163 bytes --]
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [gentoo-user] USB crucial file recovery
2016-08-30 20:32 ` Grant
2016-08-30 20:43 ` Rich Freeman
2016-08-30 20:53 ` Volker Armin Hemmann
@ 2016-08-30 22:38 ` Neil Bothwick
2 siblings, 0 replies; 70+ messages in thread
From: Neil Bothwick @ 2016-08-30 22:38 UTC (permalink / raw
To: gentoo-user
[-- Attachment #1: Type: text/plain, Size: 470 bytes --]
On Tue, 30 Aug 2016 13:32:19 -0700, Grant wrote:
> If I use ext2 on the USB stick, can I mount and use it as any user on
> any Gentoo system from within a file manager like thunar?
No, because ext2 uses proper Linux file permissions.
> Should I consider ext3/4 with journaling disabled?
That's basically ext2, unless you want some of the more advanced options
of ext4.
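If you go that route, something like this should do it (device name
illustrative):

  mkfs.ext4 -O ^has_journal /dev/sdc1  # create ext4 without a journal
  tune2fs -O ^has_journal /dev/sdc1    # or strip it from an existing,
                                       # unmounted ext3/4 filesystem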
--
Neil Bothwick
I've got a Mickey Mouse PC with a Goofy operating system.
[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 163 bytes --]
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [gentoo-user] Re: USB crucial file recovery
2016-08-30 20:42 ` [gentoo-user] " Grant Edwards
2016-08-30 20:46 ` Rich Freeman
@ 2016-08-30 22:42 ` Neil Bothwick
2016-08-30 23:06 ` Grant Edwards
2016-08-31 0:08 ` Grant
1 sibling, 2 replies; 70+ messages in thread
From: Neil Bothwick @ 2016-08-30 22:42 UTC (permalink / raw
To: gentoo-user
[-- Attachment #1: Type: text/plain, Size: 496 bytes --]
On Tue, 30 Aug 2016 20:42:05 +0000 (UTC), Grant Edwards wrote:
> > And why use exfat if you use linux? It is just not needed at all.
>
> I agree. If you want to transport something between Linux systems,
> use ext2/3 and use "mount" options to handle the permission issues.
You can't control ownership and permissions of existing files with mount
options on a Linux filesystem. See man mount.
--
Neil Bothwick
The people who are wrapped up in themselves are overdressed.
[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 163 bytes --]
^ permalink raw reply [flat|nested] 70+ messages in thread
* [gentoo-user] Re: USB crucial file recovery
2016-08-30 22:42 ` Neil Bothwick
@ 2016-08-30 23:06 ` Grant Edwards
2016-08-30 23:54 ` Alan McKinnon
2016-08-31 0:08 ` Grant
1 sibling, 1 reply; 70+ messages in thread
From: Grant Edwards @ 2016-08-30 23:06 UTC (permalink / raw
To: gentoo-user
On 2016-08-30, Neil Bothwick <neil@digimed.co.uk> wrote:
> On Tue, 30 Aug 2016 20:42:05 +0000 (UTC), Grant Edwards wrote:
>
>> > And why use exfat if you use linux? It is just not needed at all.
>>
>> I agree. If you want to transport something between Linux systems,
>> use ext2/3 and use "mount" options to handle the permission issues.
>
> You can't control ownership and permissions of existing files with mount
> options on a Linux filesystem. See man mount.
Oops, you're right. I guess the options I was thinking of don't work
for ext2/3. They do work for fat, cifs, hfs, hpfs, ntfs, iso9660, and
various others.
I very rarely put a writable filesystem on a USB flash drive. I treat
them either as a CD/DVD for installation ISO images, or I use them as
"tapes" and just tar stuff to/from them.
I do make a point of using consistent UID/GID values across multiple
installations, so on the rare occasions I do put a writable filesystem
on a flash drive, it "just works".
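The "tape" usage is just (device name illustrative):

  tar cf /dev/sdc ~/important/  # archive straight to the raw device
  tar tf /dev/sdc               # list what's on it
  tar xf /dev/sdc               # extract on the other machine

which sidesteps the filesystem, and therefore the permission problem,
entirely.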
--
Grant
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [gentoo-user] Re: USB crucial file recovery
2016-08-30 23:06 ` Grant Edwards
@ 2016-08-30 23:54 ` Alan McKinnon
0 siblings, 0 replies; 70+ messages in thread
From: Alan McKinnon @ 2016-08-30 23:54 UTC (permalink / raw
To: gentoo-user
On 31/08/2016 01:06, Grant Edwards wrote:
> On 2016-08-30, Neil Bothwick <neil@digimed.co.uk> wrote:
>> On Tue, 30 Aug 2016 20:42:05 +0000 (UTC), Grant Edwards wrote:
>>
>>>> And why use exfat if you use linux? It is just not needed at all.
>>>
>>> I agree. If you want to transport something between Linux systems,
>>> use ext2/3 and use "mount" options to handle the permission issues.
>>
>> You can't control ownership and permissions of existing files with mount
>> options on a Linux filesystem. See man mount.
>
> Oops, you're right. I guess the options I was thinking of don't work
> for ext2/3. They do work for fat, cifs, hfs, hpfs, ntfs, iso9660, and
> various others.
>
> I very rarely put a writable filesystem on a USB flash drive. I treat
> them either as a CD/DVD for installation ISO images, or I use them as
> "tapes" and just tar stuff to/from them.
>
> I do make a point of using consistent UID/GID values across multiple
> installations, so on the rare occasions I do put a writable filesystem
> on a flash drive, it "just works".
>
Something intrigues me about this thread:
If the file in question is so valuable and expensive, why don't you make
another copy of the original onto a new USB stick?
--
Alan McKinnon
alan.mckinnon@gmail.com
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [gentoo-user] Re: USB crucial file recovery
2016-08-30 22:42 ` Neil Bothwick
2016-08-30 23:06 ` Grant Edwards
@ 2016-08-31 0:08 ` Grant
2016-08-31 0:32 ` Alan McKinnon
2016-08-31 7:45 ` Neil Bothwick
1 sibling, 2 replies; 70+ messages in thread
From: Grant @ 2016-08-31 0:08 UTC (permalink / raw
To: Gentoo mailing list
>> > And why use exfat if you use linux? It is just not needed at all.
>>
>> I agree. If you want to transport something between Linux systems,
>> use ext2/3 and use "mount" options to handle the permission issues.
>
> You can't control ownership and permissions of existing files with mount
> options on a Linux filesystem. See man mount.
So in order to use a USB stick between multiple Gentoo systems with
ext2, I need to make sure my users have matching UIDs/GIDs? I think
this is how I ended up on NTFS in the first place. Is there a
filesystem that will make that unnecessary and exhibit better
reliability than NTFS?
- Grant
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [gentoo-user] Re: USB crucial file recovery
2016-08-31 0:08 ` Grant
@ 2016-08-31 0:32 ` Alan McKinnon
2016-08-31 15:25 ` Grant
2016-09-01 22:56 ` [gentoo-user] " Kai Krakow
2016-08-31 7:45 ` Neil Bothwick
1 sibling, 2 replies; 70+ messages in thread
From: Alan McKinnon @ 2016-08-31 0:32 UTC (permalink / raw
To: gentoo-user
On 31/08/2016 02:08, Grant wrote:
>>>> And why use exfat if you use linux? It is just not needed at all.
>>>
>>> I agree. If you want to transport something between Linux systems,
>>> use ext2/3 and use "mount" options to handle the permission issues.
>>
>> You can't control ownership and permissions of existing files with mount
>> options on a Linux filesystem. See man mount.
>
>
> So in order to use a USB stick between multiple Gentoo systems with
> ext2, I need to make sure my users have matching UIDs/GIDs?
Yes
The uids/gids/modes in the inodes themselves are the owners and perms;
you cannot override them.
So unless you have mode=666, you will need matching UIDs/GIDs (which is
a royal massive pain in the butt to bring about without NIS or similar).
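A crude workaround, if you control the stick, is to make the top of
the filesystem world-writable when you set it up; a sketch (device
name illustrative):

  mkfs.ext2 /dev/sdc1
  mount /dev/sdc1 /mnt/usb
  chmod 1777 /mnt/usb  # anyone may create files, sticky bit as on /tmp

Files already owned by another UID are still subject to their own
modes, so this only really helps for newly created files.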
> I think
> this is how I ended up on NTFS in the first place.
Didn't we have this discussion about a year ago? Sounds familiar now
> Is there a
> filesystem that will make that unnecessary and exhibit better
> reliability than NTFS?
Yes, FAT. It works and works well.
Or exFAT, which is Microsoft's solution to the problem of very large
files on FAT.
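FAT doesn't store Unix ownership at all, so you can assign it at mount
time, e.g. (uid/gid values illustrative):

  mount -t vfat -o uid=1000,gid=1000,umask=022 /dev/sdc1 /mnt/usb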
Which NTFS system are you using?
ntfs kernel module? It's quite dodgy and unsafe with writes
ntfs-ng on fuse? I find that one quite solid
ntfs-ng does have an annoyance that has bitten me more than once. When
ntfs-ng writes to an FS, it can get marked dirty. Somehow, when the
stick is then used in a Windows machine, the driver there has issues
with the FS. Remount it in Linux again and all is good.
The cynic in me says that Microsoft didn't implement their own FS spec
properly whereas ntfs-ng did :-)
--
Alan McKinnon
alan.mckinnon@gmail.com
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [gentoo-user] Re: USB crucial file recovery
2016-08-31 0:08 ` Grant
2016-08-31 0:32 ` Alan McKinnon
@ 2016-08-31 7:45 ` Neil Bothwick
2016-08-31 7:47 ` Neil Bothwick
1 sibling, 1 reply; 70+ messages in thread
From: Neil Bothwick @ 2016-08-31 7:45 UTC (permalink / raw
To: gentoo-user
[-- Attachment #1: Type: text/plain, Size: 935 bytes --]
On Tue, 30 Aug 2016 17:08:26 -0700, Grant wrote:
> > You can't control ownership and permissions of existing files with
> > mount options on a Linux filesystem. See man mount.
>
> So in order to use a USB stick between multiple Gentoo systems with
> ext2, I need to make sure my users have matching UIDs/GIDs?
Yes, I said that when I first mentioned ext2.
> I think
> this is how I ended up on NTFS in the first place. Is there a
> filesystem that will make that unnecessary and exhibit better
> reliability than NTFS?
FAT is tried and tested as long as you can live with the file size
limitations. But USB sticks are not that reliable to start with, so
relying on the filesystem to preserve your important files is not enough.
You have spent far more time on this than you would have spent making
backups of the file!
--
Neil Bothwick
Use Colgate toothpaste or end up with teeth like a Ferengi.
[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 163 bytes --]
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [gentoo-user] Re: USB crucial file recovery
2016-08-31 7:45 ` Neil Bothwick
@ 2016-08-31 7:47 ` Neil Bothwick
2016-08-31 10:30 ` Alarig Le Lay
2016-08-31 17:09 ` waltdnes
0 siblings, 2 replies; 70+ messages in thread
From: Neil Bothwick @ 2016-08-31 7:47 UTC (permalink / raw
To: gentoo-user
[-- Attachment #1: Type: text/plain, Size: 524 bytes --]
On Wed, 31 Aug 2016 08:45:22 +0100, Neil Bothwick wrote:
> USB sticks are not that reliable to start with, so
> relying on the filesystem to preserve your important files is not
> enough. You have spent far more time on this than you would have spent
> making backups of the file!
Have you considered using cloud storage for the files instead? That also
gives you the option of version control with some services.
--
Neil Bothwick
Remember that the Titanic was built by experts, and the Ark by a newbie
[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 163 bytes --]
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [gentoo-user] Re: USB crucial file recovery
2016-08-31 7:47 ` Neil Bothwick
@ 2016-08-31 10:30 ` Alarig Le Lay
2016-08-31 11:44 ` Rich Freeman
2016-08-31 12:21 ` Neil Bothwick
2016-08-31 17:09 ` waltdnes
1 sibling, 2 replies; 70+ messages in thread
From: Alarig Le Lay @ 2016-08-31 10:30 UTC (permalink / raw
To: gentoo-user
[-- Attachment #1: Type: text/plain, Size: 528 bytes --]
On Wed Aug 31 08:47:11 2016, Neil Bothwick wrote:
> Have you considered using cloud storage for the files instead? That also
> gives you the option of version control with some services.
Seriously, why cloud? The Cloud is basically a marketing term that
means “the Internet, like before, but cooler”, so it’s just someone
else’s computer. I think that almost everybody here has more than one
computer, or at least more than one hard disk drive. So… why not use
that? You will know who owns your data.
--
alarig
[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 473 bytes --]
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [gentoo-user] Re: USB crucial file recovery
2016-08-31 10:30 ` Alarig Le Lay
@ 2016-08-31 11:44 ` Rich Freeman
2016-08-31 12:21 ` Neil Bothwick
1 sibling, 0 replies; 70+ messages in thread
From: Rich Freeman @ 2016-08-31 11:44 UTC (permalink / raw
To: gentoo-user
On Wed, Aug 31, 2016 at 6:30 AM, Alarig Le Lay <alarig@swordarmor.fr> wrote:
> On Wed Aug 31 08:47:11 2016, Neil Bothwick wrote:
>> Have you considered using cloud storage for the files instead? That also
>> gives you the option of version control with some services.
>
> Seriously, why cloud? The Cloud is basically a marketing term that
> define “Internet, like before, but cooler”, so it’s just someone else
> computer. I think that almost everybody here have more than on computer,
> or at least more than one hard disk drive. So… Why not using it? You
> will know who own your data.
It might have something to do with the fact that cloud services at
least run backups of their servers.
I'd be the first to agree that it is possible to do a better job
yourself at providing the sorts of services you find on dropbox,
google drive, lastpass, and so on. However, the reality is that most
people don't actually do a better job with it, which is why I see the
occasional post on Facebook about how some relative lost all their
files when their hard drive crashed, or when some ransomware came
along. Most who have "backups" just have a USB hard drive with some
software that came with it, which is probably always mounted.
If you know how to professionally manage a server, then sure, feel
free to DIY. Though, you might be surprised at how many people who do
know how to professionally manage servers still use cloud services.
The plethora of clients make them convenient for some things (though I
always back them up). And I store all my important backups encrypted
on S3 (I don't care if they lose them as long as it isn't on the same
day that I need them, and if they want to try to data mine files that
have gone through gpg I wish them good luck).
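The gpg step is just something like (names illustrative):

  tar cz ~/important | gpg -c -o backup.tar.gz.gpg
  # then upload backup.tar.gz.gpg with whichever S3 client you use

gpg -c is symmetric encryption, so there is no key pair to lose along
with the backups, just a passphrase to remember.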
--
Rich
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [gentoo-user] Re: USB crucial file recovery
2016-08-31 10:30 ` Alarig Le Lay
2016-08-31 11:44 ` Rich Freeman
@ 2016-08-31 12:21 ` Neil Bothwick
1 sibling, 0 replies; 70+ messages in thread
From: Neil Bothwick @ 2016-08-31 12:21 UTC (permalink / raw
To: gentoo-user
[-- Attachment #1: Type: text/plain, Size: 983 bytes --]
On Wed, 31 Aug 2016 12:30:42 +0200, Alarig Le Lay wrote:
> On Wed Aug 31 08:47:11 2016, Neil Bothwick wrote:
> > Have you considered using cloud storage for the files instead? That
> > also gives you the option of version control with some services.
>
> Seriously, why cloud? The Cloud is basically a marketing term that
> define “Internet, like before, but cooler”, so it’s just someone else
> computer.
Not necessarily, it's a catch-all term for network storage; it could be
ownCloud running on a LAN. However, professionally provided services are
many orders of magnitude safer than storing important files on a no-name
USB stick using a reverse engineered filesystem running through a
userspace layer.
Or you could simply use a shared folder synced with something like
SyncThing for everyone to access the files. Then you have safer hard
drive storage and some level of backup.
--
Neil Bothwick
DCE seeks DTE for mutual exchange of data.
[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 163 bytes --]
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [gentoo-user] Re: USB crucial file recovery
2016-08-30 22:12 ` Volker Armin Hemmann
@ 2016-08-31 14:33 ` Michael Mol
2016-08-31 16:43 ` Rich Freeman
2016-09-01 18:09 ` Volker Armin Hemmann
0 siblings, 2 replies; 70+ messages in thread
From: Michael Mol @ 2016-08-31 14:33 UTC (permalink / raw
To: gentoo-user
[-- Attachment #1: Type: text/plain, Size: 4741 bytes --]
On Wednesday, August 31, 2016 12:12:15 AM Volker Armin Hemmann wrote:
> Am 30.08.2016 um 23:59 schrieb Rich Freeman:
> > On Tue, Aug 30, 2016 at 4:58 PM, Volker Armin Hemmann
> >
> > <volkerarmin@googlemail.com> wrote:
> >> the journal does not add any data integrity benefits at all. It just
> >> makes it more likely that the fs is in a sane state if there is a crash.
> >> Likely. Not a guarantee. Your data? No one cares.
> >
> > That depends on the mode of operation. In journal=data I believe
> > everything gets written twice, which should make it fairly immune to
> > most forms of corruption.
>
> nope. Crash at the wrong time, data gone. FS hopefully sane.
No, seriously. Mount with data=journal. Per ext4(5):
data={journal|ordered|writeback}
        Specifies the journaling mode for file data. Metadata is
        always journaled. To use modes other than ordered on the root
        filesystem, pass the mode to the kernel as boot parameter,
        e.g. rootflags=data=journal.

        journal
                All data is committed into the journal prior to being
                written into the main filesystem.

        ordered
                This is the default mode. All data is forced directly
                out to the main file system prior to its metadata
                being committed to the journal.

        writeback
                Data ordering is not preserved – data may be written
                into the main filesystem after its metadata has been
                committed to the journal. This is rumoured to be the
                highest-throughput option. It guarantees internal
                filesystem integrity, however it can allow old data to
                appear in files after a crash and journal recovery.
In writeback mode, only filesystem metadata goes through the journal. This
guarantees that the filesystem's structure itself will remain intact in the
event of a crash.
In data=journal mode, the contents of files pass through the journal as well,
ensuring that, at least as far as the filesystem's responsibility is concerned,
the data will be intact in the event of a crash.
Now, I can still think of ways you can lose data in data=journal mode:
* You mounted the filesystem with barrier=0 or with nobarrier; this can result
in data writes going to disk out of order, if the I/O stack supports barriers.
If you say "my file is ninety bytes" "here are ninety bytes of data, all 9s",
"my file is now thirty bytes", "here are thirty bytes of data, all 3s", then in
the end you should have a thirty-byte file filled with 3s. If you have barriers
enabled and you crash halfway through the whole process, you should find a file
of ninety bytes, all 9s. But if you have barriers disabled, the data may hit
disk as though you'd said "my file is ninety bytes, here are ninety bytes of
data, all 9s, here are thirty bytes of data, all 3s, now my file is thirty
bytes." If that happens, and you crash partway through the commit to disk, you
may see a ninety-byte file consisting of thirty 3s and sixty 9s, or
things may land such that you see a thirty-byte file of 9s.
* Your application didn't flush its writes to disk when it should have.
* Your vm.dirty_bytes or vm.dirty_ratio are too high, you've been writing a
lot to disk, and the kernel still has a lot of data buffered waiting to be
written. (Well, that can always lead to data loss regardless of how high those
settings are, which is why applications should flush their writes.)
* You've used hdparm to enable write buffers in your hard disks, and your hard
disks lose power while their buffers have data waiting to be written.
* You're using a buggy disk device that does a poor job of handling power
loss. Such as some SSDs which don't have large enough capacitors for their own
write reordering. Or just about any flash drive.
* There's a bug in some code, somewhere.
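Some of those can at least be mitigated from userspace; an
illustrative sketch, ext4 and /dev/sdc assumed:

  mount -o barrier=1,data=journal /dev/sdc1 /mnt/usb  # keep barriers on
  hdparm -W 0 /dev/sdc  # disable the drive's write cache, if supported
  sync                  # flush everything before unplugging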
>
> > f2fs would also have this benefit. Data is not overwritten in-place
> > in a log-based filesystem; they're essentially journaled by their
> > design (actually, they're basically what you get if you ditch the
> > regular part of the filesystem and keep nothing but the journal).
> >
> >> If you want an fs that cares about your data: zfs.
> >
> > I won't argue that the COW filesystems have better data security
> > features. It will be nice when they're stable in the main kernel.
>
> it is not so much about cow, but integrity checks all the way from the
> moment the cpu spends some cycles on it. Caught some silent file
> corruptions that way. Switched to ECC ram and never saw them again.
In-memory corruption of data is a universal hazard. ECC should be the norm,
not the exception, honestly.
--
:wq
[-- Attachment #2: This is a digitally signed message part. --]
[-- Type: application/pgp-signature, Size: 473 bytes --]
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [gentoo-user] Re: USB crucial file recovery
2016-08-31 0:32 ` Alan McKinnon
@ 2016-08-31 15:25 ` Grant
2016-08-31 21:45 ` Alan McKinnon
2016-09-01 14:36 ` [gentoo-user] " Stroller
2016-09-01 22:56 ` [gentoo-user] " Kai Krakow
1 sibling, 2 replies; 70+ messages in thread
From: Grant @ 2016-08-31 15:25 UTC (permalink / raw
To: Gentoo mailing list
>> Is there a
>> filesystem that will make that unnecessary and exhibit better
>> reliability than NTFS?
>
> Yes, FAT. It works and works well.
> Or exFAT which is Microsoft's solution to the problem of very large
> files on FAT.
FAT32 won't work for me since I need to use files larger than 4GB. I
know it's beta software but should exfat be more reliable than ntfs?
> Which NTFS system are you using?
>
> ntfs kernel module? It's quite dodgy and unsafe with writes
> ntfs-ng on fuse? I find that one quite solid
I'm using ntfs-ng as opposed to the kernel option(s).
- Grant
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [gentoo-user] Re: USB crucial file recovery
2016-08-31 14:33 ` Michael Mol
@ 2016-08-31 16:43 ` Rich Freeman
2016-09-01 18:09 ` Volker Armin Hemmann
1 sibling, 0 replies; 70+ messages in thread
From: Rich Freeman @ 2016-08-31 16:43 UTC (permalink / raw
To: gentoo-user
On Wed, Aug 31, 2016 at 10:33 AM, Michael Mol <mikemol@gmail.com> wrote:
> On Wednesday, August 31, 2016 12:12:15 AM Volker Armin Hemmann wrote:
>> Am 30.08.2016 um 23:59 schrieb Rich Freeman:
>> >
>> > That depends on the mode of operation. In journal=data I believe
>> > everything gets written twice, which should make it fairly immune to
>> > most forms of corruption.
>>
>> nope. Crash at the wrong time, data gone. FS hopefully sane.
>
> In data=journal mode, the contents of files pass through the journal as well,
> ensuring that, at least as far as the filesystem's responsibility is concerned,
> the data will be intact in the event of a crash.
>
Correct. As with any other sane filesystem, if you're using
data=journal mode with ext4 then your filesystem will always reflect
the state of data and metadata on a transaction boundary.
If you write something to disk and pull the power after fsck the disk
will either contain the contents of your files before you did the
write, or after the write was completed, and never anything
in-between.
This is barring silent corruption, which ext4 does not protect against.
> Now, I can still think of ways you can lose data in data=journal mode:
Agree, though all of those concerns apply to any filesystem. If you
unplug a device without unmounting it, or pull the power when writes are pending,
or never hit save, or whatever, then your data won't end up on disk.
Now, a good filesystem should ensure that the data which is on disk is
completely consistent. That is, you won't get half of a write, just
all or nothing.
>> >> If you want an fs that cares about your data: zfs.
>> >
>> > I won't argue that the COW filesystems have better data security
>> > features. It will be nice when they're stable in the main kernel.
>>
>> it is not so much about cow, but integrity checks all the way from the
>> moment the cpu spends some cycles on it.
What COW does get you is the security of data=journal without the
additional cost of writing it twice. Since data is not overwritten in
place you ensure that on an fsck the system can either roll the write
completely forwards or backwards.
With data=ordered on ext4 there is always the risk of a
half-overwritten file if you are overwriting in place.
But I agree that many of the zfs/btrfs data integrity features could
be implemented on a non-cow filesystem. Maybe ext5 will have some of
them, though I'm not sure how much work is going into that vs just
fixing btrfs, or begging Oracle to re-license zfs.
>> Caught some silent file
>> corruptions that way. Switched to ECC ram and never saw them again.
>
> In-memory corruption of a data is a universal hazard. ECC should be the norm,
> not the exception, honestly.
>
Couldn't agree more here. The hardware vendors aren't helping here
though, in their quest to try to make more money from those sensitive
to such things. I believe Intel disables ECC on anything less than an
i7. As I understand it most of the mainline AMD offerings support it
(basically anything over $80 or so), but it isn't clear to me what
motherboard support is required and the vendors almost never make a
mention of it on anything reasonably-priced.
If your RAM gets hosed then any filesystem is going to store bad data
or metadata for a multitude of reasons. The typical x86+ arch wasn't
designed to handle hardware failures around anything associated with
cpu/ram.
The ZFS folks tend to make a really big deal out of ECC, but as far as
I'm aware it isn't any more important for ZFS than anything else. I
think ZFS just tends to draw people really concerned with data
integrity, and once you've controlled everything that happens after
the data gets sent to the hard drive you tend to start thinking about
what happens to it beforehand. I had to completely reinstall a
windows system not long ago due to memory failure and drive
corruption. Wasn't that big a deal since I don't keep anything on a
windows box that isn't disposable, or backed up to something else.
--
Rich
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [gentoo-user] Re: USB crucial file recovery
2016-08-31 7:47 ` Neil Bothwick
2016-08-31 10:30 ` Alarig Le Lay
@ 2016-08-31 17:09 ` waltdnes
2016-08-31 18:30 ` Neil Bothwick
1 sibling, 1 reply; 70+ messages in thread
From: waltdnes @ 2016-08-31 17:09 UTC (permalink / raw
To: gentoo-user
On Wed, Aug 31, 2016 at 08:47:11AM +0100, Neil Bothwick wrote
> On Wed, 31 Aug 2016 08:45:22 +0100, Neil Bothwick wrote:
>
> > USB sticks are not that reliable to start with, so
> > relying on the filesystem to preserve your important files is not
> > enough. You have spent far more time on this than you would have spent
> > making backups of the file!
>
> Have you considered using cloud storage for the files instead? That also
> gives you the option of version control with some services.
The initial backup of my hard drives would easily burn through my
monthly gigabyte allotment. Until everybody gets truly unlimited
bandwidth, forget about it.
--
Walter Dnes <waltdnes@waltdnes.org>
I don't run "desktop environments"; I run useful applications
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [gentoo-user] Re: USB crucial file recovery
2016-08-31 17:09 ` waltdnes
@ 2016-08-31 18:30 ` Neil Bothwick
0 siblings, 0 replies; 70+ messages in thread
From: Neil Bothwick @ 2016-08-31 18:30 UTC (permalink / raw
To: gentoo-user
[-- Attachment #1: Type: text/plain, Size: 766 bytes --]
On Wed, 31 Aug 2016 13:09:43 -0400, waltdnes@waltdnes.org wrote:
> > Have you considered using cloud storage for the files instead? That
> > also gives you the option of version control with some services.
>
> The initial backup of my hard drives would easily burn through my
> monthly gigabytes allotment. Until evrybody gets truly unlimited
> bandwidth, forget about it.
Who mentioned using it for backups? I suggested it as an alternative to
a USB stick for sharing a file or two between machines.
What is your monthly gigabyte allotment for your LAN? Keeping the
files on a personal cloud, or NAS storage, may well be the better
alternative.
--
Neil Bothwick
"Time is the best teacher....., unfortunately it kills all the students"
[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 163 bytes --]
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [gentoo-user] Re: USB crucial file recovery
2016-08-31 15:25 ` Grant
@ 2016-08-31 21:45 ` Alan McKinnon
2016-09-01 3:42 ` J. Roeleveld
` (2 more replies)
2016-09-01 14:36 ` [gentoo-user] " Stroller
1 sibling, 3 replies; 70+ messages in thread
From: Alan McKinnon @ 2016-08-31 21:45 UTC (permalink / raw
To: gentoo-user
On 31/08/2016 17:25, Grant wrote:
>>> Is there a
>>> filesystem that will make that unnecessary and exhibit better
>>> reliability than NTFS?
>>
>> Yes, FAT. It works and works well.
>> Or exFAT which is Microsoft's solution to the problem of very large
>> files on FAT.
>
>
> FAT32 won't work for me since I need to use files larger than 4GB. I
> know it's beta software but should exfat be more reliable than ntfs?
It doesn't do all the fancy journalling that ntfs does, so based solely
on complexity, it ought to be more reliable.
None of us have done real tests and reported them here, so we really
don't know how it pans out in the real world.
Do a bunch of tests yourself and decide.
>
>
>> Which NTFS system are you using?
>>
>> ntfs kernel module? It's quite dodgy and unsafe with writes
>> ntfs-ng on fuse? I find that one quite solid
>
>
> I'm using ntfs-ng as opposed to the kernel option(s).
I'm offering 10 to 1 odds that your problems came from a faulty USB
stick, or maybe one that you yanked too soon
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [gentoo-user] Re: USB crucial file recovery
2016-08-31 21:45 ` Alan McKinnon
@ 2016-09-01 3:42 ` J. Roeleveld
2016-09-01 6:03 ` Alan McKinnon
2016-09-01 12:41 ` Michael Mol
2016-09-07 17:41 ` Grant
2 siblings, 1 reply; 70+ messages in thread
From: J. Roeleveld @ 2016-09-01 3:42 UTC (permalink / raw
To: gentoo-user
On August 31, 2016 11:45:15 PM GMT+02:00, Alan McKinnon <alan.mckinnon@gmail.com> wrote:
>On 31/08/2016 17:25, Grant wrote:
>>>> Is there a
>>>> filesystem that will make that unnecessary and exhibit better
>>>> reliability than NTFS?
>>>
>>> Yes, FAT. It works and works well.
>>> Or exFAT which is Microsoft's solution to the problem of very large
>>> files on FAT.
>>
>>
>> FAT32 won't work for me since I need to use files larger than 4GB. I
>> know it's beta software but should exfat be more reliable than ntfs?
>
>It doesn't do all the fancy journalling that ntfs does, so based solely
>
>on complexity, it ought to be more reliable.
>
>None of us have done real tests and mentioned it here, so we really
>don't know how it pans out in the real world.
>
>Do a bunch of tests yourself and decide
When I was a student, one of my professors used FAT to explain how filesystems work. The reason is that the actual filesystem is quite simple to follow, and fixing it can actually be done by hand using a hex editor.
This is no longer possible with other filesystems.
Then again, a lot of embedded devices (especially digital cameras) don't even implement FAT correctly, leading to broken images.
Those implementations are broken at the point where fragmentation would occur.
Solution: never delete pictures on the camera. Simply move them off and do it on a computer.
>>> Which NTFS system are you using?
>>>
>>> ntfs kernel module? It's quite dodgy and unsafe with writes
>>> ntfs-ng on fuse? I find that one quite solid
>>
>>
>> I'm using ntfs-ng as opposed to the kernel option(s).
>
>I'm offering 10 to 1 odds that your problems came from a faulty USB
>stick, or maybe one that you yanked too soon
I'm with Alan here. I have seen too many handout USB sticks from conferences that don't last. I only use them for:
Quickly moving a file from A to B.
Booting the latest sysresccd
Scanning a document
Printing a PDF
(For the last 2, my printer has a USB slot)
Important files are stored on my NAS which is backed up regularly.
--
Joost
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [gentoo-user] Re: USB crucial file recovery
2016-09-01 3:42 ` J. Roeleveld
@ 2016-09-01 6:03 ` Alan McKinnon
0 siblings, 0 replies; 70+ messages in thread
From: Alan McKinnon @ 2016-09-01 6:03 UTC (permalink / raw
To: gentoo-user
On 01/09/2016 05:42, J. Roeleveld wrote:
> On August 31, 2016 11:45:15 PM GMT+02:00, Alan McKinnon <alan.mckinnon@gmail.com> wrote:
>> On 31/08/2016 17:25, Grant wrote:
>>>>> Is there a
>>>>> filesystem that will make that unnecessary and exhibit better
>>>>> reliability than NTFS?
>>>>
>>>> Yes, FAT. It works and works well.
>>>> Or exFAT which is Microsoft's solution to the problem of very large
>>>> files on FAT.
>>>
>>>
>>> FAT32 won't work for me since I need to use files larger than 4GB. I
>>> know it's beta software but should exfat be more reliable than ntfs?
>>
>> It doesn't do all the fancy journalling that ntfs does, so based solely
>>
>> on complexity, it ought to be more reliable.
>>
>> None of us have done real tests and mentioned it here, so we really
>> don't know how it pans out in the real world.
>>
>> Do a bunch of tests yourself and decide
>
> When I was a student, one of my professors used FAT to explain how filesystems work. The reason is that the actual filesystem is quite simple to follow, and fixing it can actually be done by hand using a hex editor.
>
> This is no longer possible with other filesystems.
>
> Then again, a lot of embedded devices (especially digital cameras) don't even implement FAT correctly, leading to broken images.
> Those implementations are broken at the point where fragmentation would occur.
> Solution: never delete pictures on the camera. Simply move them off and do it on a computer.
>
>>>> Which NTFS system are you using?
>>>>
>>>> ntfs kernel module? It's quite dodgy and unsafe with writes
>>>> ntfs-ng on fuse? I find that one quite solid
>>>
>>>
>>> I'm using ntfs-ng as opposed to the kernel option(s).
>>
>> I'm offering 10 to 1 odds that your problems came from a faulty USB
>> stick, or maybe one that you yanked too soon
>
> I'm with Alan here. I have seen too many handout USB sticks from conferences that don't last. I only use them for:
> Quickly moving a file from A to B.
> Booting the latest sysresccd
> Scanning a document
> Printing a PDF
> (For last 2, my printer has a USB slot)
>
> Important files are stored on my NAS which is backed up regularly.
Indeed. The trouble with backups is that they are difficult to get
right, time consuming, easy to ignore, and very very expensive (time and
money wise)
--
Alan McKinnon
alan.mckinnon@gmail.com
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [gentoo-user] Re: USB crucial file recovery
2016-08-31 21:45 ` Alan McKinnon
2016-09-01 3:42 ` J. Roeleveld
@ 2016-09-01 12:41 ` Michael Mol
2016-09-01 13:35 ` Rich Freeman
2016-09-01 14:21 ` J. Roeleveld
2016-09-07 17:41 ` Grant
2 siblings, 2 replies; 70+ messages in thread
From: Michael Mol @ 2016-09-01 12:41 UTC (permalink / raw
To: gentoo-user
[-- Attachment #1: Type: text/plain, Size: 1979 bytes --]
On Wednesday, August 31, 2016 11:45:15 PM Alan McKinnon wrote:
> On 31/08/2016 17:25, Grant wrote:
> >> Which NTFS system are you using?
> >>
> >> ntfs kernel module? It's quite dodgy and unsafe with writes
> >> ntfs-ng on fuse? I find that one quite solid
> >
> > I'm using ntfs-ng as opposed to the kernel option(s).
>
> I'm offering 10 to 1 odds that your problems came from ... one that you
> yanked too soon
(pardon the in-line snip, while I get on my soap box)
The likelihood of this happening can be greatly reduced by setting
vm.dirty_bytes to something like 2097152 and vm.dirty_background_bytes to
something like 1048576. This prevents the kernel from queuing up as much data
for sending to disk. The application doing the copy or write will normally
report "complete" long before writes to slow media are actually...complete.
Setting vm.dirty_bytes to something low prevents the kernel's backlog of data
from getting so long.
vm.dirty_bytes has another, closely-related setting, vm.dirty_ratio.
vm.dirty_ratio is a percentage of RAM that is used for dirty bytes. If
vm.dirty_ratio is set, vm.dirty_bytes will read 0. If vm.dirty_bytes is
set, vm.dirty_ratio will read 0.
The default is for vm.dirty_ratio to be 20, which means up to 20% of
your memory can find itself used as a write buffer for data on its way to a
filesystem. On a system with only 2GiB of RAM, that's 409MiB of data that the
kernel may still be waiting to push through the filesystem layer! If you're
writing to, say, a class 10 SDHC card, the data may not be at rest for another
40s after the application reports the copy operation is complete!
If you've got a system with 8GiB of memory, multiply all that by four.
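To apply the numbers from above at runtime (put them in
/etc/sysctl.conf to survive reboots):

  sysctl -w vm.dirty_bytes=2097152
  sysctl -w vm.dirty_background_bytes=1048576
  grep Dirty /proc/meminfo  # how much is currently waiting to be written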
The defaults for vm.dirty_bytes and vm.dirty_background_bytes are, IMO, badly
broken and an insidious source of problems for both regular Linux users and
system administrators.
--
:wq
[-- Attachment #2: This is a digitally signed message part. --]
[-- Type: application/pgp-signature, Size: 473 bytes --]
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [gentoo-user] Re: USB crucial file recovery
2016-09-01 12:41 ` Michael Mol
@ 2016-09-01 13:35 ` Rich Freeman
2016-09-01 14:55 ` Michael Mol
2016-09-01 14:21 ` J. Roeleveld
1 sibling, 1 reply; 70+ messages in thread
From: Rich Freeman @ 2016-09-01 13:35 UTC (permalink / raw
To: gentoo-user
On Thu, Sep 1, 2016 at 8:41 AM, Michael Mol <mikemol@gmail.com> wrote:
>
> The defaults for vm.dirty_bytes and vm.dirty_background_bytes are, IMO, badly
> broken and an insidious source of problems for both regular Linux users and
> system administrators.
>
It depends on whether you tend to yank out drives without unmounting
them, or if you have a poorly-implemented database that doesn't know
about fsync and tries to implement transactions across multiple hosts.
The flip side of all of this is that you can save-save-save in your
applications and not sit there and watch your application wait for the
USB drive to catch up. It also allows writes to be combined more
efficiently (less of an issue for flash, but you probably can still
avoid multiple rounds of overwriting data in place if multiple
revisions come in succession, and metadata updating can be
consolidated).
For a desktop-oriented workflow I'd think that having nice big write
buffers would greatly improve the user experience, as long as you hit
that unmount button or pay attention to that flashing green light
every time you yank a drive.
--
Rich
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [gentoo-user] Re: USB crucial file recovery
2016-09-01 12:41 ` Michael Mol
2016-09-01 13:35 ` Rich Freeman
@ 2016-09-01 14:21 ` J. Roeleveld
2016-09-01 15:01 ` Michael Mol
1 sibling, 1 reply; 70+ messages in thread
From: J. Roeleveld @ 2016-09-01 14:21 UTC (permalink / raw
To: gentoo-user
On Thursday, September 01, 2016 08:41:39 AM Michael Mol wrote:
> On Wednesday, August 31, 2016 11:45:15 PM Alan McKinnon wrote:
> > On 31/08/2016 17:25, Grant wrote:
> > >> Which NTFS system are you using?
> > >>
> > >> ntfs kernel module? It's quite dodgy and unsafe with writes
> > >> ntfs-ng on fuse? I find that one quite solid
> > >
> > > I'm using ntfs-ng as opposed to the kernel option(s).
> >
> > I'm offering 10 to 1 odds that your problems came from ... one that you
> > yanked too soon
>
> (pardon the in-line snip, while I get on my soap box)
>
> The likelihood of this happening can be greatly reduced by setting
> vm.dirty_bytes to something like 2097152 and vm.dirty_background_bytes to
> something like 1048576. This prevents the kernel from queuing up as much
> data for sending to disk. The application doing the copy or write will
> normally report "complete" long before writes to slow media are
> actually...complete. Setting vm.dirty_bytes to something low prevents the
> kernel's backlog of data from getting so long.
>
> vm.dirty_bytes has another, closely-related setting, vm.dirty_ratio.
> vm.dirty_ratio is a percentage of RAM that is used for dirty bytes. If
> vm.dirty_ratio is set, vm.dirty_bytes will read 0. If vm.dirty_bytes
> is set, vm.dirty_ratio will read 0.
>
> The default is for vm.dirty_ratio to be 20, which means up to 20% of
> your memory can find itself used as a write buffer for data on its way to a
> filesystem. On a system with only 2GiB of RAM, that's 409MiB of data that
> the kernel may still be waiting to push through the filesystem layer! If
> you're writing to, say, a class 10 SDHC card, the data may not be at rest
> for another 40s after the application reports the copy operation is
> complete!
>
> If you've got a system with 8GiB of memory, multiply all that by four.
>
> The defaults for vm.dirty_bytes and vm.dirty_background_bytes are, IMO,
> badly broken and an insidious source of problems for both regular Linux
> users and system administrators.
I would prefer to be able to have different settings per disk.
For swappable drives like USB sticks, I would use small numbers.
But for built-in drives, I'd prefer to keep the default values, or
values tuned to the actual drive.
--
Joost
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [gentoo-user] USB crucial file recovery
2016-08-31 15:25 ` Grant
2016-08-31 21:45 ` Alan McKinnon
@ 2016-09-01 14:36 ` Stroller
1 sibling, 0 replies; 70+ messages in thread
From: Stroller @ 2016-09-01 14:36 UTC (permalink / raw
To: gentoo-user
> On 31 Aug 2016, at 16:25, Grant <emailgrant@gmail.com> wrote:
>
>> Yes, FAT. It works and works well.
>> Or exFAT which is Microsoft's solution to the problem of very large
>> files on FAT.
>
> FAT32 won't work for me since I need to use files larger than 4GB. I
know it's beta software but should exfat be more reliable than ntfs?
There's always `split`.
Very easy to use, just a little inconvenient to have to invoke it every time you copy a file to USB.
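Something like this, staying under FAT32's 4GB file size limit (sizes
and names illustrative):

  split -b 3900M bigfile.img bigfile.img.part_
  # and on the other end:
  cat bigfile.img.part_* > bigfile.img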
Stroller.
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [gentoo-user] Re: USB crucial file recovery
2016-09-01 13:35 ` Rich Freeman
@ 2016-09-01 14:55 ` Michael Mol
2016-09-01 15:02 ` Rich Freeman
0 siblings, 1 reply; 70+ messages in thread
From: Michael Mol @ 2016-09-01 14:55 UTC (permalink / raw
To: gentoo-user
[-- Attachment #1: Type: text/plain, Size: 3096 bytes --]
On Thursday, September 01, 2016 09:35:15 AM Rich Freeman wrote:
> On Thu, Sep 1, 2016 at 8:41 AM, Michael Mol <mikemol@gmail.com> wrote:
> > The defaults for vm.dirty_bytes and vm.dirty_background_bytes are, IMO,
> > badly broken and an insidious source of problems for both regular Linux
> > users and system administrators.
>
> It depends on whether you tend to yank out drives without unmounting
> them,
The sad truth is that many (most?) users don't understand the idea of
unmounting. Even Microsoft largely gave up, defaulting flash drives to
"optimized for data safety" as opposed to "optimized for speed". While it'd be nice if the
average John Doe would follow instructions, anyone who's worked in IT
understands that the average John Doe...doesn't. And above-average ones assume
they know better and don't have to.
As such, queuing up that much data while reporting to the user that the copy
is already complete violates the principle of least surprise.
> or if you have a poorly-implemented database that doesn't know
> about fsync and tries to implement transactions across multiple hosts.
I don't know off the top of my head what database implementation would do that,
though I could think of a dozen that could be vulnerable if they didn't sync
properly.
The real culprits that come to mind, for me, are copy tools - whether dd,
mv, cp, or a copy dialog in GNOME or KDE. I would love to see CoDel-style
time-based buffer sizes applied throughout the stack. The user may not care,
on the face of it, how many milliseconds it takes for a read to turn into a
completed write, but they do like accurate time estimates and a low-latency
UI.
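Until the tools get smarter, the workaround is to force the flush
yourself - a sketch, file names hypothetical:
  cp bigfile /mnt/usb/ && sync   # don't trust cp's exit status alone
  # or let dd fsync the target once at the end:
  dd if=bigfile of=/mnt/usb/bigfile bs=1M conv=fsync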
>
> The flip side of all of this is that you can save-save-save in your
> applications and not sit there and watch your application wait for the
> USB drive to catch up. It also allows writes to be combined more
> efficiently (less of an issue for flash, but you probably can still
> avoid multiple rounds of overwriting data in place if multiple
> revisions come in succession, and metadata updating can be
> consolidated).
I recently got bit by vim's easytags causing saves to take a couple dozen
seconds, leading me not to save as often as I used to. And then a bunch of
code I wrote Monday...wasn't there any more. I was sad.
>
> For a desktop-oriented workflow I'd think that having nice big write
> buffers would greatly improve the user experience, as long as you hit
> that unmount button or pay attention to that flashing green light
> every time you yank a drive.
Realistically, users aren't going to pay attention. You and I do, but that's
because we understand the *why* behind the importance.
I love me fat write buffers for write combining, page caches, etc. But, IMO, it
shouldn't take longer than 1-2s (barring spinning rust disk wake) for full
buffers to flush to disk; at modern write speeds (even for a slow spinning
disc), that's going to be a dozen or so megabytes of data, which is plenty big
for write-combining purposes.
--
:wq
[-- Attachment #2: This is a digitally signed message part. --]
[-- Type: application/pgp-signature, Size: 473 bytes --]
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [gentoo-user] Re: USB crucial file recovery
2016-09-01 14:21 ` J. Roeleveld
@ 2016-09-01 15:01 ` Michael Mol
0 siblings, 0 replies; 70+ messages in thread
From: Michael Mol @ 2016-09-01 15:01 UTC (permalink / raw
To: gentoo-user
[-- Attachment #1: Type: text/plain, Size: 2735 bytes --]
On Thursday, September 01, 2016 04:21:18 PM J. Roeleveld wrote:
> On Thursday, September 01, 2016 08:41:39 AM Michael Mol wrote:
> > On Wednesday, August 31, 2016 11:45:15 PM Alan McKinnon wrote:
> > > On 31/08/2016 17:25, Grant wrote:
> > > >> Which NTFS system are you using?
> > > >>
> > > >> ntfs kernel module? It's quite dodgy and unsafe with writes
> > > >> ntfs-3g on fuse? I find that one quite solid
> > > >
> > > > I'm using ntfs-3g as opposed to the kernel option(s).
> > >
> > > I'm offering 10 to 1 odds that your problems came from ... one that you
> > > yanked too soon
> >
> > (pardon the in-line snip, while I get on my soap box)
> >
> > The likelihood of this happening can be greatly reduced by setting
> > vm.dirty_bytes to something like 2097152 and vm.dirty_background_bytes to
> > something like 1048576. This prevents the kernel from queuing up as much
> > data for sending to disk. The application doing the copy or write will
> > normally report "complete" long before writes to slow media are
> > actually...complete. Setting vm.dirty_bytes to something low prevents the
> > kernel's backlog of data from getting so long.
> >
> > vm.dirty_bytes has another, closely related setting, vm.dirty_ratio.
> > vm.dirty_ratio is a percentage of RAM that may be used for dirty pages.
> > If vm.dirty_ratio is set, vm.dirty_bytes will read 0. If
> > vm.dirty_bytes is set, vm.dirty_ratio will read 0.
> >
> > The default is for vm.dirty_ratio to be 20, which means up to 20% of
> > your memory can find itself used as a write buffer for data on its way to
> > a
> > filesystem. On a system with only 2GiB of RAM, that's 409MiB of data that
> > the kernel may still be waiting to push through the filesystem layer! If
> > you're writing to, say, a class 10 SDHC card, the data may not be at rest
> > for another 40s after the application reports the copy operation is
> > complete!
> >
> > If you've got a system with 8GiB of memory, multiply all that by four.
> >
> > The defaults for vm.dirty_bytes and vm.dirty_background_bytes are, IMO,
> > badly broken and an insidious source of problems for both regular Linux
> > users and system administrators.
>
> I would prefer to be able to have different settings per disk.
> Swappable drives like USB, I would put small numbers.
> But for built-in drives, I'd prefer to keep default values or tuned to the
> actual drive.
The problem is that's not really possible. vm.dirty_bytes and
vm.dirty_background_bytes deal with the page cache, which sits at the VFS
layer, not the block device layer. It could certainly make sense to apply it
on a per-mount basis, though.
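There is one partial per-device knob, though: each block device's
backing-dev-info in sysfs has a max_ratio that caps that device's share
of the dirty cache. A sketch, assuming the stick is sdc (major:minor
8:32):
  cat /sys/class/bdi/8:32/max_ratio    # defaults to 100
  echo 1 > /sys/class/bdi/8:32/max_ratio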
--
:wq
[-- Attachment #2: This is a digitally signed message part. --]
[-- Type: application/pgp-signature, Size: 473 bytes --]
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [gentoo-user] Re: USB crucial file recovery
2016-09-01 14:55 ` Michael Mol
@ 2016-09-01 15:02 ` Rich Freeman
0 siblings, 0 replies; 70+ messages in thread
From: Rich Freeman @ 2016-09-01 15:02 UTC (permalink / raw
To: gentoo-user
On Thu, Sep 1, 2016 at 10:55 AM, Michael Mol <mikemol@gmail.com> wrote:
>
> The sad truth is that many (most?) users don't understand the idea of
> unmounting. Even Microsoft largely gave up, defaulting flash drives to
> "optimized for data safety" rather than "optimized for speed". While it'd be nice if the
> average John Doe would follow instructions, anyone who's worked in IT
> understands that the average John Doe...doesn't. And above-average ones assume
> they know better and don't have to.
>
If these users are the target of your OS then you should probably tune
the settings accordingly.
This mailing list notwithstanding (sometimes), I don't think this is
really Gentoo's core audience.
--
Rich
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [gentoo-user] Re: USB crucial file recovery
2016-08-31 14:33 ` Michael Mol
2016-08-31 16:43 ` Rich Freeman
@ 2016-09-01 18:09 ` Volker Armin Hemmann
2016-09-01 18:54 ` Rich Freeman
1 sibling, 1 reply; 70+ messages in thread
From: Volker Armin Hemmann @ 2016-09-01 18:09 UTC (permalink / raw
To: gentoo-user
Am 31.08.2016 um 16:33 schrieb Michael Mol:
>
> In data=journal mode, the contents of files pass through the journal as well,
> ensuring that, at least as far as the filesystem's responsibility is concerned,
> the data will be intact in the event of a crash.
A common misconception, but not true at all. Google a bit.
>
> Now, I can still think of ways you can lose data in data=journal mode:
>
> * You mounted the filesystem with barrier=0 or with nobarrier; this can result
not needed.
> in data writes going to disk out of order, if the I/O stack supports barriers.
> If you say "my file is ninety bytes" "here are ninety bytes of data, all 9s",
> "my file is now thirty bytes", "here are thirty bytes of data, all 3s", then in
> the end you should have a thirty-byte file filled with 3s. If you have barriers
> enabled and you crash halfway through the whole process, you should find a file
> of ninety bytes, all 9s. But if you have barriers disabled, the data may hit
> disk as though you'd said "my file is ninety bytes, here are ninety bytes of
> data, all 9s, here are thirty bytes of data, all 3s, now my file is thirty
> bytes." If that happens, and you crash partway through the commit to disk, you
> may see a ninety-byte file consisting of thirty 3s and sixty 9s. Or things may
> land such that you see a thirty-byte file of 9s.
>
> * Your application didn't flush its writes to disk when it should have.
not needed either.
>
> * Your vm.dirty_bytes or vm.dirty_ratio are too high, you've been writing a
> lot to disk, and the kernel still has a lot of data buffered waiting to be
> written. (Well, that can always lead to data loss regardless of how high those
> settings are, which is why applications should flush their writes.)
>
> * You've used hdparm to enable write buffers in your hard disks, and your hard
> disks lose power while their buffers have data waiting to be written.
>
> * You're using a buggy disk device that does a poor job of handling power
> loss. Such as some SSDs which don't have large enough capacitors for their own
> write reordering. Or just about any flash drive.
>
> * There's a bug in some code, somewhere.
nope.
> In-memory corruption of data is a universal hazard. ECC should be the norm,
> not the exception, honestly.
>
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [gentoo-user] Re: USB crucial file recovery
2016-09-01 18:09 ` Volker Armin Hemmann
@ 2016-09-01 18:54 ` Rich Freeman
0 siblings, 0 replies; 70+ messages in thread
From: Rich Freeman @ 2016-09-01 18:54 UTC (permalink / raw
To: gentoo-user
On Thu, Sep 1, 2016 at 2:09 PM, Volker Armin Hemmann
<volkerarmin@googlemail.com> wrote:
>
> a common misconception. But not true at all. Google a bit.
Feel free to enlighten us. My understanding is that data=journal
means that all data gets written first to the journal. Completed
writes will make it to the main filesystem after a crash, and
incomplete writes will of course be rolled back, which is what you
want.
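For concreteness, data journaling on ext3/4 is a mount option, or can be
baked into the superblock as a default - a sketch with a hypothetical
device name:
  mount -o data=journal /dev/sdc1 /mnt/usb
  # or make it the default mount option for that filesystem:
  tune2fs -o journal_data /dev/sdc1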
But simply disagreeing and saying to search Google is fairly useless,
since you can find all kinds of junk on Google. You can't even
guarantee that the same search terms will lead to the same results for
two different people.
And FWIW, this is a topic that Linus and the ext3 authors have
disagreed about at points (not this specific question, but rather what
the most appropriate defaults are). So, it isn't like there isn't
room for disagreement on best practice, or that any two people with
knowledge of the issues are guaranteed to agree.
>>
>> Now, I can still think of ways you can lose data in data=journal mode:
>>
>> * You mounted the filesystem with barrier=0 or with nobarrier; this can result
>
> not needed.
Well, duh. He is telling people NOT to do this, because this is how
you can LOSE data.
>>
>> * Your application didn't flush its writes to disk when it should have.
>
> not needed either.
That very much depends on the application. If you need to ensure that
transactions are in-sync with remote hosts (such as in a database) it
is absolutely critical to flush writes.
Applications shouldn't just flush on every write or close, because
that causes needless disk thrashing. Yes, data will be lost if users
have write caching enabled, and users who would prefer a slow system
over one that loses more data when the power goes out should disable
caching or buy a UPS.
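For what it's worth, toggling the on-drive write cache is a single
hdparm flag - a sketch for a SATA disk; many USB bridges ignore or
reject it:
  hdparm -W0 /dev/sda   # turn the drive's write cache off
  hdparm -W1 /dev/sda   # and back on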
>
> nope.
Care to actually offer anything constructive? His advice was
reasonably well-founded, even if I personally wouldn't do everything
exactly as he prefers.
--
Rich
^ permalink raw reply [flat|nested] 70+ messages in thread
* [gentoo-user] Re: USB crucial file recovery
2016-08-30 0:51 ` Grant
` (3 preceding siblings ...)
2016-08-30 8:29 ` Alarig Le Lay
@ 2016-09-01 21:48 ` Kai Krakow
4 siblings, 0 replies; 70+ messages in thread
From: Kai Krakow @ 2016-09-01 21:48 UTC (permalink / raw
To: gentoo-user
Am Mon, 29 Aug 2016 17:51:19 -0700
schrieb Grant <emailgrant@gmail.com>:
> > # mount -o loop,ro -t ntfs usb.img /mnt/usbstick
> > NTFS signature is missing.
> > Failed to mount '/dev/loop0': Invalid argument
> > The device '/dev/loop0' doesn't seem to have a valid NTFS.
> > Maybe the wrong device is used? Or the whole disk instead of a
> > partition (e.g. /dev/sda, not /dev/sda1)? Or the other way around?
> >
> > How else can I get my file from the ddrescue image of the USB stick?
> >
> > - Grant
>
>
> Ah, I got it, I just needed to specify the offset when mounting.
> Thank you so much everyone. Many hours of work went into the file I
> just recovered.
>
> So I'm done with NTFS forever. Will ext2 somehow allow me to use the
> USB stick across Gentoo systems without permission/ownership problems?
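For the archives, the offset trick spelled out: the offset is the
partition's start sector times the sector size. A sketch, assuming the
usual first partition at sector 2048 and 512-byte sectors:
  fdisk -l usb.img    # shows the partition's start sector
  mount -o loop,ro,offset=$((2048*512)) -t ntfs usb.img /mnt/usbstick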
Long story short: Do not put important files on USB thumb drives. They
are known to break unexpectedly and horribly. They even offer silent data
corruption as a hidden feature if stored away for a few weeks or 1-2
years without ever being connected.
By the way: Many thumb drives are internally optimized for FAT and NTFS
usage - putting anything else on them puts more stress on the internal
flash translation layer, which is most of the time very simple (some
drives only do wear leveling where the FAT tables usually are).
So using NTFS was probably not your worst decision. Ext2 (or, even worse,
ext3 due to its journal) may very well destroy your thumb drive faster.
I was once able to destroy a cheap thumb drive within two weeks by
putting something other than FAT32 on it and constantly writing tens of
GBs to it in small blocks. Now it has unusable blocks spread all
over its storage space. I cannot format it as anything other than FAT32
now. I don't use it any longer. It no longer reliably stores files.
Most thumb drives also need to refresh their cells internally; this is
part of a maintenance process which runs while they are connected. So,
you even cannot use them for archive storage. Thumb drives are for
temporary storage only, to transport files. But never use them as a
single copy of important data.
--
Regards,
Kai
Replies to list-only preferred.
^ permalink raw reply [flat|nested] 70+ messages in thread
* [gentoo-user] Re: USB crucial file recovery
2016-08-30 7:38 ` Neil Bothwick
@ 2016-09-01 21:50 ` Kai Krakow
2016-09-01 22:02 ` Neil Bothwick
0 siblings, 1 reply; 70+ messages in thread
From: Kai Krakow @ 2016-09-01 21:50 UTC (permalink / raw
To: gentoo-user
[-- Attachment #1: Type: text/plain, Size: 814 bytes --]
Am Tue, 30 Aug 2016 08:38:26 +0100
schrieb Neil Bothwick <neil@digimed.co.uk>:
> On Tue, 30 Aug 2016 06:46:54 +0100, Mick wrote:
>
> > > So I'm done with NTFS forever. Will ext2 somehow allow me to use
> > > the USB stick across Gentoo systems without permission/ownership
> > > problems?
> > >
> > > - Grant
> >
> > ext2 will work, but you'll have to mount it or chmod -R 0777, or
> > only root will be able to access it.
>
> That's not true. Whoever owns the files and directories will be able
> to access them, even if root mounted the stick, just like a hard
> drive. If you have the same UID on all your systems, chown -R
> youruser: /mount/point will make everything available on all systems.
As long as uids match...
--
Regards,
Kai
Replies to list-only preferred.
[-- Attachment #2: Digitale Signatur von OpenPGP --]
[-- Type: application/pgp-signature, Size: 181 bytes --]
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [gentoo-user] Re: USB crucial file recovery
2016-09-01 21:50 ` [gentoo-user] " Kai Krakow
@ 2016-09-01 22:02 ` Neil Bothwick
2016-09-01 23:54 ` Kai Krakow
0 siblings, 1 reply; 70+ messages in thread
From: Neil Bothwick @ 2016-09-01 22:02 UTC (permalink / raw
To: gentoo-user
[-- Attachment #1: Type: text/plain, Size: 776 bytes --]
On Thu, 1 Sep 2016 23:50:17 +0200, Kai Krakow wrote:
> > > ext2 will work, but you'll have to mount it or chmod -R 0777, or
> > > only root will be able to access it.
> >
> > That's not true. Whoever owns the files and directories will be able
> > to access them, even if root mounted the stick, just like a hard
> > drive. If you have the same UID on all your systems, chown -R
> > youruser: /mount/point will make everything available on all
> > systems.
>
> As long as uids match...
That's what I said, whoever owns the files. As far as Linux is
concerned, the UID is the person, usernames are just a convenience
mapping to make life simpler for the wetware.
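Checking and fixing that is quick - a sketch; "youruser" stands in for
whatever the account is called on each box:
  id -u youruser            # run on every system; the numbers must match
  chown -R youruser: /mnt/usb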
--
Neil Bothwick
deja noo - reminds you of the last time you visited Scotland
[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 163 bytes --]
^ permalink raw reply [flat|nested] 70+ messages in thread
* [gentoo-user] Re: USB crucial file recovery
2016-08-30 20:27 ` Volker Armin Hemmann
2016-08-30 20:32 ` Grant
2016-08-30 20:42 ` [gentoo-user] " Grant Edwards
@ 2016-09-01 22:05 ` Kai Krakow
2 siblings, 0 replies; 70+ messages in thread
From: Kai Krakow @ 2016-09-01 22:05 UTC (permalink / raw
To: gentoo-user
Am Tue, 30 Aug 2016 22:27:46 +0200
schrieb Volker Armin Hemmann <volkerarmin@googlemail.com>:
> Am 30.08.2016 um 21:14 schrieb J. Roeleveld:
> > On August 30, 2016 8:58:17 PM GMT+02:00, Volker Armin Hemmann
> > <volkerarmin@googlemail.com> wrote:
> >> Am 30.08.2016 um 20:12 schrieb Alan McKinnon:
> [...]
> [...]
> [...]
> >> first
> [...]
> [...]
> >> includes
> [...]
> [...]
> [...]
> [...]
> >> because exfat does not work across gentoo systems. ext2 does.
> > Exfat works when the drivers are installed.
> > Same goes for ext2.
> >
> > It is possible to not have support for ext2/3 or 4 and still have a
> > fully functional system. (Btrfs or zfs for the full system for
> > instance)
> >
> > When using UEFI boot, a vfat partition with support is required.
> >
> > --
> > Joost
>
> ext2 is on every system
Not on mine...
> , exfat not. ext2 is very stable, tested and
> well aged. exfat is some fuse something crap. New, hardly tested and
> unstable as it gets.
>
> And why use exfat if you use linux? It is just not needed at all.
I consider ext2 not suitable for USB drives because it has no journal
and can break horribly if accidentally removed without unmounting (or
pulled before everything was written).
OTOH, I recommend against using filesystems with a fixed journal area
on thumb drives. Some may be optimized for NTFS usage.
A log structured filesystem (like f2fs, nilfs2) or one with wandering
journals (like reiserfs) may be best - tho I cannot speak for them
regarding accidental disconnects without unmounting first. One should
test it. Reiserfs3 worked very well for me, much better than ext[23],
when I once had to fight with a failing RAID controller (it just went
offline). All reiserfs filesystems could be recovered by fsck; only a few
files had wrong checksums and a few were missing. That system (on different
hardware, of course) is still in use today, though converted over to xfs
because reiserfs performs really badly under parallel access. Ext[23] was
totally borked, fsck had no chance to recover anything. I'd consider
reiserfs3 mature.
Personally, I tend to use f2fs on thumb drives. Nilfs2 may be an
option, too, but I have never used it. I have no experience with f2fs on
failing hardware, though. But in the end: thumb drives aren't for
important data anyway. So what counts is getting the best life time out
of them.
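If you want to try it, putting f2fs on a stick is short - a sketch with
a hypothetical device name; the kernel also needs f2fs support enabled:
  emerge sys-fs/f2fs-tools
  mkfs.f2fs -l usbstick /dev/sdc1
  mount -t f2fs /dev/sdc1 /mnt/usb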
--
Regards,
Kai
Replies to list-only preferred.
^ permalink raw reply [flat|nested] 70+ messages in thread
* [gentoo-user] Re: USB crucial file recovery
2016-08-30 21:59 ` Rich Freeman
2016-08-30 22:12 ` Volker Armin Hemmann
@ 2016-09-01 22:35 ` Kai Krakow
2016-09-01 23:06 ` Rich Freeman
1 sibling, 1 reply; 70+ messages in thread
From: Kai Krakow @ 2016-09-01 22:35 UTC (permalink / raw
To: gentoo-user
Am Tue, 30 Aug 2016 17:59:02 -0400
schrieb Rich Freeman <rich0@gentoo.org>:
> On Tue, Aug 30, 2016 at 4:58 PM, Volker Armin Hemmann
> <volkerarmin@googlemail.com> wrote:
> >
> > the journal does not add any data integrity benefits at all. It just
> > makes it more likely that the fs is in a sane state if there is a
> > crash. Likely. Not a guarantee. Your data? No one cares.
> >
>
> That depends on the mode of operation. In data=journal mode I believe
> everything gets written twice, which should make it fairly immune to
> most forms of corruption.
No, journal != data integrity. A journal only ensures that data is written
transactionally. You won't end up with messed-up metadata, and from the
API perspective, with data=journal, a partially written block of data
will be replayed after recovering from a crash - up to the last fsync.
If that last fsync happened halfway into a file: Well, then only your
work up to that half of the file is on disk. A well-designed application
can handle this (e.g. transactional databases). But your carefully
written thesis may still be broken.
Journals only ensure consistency on API level, not integrity.
If you need integrity, so that the file system can tell you whether your
file is broken or not, you need checksums.
If you need a way to recover from a half written file, you need a CoW
file system where you could, by luck, go back some generations.
> f2fs would also have this benefit. Data is not overwritten in-place
> in a log-based filesystem; they're essentially journaled by their
> design (actually, they're basically what you get if you ditch the
> regular part of the filesystem and keep nothing but the journal).
This is log-structured, not journalled. You pointed that out, yes, but
you weakened that by writing "basically the same". I think the
difference is important. Mostly because the journal is a fixed area on
the disk, while a log-structured file system has no such journal.
> > If you want an fs that cares about your data: zfs.
> >
>
> I won't argue that the COW filesystems have better data security
> features. It will be nice when they're stable in the main kernel.
This point was raised because it supports checksums, not because it
supports CoW.
Log-structured file systems are, btw, interesting for write-mostly
workloads on spinning disks because head movements are minimized.
They do not automatically help dumb/simple flash translation layers,
though; that takes a little more logic, exploiting the internal
structure of flash (writing only sequentially in page-sized blocks,
garbage collection and reuse only at the erase-block level). F2fs and
bcache (as a caching layer) do this. Not sure about the others.
Fully (and only fully) journalled file systems are a little similar for
write-mostly workloads because head movements stay within the journal
area but performance goes down as soon as the journal needs to be
spooled to permanent storage.
I think xfs spreads the journal across the storage space along with the
allocation groups (thus better exploiting performance on jbod RAIDs and
RAID systems that do not stripe diagonally already). It may thus also be
an option for thumb drives. But it is known to zero out the complete
contents of files you just saved, for security reasons, after unclean
shutdowns/unmounts. Still, it is pretty rock solid. And
it will gain CoW support in the future which probably eliminates the
kill-the-contents issue, even supporting native snapshotting then.
--
Regards,
Kai
Replies to list-only preferred.
^ permalink raw reply [flat|nested] 70+ messages in thread
* [gentoo-user] Re: USB crucial file recovery
2016-08-31 0:32 ` Alan McKinnon
2016-08-31 15:25 ` Grant
@ 2016-09-01 22:56 ` Kai Krakow
2016-09-01 23:39 ` Alan McKinnon
1 sibling, 1 reply; 70+ messages in thread
From: Kai Krakow @ 2016-09-01 22:56 UTC (permalink / raw
To: gentoo-user
Am Wed, 31 Aug 2016 02:32:24 +0200
schrieb Alan McKinnon <alan.mckinnon@gmail.com>:
> On 31/08/2016 02:08, Grant wrote:
> [...]
> [...]
> >>
> >> You can't control ownership and permissions of existing files with
> >> mount options on a Linux filesystem. See man mount.
> >
> >
> > So in order to use a USB stick between multiple Gentoo systems with
> > ext2, I need to make sure my users have matching UIDs/GIDs?
>
> Yes
>
> The uids/gids/modes in the inodes themselves are the owners and perms,
> you cannot override them.
>
> So unless you have mode=666, you will need matching UIDs/GIDs (which
> is a royal massive pain in the butt to bring about without NIS or
>> similar)
>
> > I think
> > this is how I ended up on NTFS in the first place.
>
> Didn't we have this discussion about a year ago? Sounds familiar now
>
> > Is there a
> > filesystem that will make that unnecessary and exhibit better
> > reliability than NTFS?
>
> Yes, FAT. It works and works well.
> Or exFAT which is Microsoft's solution to the problem of very large
> files on FAT.
>
> Which NTFS system are you using?
>
> ntfs kernel module? It's quite dodgy and unsafe with writes
>> ntfs-3g on fuse? I find that one quite solid
>
>
>> ntfs-3g does have an annoyance that has bitten me more than once. When
>> ntfs-3g writes to an FS, it can get marked dirty. Somehow, when used
> in a Windows machine the driver there has issues with the FS. Remount
> it in Linux again and all is good.
Well, ntfs-3g simply sets the dirty flag, which to Windows means "needs
chkdsk". So Windows complains upon mount that it needs to chkdsk the
drive first. That's all. Nothing bad.
> The cynic in me says that Microsoft didn't implement their own FS spec
> properly whereas ntfs-3g did :-)
Or ntfs-3g simply doesn't trust itself enough while MS trusts itself
too much. Modern Windows kernels almost never set the dirty bit and
instead trust self-healing capabilities of NTFS by using repair
hotspots. By current design, NTFS may be broken at any time while
Windows tells you nothing about it. If the kernel comes across a
defective structure it marks it as a repair hotspot. A background
process repairs these online. If that fails, it is marked for offline
repair which is repaired silently during mount phase. But the dirty
bit? I haven't seen this in a long time (last time was Windows 2003).
Run a chkdsk on an aging Windows installation which has crashed at one
time or another. Did you ever see a chkdsk running? No? Then run a
forced chkdsk. Chances are that it will find and repair problems. Run a
non-forced chkdsk: it will only check whether there are repair hotspots.
If none are there, it says: everything fine. It's lying to you.
But still, the papers about NTFS self-healing are quite interesting to
read. It just appears not as mature to me as MS thinks it to be.
--
Regards,
Kai
Replies to list-only preferred.
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [gentoo-user] Re: USB crucial file recovery
2016-09-01 22:35 ` Kai Krakow
@ 2016-09-01 23:06 ` Rich Freeman
0 siblings, 0 replies; 70+ messages in thread
From: Rich Freeman @ 2016-09-01 23:06 UTC (permalink / raw
To: gentoo-user
On Thu, Sep 1, 2016 at 6:35 PM, Kai Krakow <hurikhan77@gmail.com> wrote:
> Am Tue, 30 Aug 2016 17:59:02 -0400
> schrieb Rich Freeman <rich0@gentoo.org>:
>
>>
>> That depends on the mode of operation. In data=journal mode I believe
>> everything gets written twice, which should make it fairly immune to
>> most forms of corruption.
>
> No, journal != data integrity. A journal only ensures that data is written
> transactionally. You won't end up with messed-up metadata, and from the
> API perspective, with data=journal, a partially written block of data
> will be replayed after recovering from a crash - up to the last fsync.
> If that last fsync happened halfway into a file: Well, then only your
> work up to that half of the file is on disk.
Well, sure, but all an application needs to do is make sure it calls
write on whole files, and not half-files. It doesn't need to fsync as
far as I'm aware. It just needs to write consistent files in one
system call. Then that write either will or won't make it to disk,
but you won't get half of a write.
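The closest shell equivalent of "one consistent write" is the classic
write-then-rename dance - a sketch, file names hypothetical:
  cp thesis.txt.new thesis.txt.tmp && sync
  mv thesis.txt.tmp thesis.txt   # rename is atomic within one filesystem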
> Journals only ensure consistency on API level, not integrity.
Correct, but this is way better than not journaling or ordering data,
which protects the metadata but doesn't ensure your files aren't
garbled even if the application is careful.
>
> If you need integrity, so that the file system can tell you whether your
> file is broken or not, you need checksums.
>
Btrfs and zfs fail in the exact same way in this particular regard.
If you call write with half of a file, btrfs/zfs will tell you that
half of that file was successfully written. But, it won't hold up for
the other half of the file that the kernel hasn't been told about.
The checksumming in these filesystems really only protects data from
modification after it is written. Sectors that were only half-written
during an outage which have inconsistent checksums probably won't even
be looked at during an fsck/mount, because the filesystem is just
going to replay the journal and write right over them (or to some new
block, still treating the half-written data as unallocated). These
filesystems don't go scrubbing the disk to figure out what happened,
they just replay the log back to the last checkpoint. The checksums
are just used during routine reads to ensure the data wasn't somehow
corrupted after it was written, in which case a good copy is used,
assuming one exists. If not, at least you'll know about the problem.
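If you don't want to wait for a routine read to trip over bad data,
btrfs can be asked to verify everything up front - a sketch:
  btrfs scrub start /mnt   # verify all checksums, repair from a good copy
  btrfs scrub status /mnt  # progress and error counts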
> If you need a way to recover from a half written file, you need a CoW
> file system where you could, by luck, go back some generations.
Only if you've kept snapshots, or plan to hex-edit your disk/etc. The
solution here is to correctly use the system calls.
>
>> f2fs would also have this benefit. Data is not overwritten in-place
>> in a log-based filesystem; they're essentially journaled by their
>> design (actually, they're basically what you get if you ditch the
>> regular part of the filesystem and keep nothing but the journal).
>
> This is log-structured, not journalled. You pointed that out, yes, but
> you weakened that by writing "basically the same". I think the
> difference is important. Mostly because the journal is a fixed area on
> the disk, while a log-structured file system has no such journal.
My point was that they're equivalent from the standpoint that every
write either completes or fails and you don't get half-written data.
Yes, I know how f2fs actually works, and this wasn't intended to be a
primer on log-based filesystems. The COW filesystems have similar
benefits since they don't overwrite data in place, other than maybe
their superblocks (or whatever you call them). I don't know what the
on-disk format of zfs is, but btrfs has multiple copies of the tree
root with a generation number so if something dies partway it is
really easy for it to figure out where it left off (if none of the
roots were updated then any partial tree structures laid down are in
unallocated space and just get rewritten on the next commit, and if
any were written then you have a fully consistent new tree used to
update the remaining roots).
One of these days I'll have to read up on the on-disk format of zfs as
I suspect it would make an interesting contrast with btrfs.
>
> This point was raised because it supports checksums, not because it
> supports CoW.
Sure, but both provide benefits in these contexts. And the only COW
filesystems are also the only ones I'm aware of (at least in popular
use) that have checksums.
>
> Log-structured file systems are, btw, interesting for write-mostly
> workloads on spinning disks because head movements are minimized.
> They do not automatically help dumb/simple flash translation layers,
> though; that takes a little more logic, exploiting the internal
> structure of flash (writing only sequentially in page-sized blocks,
> garbage collection and reuse only at the erase-block level). F2fs and
> bcache (as a caching layer) do this. Not sure about the others.
Sure. It is just really easy to do big block erases in a log-based
filesystem since everything tends to be written (and overwritten)
sequentially. You can of course build a log-based filesystem that
doesn't perform well on flash. They would still tend to have the
benefits of data journaling (for free; the cost is fragmentation which
is of course a bigger issue on disks).
--
Rich
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [gentoo-user] Re: USB crucial file recovery
2016-09-01 22:56 ` [gentoo-user] " Kai Krakow
@ 2016-09-01 23:39 ` Alan McKinnon
0 siblings, 0 replies; 70+ messages in thread
From: Alan McKinnon @ 2016-09-01 23:39 UTC (permalink / raw
To: gentoo-user
On 02/09/2016 00:56, Kai Krakow wrote:
> Am Wed, 31 Aug 2016 02:32:24 +0200
> schrieb Alan McKinnon <alan.mckinnon@gmail.com>:
>
>> On 31/08/2016 02:08, Grant wrote:
>> [...]
>> [...]
>>>>
>>>> You can't control ownership and permissions of existing files with
>>>> mount options on a Linux filesystem. See man mount.
>>>
>>>
>>> So in order to use a USB stick between multiple Gentoo systems with
>>> ext2, I need to make sure my users have matching UIDs/GIDs?
>>
>> Yes
>>
>> The uids/gids/modes in the inodes themselves are the owners and perms,
>> you cannot override them.
>>
>> So unless you have mode=666, you will need matching UIDs/GIDs (which
>> is a royal massive pain in the butt to bring about without NIS or
>> similar)
>>
>>> I think
>>> this is how I ended up on NTFS in the first place.
>>
>> Didn't we have this discussion about a year ago? Sounds familiar now
>>
>>> Is there a
>>> filesystem that will make that unnecessary and exhibit better
>>> reliability than NTFS?
>>
>> Yes, FAT. It works and works well.
>> Or exFAT which is Microsoft's solution to the problem of very large
>> files on FAT.
>>
>> Which NTFS system are you using?
>>
>> ntfs kernel module? It's quite dodgy and unsafe with writes
>> ntfs-3g on fuse? I find that one quite solid
>>
>>
>> ntfs-3g does have an annoyance that has bitten me more than once. When
>> ntfs-3g writes to an FS, it can get marked dirty. Somehow, when used
>> in a Windows machine the driver there has issues with the FS. Remount
>> it in Linux again and all is good.
>
> Well, ntfs-3g simply sets the dirty flag, which to Windows means "needs
> chkdsk". So Windows complains upon mount that it needs to chkdsk the
> drive first. That's all. Nothing bad.
No, that's not it. Read again what I wrote - I have a specific failure
mode which I don't care to investigate, not the general dirty-flag
setting you describe.
^ permalink raw reply [flat|nested] 70+ messages in thread
* [gentoo-user] Re: USB crucial file recovery
2016-09-01 22:02 ` Neil Bothwick
@ 2016-09-01 23:54 ` Kai Krakow
0 siblings, 0 replies; 70+ messages in thread
From: Kai Krakow @ 2016-09-01 23:54 UTC (permalink / raw
To: gentoo-user
[-- Attachment #1: Type: text/plain, Size: 934 bytes --]
Am Thu, 1 Sep 2016 23:02:17 +0100
schrieb Neil Bothwick <neil@digimed.co.uk>:
> On Thu, 1 Sep 2016 23:50:17 +0200, Kai Krakow wrote:
>
> [...]
> > >
> > > That's not true. Whoever owns the files and directories will be
> > > able to access them, even if root mounted the stick, just like a
> > > hard drive. If you have the same UID on all your systems, chown -R
> > > youruser: /mount/point will make everything available on all
> > > systems.
> >
> > As long as uids match...
>
> That's what I said, whoever owns the files. As far as Linux is
> concerned, the UID is the person, usernames are just a convenience
> mapping to make life simpler for the wetware.
Oh yes, I was confused... ;-)
After I hit reply, my eyes stopped at "owns the file" and continued at
"chown -R youruser". So if others were confused, too: Now they
shouldn't.
--
Regards,
Kai
Replies to list-only preferred.
[-- Attachment #2: Digitale Signatur von OpenPGP --]
[-- Type: application/pgp-signature, Size: 181 bytes --]
^ permalink raw reply [flat|nested] 70+ messages in thread
* Re: [gentoo-user] Re: USB crucial file recovery
2016-08-31 21:45 ` Alan McKinnon
2016-09-01 3:42 ` J. Roeleveld
2016-09-01 12:41 ` Michael Mol
@ 2016-09-07 17:41 ` Grant
2 siblings, 0 replies; 70+ messages in thread
From: Grant @ 2016-09-07 17:41 UTC (permalink / raw
To: Gentoo mailing list
>>>> Is there a
>>>> filesystem that will make that unnecessary and exhibit better
>>>> reliability than NTFS?
>>>
>>>
>>> Yes, FAT. It works and works well.
>>> Or exFAT which is Microsoft's solution to the problem of very large
>>> files on FAT.
>>
>>
>> FAT32 won't work for me since I need to use files larger than 4GB. I
>> know it's beta software but should exfat be more reliable than ntfs?
>
>
> It doesn't do all the fancy journalling that ntfs does, so based solely on
> complexity, it ought to be more reliable.
>
> None of us have done real tests and mentioned it here, so we really don't
> know how it pans out in the real world.
>
> Do a bunch of tests yourself and decide
>>
>>
>>> Which NTFS system are you using?
>>>
>>> ntfs kernel module? It's quite dodgy and unsafe with writes
>>> ntfs-3g on fuse? I find that one quite solid
>>
>>
>> I'm using ntfs-3g as opposed to the kernel option(s).
>
>
> I'm offering 10 to 1 odds that your problems came from a faulty USB stick,
> or maybe one that you yanked too soon
It could be failing hardware but I didn't touch the USB stick when it
freaked out. This same thing has happened several times now with two
different USB sticks.
It sounds like I'm stuck with NTFS if I want to share the USB stick
amongst Gentoo systems without managing UIDs and I want to work with
files larger than 4GB. exfat is the other option but it sounds rather
unproven.
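If aligning UIDs ever becomes palatable, the usual recipe is short - a
sketch, with both the new UID (1000) and the old one (1005) hypothetical:
  usermod -u 1000 grant                             # updates the account and its home
  find / -xdev -uid 1005 -exec chown -h 1000 {} +   # catch files elsewhere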
- Grant
^ permalink raw reply [flat|nested] 70+ messages in thread
end of thread
Thread overview: 70+ messages
2016-08-28 18:49 [gentoo-user] USB crucial file recovery Grant
2016-08-28 18:57 ` Neil Bothwick
2016-08-28 19:12 ` Mick
2016-08-29 1:24 ` Grant
2016-08-29 3:28 ` J. Roeleveld
2016-08-30 0:40 ` Grant
2016-08-30 0:51 ` Grant
2016-08-30 5:35 ` Azamat Hackimov
2016-08-30 14:09 ` Rich Freeman
2016-08-30 5:46 ` Mick
2016-08-30 7:38 ` Neil Bothwick
2016-09-01 21:50 ` [gentoo-user] " Kai Krakow
2016-09-01 22:02 ` Neil Bothwick
2016-09-01 23:54 ` Kai Krakow
2016-08-30 7:41 ` [gentoo-user] " Neil Bothwick
2016-08-30 8:29 ` Alarig Le Lay
2016-08-30 9:40 ` Neil Bothwick
2016-08-30 9:43 ` Alarig Le Lay
2016-08-30 10:08 ` Alan McKinnon
2016-08-30 12:04 ` Neil Bothwick
2016-08-30 18:12 ` Alan McKinnon
2016-08-30 18:58 ` Volker Armin Hemmann
2016-08-30 19:14 ` J. Roeleveld
2016-08-30 20:27 ` Volker Armin Hemmann
2016-08-30 20:32 ` Grant
2016-08-30 20:43 ` Rich Freeman
2016-08-30 20:53 ` Volker Armin Hemmann
2016-08-30 22:38 ` Neil Bothwick
2016-08-30 20:42 ` [gentoo-user] " Grant Edwards
2016-08-30 20:46 ` Rich Freeman
2016-08-30 20:58 ` Volker Armin Hemmann
2016-08-30 21:59 ` Rich Freeman
2016-08-30 22:12 ` Volker Armin Hemmann
2016-08-31 14:33 ` Michael Mol
2016-08-31 16:43 ` Rich Freeman
2016-09-01 18:09 ` Volker Armin Hemmann
2016-09-01 18:54 ` Rich Freeman
2016-09-01 22:35 ` Kai Krakow
2016-09-01 23:06 ` Rich Freeman
2016-08-30 22:42 ` Neil Bothwick
2016-08-30 23:06 ` Grant Edwards
2016-08-30 23:54 ` Alan McKinnon
2016-08-31 0:08 ` Grant
2016-08-31 0:32 ` Alan McKinnon
2016-08-31 15:25 ` Grant
2016-08-31 21:45 ` Alan McKinnon
2016-09-01 3:42 ` J. Roeleveld
2016-09-01 6:03 ` Alan McKinnon
2016-09-01 12:41 ` Michael Mol
2016-09-01 13:35 ` Rich Freeman
2016-09-01 14:55 ` Michael Mol
2016-09-01 15:02 ` Rich Freeman
2016-09-01 14:21 ` J. Roeleveld
2016-09-01 15:01 ` Michael Mol
2016-09-07 17:41 ` Grant
2016-09-01 14:36 ` [gentoo-user] " Stroller
2016-09-01 22:56 ` [gentoo-user] " Kai Krakow
2016-09-01 23:39 ` Alan McKinnon
2016-08-31 7:45 ` Neil Bothwick
2016-08-31 7:47 ` Neil Bothwick
2016-08-31 10:30 ` Alarig Le Lay
2016-08-31 11:44 ` Rich Freeman
2016-08-31 12:21 ` Neil Bothwick
2016-08-31 17:09 ` waltdnes
2016-08-31 18:30 ` Neil Bothwick
2016-09-01 22:05 ` Kai Krakow
2016-08-30 22:36 ` [gentoo-user] " Neil Bothwick
2016-08-30 12:01 ` Neil Bothwick
2016-09-01 21:48 ` [gentoo-user] " Kai Krakow
2016-08-30 19:31 ` [gentoo-user] " R0b0t1