* [gentoo-user] Unable to expand ext4 partition
From: Julien Roy @ 2022-02-05 17:43 UTC
To: Gentoo User
Hello,
I've been running an LVM RAID 5 on my home lab for a while, and recently it's been getting awfully close to 100% full, so I decided to buy a new drive to add to it. However, growing an LVM RAID is more complicated than I thought! I found very little documentation on how to do this, and settled on following some user's notes on the Arch Wiki [0]. I should've used mdadm!...
My RAID 5 consisted of 3x6TB drives, giving me a total of 12TB of usable space. I am trying to grow it to 18TB now (4x6TB, minus one drive for parity).
I seem to have done everything in order, since I can see all 4 drives are used when I run the vgdisplay command, and lvdisplay tells me that there is 16.37 TiB of usable space in the logical volume.
In fact, running fdisk -l on the LV confirms this as well:
Disk /dev/vgraid/lvraid: 16.37 TiB
However, the filesystem on it is still at 12TB (or a little less in HDD units) and I am unable to expand it.
When I run the resize2fs command on the logical volume, I can see that it's doing something, and I can hear the disks making HDD noises, but after just a few minutes (perhaps seconds) the disks go quiet, and then a few minutes later resize2fs halts with the following error:
doas resize2fs /dev/vgraid/lvraid
resize2fs 1.46.4 (18-Aug-2021)
Resizing the filesystem on /dev/vgraid/lvraid to 4395386880 (4k) blocks.
resize2fs: Input/output error while trying to resize /dev/vgraid/lvraid
Please run 'e2fsck -fy /dev/vgraid/lvraid' to fix the filesystem
after the aborted resize operation.
A few seconds after resize2fs gives the input/output error, I can see lines like the following appearing multiple times in dmesg:
Feb 5 12:35:50 gentoo kernel: Buffer I/O error on dev dm-8, logical block 2930769920, lost async page write
At first I was worried about data corruption or a defective drive, but I ran a smartctl test on all 4 drives and they all came back healthy. Also, I am still able to mount the LVM volume and access all the data without any issue.
I then tried running the e2fsck command as instructed, which fixes some things [1], and then running resize2fs again, but it fails the same way every time.
My Google skills don't seem to be good enough for this one, so I am hoping someone else here has an idea of what is wrong...
Thanks !
Julien
[0] https://wiki.archlinux.org/title/User:Ctag/Notes#Growing_LVM_Raid5
[1] doas e2fsck -fy /dev/vgraid/lvraid
e2fsck 1.46.4 (18-Aug-2021)
Resize inode not valid. Recreate? yes
Pass 1: Checking inodes, blocks, and sizes
Inode 238814586 extent tree (at level 1) could be narrower. Optimize? yes
Pass 1E: Optimizing extent trees
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
Block bitmap differences: -(2080--2096) +(2304--2305) +(2307--2321)
Fix? yes
Free blocks count wrong for group #0 (1863, counted=1864).
Fix? yes
/dev/vgraid/lvraid: ***** FILE SYSTEM WAS MODIFIED *****
/dev/vgraid/lvraid: 199180/366284800 files (0.8% non-contiguous), 2768068728/2930257920 blocks
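(A minimal diagnostic sketch, assuming the device names above, to confirm that the LV really is larger than the filesystem before retrying the resize — a sanity check, not a fix:
doas lvs -a -o lv_name,lv_size,segtype,sync_percent vgraid   # LV size and RAID sync state
doas blockdev --getsize64 /dev/vgraid/lvraid                 # block device size in bytes
doas dumpe2fs -h /dev/vgraid/lvraid | grep 'Block count'     # current ext4 size in 4k blocks
doas resize2fs -p /dev/vgraid/lvraid                         # retry the resize with progress output
If Cpy%Sync is below 100, the array is still syncing or reshaping, and the resize is probably best left until that finishes.)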
* Re: [gentoo-user] Unable to expand ext4 partition
From: Wol @ 2022-02-05 19:09 UTC
To: gentoo-user
On 05/02/2022 17:43, Julien Roy wrote:
> Hello,
>
> I've been running an LVM RAID 5 on my home lab for a while, and recently
> it's been getting awfully close to 100% full, so I decided to buy a new
> drive to add to it. However, growing an LVM RAID is more complicated
> than I thought! I found very little documentation on how to do this, and
> settled on following some user's notes on the Arch Wiki [0]. I should've
> used mdadm!...
> My RAID 5 consisted of 3x6TB drives, giving me a total of 12TB of usable
> space. I am trying to grow it to 18TB now (4x6TB, minus one drive for parity).
> I seem to have done everything in order, since I can see all 4 drives are
> used when I run the vgdisplay command, and lvdisplay tells me that there
> is 16.37 TiB of usable space in the logical volume.
> In fact, running fdisk -l on the LV confirms this as well:
> Disk /dev/vgraid/lvraid: 16.37 TiB
If you'd been running mdadm I'd have been able to help ... my setup is
ext4 over lvm over md-raid over dm-integrity over hardware...
But you've made no mention of lvgrow or whatever it's called. Since I don't use lv-raid myself, I don't know whether you put ext straight on top of the raid, or whether you need to grow the LV after you've grown the raid. I know I'd have to grow the volume.
Cheers,
Wol
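(For what it's worth, the command Wol is reaching for is lvextend. A minimal sketch of the usual "grow the LV, then grow the filesystem" sequence, assuming the VG actually has free extents left — which may not be the case here:
doas lvextend -l +100%FREE /dev/vgraid/lvraid   # grow the LV into any free space in the VG
doas resize2fs /dev/vgraid/lvraid               # then grow ext4 to fill the LV
lvextend also has a -r/--resizefs flag that runs the filesystem resize for you.)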
* Re: [gentoo-user] Unable to expand ext4 partition
From: Julien Roy @ 2022-02-05 19:37 UTC
To: Gentoo User
I'm running ext4 directly over the logical volume, over hardware.
The steps I used to grow the logical volume are as follows:
1- I created a physical volume on the disk using pvcreate /dev/sda (the new disk became sda and the other ones shifted to sd[bcd])
doas pvs -a
PV         VG     Fmt  Attr PSize  PFree
/dev/sda   vgraid lvm2 a--  <5.46t     0
/dev/sdb   vgraid lvm2 a--  <5.46t     0
/dev/sdc   vgraid lvm2 a--  <5.46t     0
/dev/sdd   vgraid lvm2 a--  <5.46t     0
2- I added the PV to the volume group using vgextend vgraid /dev/sda
doas vgs -a
VG     #PV #LV #SN Attr   VSize  VFree
vgraid   4   1   0 wz--n- 21.83t     0
3- I used the lvconvert command to add the PV to the LV: lvconvert --stripes 3 /dev/vgraid/lvraid
doas lvs -a
lvraid             vgraid rwi-aor--- 16.37t 100.00
[lvraid_rimage_0]  vgraid iwi-aor--- <5.46t
[lvraid_rimage_1]  vgraid iwi-aor--- <5.46t
[lvraid_rimage_2]  vgraid iwi-aor--- <5.46t
[lvraid_rimage_3]  vgraid Iwi-aor--- <5.46t
[lvraid_rmeta_0]   vgraid ewi-aor---  4.00m
[lvraid_rmeta_1]   vgraid ewi-aor---  4.00m
[lvraid_rmeta_2]   vgraid ewi-aor---  4.00m
[lvraid_rmeta_3]   vgraid ewi-aor---  4.00m
Now, if I remember this right, I ran the lvchange --syncaction check /dev/vgraid/lvraid command, waited almost a day for the sync to complete, and then ran lvchange --rebuild /dev/sda /dev/vgraid/lvraid.
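(A small sketch of how one might confirm the resync/reshape really finished before touching the filesystem again, assuming the same VG/LV names — Cpy%Sync should read 100.00, and a capital 'I' in an rimage's attr field means that image is still out of sync:
doas lvs -a -o lv_name,segtype,lv_size,sync_percent,lv_health_status vgraid
doas dmsetup status vgraid-lvraid
)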
One strange thing I noticed is that the `blkid` command doesn't show my LV anymore, and I cannot mount it from fstab using the UUID. I can mount it using the device name, however (mount /dev/vgraid/lvraid /mnt/raid), and that works.
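(On the missing UUID, a quick hedged check — blkid's cache can go stale, so probing the device directly sometimes tells a different story:
doas blkid -p /dev/vgraid/lvraid
lsblk -f /dev/mapper/vgraid-lvraid
)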
At this point, I am considering transferring all my data to another volume, and re-creating the RAID using mdadm.
Here's some more info on my VG and LV :
doas vgdisplay /dev/vgraid
--- Volume group ---
VG Name vgraid
System ID
Format lvm2
Metadata Areas 4
Metadata Sequence No 7
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 4
Act PV 4
VG Size 21.83 TiB
PE Size 4.00 MiB
Total PE 5723164
Alloc PE / Size 5723164 / 21.83 TiB
Free PE / Size 0 / 0
VG UUID y8U06D-V0ZF-90MK-dhS6-szZf-7qzx-yErLF2
doas lvdisplay /dev/vgraid/lvraid
--- Logical volume ---
LV Path /dev/vgraid/lvraid
LV Name lvraid
VG Name vgraid
LV UUID 73wJt0-E6Ni-rujY-9tRm-QsoF-8FPy-3c10Rg
LV Write Access read/write
LV Creation host, time gentoo, 2021-12-02 10:12:48 -0500
LV Status available
# open 1
LV Size 16.37 TiB
Current LE 4292370
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 1024
Block device 253:8
Julien
* Re: [gentoo-user] Unable to expand ext4 partition
From: Wols Lists @ 2022-02-05 22:02 UTC
To: gentoo-user
On 05/02/2022 19:37, Julien Roy wrote:
> At this point, I am considering transferring all my data to another
> volume, and re-creating the RAID using mdadm.
You know about the raid wiki
https://raid.wiki.kernel.org/index.php/Linux_Raid ?
(Edited by yours truly ...)
Cheers,
Wol
* Re: [gentoo-user] Unable to expand ext4 partition
From: Julien Roy @ 2022-02-05 22:16 UTC
To: Gentoo User
I didn't - I typically use the Gentoo and Arch wiki when I need information, but will keep that in mind.
I noticed, on that page, that there's a big bold warning about using post-2019 WD Red drives. Sadly, that's exactly what I am doing, my array is 4xWD60EFAX. I don't know whether that's the cause of the problem. It does say on the wiki that these drives can't be added to existing arrays, so it would make sense. Oh well, lesson learned.
Right now, I am trying to move my data to another volume I have. I don't have another 12TB volume, so instead I am trying to compress the data so it fits on my other volume. Not sure how well that'll work.
Julien
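(One possible way to do that compressed copy — purely a sketch, assuming GNU tar and zstd are available and that /mnt/backup is the destination volume; the paths and filename are made up:
doas tar -cf - -C /mnt/raid . | zstd -T0 -19 -o /mnt/backup/raid-backup.tar.zst
Already-compressed data such as media won't shrink much, so whether it fits depends entirely on what's on the array.)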
* Re: [gentoo-user] Unable to expand ext4 partition
From: Wol @ 2022-02-05 23:04 UTC
To: gentoo-user
On 05/02/2022 22:16, Julien Roy wrote:
> I didn't - I typically use the Gentoo and Arch wiki when I need
> information, but will keep that in mind.
> I noticed, on that page, that there's a big bold warning about using
> post-2019 WD Red drives. Sadly, that's exactly what I am doing, my array
> is 4xWD60EFAX. I don't know whether that's the cause of the problem. It
> does say on the wiki that these drives can't be added to existing
> arrays, so it would make sense. Oh well, lesson learned.
Ouch. EFAX drives are the new SMR version, it seems. You might have been lucky - it might have added okay.
The problem with these drives, basically, is you cannot stream data to
them. They'll accept so much, fill up their CMR buffers, and then stall
while they do an internal re-organisation. And by the time they start
responding again, the OS thinks the drive has failed ...
I've just bought a Toshiba N300 8TB for £165 as my backup drive. As far
as I know that's an okay drive for raid - I haven't heard any bad
stories about SMR being sneaked in ... I've basically split it in 2, 3TB
as a spare partition for my raid, and 5TB as backup for my 6TB (3x3)
raid array.
Look at creating a raid-10 from your WDs, or if you create a new raid-5 array from scratch using --assume-clean and then format it, you're probably okay. Replacing SMRs with CMRs will probably work fine, so if one of your WDs fails you should be okay replacing it, so long as it's not another SMR :-) (If you do a scrub, expect loads of parity errors the first time :-)), but you will probably get away with it if you're careful.
Cheers,
Wol
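(A hedged sketch of what those mdadm invocations might look like — device names are assumed, and creating an array destroys what's on the drives, so only after the data is safely elsewhere:
doas mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[abcd]
# or the raid-5-from-scratch variant Wol mentions, skipping the initial resync:
doas mdadm --create /dev/md0 --level=5 --raid-devices=4 --assume-clean /dev/sd[abcd]
doas mkfs.ext4 /dev/md0
)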
* Re: [gentoo-user] Unable to expand ext4 partition
From: Julien Roy @ 2022-02-06 0:37 UTC
To: Gentoo User; +Cc: Gentoo User
Thanks - the drives are new from this year, so I don't think they'll fail any time soon.
Considering that the WD60EFAX is advertised as "RAID compatible", one thing is for sure: my next drives won't be WD, CMR *or* SMR...
* Re: [gentoo-user] Unable to expand ext4 partition
From: Mark Knecht @ 2022-02-06 0:47 UTC
To: Gentoo User
If it's a WD Red Plus on the label then it's CMR and good. If it's WD
Red without the "Plus" then it's SMR and WD has said don't use them
for this purpose. It's not impossible to run the WD Red in a RAID, but
they tend to fail when resilvering. If it resilvers correctly then it
will probably be OK at least in the short term but you should consider
getting a couple of Red Plus and having them on hand if the plain WD
Red goes bad.
HTH,
Mark
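(A quick, hedged way to check what a drive actually is from its model string — device names assumed:
doas smartctl -i /dev/sda | grep -i 'device model'
lsblk -d -o NAME,MODEL,SIZE
On the 6TB Reds, EFRX is the CMR/Red Plus part and EFAX the SMR one, as discussed above.)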
* Re: [gentoo-user] Unable to expand ext4 partition
From: Wols Lists @ 2022-02-06 8:12 UTC
To: gentoo-user
On 06/02/2022 00:47, Mark Knecht wrote:
> If it's a WD Red Plus on the label then it's CMR and good. If it's WD
> Red without the "Plus" then it's SMR and WD has said don't use them
> for this purpose. It's not impossible to run the WD Red in a RAID, but
> they tend to fail when resilvering. If it resilvers correctly then it
> will probably be OK at least in the short term but you should consider
> getting a couple of Red Plus and having them on hand if the plain WD
> Red goes bad.
Avoid WD ...
I've got two 4TB Seagate Ironwolves and an 8TB Toshiba N300.
I've also got two 3TB Barracudas, but they're quite old and I didn't know they were a bad choice for raid. From what I can make out, Seagate has now split the Barracuda line in two, and you have the BarraCuda (all SMR) and FireCuda (all CMR) aimed at the desktop niche. So you might well be okay with a FireCuda, but neither Seagate nor we raid guys would recommend it.
Cheers,
Wol