RHEL-8313

unable to xfs_growfs resized vdo virt lv

    • Moderate
    • sst_logical_storage
    • ssg_filesystems_storage_and_HA
    • Already upstreamed and present in RHEL-9.5 rebased lvm2 version.
    • QE ack, Dev ack

      +++ This bug was initially created as a clone of Bug #2120738 +++

      Description of problem:
      This vdo virtual volume had been resized using lvextend prior to this xfs_growfs attempt.

      [root@hayes-01 ~]# lvs -a -o +devices,segtype
      LV                VG          Attr        LSize    Pool      Origin  Data%  Meta%  Move  Log  Cpy%Sync  Convert  Devices            Type
      snap              vdo_sanity  swi-a-s---  6.00g              vdo_lv  0.03                                        /dev/sde1(12800)   linear
      vdo_lv            vdo_sanity  owi-aos---  101.95g  vdo_pool                                                      vdo_pool(0)        vdo
      vdo_pool          vdo_sanity  dwi-------  50.00g                     8.08                                        vdo_pool_vdata(0)  vdo-pool
      [vdo_pool_vdata]  vdo_sanity  Dwi-ao----  50.00g                                                                 /dev/sde1(0)       linear

      [root@hayes-01 ~]# df -h
      Filesystem Size Used Avail Use% Mounted on
      /dev/mapper/vdo_sanity-vdo_lv 100G 833M 100G 1% /mnt/vdo_lv

      [root@hayes-01 ~]# xfs_growfs /mnt/vdo_lv
      meta-data=/dev/mapper/vdo_sanity-vdo_lv isize=512 agcount=4, agsize=6553600 blks
      = sectsz=4096 attr=2, projid32bit=1
      = crc=1 finobt=1, sparse=1, rmapbt=0
      = reflink=1 bigtime=0 inobtcount=0
      data = bsize=4096 blocks=26214400, imaxpct=25
      = sunit=0 swidth=0 blks
      naming =version 2 bsize=4096 ascii-ci=0, ftype=1
      log =internal log bsize=4096 blocks=12800, version=2
      = sectsz=4096 sunit=1 blks, lazy-count=1
      realtime =none extsz=4096 blocks=0, rtextents=0
      xfs_growfs: XFS_IOC_FSGROWFSDATA xfsctl failed: Input/output error

      Aug 23 11:03:20 hayes-01 kernel: kvdo621:logQ0: Completing read VIO for LBN 26726527 with error after launch: kvdo: Out of range (2049)
      Aug 23 11:03:20 hayes-01 kernel: kvdo621:cpuQ1: mapToSystemError: mapping internal status code 2049 (kvdo: VDO_OUT_OF_RANGE: kvdo: Out of range) to EIO
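      As a cross-check on the numbers above (a sketch using the 4 KiB VDO logical block size): the filesystem's 26214400 data blocks are exactly 100 GiB, while the failing LBN 26726527 sits just under the 101.95 GiB LSize lvm reports for vdo_lv, so the read lands inside the space lvextend added but past what the vdo target was actually told about:

      ```python
      # Numbers taken from the report above; VDO uses 4 KiB logical blocks.
      BLOCK = 4096
      GIB = 2**30

      failing_lbn = 26726527   # LBN from the kvdo "Out of range" message
      fs_blocks = 26214400     # XFS data blocks before the grow attempt

      fs_bytes = fs_blocks * BLOCK
      lbn_gib = failing_lbn * BLOCK / GIB

      print(f"filesystem size: {fs_bytes / GIB:.2f} GiB")   # 100.00 GiB
      print(f"failing read at: {lbn_gib:.2f} GiB")          # 101.95 GiB
      # The failing read is just below lvm's reported 101.95g LSize, i.e.
      # within the extended range that the vdo target never learned about.
      ```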

      Version-Release number of selected component (if applicable):
      kernel-4.18.0-417.el8 BUILT: Wed Aug 10 15:40:43 CDT 2022

      lvm2-2.03.14-6.el8 BUILT: Fri Jul 29 05:40:53 CDT 2022
      lvm2-libs-2.03.14-6.el8 BUILT: Fri Jul 29 05:40:53 CDT 2022

      vdo-6.2.7.17-14.el8 BUILT: Tue Jul 19 10:05:39 CDT 2022
      kmod-kvdo-6.2.7.17-87.el8 BUILT: Thu Aug 11 13:47:21 CDT 2022

      How reproducible:
      Every time

      — Additional comment from corwin on 2022-08-23 17:52:51 UTC —

      I believe this is a mismatch in lvm's and vdo's perceptions of the logical size of the vdo device, probably due to a rounding error. In RHEL-9 vdo does more validation of the table line, so this mismatch is detected when the table is loaded rather than when I/O goes off the end of the device.
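      One way such a mismatch could be surfaced (a sketch; the sector counts below are hypothetical stand-ins for what `lvs --noheadings --units s -o size` and `blockdev --getsz` would report on the affected host):

      ```shell
      # Compare the LV size recorded in lvm metadata with the size the
      # device-mapper device actually exposes, both in 512-byte sectors.
      # On the affected host these would come from:
      #   lvs --noheadings --units s -o size vdo_sanity/vdo_lv
      #   blockdev --getsz /dev/mapper/vdo_sanity-vdo_lv
      lvm_sectors=213811200   # hypothetical: what lvm metadata claims
      dm_sectors=213600256    # hypothetical: what the vdo target was given

      if [ "$lvm_sectors" -ne "$dm_sectors" ]; then
          echo "size mismatch: lvm=$lvm_sectors dm=$dm_sectors (sectors)"
      fi
      ```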

      — Additional comment from Zdenek Kabelac on 2023-02-03 11:07:41 UTC —

      Question: wasn't vdo_lv accidentally resized while inactive?

      This has been prohibited by a recent commit:

      https://listman.redhat.com/archives/lvm-devel/2023-January/024535.html

      Since I see a snapshot of this LV, and there is no support for resizing an 'active' snapshot, while an inactive vdo LV cannot be resized either, my guess is that an older version of lvm allowed 'extending' the virtual size of an inactive vdo volume, and this change was not tracked properly inside the vdo target.

      With the above-mentioned patch included in the build, you should get an error when trying to resize inactive vdo LVs (unsure which version will include this patch).
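      A minimal sketch of the suspected sequence (illustrative only; requires root, and reuses the LV names from this report). On an lvm2 build carrying the patch above, the lvextend step should now be rejected rather than silently desynchronizing the sizes:

      ```shell
      lvchange -an vdo_sanity/vdo_lv      # deactivate the vdo virtual LV
      lvextend -L +2G vdo_sanity/vdo_lv   # older lvm: succeeds without informing
                                          # the vdo target of the new virtual size
      lvchange -ay vdo_sanity/vdo_lv      # reactivate: lvm and the vdo target now
                                          # disagree about the LV's logical size
      ```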

      — Additional comment from Corey Marthaler on 2023-02-22 19:33:18 UTC —

      The current (latest) 9.2 build has the same behavior.

      kernel-5.14.0-252.el9 BUILT: Wed Feb 1 03:30:10 PM CET 2023
      lvm2-2.03.17-7.el9 BUILT: Thu Feb 16 03:24:54 PM CET 2023
      lvm2-libs-2.03.17-7.el9 BUILT: Thu Feb 16 03:24:54 PM CET 2023

      [root@virt-008 ~]# df -h
      Filesystem Size Used Avail Use% Mounted on
      /dev/mapper/vdo_sanity-vdo_lv 100G 2.2G 98G 3% /mnt/vdo_lv

      [root@virt-008 ~]# xfs_growfs /mnt/vdo_lv
      meta-data=/dev/mapper/vdo_sanity-vdo_lv isize=512 agcount=4, agsize=6553600 blks
      = sectsz=4096 attr=2, projid32bit=1
      = crc=1 finobt=1, sparse=1, rmapbt=0
      = reflink=1 bigtime=1 inobtcount=1
      data = bsize=4096 blocks=26214400, imaxpct=25
      = sunit=0 swidth=0 blks
      naming =version 2 bsize=4096 ascii-ci=0, ftype=1
      log =internal log bsize=4096 blocks=12800, version=2
      = sectsz=4096 sunit=1 blks, lazy-count=1
      realtime =none extsz=4096 blocks=0, rtextents=0
      xfs_growfs: XFS_IOC_FSGROWFSDATA xfsctl failed: Input/output error

      Feb 22 20:28:54 virt-008 kernel: kvdo8:logQ0: Completing read vio for LBN 26726527 with error after launch: VDO Status: Out of range (1465)
      Feb 22 20:28:54 virt-008 kernel: kvdo8:cpuQ1: vdo_map_to_system_error: mapping internal status code 1465 (VDO_OUT_OF_RANGE: VDO Status: Out of range) to EIO

            mcsontos@redhat.com Marian Csontos
            cmarthal@redhat.com Corey Marthaler
            Zdenek Kabelac
            Cluster QE