RHEL-54177: raid1 conversion to vdo backed _tdata can result in "device-mapper: remove ioctl" failure

    • Type: Bug
    • Resolution: Unresolved
    • Priority: Normal
    • Affects Version: rhel-9.5
    • Team: rhel-sst-logical-storage
    • SSG: ssg_filesystems_storage_and_HA
    • Architecture: x86_64

      Concerning kernel "device-mapper: remove ioctl ... failed" messages that appear when converting raid1 volumes to VDO-backed thin pool data volumes.

      kernel-5.14.0-490.el9    BUILT: Fri Aug  2 10:42:23 PM CEST 2024
      lvm2-2.03.24-2.el9    BUILT: Wed Aug  7 09:41:45 PM CEST 2024
      lvm2-libs-2.03.24-2.el9    BUILT: Wed Aug  7 09:41:45 PM CEST 2024
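 
      For completeness, a sketch of the setup preceding the transcript below (the VG is assumed to be freshly created; the /dev/sd[abc] PV names match the lvs output further down):
 
      # hypothetical setup: three unused disks, enough legs for the raid1 -m 2 layout
      vgcreate thinpool_sanity /dev/sda /dev/sdb /dev/sdc
      # create a 3-leg raid1 LV, then convert it to a thin pool with a VDO-backed data volume
      lvcreate --yes --type raid1 -n convert_pool -L 12G -m 2 thinpool_sanity
      lvconvert --yes --type thin-pool --pooldatavdo y thinpool_sanity/convert_pool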
       
       
      [root@virt-485 ~]# lvcreate --yes --type raid1 -n convert_pool  -L 12G -m 2 thinpool_sanity && lvconvert --yes --type thin-pool --pooldatavdo y thinpool_sanity/convert_pool
        Wiping vdo signature on /dev/thinpool_sanity/convert_pool.
        Logical volume "convert_pool" created.
        Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data.
        WARNING: Converting thinpool_sanity/convert_pool to thin pool's data volume with metadata wiping.
        THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
          The VDO volume can address 8 GB in 4 data slabs, each 2 GB.
          It can grow to address at most 16 TB of physical storage in 8192 slabs.
          If a larger maximum size might be needed, use bigger slabs.
        device-mapper: remove ioctl on  (253:8) failed: Device or resource busy
        Logical volume "convert_pool" created.
        Converted thinpool_sanity/convert_pool to thin pool.
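 
      The failed remove ioctl names no device, only the 253:8 major:minor pair. One way to map that pair to a dm device name while reproducing (a diagnostic sketch, not part of the original report; run post-hoc the minor number may already be reused):
 
      # list dm devices with their major:minor numbers
      dmsetup info -c -o name,major,minor
      # or filter for the device in question
      dmsetup info -c -o name,major,minor --noheadings | awk '$2 == 253 && $3 == 8'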
       
       
      Aug 13 17:15:20 virt-485 kernel: device-mapper: raid: Superblocks created for new raid set
      Aug 13 17:15:20 virt-485 kernel: md/raid1:mdX: not clean -- starting background reconstruction
      Aug 13 17:15:20 virt-485 kernel: md/raid1:mdX: active with 3 out of 3 mirrors
      Aug 13 17:15:20 virt-485 kernel: mdX: bitmap file is out of date, doing full recovery
      Aug 13 17:15:20 virt-485 kernel: md: resync of RAID array mdX
      Aug 13 17:15:20 virt-485 dmeventd[258602]: Monitoring RAID device thinpool_sanity-convert_pool for events.
      Aug 13 17:15:21 virt-485 kernel: md: mdX: resync interrupted.
      Aug 13 17:15:21 virt-485 dmeventd[258602]: No longer monitoring RAID device thinpool_sanity-convert_pool for events.
      Aug 13 17:15:21 virt-485 kernel: dm-8: detected capacity change from 0 to 25165824
      Aug 13 17:15:21 virt-485 kernel: md/raid1:mdX: not clean -- starting background reconstruction
      Aug 13 17:15:21 virt-485 kernel: md/raid1:mdX: active with 3 out of 3 mirrors
      Aug 13 17:15:21 virt-485 kernel: md: requested-resync of RAID array mdX
      Aug 13 17:15:22 virt-485 dmeventd[258602]: Monitoring RAID device thinpool_sanity-convert_pool_vpool0 for events.
      Aug 13 17:15:22 virt-485 UDS/vdoformat[293126]: INFO   (vdoformat/293126) Using 1 indexing zone for concurrency.
      Aug 13 17:15:22 virt-485 dmeventd[258602]: No longer monitoring RAID device thinpool_sanity-convert_pool_vpool0 for events.
      Aug 13 17:15:23 virt-485 kernel: md: mdX: requested-resync interrupted.
      Aug 13 17:15:23 virt-485 kernel: dm-8: detected capacity change from 0 to 25165824
      Aug 13 17:15:23 virt-485 kernel: md/raid1:mdX: not clean -- starting background reconstruction
      Aug 13 17:15:23 virt-485 kernel: md/raid1:mdX: active with 3 out of 3 mirrors
      Aug 13 17:15:23 virt-485 kernel: md: requested-resync of RAID array mdX
      Aug 13 17:15:23 virt-485 kernel: kvdo110:lvconvert: loading device '253:10'
      Aug 13 17:15:23 virt-485 kernel: kvdo110:lvconvert: zones: 1 logical, 1 physical, 1 hash; total threads: 12
      Aug 13 17:15:24 virt-485 kernel: kvdo110:lvconvert: starting device '253:10'
      Aug 13 17:15:24 virt-485 kernel: kvdo110:physQ0: VDO commencing normal operation
      Aug 13 17:15:24 virt-485 kernel: kvdo110:journal: Setting UDS index target state to online
      Aug 13 17:15:24 virt-485 kernel: kvdo110:lvconvert: device '253:10' started
      Aug 13 17:15:24 virt-485 kernel: kvdo110:lvconvert: resuming device '253:10'
      Aug 13 17:15:24 virt-485 kernel: kvdo110:lvconvert: device '253:10' resumed
      Aug 13 17:15:24 virt-485 kernel: kvdo110:dedupeQ: creating index: /dev/dm-9
      Aug 13 17:15:24 virt-485 kernel: kvdo110:dedupeQ: Using 1 indexing zone for concurrency.
      Aug 13 17:15:24 virt-485 kernel: device-mapper: thin: Data device (dm-11) max discard sectors smaller than a block: Disabling discard passdown.
      Aug 13 17:15:24 virt-485 dmeventd[258602]: Monitoring RAID device thinpool_sanity-convert_pool_vpool0_vdata for events.
      Aug 13 17:15:24 virt-485 dmeventd[258602]: Monitoring VDO pool thinpool_sanity-convert_pool_vpool0-vpool.
      Aug 13 17:15:24 virt-485 dmeventd[258602]: Monitoring thin pool thinpool_sanity-convert_pool.
      
      [root@virt-485 ~]# lvs -a -o +devices,segtype
        LV                                   VG              Attr       LSize  Pool                Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices                                                                                                           Type     
        convert_pool                         thinpool_sanity twi-a-tz-- 12.00g                            0.00   10.68                            convert_pool_tdata(0)                                                                                             thin-pool
        [convert_pool_tdata]                 thinpool_sanity vwi-aov--- 12.00g convert_pool_vpool0        0.00                                    convert_pool_vpool0(0)                                                                                            vdo      
        [convert_pool_tmeta]                 thinpool_sanity ewi-ao---- 12.00m                                                                    /dev/sda(3073)                                                                                                    linear   
        convert_pool_vpool0                  thinpool_sanity dwi------- 12.00g                            33.38                                   convert_pool_vpool0_vdata(0)                                                                                      vdo-pool 
        [convert_pool_vpool0_vdata]          thinpool_sanity rwi-aor--- 12.00g                                                   100.00           convert_pool_vpool0_vdata_rimage_0(0),convert_pool_vpool0_vdata_rimage_1(0),convert_pool_vpool0_vdata_rimage_2(0) raid1    
        [convert_pool_vpool0_vdata_rimage_0] thinpool_sanity iwi-aor--- 12.00g                                                                    /dev/sda(1)                                                                                                       linear   
        [convert_pool_vpool0_vdata_rimage_1] thinpool_sanity iwi-aor--- 12.00g                                                                    /dev/sdb(1)                                                                                                       linear   
        [convert_pool_vpool0_vdata_rimage_2] thinpool_sanity iwi-aor--- 12.00g                                                                    /dev/sdc(1)                                                                                                       linear   
        [convert_pool_vpool0_vdata_rmeta_0]  thinpool_sanity ewi-aor---  4.00m                                                                    /dev/sda(0)                                                                                                       linear   
        [convert_pool_vpool0_vdata_rmeta_1]  thinpool_sanity ewi-aor---  4.00m                                                                    /dev/sdb(0)                                                                                                       linear   
        [convert_pool_vpool0_vdata_rmeta_2]  thinpool_sanity ewi-aor---  4.00m                                                                    /dev/sdc(0)                                                                                                       linear   
        [lvol0_pmspare]                      thinpool_sanity ewi------- 12.00m                                                                    /dev/sda(3076)                                                                                                    linear   
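 
      Despite the failed remove ioctl, the conversion reports success and the resulting stack above looks correct. A quick functional check of the converted pool (testlv is a hypothetical name):
 
      # carve a thin volume out of the converted pool and make sure it takes I/O
      lvcreate --yes --thin -V 1G -n testlv thinpool_sanity/convert_pool
      mkfs.ext4 /dev/thinpool_sanity/testlv
      mount /dev/thinpool_sanity/testlv /mnt && touch /mnt/ok && umount /mnt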
      
      

              Assignee: Zdenek Kabelac (zkabelac@redhat.com)
              Reporter: Corey Marthaler (cmarthal@redhat.com)
              lvm-team
              Cluster QE