RHEL / RHEL-65535

Unable to stack an LV on top of VDO PVs on a physical machine using physical devices

    • Issue Type: Bug
    • Resolution: Unresolved
    • Priority: Undefined
    • Affects Version: rhel-10.0
    • Component: lvm2 / VDO
    • SST: rhel-sst-logical-storage
    • Sub Systems Group: ssg_filesystems_storage_and_HA
    • Architecture: x86_64

      I have not been able to reproduce this on virtual machines, nor on physical machines when using non-VDO volumes as the PVs to stack on. The bug requires physical machines and exists in at least the current two builds of RHEL-10.0 (27-1, 27-2) and RHEL-9.6 (27-1). A condensed reproducer is sketched after the package list below; the full session and verbose lvcreate log follow.

      kernel-6.11.0-26.el10    BUILT: Fri Oct 25 01:28:01 AM EDT 2024
      lvm2-2.03.27-2.el10    BUILT: Tue Oct 29 02:52:20 PM EDT 2024
      lvm2-libs-2.03.27-2.el10    BUILT: Tue Oct 29 02:52:20 PM EDT 2024
      vdo-8.3.0.71-1.el10    BUILT: Wed Jul 10 04:19:11 PM EDT 2024
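       
      In outline, the failure reproduces as follows (a condensed sketch of the session below; sizes here are simplified, and the exact commands and output follow):
       
      # Condensed reproducer (sketch; not a verbatim session).
      # Assumes six spare partitions /dev/sd[b-g]1 on a physical machine.
      vgcreate vdo_sanity /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1
      for i in 0 1 2 3 4; do
          lvcreate --yes --type vdo -n vdo_$i -L 10G vdo_sanity -V 40G
      done
      # scan_lvs=1 lets LVM use the VDO LVs themselves as PVs for a stacked VG.
      vgcreate --config devices/scan_lvs=1 stack_VG /dev/vdo_sanity/vdo_{0..4}
      # Fails: device-mapper: reload ioctl on (253:18) failed: Invalid argument
      lvcreate --yes --type linear -n vdo_lv -l25%VG stack_VG --config devices/scan_lvs=1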
       
       
      [root@grant-03 ~]# vgcreate    vdo_sanity /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1
        Volume group "vdo_sanity" successfully created
      [root@grant-03 ~]# lvcreate --yes --type vdo -n vdo_0  -L 10G vdo_sanity -V 44G  
        Wiping LVM2_member signature on /dev/vdo_sanity/vpool0.
          The VDO volume can address 6.00 GB in 3 data slabs, each 2.00 GB.
          It can grow to address at most 16.00 TB of physical storage in 8192 slabs.
          If a larger maximum size might be needed, use bigger slabs.
        Logical volume "vdo_0" created.
      [root@grant-03 ~]# lvcreate --yes --type vdo -n vdo_1  -L 12G vdo_sanity -V 28G  
        Wiping LVM2_member signature on /dev/vdo_sanity/vpool1.
          The VDO volume can address 8.00 GB in 4 data slabs, each 2.00 GB.
          It can grow to address at most 16.00 TB of physical storage in 8192 slabs.
          If a larger maximum size might be needed, use bigger slabs.
        Logical volume "vdo_1" created.
      [root@grant-03 ~]# lvcreate --yes --type vdo -n vdo_2  -L 11G vdo_sanity -V 40G  
        Wiping vdo signature on /dev/vdo_sanity/vpool2.
          The VDO volume can address 8.00 GB in 4 data slabs, each 2.00 GB.
          It can grow to address at most 16.00 TB of physical storage in 8192 slabs.
          If a larger maximum size might be needed, use bigger slabs.
        Logical volume "vdo_2" created.
      [root@grant-03 ~]# lvcreate --yes --type vdo -n vdo_3  -L 11G vdo_sanity -V 48G  
        Wiping vdo signature on /dev/vdo_sanity/vpool3.
          The VDO volume can address 8.00 GB in 4 data slabs, each 2.00 GB.
          It can grow to address at most 16.00 TB of physical storage in 8192 slabs.
          If a larger maximum size might be needed, use bigger slabs.
        Logical volume "vdo_3" created.
      [root@grant-03 ~]# lvcreate --yes --type vdo -n vdo_4  -L 16G vdo_sanity -V 46G  
        Wiping LVM2_member signature on /dev/vdo_sanity/vpool4.
          The VDO volume can address 12.00 GB in 6 data slabs, each 2.00 GB.
          It can grow to address at most 16.00 TB of physical storage in 8192 slabs.
          If a larger maximum size might be needed, use bigger slabs.
        Logical volume "vdo_4" created.
      [root@grant-03 ~]# vgcreate  --config devices/scan_lvs=1  stack_VG /dev/vdo_sanity/vdo_0 /dev/vdo_sanity/vdo_1 /dev/vdo_sanity/vdo_2 /dev/vdo_sanity/vdo_3 /dev/vdo_sanity/vdo_4
        Physical volume "/dev/vdo_sanity/vdo_0" successfully created.
        Physical volume "/dev/vdo_sanity/vdo_1" successfully created.
        Physical volume "/dev/vdo_sanity/vdo_2" successfully created.
        Physical volume "/dev/vdo_sanity/vdo_3" successfully created.
        Physical volume "/dev/vdo_sanity/vdo_4" successfully created.
        Volume group "stack_VG" successfully created
      [root@grant-03 ~]# lvs -a -o +devices --config devices/scan_lvs=1
        LV             VG            Attr       LSize    Pool   Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices         
        home           rhel_grant-03 -wi-ao---- <344.08g                                                       /dev/sda3(8192) 
        root           rhel_grant-03 -wi-ao----   69.98g                                                       /dev/sda3(96276)
        swap           rhel_grant-03 -wi-ao----   32.00g                                                       /dev/sda3(0)    
        vdo_0          vdo_sanity    vwi-a-v---   44.00g vpool0        0.01                                    vpool0(0)       
        vdo_1          vdo_sanity    vwi-a-v---   28.00g vpool1        0.01                                    vpool1(0)       
        vdo_2          vdo_sanity    vwi-a-v---   40.00g vpool2        0.01                                    vpool2(0)       
        vdo_3          vdo_sanity    vwi-a-v---   48.00g vpool3        0.01                                    vpool3(0)       
        vdo_4          vdo_sanity    vwi-a-v---   46.00g vpool4        0.01                                    vpool4(0)       
        vpool0         vdo_sanity    dwi-------   10.00g               40.04                                   vpool0_vdata(0) 
        [vpool0_vdata] vdo_sanity    Dwi-ao----   10.00g                                                       /dev/sdb1(0)    
        vpool1         vdo_sanity    dwi-------   12.00g               33.38                                   vpool1_vdata(0) 
        [vpool1_vdata] vdo_sanity    Dwi-ao----   12.00g                                                       /dev/sdb1(2560) 
        vpool2         vdo_sanity    dwi-------   11.00g               27.32                                   vpool2_vdata(0) 
        [vpool2_vdata] vdo_sanity    Dwi-ao----   11.00g                                                       /dev/sdb1(5632) 
        vpool3         vdo_sanity    dwi-------   11.00g               27.32                                   vpool3_vdata(0) 
        [vpool3_vdata] vdo_sanity    Dwi-ao----   11.00g                                                       /dev/sdb1(8448) 
        vpool4         vdo_sanity    dwi-------   16.00g               25.05                                   vpool4_vdata(0) 
        [vpool4_vdata] vdo_sanity    Dwi-ao----   16.00g                                                       /dev/sdb1(11264)
      [root@grant-03 ~]# pvscan --config devices/scan_lvs=1
        PV /dev/sda3               VG rhel_grant-03   lvm2 [446.06 GiB / 0    free]
        PV /dev/sdb1               VG vdo_sanity      lvm2 [447.11 GiB / 387.11 GiB free]
        PV /dev/sdc1               VG vdo_sanity      lvm2 [447.11 GiB / 447.11 GiB free]
        PV /dev/sdd1               VG vdo_sanity      lvm2 [447.11 GiB / 447.11 GiB free]
        PV /dev/sde1               VG vdo_sanity      lvm2 [447.11 GiB / 447.11 GiB free]
        PV /dev/sdf1               VG vdo_sanity      lvm2 [447.11 GiB / 447.11 GiB free]
        PV /dev/sdg1               VG vdo_sanity      lvm2 [447.11 GiB / 447.11 GiB free]
        PV /dev/vdo_sanity/vdo_0   VG stack_VG        lvm2 [<44.00 GiB / <44.00 GiB free]
        PV /dev/vdo_sanity/vdo_1   VG stack_VG        lvm2 [<28.00 GiB / <28.00 GiB free]
        PV /dev/vdo_sanity/vdo_2   VG stack_VG        lvm2 [<40.00 GiB / <40.00 GiB free]
        PV /dev/vdo_sanity/vdo_3   VG stack_VG        lvm2 [<48.00 GiB / <48.00 GiB free]
        PV /dev/vdo_sanity/vdo_4   VG stack_VG        lvm2 [<46.00 GiB / <46.00 GiB free]
        Total: 12 [<3.26 TiB] / in use: 12 [<3.26 TiB] / in no VG: 0 [0   ]
      [root@grant-03 ~]# lvcreate --yes --type linear -n vdo_lv  -l25%VG stack_VG  --config devices/scan_lvs=1
        device-mapper: reload ioctl on  (253:18) failed: Invalid argument
        Failed to activate new LV stack_VG/vdo_lv.
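       
      For triage, a few follow-up checks (hedged; these were not run in the session above) that would capture the logical block sizes involved and where each stacked PV's data area starts:
       
      # Hypothetical follow-up, not part of the report above.
      # VDO LVs normally expose a 4096-byte logical block size.
      lsblk -o NAME,LOG-SEC,PHY-SEC /dev/mapper/vdo_sanity-vdo_0
      # pe_start (in sectors) of each stacked PV:
      pvs -o pv_name,pe_start --units s --config devices/scan_lvs=1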
       
      Verbose output from the failing lvcreate:
       
      11:21:10.788472 lvcreate[9782] format_text/format-text.c:1115  VG stack_VG metadata commit slot0 offset 14336 size 2620 slot1 offset 0 size 0.
      11:21:10.788475 lvcreate[9782] device/bcache.c:229  Limit write at 0 len 131072 to len 4608 rounded to 8192
      11:21:10.788608 lvcreate[9782] metadata/vg.c:74  Freeing VG stack_VG at 0x5561d383bdb0.
      11:21:10.788614 lvcreate[9782] metadata/lv.c:1647  Activating logical volume stack_VG/vdo_lv.
      11:21:10.788620 lvcreate[9782] activate/dev_manager.c:1099  Cached as inactive stack_VG-vdo_lv.
      11:21:10.788625 lvcreate[9782] activate/activate.c:462  activation/volume_list configuration setting not defined: Checking only host tags for stack_VG/vdo_lv.
      11:21:10.788629 lvcreate[9782] activate/activate.c:2643  Activating stack_VG/vdo_lv noscan.
      11:21:10.788632 lvcreate[9782] activate/dev_manager.c:1099  Cached as inactive stack_VG-vdo_lv.
      11:21:10.788646 lvcreate[9782] device/dev-io.c:153  /dev/vdo_sanity/vdo_3: read_ahead is 65528 sectors
      11:21:10.788650 lvcreate[9782] device/dev-io.c:462  Closed /dev/vdo_sanity/vdo_3
      11:21:10.788653 lvcreate[9782] mm/memlock.c:647  Entering prioritized section (activating).
      11:21:10.788663 lvcreate[9782] mm/memlock.c:495  Raised task priority 0 -> -18.
      11:21:10.788686 lvcreate[9782] activate/dev_manager.c:3991  Creating ACTIVATE tree for stack_VG/vdo_lv.
      11:21:10.788693 lvcreate[9782] activate/dev_manager.c:972  Getting device info for stack_VG-vdo_lv [LVM-QPyi7z6ay9TFu1UuKqf5MhytQ6H3HSaYMOkvvcA17XLdmsl73COZ4TLDECDc9Uoz].
      11:21:10.788698 lvcreate[9782] device_mapper/ioctl/libdm-iface.c:2065  dm info  LVM-QPyi7z6ay9TFu1UuKqf5MhytQ6H3HSaYMOkvvcA17XLdmsl73COZ4TLDECDc9Uoz [ noopencount flush ]   [2048] (*1)
      11:21:10.788706 lvcreate[9782] activate/dev_manager.c:952  Skipping checks for old devices without LVM- dm uuid prefix (kernel vsn 6 >= 3).
      11:21:10.788709 lvcreate[9782] activate/dev_manager.c:972  Getting device info for stack_VG-vdo_lv-real [LVM-QPyi7z6ay9TFu1UuKqf5MhytQ6H3HSaYMOkvvcA17XLdmsl73COZ4TLDECDc9Uoz-real].
      11:21:10.788712 lvcreate[9782] device_mapper/ioctl/libdm-iface.c:2065  dm info  LVM-QPyi7z6ay9TFu1UuKqf5MhytQ6H3HSaYMOkvvcA17XLdmsl73COZ4TLDECDc9Uoz-real [ noopencount flush ]   [2048] (*1)
      11:21:10.788716 lvcreate[9782] activate/dev_manager.c:972  Getting device info for stack_VG-vdo_lv-cow [LVM-QPyi7z6ay9TFu1UuKqf5MhytQ6H3HSaYMOkvvcA17XLdmsl73COZ4TLDECDc9Uoz-cow].
      11:21:10.788721 lvcreate[9782] device_mapper/ioctl/libdm-iface.c:2065  dm info  LVM-QPyi7z6ay9TFu1UuKqf5MhytQ6H3HSaYMOkvvcA17XLdmsl73COZ4TLDECDc9Uoz-cow [ noopencount flush ]   [2048] (*1)
      11:21:10.788727 lvcreate[9782] activate/dev_manager.c:3588  Adding new LV stack_VG/vdo_lv to dtree
      11:21:10.788731 lvcreate[9782] device_mapper/libdm-deptree.c:638  Not matched uuid LVM-QPyi7z6ay9TFu1UuKqf5MhytQ6H3HSaYMOkvvcA17XLdmsl73COZ4TLDECDc9Uoz in deptree.
      11:21:10.788734 lvcreate[9782] device_mapper/libdm-config.c:1083  activation/verify_udev_operations not found in config: defaulting to 0
      11:21:10.788737 lvcreate[9782] activate/activate.c:499  Getting driver version
      11:21:10.788741 lvcreate[9782] device_mapper/libdm-deptree.c:638  Not matched uuid LVM-QPyi7z6ay9TFu1UuKqf5MhytQ6H3HSaYMOkvvcA17XLdmsl73COZ4TLDECDc9Uoz in deptree.
      11:21:10.788745 lvcreate[9782] activate/dev_manager.c:3489  Checking kernel supports striped segment type for stack_VG/vdo_lv
      11:21:10.788755 lvcreate[9782] device_mapper/ioctl/libdm-iface.c:2065  dm deps   (253:14) [ opencount flush ]   [16384] (*1)
      11:21:10.788769 lvcreate[9782] device_mapper/ioctl/libdm-iface.c:2065  dm deps   (253:13) [ opencount flush ]   [16384] (*1)
      11:21:10.788777 lvcreate[9782] device_mapper/ioctl/libdm-iface.c:2065  dm deps   (253:12) [ opencount flush ]   [16384] (*1)
      11:21:10.788783 lvcreate[9782] activate/dev_manager.c:3489  Checking kernel supports striped segment type for stack_VG/vdo_lv
      11:21:10.788789 lvcreate[9782] device_mapper/ioctl/libdm-iface.c:2065  dm deps   (253:5) [ opencount flush ]   [16384] (*1)
      11:21:10.788795 lvcreate[9782] device_mapper/ioctl/libdm-iface.c:2065  dm deps   (253:4) [ opencount flush ]   [16384] (*1)
      11:21:10.788801 lvcreate[9782] device_mapper/ioctl/libdm-iface.c:2065  dm deps   (253:3) [ opencount flush ]   [16384] (*1)
      11:21:10.788807 lvcreate[9782] metadata/metadata.c:2113  Calculated readahead of LV vdo_lv is 65528
      11:21:10.788810 lvcreate[9782] device_mapper/libdm-deptree.c:2212  Creating stack_VG-vdo_lv
      11:21:10.788814 lvcreate[9782] device_mapper/ioctl/libdm-iface.c:2065  dm create stack_VG-vdo_lv LVM-QPyi7z6ay9TFu1UuKqf5MhytQ6H3HSaYMOkvvcA17XLdmsl73COZ4TLDECDc9Uoz [ noopencount flush ]   [16384] (*1)
      11:21:10.788964 lvcreate[9782] device_mapper/libdm-deptree.c:3237  Loading table for stack_VG-vdo_lv (253:18).
      11:21:10.788970 lvcreate[9782] device_mapper/libdm-deptree.c:3179  Adding target to (253:18): 0 100655104 linear 253:14 2049
      11:21:10.788974 lvcreate[9782] device_mapper/libdm-deptree.c:3179  Adding target to (253:18): 100655104 7331840 linear 253:5 2049
      11:21:10.788977 lvcreate[9782] device_mapper/ioctl/libdm-iface.c:2065  dm table   (253:18) [ opencount flush ]   [16384] (*1)
      11:21:10.788983 lvcreate[9782] device_mapper/ioctl/libdm-iface.c:2065  dm reload   (253:18) [ noopencount flush ]   [2048] (*1)
      11:21:10.789090 lvcreate[9782] device_mapper/ioctl/libdm-iface.c:2116  device-mapper: reload ioctl on  (253:18) failed: Invalid argument
      11:21:10.789094 lvcreate[9782] device_mapper/libdm-deptree.c:3392  <backtrace>
      11:21:10.789098 lvcreate[9782] device_mapper/libdm-deptree.c:3327  Reverting stack_VG-vdo_lv (253:18). 
      11:21:10.789102 lvcreate[9782] device_mapper/libdm-deptree.c:1027  Removing stack_VG-vdo_lv (253:18)
      11:21:10.789113 lvcreate[9782] device_mapper/libdm-common.c:2561  Udev cookie 0xd4dbb64 (semid 32770) created
      11:21:10.789118 lvcreate[9782] device_mapper/libdm-common.c:2582  Udev cookie 0xd4dbb64 (semid 32770) incremented to 1
      11:21:10.789124 lvcreate[9782] device_mapper/libdm-common.c:2452  Udev cookie 0xd4dbb64 (semid 32770) incremented to 2
      11:21:10.789127 lvcreate[9782] device_mapper/libdm-common.c:2687  Udev cookie 0xd4dbb64 (semid 32770) assigned to REMOVE task(2) with flags DISABLE_LIBRARY_FALLBACK SUBSYSTEM_0        (0x120)
      11:21:10.789131 lvcreate[9782] device_mapper/ioctl/libdm-iface.c:2065  dm remove   (253:18) [ noopencount flush ]   [2048] (*1)
      11:21:10.789217 lvcreate[9782] device_mapper/ioctl/libdm-iface.c:2141  Uevent not generated! Calling udev_complete internally to avoid process lock-up.
      11:21:10.789226 lvcreate[9782] device_mapper/libdm-common.c:2489  Udev cookie 0xd4dbb64 (semid 32770) decremented to 1
      11:21:10.789229 lvcreate[9782] device_mapper/libdm-common.c:1491  stack_VG-vdo_lv: Stacking NODE_DEL [trust_udev]
      11:21:10.789233 lvcreate[9782] activate/dev_manager.c:4068  <backtrace>
      11:21:10.789236 lvcreate[9782] activate/dev_manager.c:4110  <backtrace>
      11:21:10.789239 lvcreate[9782] activate/activate.c:1475  <backtrace>
      11:21:10.789251 lvcreate[9782] activate/activate.c:2666  <backtrace>
      11:21:10.789253 lvcreate[9782] mm/memlock.c:659  Leaving section (activated).
      11:21:10.789256 lvcreate[9782] activate/activate.c:2698  <backtrace>
      11:21:10.789258 lvcreate[9782] metadata/lv.c:1649  <backtrace>
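       
      One observation on the log above (my reading, hedged): both linear targets are loaded with a start offset of sector 2049 on the underlying VDO LVs ("... linear 253:14 2049"), which is not a multiple of 8 sectors. If those LVs expose a 4096-byte logical block size, device-mapper would be expected to reject a table that is not 4 KiB-aligned, which would match the "reload ioctl ... Invalid argument" above. A quick check of that hypothesis:
       
      # 253:14 and 253:5 are the devices named in the rejected table lines.
      blockdev --getss /dev/dm-14 /dev/dm-5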
      

              Assignee: lvm-team
              Reporter: Corey Marthaler (cmarthal@redhat.com)
              QA Contact: Cluster QE