RHEL-33814

Multiple virtual volumes using the same VDOPoolLV do not work for practical purposes and should not be described in the man page

    • Bug
    • Resolution: Done-Errata
    • Blocker
    • rhel-9.5
    • rhel-9.4
    • lvm2 / VDO
    • lvm2-2.03.24-1.el9
    • rhel-sst-logical-storage
    • ssg_filesystems_storage_and_HA
    • QE ack, Dev ack
    • x86_64

      This is a follow-up to bug https://issues.redhat.com/browse/RHEL-31863.

      In practice, any attempt to use the additional thin volumes created on top of the same VDOPoolLV results in I/O errors. LVM will let you create them, but they cannot actually be used: as the transcript below shows, the lvconvert --type thin step turns the original VDO LV into a fully provisioned thin volume that already occupies 100% of the thin pool's data space, so the first writes to any additional thin volume push the pool into out-of-data-space (error IO) mode. We should therefore not advertise this functionality in the lvmvdo(7) man page.
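
      A minimal sketch of the reproducer, condensed from the full transcript below (the VG name vdo_sanity and the sizes are simply the values used there; given the pool's 100% data usage after the conversion, a single additional thin volume should be enough to hit the failure):

      # lvcreate --yes --type vdo -n vdo_lv -L 25G vdo_sanity -V 50G
      # lvconvert --yes --type thin vdo_sanity/vdo_lv
      # lvcreate --yes --virtualsize 10G -T vdo_sanity/vdo_lv_tpool0 -n another_virt1
      # mkfs /dev/vdo_sanity/another_virt1      (expected to fail with I/O errors)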

      kernel-5.14.0-427.el9    BUILT: Fri Feb 23 07:31:31 AM CET 2024
      lvm2-2.03.23-2.el9    BUILT: Sat Feb  3 01:10:34 AM CET 2024
      lvm2-libs-2.03.23-2.el9    BUILT: Sat Feb  3 01:10:34 AM CET 2024
      
      From lvmvdo(7):
         1. Using multiple volumes using same VDOPoolLV
             You can convert existing VDO LV into a thin volume. After this conversion you can create a thin snapshot or you can add more thin volumes with thin-pool named after orignal LV name LV_tpool0.
      
             Example
             # lvcreate --type vdo -L 5G -V 10G -n vdo1 vg/vdopool
             # lvconvert --type thin vg/vdo1
             # lvcreate -V20 vg/vdo1_tpool0
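
      For reference, one way to inspect the resulting thin-over-VDO stack after following the man page steps (standard lvs report fields; "vg" is the VG name used in the example above):

      # lvs -a -o lv_name,segtype,pool_lv,data_percent vg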
      
      
      [root@virt-019 ~]# lvcreate --yes --type vdo -n vdo_lv  -L 25G vdo_sanity -V 50G  
        Wiping vdo signature on /dev/vdo_sanity/vpool0.
          The VDO volume can address 22 GB in 11 data slabs, each 2 GB.
          It can grow to address at most 16 TB of physical storage in 8192 slabs.
          If a larger maximum size might be needed, use bigger slabs.
        Logical volume "vdo_lv" created.
      [root@virt-019 ~]# mkfs --type xfs -f /dev/vdo_sanity/vdo_lv
      meta-data=/dev/vdo_sanity/vdo_lv isize=512    agcount=4, agsize=3276800 blks
               =                       sectsz=4096  attr=2, projid32bit=1
               =                       crc=1        finobt=1, sparse=1, rmapbt=0
               =                       reflink=1    bigtime=1 inobtcount=1 nrext64=0
      data     =                       bsize=4096   blocks=13107200, imaxpct=25
               =                       sunit=0      swidth=0 blks
      naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
      log      =internal log           bsize=4096   blocks=16384, version=2
               =                       sectsz=4096  sunit=1 blks, lazy-count=1
      realtime =none                   extsz=4096   blocks=0, rtextents=0
      Discarding blocks...Done.
       
      [root@virt-019 ~]# lvconvert --yes --type thin vdo_sanity/vdo_lv
        Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data.
        WARNING: Converting vdo_sanity/vdo_lv to fully provisioned thin volume.
        Converted vdo_sanity/vdo_lv to thin volume.
      [root@virt-019 ~]# lvs -a -o +devices
        LV                    VG         Attr       LSize  Pool          Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices               
        [lvol0_pmspare]       vdo_sanity ewi------- 52.00m                                                              /dev/sda1(6413)       
        vdo_lv                vdo_sanity Vwi-a-tz-- 50.00g vdo_lv_tpool0        100.00                                                        
        vdo_lv_tpool0         vdo_sanity twi-aotz-- 50.00g                      100.00 34.97                            vdo_lv_tpool0_tdata(0)
        [vdo_lv_tpool0_tdata] vdo_sanity vwi-aov--- 50.00g vpool0               0.01                                    vpool0(0)             
        [vdo_lv_tpool0_tmeta] vdo_sanity ewi-ao---- 52.00m                                                              /dev/sda1(6400)       
        vpool0                vdo_sanity dwi------- 25.00g                      12.06                                   vpool0_vdata(0)       
        [vpool0_vdata]        vdo_sanity Dwi-ao---- 25.00g                                                              /dev/sda1(0)          
       
      [root@virt-019 ~]#  lvcreate --yes --virtualsize 10G -T vdo_sanity/vdo_lv_tpool0 -n another_virt1
        WARNING: Sum of all thin volume sizes (60.00 GiB) exceeds the size of thin pool vdo_sanity/vdo_lv_tpool0 (50.00 GiB).
        WARNING: You have not turned on protection against thin pools running out of space.
        WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
        Logical volume "another_virt1" created.
      [root@virt-019 ~]#  lvcreate --yes --virtualsize 10G -T vdo_sanity/vdo_lv_tpool0 -n another_virt2
        WARNING: Sum of all thin volume sizes (70.00 GiB) exceeds the size of thin pool vdo_sanity/vdo_lv_tpool0 (50.00 GiB).
        WARNING: You have not turned on protection against thin pools running out of space.
        WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
        Logical volume "another_virt2" created.
      [root@virt-019 ~]#  lvcreate --yes --virtualsize 10G -T vdo_sanity/vdo_lv_tpool0 -n another_virt3
        WARNING: Sum of all thin volume sizes (80.00 GiB) exceeds the size of thin pool vdo_sanity/vdo_lv_tpool0 (50.00 GiB).
        WARNING: You have not turned on protection against thin pools running out of space.
        WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
        Logical volume "another_virt3" created.
      [root@virt-019 ~]# mkfs /dev/vdo_sanity/another_virt3
      mke2fs 1.46.5 (30-Dec-2021)
      Discarding device blocks: done                            
      Creating filesystem with 2621440 4k blocks and 655360 inodes
      Filesystem UUID: 77ecf527-e2dc-47ae-9f9d-6868b671ca1d
      Superblock backups stored on blocks: 
              32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
       
      Allocating group tables: done                            
      Writing inode tables: done                            
      Writing superblocks and filesystem accounting information: mkfs.ext2: Input/output error while writing out and closing file system
       
       
      Apr 23 21:27:36 virt-019 kernel: device-mapper: thin: 253:7: switching pool to out-of-data-space (error IO) mode
      Apr 23 21:27:36 virt-019 kernel: buffer_io_error: 249 callbacks suppressed
      Apr 23 21:27:36 virt-019 kernel: Buffer I/O error on dev dm-11, logical block 0, lost async page write
      Apr 23 21:27:37 virt-019 kernel: Buffer I/O error on dev dm-11, logical block 1, lost async page write
      Apr 23 21:27:37 virt-019 kernel: Buffer I/O error on dev dm-11, logical block 2, lost async page write
      Apr 23 21:27:37 virt-019 kernel: Buffer I/O error on dev dm-11, logical block 3, lost async page write
      Apr 23 21:27:37 virt-019 kernel: Buffer I/O error on dev dm-11, logical block 4, lost async page write
      Apr 23 21:27:37 virt-019 kernel: Buffer I/O error on dev dm-11, logical block 5, lost async page write
      Apr 23 21:27:37 virt-019 kernel: Buffer I/O error on dev dm-11, logical block 6, lost async page write
      Apr 23 21:27:37 virt-019 kernel: Buffer I/O error on dev dm-11, logical block 7, lost async page write
      Apr 23 21:27:37 virt-019 kernel: Buffer I/O error on dev dm-11, logical block 8, lost async page write
      Apr 23 21:27:37 virt-019 kernel: Buffer I/O error on dev dm-11, logical block 9, lost async page write
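
      For completeness, a sketch of how the out-of-data-space state can be confirmed after the failure (the lvs fields are standard report fields; the dmsetup device name is derived from the VG/LV names above and may differ on other systems):

      # lvs -a -o lv_name,lv_attr,data_percent,lv_health_status vdo_sanity
      # dmsetup status vdo_sanity-vdo_lv_tpool0-tpool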
      

              mcsontos@redhat.com Marian Csontos
              cmarthal@redhat.com Corey Marthaler
              Zdenek Kabelac
              Cluster QE