RHEL / RHEL-56673

If using multiple VDO volumes on the same VDOPoolLV with the autoextend threshold turned off, fstrim is required

    • Type: Bug
    • Resolution: Unresolved
    • Priority: Normal
    • rhel-9.5
    • lvm2 / VDO
    • rhel-sst-logical-storage
    • ssg_filesystems_storage_and_HA
    • x86_64

      If using multiple VDO volumes through a single VDO thin pool (https://issues.redhat.com/browse/RHEL-31863) with the autoextend threshold turned off, you will need to fstrim any used space before creating additional thin LVs. Otherwise you will end up with unusable filesystems because the pool's space is completely consumed.
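      The workaround described above can be sketched roughly as follows. This is a hedged sketch, not part of the reproducer: it reuses the LV and mount-point names from the transcript below, must run as root against real devices, and assumes the origin volume's filesystem supports discard.

      ```shell
      # Sketch of the workaround: before creating additional thin LVs in the
      # shared VDO thin pool, mount the existing volume and discard unused
      # blocks so the pool can reclaim that space.
      mount /dev/vdo_sanity/vdo_lv /mnt/vdo_lv
      fstrim -v /mnt/vdo_lv    # return freed filesystem space to the pool
      umount /mnt/vdo_lv

      # Only then create further thin volumes in the same pool:
      lvcreate --yes -n virt_1 -V 20 vdo_sanity/vdo_lv_tpool0
      ```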

      kernel-5.14.0-497.el9    BUILT: Thu Aug 15 12:13:18 AM CEST 2024
      lvm2-2.03.24-2.el9    BUILT: Wed Aug  7 09:41:45 PM CEST 2024
      lvm2-libs-2.03.24-2.el9    BUILT: Wed Aug  7 09:41:45 PM CEST 2024
       
       
      vgcreate    vdo_sanity /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh
        Volume group "vdo_sanity" successfully created
      lvcreate --yes --type vdo -n vdo_lv  -L 50G vdo_sanity -V 100G  
      Wiping vdo signature on /dev/vdo_sanity/vpool0.
          The VDO volume can address 46 GB in 23 data slabs, each 2 GB.
          It can grow to address at most 16 TB of physical storage in 8192 slabs.
          If a larger maximum size might be needed, use bigger slabs.
        Logical volume "vdo_lv" created.
       
      mkfs --type xfs -f /dev/vdo_sanity/vdo_lv
      meta-data=/dev/vdo_sanity/vdo_lv isize=512    agcount=4, agsize=6553600 blks
               =                       sectsz=4096  attr=2, projid32bit=1
               =                       crc=1        finobt=1, sparse=1, rmapbt=0
               =                       reflink=1    bigtime=1 inobtcount=1 nrext64=0
      data     =                       bsize=4096   blocks=26214400, imaxpct=25
               =                       sunit=0      swidth=0 blks
      naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
      log      =internal log           bsize=4096   blocks=16384, version=2
               =                       sectsz=4096  sunit=1 blks, lazy-count=1
      realtime =none                   extsz=4096   blocks=0, rtextents=0
      Discarding blocks...Done.
       
      mount  /dev/vdo_sanity/vdo_lv /mnt/vdo_lv
      Writing files to /mnt/vdo_lv
      /usr/tests/sts-rhel9.5/bin/checkit -w /mnt/vdo_lv -f /tmp/Filesystem.1686410 -n 5000
      checkit starting with:
      CREATE
      Num files:          5000
      Random Seed:        420037
      Verify XIOR Stream: /tmp/Filesystem.1686410
      Working dir:        /mnt/vdo_lv
      Checking files from /mnt/vdo_lv
      /usr/tests/sts-rhel9.5/bin/checkit -w /mnt/vdo_lv -f /tmp/Filesystem.1686410 -v
      checkit starting with:
      VERIFY
      Verify XIOR Stream: /tmp/Filesystem.1686410
      Working dir:        /mnt/vdo_lv
      umount /mnt/vdo_lv
       
      lvconvert --yes --type thin  vdo_sanity/vdo_lv 
      Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data.
        Converted vdo_sanity/vdo_lv to thin volume.
      WARNING: Converting vdo_sanity/vdo_lv to fully provisioned thin volume.
      Re-verify the filesystem data post conversion
      mount  /dev/vdo_sanity/vdo_lv /mnt/vdo_lv
      Checking files from /mnt/vdo_lv
      /usr/tests/sts-rhel9.5/bin/checkit -w /mnt/vdo_lv -f /tmp/Filesystem.1686410 -v
      checkit starting with:
      VERIFY
      Verify XIOR Stream: /tmp/Filesystem.1686410
      Working dir:        /mnt/vdo_lv
      umount /mnt/vdo_lv
      Now create other thin lvs, including thin snap of origin using this VDO pool, effectively supporting multi vdo volumes per one VDO pool
      lvcreate --yes -n virt_1  -V 20 vdo_sanity/vdo_lv_tpool0 
      WARNING: You have not turned on protection against thin pools running out of space.
        WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
        Logical volume "virt_1" created.
      WARNING: Sum of all thin volume sizes (<100.02 GiB) exceeds the size of thin pool vdo_sanity/vdo_lv_tpool0 (100.00 GiB).
      lvcreate --yes -n virt_2  -V 20 vdo_sanity/vdo_lv_tpool0 
      WARNING: You have not turned on protection against thin pools running out of space.
        WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
        Logical volume "virt_2" created.
      WARNING: Sum of all thin volume sizes (<100.04 GiB) exceeds the size of thin pool vdo_sanity/vdo_lv_tpool0 (100.00 GiB).
      lvcreate --yes -n virt_3  -V 20 vdo_sanity/vdo_lv_tpool0 
      WARNING: You have not turned on protection against thin pools running out of space.
        WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
        Logical volume "virt_3" created.
      WARNING: Sum of all thin volume sizes (<100.06 GiB) exceeds the size of thin pool vdo_sanity/vdo_lv_tpool0 (100.00 GiB).
      lvcreate --yes -n virt_4  -V 20 vdo_sanity/vdo_lv_tpool0 
      WARNING: You have not turned on protection against thin pools running out of space.
        WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
        Logical volume "virt_4" created.
      WARNING: Sum of all thin volume sizes (<100.08 GiB) exceeds the size of thin pool vdo_sanity/vdo_lv_tpool0 (100.00 GiB).
      Create THIN snapshot of vdo -> thin converted origin volume
      lvcreate --yes -n virtsnap -k n   -s vdo_sanity/vdo_lv 
      WARNING: You have not turned on protection against thin pools running out of space.
        WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
        Logical volume "virtsnap" created.
      WARNING: Sum of all thin volume sizes (<200.08 GiB) exceeds the size of thin pool vdo_sanity/vdo_lv_tpool0 (100.00 GiB).
       
      [root@virt-001 ~]# lvs -a -o +devices
        LV                    VG         Attr       LSize   Pool          Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices               
        [lvol0_pmspare]       vdo_sanity ewi------- 100.00m                                                              /dev/sda(12825)       
        vdo_lv                vdo_sanity Vwi-a-tz-- 100.00g vdo_lv_tpool0        100.00                                                        
        vdo_lv_tpool0         vdo_sanity twi-aotzD- 100.00g                      100.00 35.95                            vdo_lv_tpool0_tdata(0)
        [vdo_lv_tpool0_tdata] vdo_sanity vwi-aov--- 100.00g vpool0               0.73                                    vpool0(0)             
        [vdo_lv_tpool0_tmeta] vdo_sanity ewi-ao---- 100.00m                                                              /dev/sda(12800)       
        virt_1                vdo_sanity Vwi-a-tz--  20.00m vdo_lv_tpool0        0.00                                                          
        virt_2                vdo_sanity Vwi-a-tz--  20.00m vdo_lv_tpool0        0.00                                                          
        virt_3                vdo_sanity Vwi-a-tz--  20.00m vdo_lv_tpool0        0.00                                                          
        virt_4                vdo_sanity Vwi-a-tz--  20.00m vdo_lv_tpool0        0.00                                                          
        virtsnap              vdo_sanity Vwi-a-tz-- 100.00g vdo_lv_tpool0 vdo_lv 100.00                                                        
        vpool0                vdo_sanity dwi-------  50.00g                      9.53                                    vpool0_vdata(0)       
        [vpool0_vdata]        vdo_sanity Dwi-ao----  50.00g                                                              /dev/sda(0)           
       
      [root@virt-001 ~]# mount -o nouuid /dev/vdo_sanity/virtsnap /mnt/virtsnap
      mount: /mnt/virtsnap: mount(2) system call failed: No space left on device.
       
      Aug 29 16:19:04 virt-001 kernel: XFS (dm-13): Mounting V5 Filesystem a44fdb83-729a-4c1a-9051-268efc4dace0
      Aug 29 16:19:04 virt-001 kernel: XFS (dm-13): log recovery write I/O error at daddr 0x1b860 len 4096 error -28
      Aug 29 16:19:04 virt-001 kernel: XFS (dm-13): failed to locate log tail
      Aug 29 16:19:04 virt-001 kernel: XFS (dm-13): log mount/recovery failed: error -28
      Aug 29 16:19:04 virt-001 kernel: XFS (dm-13): log mount failed
       
      [root@virt-001 ~]# mount  /dev/vdo_sanity/vdo_lv /mnt/vdo_lv
      mount: /mnt/vdo_lv: mount(2) system call failed: No space left on device.
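      For reference, the protection that the lvcreate warnings above refer to is configured in lvm.conf; a minimal fragment might look like this (threshold and percent values are illustrative, not a recommendation from this report):

      ```
      # /etc/lvm/lvm.conf -- illustrative values
      activation {
          # Extend a thin pool automatically once it is 70% full...
          thin_pool_autoextend_threshold = 70
          # ...growing it by 20% of its current size each time.
          thin_pool_autoextend_percent = 20
      }
      ```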
      

              Zdenek Kabelac <zkabelac@redhat.com>
              Corey Marthaler <cmarthal@redhat.com>
              lvm-team
              Cluster QE