RHEL-8309
lvextend should check for and report "Not enough free memory for VDO target" like lvcreate does

    • Type: Bug
    • Resolution: Unresolved
    • Priority: Normal
    • rhel-9.2.0
    • Component: lvm2 / VDO
    • Severity: Moderate
    • rhel-sst-logical-storage
    • ssg_filesystems_storage_and_HA

      Description of problem:
This is the dedicated bug report for the extension issue mentioned here: https://bugzilla.redhat.com/show_bug.cgi?id=2079048#c1

It appears that lvcreate checks for free memory before attempting to create a VDO volume that is too large. lvextend should perform the same check rather than passing the oversized table to the kernel and running into "vmalloc error" failures.
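
      Below is a minimal sketch of the kind of up-front check lvextend could perform (or that a user could run by hand before extending). The ~1.6 MiB-per-TiB ratio is inferred from the lvcreate message quoted below (<6.40 GiB of RAM for a 4.00 PiB virtual size); it approximates only the virtual-size term of the estimate, which also counts physical size, block map cache size and index memory. The script and its interface are hypothetical, not part of lvm2:

        # vdo-extend-precheck.sh <new-virtual-size-in-TiB>  (hypothetical helper)
        new_logical_tib=$1

        # ~1.6 MiB of RAM per TiB of virtual size, per the numbers in this report
        need_mib=$(awk -v t="$new_logical_tib" 'BEGIN { printf "%d", t * 1.6 }')

        # MemAvailable used as a stand-in for the "RAM is available" figure (assumption)
        avail_mib=$(awk '/^MemAvailable:/ { print int($2 / 1024) }' /proc/meminfo)

        if [ "$need_mib" -gt "$avail_mib" ]; then
            echo "Not enough free memory for VDO target: ~${need_mib} MiB required," \
                 "only ${avail_mib} MiB available" >&2
            exit 1
        fi

      With this report's sizes (4096 TiB requested against <3.47 GiB available), such a check refuses before any device-mapper ioctl is issued, which is the behavior lvcreate already has and lvextend lacks.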

      1. Doesn't cause kdump issues
        [root@virt-506 ~]# lvcreate --yes --type vdo -n vdo_lv -L 10G vdo_sanity -V 4096TiB
        Not enough free memory for VDO target. 6.83 GiB RAM is required, but only <3.47 GiB RAM is available.
        VDO configuration needs 3.00 MiB RAM for physical volume size 10.00 GiB, <6.40 GiB RAM for virtual volume size 4.00 PiB, 150.00 MiB RAM for block map cache size 128.00 MiB and 256.00 MiB RAM for index memory.
      2. Doesn't cause kdump issues
        [root@virt-498 ~]# lvextend --yes -l1000000%FREE vdo_sanity/vdo_lv
        Size of logical volume vdo_sanity/vdo_lv changed from 1.00 PiB (268438081 extents) to <5.53 PiB (1484316801 extents).
        VDO logical size is larger than limit 4096 TiB by 1681715106816 KiB.
        Failed to suspend logical volume vdo_sanity/vdo_lv.
      3. Does cause kdump issues
        [root@virt-498 ~]# lvextend --yes -L+2.75P vdo_sanity/vdo_lv
        Size of logical volume vdo_sanity/vdo_lv changed from 1.00 PiB (268438081 extents) to 3.75 PiB (1006635585 extents).
        device-mapper: reload ioctl on (253:3) failed: Cannot allocate memory
        Failed to suspend logical volume vdo_sanity/vdo_lv.
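
      Condensed reproducer for case 3 (a sketch; assumes a VG named vdo_sanity with at least 10G of free space on a guest with ~4G of RAM, as in the transcripts above):

        lvcreate --yes --type vdo -n vdo_lv -L 10G vdo_sanity -V 1P   # 1 PiB passes lvcreate's memory check
        lvextend --yes -L+2.75P vdo_sanity/vdo_lv                     # no such check; the kernel then hits the vmalloc failure below

      (The -l1000000%FREE attempt in case 2 instead fails against the 4096 TiB VDO logical-size limit and does not provoke the vmalloc error.)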

      Feb 16 18:14:09 virt-498 kernel: kvdo12:lvextend: preparing to modify device '253:3'
      Feb 16 18:14:09 virt-498 kernel: kvdo12:lvextend: Preparing to resize logical to 1030794839296
      Feb 16 18:14:10 virt-498 kernel: lvextend: vmalloc error: size 4766163840, exceeds total pages, mode:0x4dc0(GFP_KERNEL|__GFP_RETRY_MAYFAIL|__GFP_ZERO), nodemask=(null),cpuset=/,mems_allowed=0
      Feb 16 18:14:10 virt-498 kernel: CPU: 1 PID: 6658 Comm: lvextend Kdump: loaded Tainted: G O --------- --- 5.14.0-252.el9.x86_64 #1
      Feb 16 18:14:10 virt-498 kernel: Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011
      Feb 16 18:14:10 virt-498 kernel: Call Trace:
      Feb 16 18:14:10 virt-498 kernel: <TASK>
      Feb 16 18:14:10 virt-498 kernel: dump_stack_lvl+0x34/0x48
      Feb 16 18:14:10 virt-498 kernel: warn_alloc+0x138/0x160
      Feb 16 18:14:10 virt-498 kernel: ? schedule+0x5a/0xc0
      Feb 16 18:14:10 virt-498 kernel: __vmalloc_node_range+0x1f0/0x220
      Feb 16 18:14:10 virt-498 kernel: __vmalloc_node+0x4a/0x70
      Feb 16 18:14:10 virt-498 kernel: ? uds_allocate_memory+0x265/0x2e0 [kvdo]
      Feb 16 18:14:10 virt-498 kernel: uds_allocate_memory+0x265/0x2e0 [kvdo]
      Feb 16 18:14:10 virt-498 kernel: make_segment+0x107/0x360 [kvdo]
      Feb 16 18:14:10 virt-498 kernel: ? _raw_spin_unlock_irqrestore+0xa/0x30
      Feb 16 18:14:10 virt-498 kernel: ? uds_allocate_memory+0xf9/0x2e0 [kvdo]
      Feb 16 18:14:10 virt-498 kernel: ? uds_log_embedded_message+0x3f/0x60 [kvdo]
      Feb 16 18:14:10 virt-498 kernel: vdo_make_forest+0xd8/0x120 [kvdo]
      Feb 16 18:14:10 virt-498 kernel: vdo_prepare_to_grow_logical+0x43/0xa0 [kvdo]
      Feb 16 18:14:10 virt-498 kernel: vdo_prepare_to_modify+0x69/0x130 [kvdo]
      Feb 16 18:14:10 virt-498 kernel: vdo_ctr+0xe4/0x240 [kvdo]
      Feb 16 18:14:10 virt-498 kernel: dm_table_add_target+0x16f/0x3a0 [dm_mod]
      Feb 16 18:14:10 virt-498 kernel: table_load+0x12b/0x370 [dm_mod]
      Feb 16 18:14:10 virt-498 kernel: ctl_ioctl+0x1a2/0x290 [dm_mod]
      Feb 16 18:14:10 virt-498 kernel: dm_ctl_ioctl+0xa/0x20 [dm_mod]
      Feb 16 18:14:10 virt-498 kernel: __x64_sys_ioctl+0x8a/0xc0
      Feb 16 18:14:10 virt-498 kernel: do_syscall_64+0x5c/0x90
      Feb 16 18:14:10 virt-498 kernel: ? do_syscall_64+0x69/0x90
      Feb 16 18:14:10 virt-498 kernel: ? syscall_exit_to_user_mode+0x12/0x30
      Feb 16 18:14:10 virt-498 kernel: ? do_syscall_64+0x69/0x90
      Feb 16 18:14:10 virt-498 kernel: ? syscall_exit_work+0x11a/0x150
      Feb 16 18:14:10 virt-498 kernel: ? syscall_exit_to_user_mode+0x12/0x30
      Feb 16 18:14:10 virt-498 kernel: ? do_syscall_64+0x69/0x90
      Feb 16 18:14:10 virt-498 kernel: ? syscall_exit_to_user_mode+0x12/0x30
      Feb 16 18:14:10 virt-498 kernel: ? do_syscall_64+0x69/0x90
      Feb 16 18:14:10 virt-498 kernel: ? sysvec_apic_timer_interrupt+0x3c/0x90
      Feb 16 18:14:10 virt-498 kernel: entry_SYSCALL_64_after_hwframe+0x63/0xcd
      Feb 16 18:14:10 virt-498 kernel: RIP: 0033:0x7f54dc63ec6b
      Feb 16 18:14:10 virt-498 kernel: Code: 73 01 c3 48 8b 0d b5 b1 1b 00 f7 d8 64 89 01 48 83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 85 b1 1b 00 f7 d8 64 89 01 48
      Feb 16 18:14:10 virt-498 kernel: RSP: 002b:00007fff3be91a88 EFLAGS: 00000206 ORIG_RAX: 0000000000000010
      Feb 16 18:14:10 virt-498 kernel: RAX: ffffffffffffffda RBX: 0000563698fe8e30 RCX: 00007f54dc63ec6b
      Feb 16 18:14:10 virt-498 kernel: RDX: 000056369b5dc800 RSI: 00000000c138fd09 RDI: 0000000000000008
      Feb 16 18:14:10 virt-498 kernel: RBP: 00005636990c70d6 R08: 0000563699140960 R09: 00007fff3be918e0
      Feb 16 18:14:10 virt-498 kernel: R10: 0000000000000007 R11: 0000000000000206 R12: 000056369b5bfe00
      Feb 16 18:14:10 virt-498 kernel: R13: 000056369b5dc8b0 R14: 000056369913ee9d R15: 000056369b5dc800
      Feb 16 18:14:10 virt-498 kernel: </TASK>
      Feb 16 18:14:10 virt-498 kernel: Mem-Info:
      Feb 16 18:14:10 virt-498 kernel: active_anon:214 inactive_anon:18424 isolated_anon:0 active_file:23355 inactive_file:37859 isolated_file:0 unevictable:29716 dirty:3 writeback:0 slab_reclaimable:8047 slab_unreclaimable:18734 mapped:12288 shmem:2579 pagetables:630 bounce:0 kernel_misc_reclaimable:0 free:333023 free_pcp:10960 free_cma:0
      Feb 16 18:14:10 virt-498 kernel: Node 0 active_anon:856kB inactive_anon:73696kB active_file:93420kB inactive_file:151436kB unevictable:118864kB isolated(anon):0kB isolated(file):0kB mapped:49152kB dirty:12kB writeback:0kB shmem:10316kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 104448kB writeback_tmp:0kB kernel_stack:3120kB pagetables:2520kB all_unreclaimable? no
      Feb 16 18:14:10 virt-498 kernel: Node 0 DMA free:14848kB boost:0kB min:256kB low:320kB high:384kB reserved_highatomic:0KB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15992kB managed:15360kB mlocked:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
      Feb 16 18:14:10 virt-498 kernel: lowmem_reserve[]: 0 3213 3897 3897 3897
      Feb 16 18:14:10 virt-498 kernel: Node 0 DMA32 free:1300468kB boost:0kB min:55500kB low:69372kB high:83244kB reserved_highatomic:0KB active_anon:20kB inactive_anon:5356kB active_file:2224kB inactive_file:66648kB unevictable:43480kB writepending:12kB present:3653612kB managed:3325932kB mlocked:43480kB bounce:0kB free_pcp:39604kB local_pcp:22128kB free_cma:0kB
      Feb 16 18:14:10 virt-498 kernel: lowmem_reserve[]: 0 0 684 684 684
      Feb 16 18:14:10 virt-498 kernel: Node 0 Normal free:16776kB boost:2048kB min:13872kB low:16828kB high:19784kB reserved_highatomic:0KB active_anon:836kB inactive_anon:68340kB active_file:91196kB inactive_file:84788kB unevictable:75384kB writepending:0kB present:835584kB managed:707192kB mlocked:73848kB bounce:0kB free_pcp:4236kB local_pcp:420kB free_cma:0kB
      Feb 16 18:14:10 virt-498 kernel: lowmem_reserve[]: 0 0 0 0 0
      Feb 16 18:14:10 virt-498 kernel: Node 0 DMA: 0*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 1*512kB (U) 0*1024kB 1*2048kB (M) 3*4096kB (M) = 14848kB
      Feb 16 18:14:10 virt-498 kernel: Node 0 DMA32: 119*4kB (UM) 53*8kB (U) 17*16kB (UE) 7*32kB (UE) 32*64kB (UME) 21*128kB (UME) 10*256kB (ME) 3*512kB (UE) 2*1024kB (UM) 1*2048kB (E) 314*4096kB (M) = 1300468kB
      Feb 16 18:14:10 virt-498 kernel: Node 0 Normal: 112*4kB (UME) 223*8kB (UME) 171*16kB (UME) 67*32kB (UME) 29*64kB (UM) 27*128kB (UME) 5*256kB (UME) 4*512kB (UME) 1*1024kB (M) 0*2048kB 0*4096kB = 16776kB
      Feb 16 18:14:10 virt-498 kernel: Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
      Feb 16 18:14:10 virt-498 kernel: Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
      Feb 16 18:14:10 virt-498 kernel: 68079 total pagecache pages
      Feb 16 18:14:10 virt-498 kernel: 0 pages in swap cache
      Feb 16 18:14:10 virt-498 kernel: Swap cache stats: add 0, delete 0, find 0/0
      Feb 16 18:14:10 virt-498 kernel: Free swap = 839676kB
      Feb 16 18:14:10 virt-498 kernel: Total swap = 839676kB
      Feb 16 18:14:10 virt-498 kernel: 1126297 pages RAM
      Feb 16 18:14:10 virt-498 kernel: 0 pages HighMem/MovableOnly
      Feb 16 18:14:10 virt-498 kernel: 114176 pages reserved
      Feb 16 18:14:10 virt-498 kernel: 0 pages cma reserved
      Feb 16 18:14:10 virt-498 kernel: 0 pages hwpoisoned
      Feb 16 18:14:10 virt-498 kernel: kvdo12:lvextend: Could not allocate 4766163840 bytes for new forest pages in 1053 msecs
      Feb 16 18:14:10 virt-498 kernel: device-mapper: table: 253:3: vdo: Device vdo_prepare_to_grow_logical failed (-ENOMEM)
      Feb 16 18:14:10 virt-498 kernel: device-mapper: ioctl: error adding target to table
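
      (Cross-check, using the ratio visible in the lvcreate estimate above: the failed allocation of 4766163840 bytes ≈ 4.44 GiB corresponds to roughly 1.6 MiB per TiB for the 2.75 PiB (2816 TiB) of added virtual size, since 2816 × 1.6 MiB ≈ 4.40 GiB. The same userspace calculation lvcreate performs would therefore have predicted this failure before the ioctl.)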

      Version-Release number of selected component (if applicable):
      kernel-5.14.0-252.el9 BUILT: Wed Feb 1 03:30:10 PM CET 2023
      lvm2-2.03.17-7.el9 BUILT: Thu Feb 16 03:24:54 PM CET 2023
      lvm2-libs-2.03.17-7.el9 BUILT: Thu Feb 16 03:24:54 PM CET 2023

      How reproducible:
Every time

              Assignee: Zdenek Kabelac (zkabelac@redhat.com)
              Reporter: Corey Marthaler (cmarthal@redhat.com)
              QA Contact: Cluster QE