Type: Bug
Resolution: Done-Errata
Priority: Major
Fix Version: rhel-10.1
Fixed in Build: lvm2-2.03.32-3.el10
Severity: Important
Pool Team: rhel-storage-lvm
SSG: ssg_platform_storage
Preliminary Testing: Pass
Test Coverage: Automated
Architecture: x86_64
This appears to be a regression affecting the test scenario implemented for bug https://issues.redhat.com/browse/RHEL-39444: vgremove ends up deadlocking in the latest RHEL 10.1 build.
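For quick reference, here is the failing sequence condensed from the test log below into a minimal reproducer sketch. It assumes the same environment as the test: four scratch disks /dev/sda-/dev/sdd, use_lvmlockd enabled, and a sanlock host_id already set in lvmlocal.conf; device paths and the VG name are specific to this test bed.

#!/bin/bash
# Condensed reproducer sketch (assumptions above); commands taken from the
# scenario log that follows.
set -x

systemctl start sanlock lvmlockd

for dev in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do
    lvmdevices -y --config devices/scan_lvs=1 --adddev "$dev"   # add to devices file
    pvcreate --yes -ff --nolock "$dev"
done

vgcreate --shared devfile_VG /dev/sda /dev/sdb /dev/sdc /dev/sdd

vgchange --lockstop devfile_VG                                  # stop the VG lockspace

for dev in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do
    lvmdevices -y --config devices/scan_lvs=1 --deldev "$dev"   # drop devices-file entries
done

vgimportdevices devfile_VG       # re-create the entries without a global lock (RHEL-39444)
vgchange --lockstart devfile_VG  # restart the VG lockspace
vgremove -f devfile_VG           # deadlocks here on lvm2-2.03.32-1.el10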
kernel-6.12.0-66.el10 BUILT: Thu Mar 20 04:08:57 PM CET 2025
lvm2-2.03.32-1.el10 BUILT: Tue May 6 12:41:07 PM CEST 2025
lvm2-libs-2.03.32-1.el10 BUILT: Tue May 6 12:41:07 PM CEST 2025
sanlock-3.9.5-4.el10 BUILT: Wed Jan 8 09:54:11 PM CET 2025
sanlock-lib-3.9.5-4.el10 BUILT: Wed Jan 8 09:54:11 PM CET 2025

SCENARIO - vgimportdevices_from_shared_vg_wo_global_lock: Test importing all devices from a shared VG but without an active global lock running (RHEL-39444)

Setting use_lvmlockd to enable
Setting lvmlocal.conf host_id to 942
Starting sanlock
systemctl start sanlock
Starting lvm2-lockd
systemctl start lvmlockd

adding entry to the devices file for /dev/sda on virt-502.cluster-qe.lab.eng.brq.redhat.com
lvmdevices -y --config devices/scan_lvs=1 --adddev /dev/sda
creating PV on virt-502.cluster-qe.lab.eng.brq.redhat.com using device /dev/sda
pvcreate --yes -ff --nolock /dev/sda
  Physical volume "/dev/sda" successfully created.
adding entry to the devices file for /dev/sdb on virt-502.cluster-qe.lab.eng.brq.redhat.com
lvmdevices -y --config devices/scan_lvs=1 --adddev /dev/sdb
creating PV on virt-502.cluster-qe.lab.eng.brq.redhat.com using device /dev/sdb
pvcreate --yes -ff --nolock /dev/sdb
  Physical volume "/dev/sdb" successfully created.
adding entry to the devices file for /dev/sdc on virt-502.cluster-qe.lab.eng.brq.redhat.com
lvmdevices -y --config devices/scan_lvs=1 --adddev /dev/sdc
creating PV on virt-502.cluster-qe.lab.eng.brq.redhat.com using device /dev/sdc
pvcreate --yes -ff --nolock /dev/sdc
  Physical volume "/dev/sdc" successfully created.
adding entry to the devices file for /dev/sdd on virt-502.cluster-qe.lab.eng.brq.redhat.com
lvmdevices -y --config devices/scan_lvs=1 --adddev /dev/sdd
creating PV on virt-502.cluster-qe.lab.eng.brq.redhat.com using device /dev/sdd
pvcreate --yes -ff --nolock /dev/sdd
  Physical volume "/dev/sdd" successfully created.

creating VG on virt-502.cluster-qe.lab.eng.brq.redhat.com using PV(s) /dev/sda /dev/sdb /dev/sdc /dev/sdd
vgcreate --shared devfile_VG /dev/sda /dev/sdb /dev/sdc /dev/sdd
Enabling sanlock global lock
  Logical volume "lvmlock" created.
  Volume group "devfile_VG" successfully created
  VG devfile_VG starting sanlock lockspace
  Starting locking. Waiting until locks are ready...

vgs devfile_VG
  VG         #PV #LV #SN Attr   VSize   VFree
  devfile_VG   4   0   0 wz--ns 159.98g 159.73g

stopping VG locking on virt-502.cluster-qe.lab.eng.brq.redhat.com
vgchange --lockstop devfile_VG

removing entry from the devices file for /dev/sda on virt-502.cluster-qe.lab.eng.brq.redhat.com
lvmdevices -y --config devices/scan_lvs=1 --deldev /dev/sda
removing entry from the devices file for /dev/sdb on virt-502.cluster-qe.lab.eng.brq.redhat.com
lvmdevices -y --config devices/scan_lvs=1 --deldev /dev/sdb
removing entry from the devices file for /dev/sdc on virt-502.cluster-qe.lab.eng.brq.redhat.com
lvmdevices -y --config devices/scan_lvs=1 --deldev /dev/sdc
removing entry from the devices file for /dev/sdd on virt-502.cluster-qe.lab.eng.brq.redhat.com
lvmdevices -y --config devices/scan_lvs=1 --deldev /dev/sdd

Re-scanning lvm PVs (pvscan --cache /dev/sda)
  pvscan[83777] /dev/sda excluded: device is not in devices file.

Verify that all removed devices file entries can be recreated using vgimportdevices (but wo a valid global lock RHEL-39444)
adding all current devices in devfile_VG to the devicesfile on virt-502.cluster-qe.lab.eng.brq.redhat.com
  Added 4 devices to devices file.
IDTYPE=sys_wwid IDNAME=naa.6001405dcc98a832a8445a4b37bbec68 DEVNAME=/dev/sda PVID=2qVdk5bUYe3pkIVy4EbrT0wSa41Bsihu
IDTYPE=sys_wwid IDNAME=naa.6001405102d31bdec6d4757bdfc160d2 DEVNAME=/dev/sdb PVID=tIamjdPcNtj4eKvuXPCpEXKX3tsu5qt2
IDTYPE=sys_wwid IDNAME=naa.6001405662db1719617493598c9e8fce DEVNAME=/dev/sdc PVID=5MP63DWsGOmvzAr2IalQf7sKUzce7tII
IDTYPE=sys_wwid IDNAME=naa.60014053f1a2c20eb304c9e84b8fa679 DEVNAME=/dev/sdd PVID=TlGeHBX0RsktMCTFsRcClPRbCYCm8sUR

starting VG locking on virt-502.cluster-qe.lab.eng.brq.redhat.com
vgchange --lockstart devfile_VG
  VG devfile_VG starting sanlock lockspace
  Starting locking. Waiting for sanlock may take a few seconds to 3 min...

removing VG devfile_VG from virt-502.cluster-qe.lab.eng.brq.redhat.com

[DEADLOCK]

May 6 23:45:09 virt-502 qarshd[83984]: Running cmdline: vgchange --lockstart devfile_VG
May 6 23:45:10 virt-502 wdmd[82979]: /dev/watchdog0 open with timeout 60
May 6 23:45:14 virt-502 systemd[1]: qarshd@2222-10.37.165.150:5016-10.22.66.16:40148.service: Deactivated successfully.
May 6 23:45:14 virt-502 systemd[1]: Started qarshd@2223-10.37.165.150:5016-10.22.66.16:40158.service - qarsh Per-Connection Server (10.22.66.16:40158).
May 6 23:45:14 virt-502 qarshd[84026]: Talking to peer ::ffff:10.22.66.16:40158 (IPv6)
May 6 23:45:15 virt-502 qarshd[84026]: Running cmdline: vgremove -f devfile_VG
May 6 23:45:15 virt-502 lvmlockd[83044]: 1746567915 S lvm_devfile_VG hosts_san get_hosts error -11
May 6 23:45:15 virt-502 sanlock[82975]: 2025-05-06 23:45:15 26201 [82982]: cmd_release 2,10,83044 no resource
May 6 23:45:15 virt-502 lvmlockd[83044]: 1746567915 lvm_devfile_VG:VGLK unlock_san release rename error -1
May 6 23:45:15 virt-502 lvmlockd[83044]: 1746567915 lvm_devfile_VG:VGLK clear_locks free 1 drop 0 lm unlock error -221
May 6 23:45:15 virt-502 sanlock[82975]: 2025-05-06 23:45:15 26201 [82981]: cmd_release 2,10,83044 no resource
May 6 23:45:15 virt-502 lvmlockd[83044]: 1746567915 lvm_devfile_VG:GLLK unlock_san release rename error -1
May 6 23:45:15 virt-502 lvmlockd[83044]: 1746567915 lvm_devfile_VG:GLLK clear_locks free 1 drop 0 lm unlock error -221
May 6 23:45:15 virt-502 lvmlockd[83044]: 1746567915 S lvm_devfile_VG hosts_san get_hosts error -11
May 6 23:45:15 virt-502 sanlock[82975]: 2025-05-06 23:45:15 26202 [82975]: s2 kill 83044 sig 15 count 1
May 6 23:45:16 virt-502 sanlock[82975]: 2025-05-06 23:45:16 26203 [82975]: s2 kill 83044 sig 15 count 2
May 6 23:45:17 virt-502 sanlock[82975]: 2025-05-06 23:45:17 26204 [82975]: s2 kill 83044 sig 15 count 3
May 6 23:45:18 virt-502 sanlock[82975]: 2025-05-06 23:45:18 26205 [82975]: s2 kill 83044 sig 15 count 4
May 6 23:45:19 virt-502 sanlock[82975]: 2025-05-06 23:45:19 26206 [82975]: s2 kill 83044 sig 15 count 5
May 6 23:45:20 virt-502 sanlock[82975]: 2025-05-06 23:45:20 26207 [82975]: s2 kill 83044 sig 15 count 6
May 6 23:45:21 virt-502 sanlock[82975]: 2025-05-06 23:45:21 26208 [82975]: s2 kill 83044 sig 15 count 7
May 6 23:45:22 virt-502 sanlock[82975]: 2025-05-06 23:45:22 26209 [82975]: s2 kill 83044 sig 15 count 8
[...]
May 6 23:45:40 virt-502 sanlock[82975]: 2025-05-06 23:45:40 26227 [82975]: s2 kill 83044 sig 15 count 26
May 6 23:45:41 virt-502 sanlock[82975]: 2025-05-06 23:45:41 26228 [82975]: s2 kill 83044 sig 15 count 27
May 6 23:45:42 virt-502 sanlock[82975]: 2025-05-06 23:45:42 26229 [82975]: s2 kill 83044 sig 15 count 28
May 6 23:45:43 virt-502 sanlock[82975]: 2025-05-06 23:45:43 26230 [82975]: s2 kill 83044 sig 15 count 29
May 6 23:45:44 virt-502 sanlock[82975]: 2025-05-06 23:45:44 26231 [82975]: s2 kill 83044 sig 15 count 30

May 7 19:47:12 virt-502 kernel: task:vgremove state:S stack:0 pid:84027 tgid:84027 ppid:84026 flags:0x00000002
May 7 19:47:12 virt-502 kernel: Call Trace:
May 7 19:47:12 virt-502 kernel: <TASK>
May 7 19:47:12 virt-502 kernel: __schedule+0x259/0x640
May 7 19:47:12 virt-502 kernel: schedule+0x27/0xa0
May 7 19:47:12 virt-502 kernel: schedule_timeout+0x12f/0x160
May 7 19:47:12 virt-502 kernel: unix_stream_data_wait+0x128/0x1b0
May 7 19:47:12 virt-502 kernel: ? __pfx_autoremove_wake_function+0x10/0x10
May 7 19:47:12 virt-502 kernel: unix_stream_read_generic+0x2b6/0x770
May 7 19:47:12 virt-502 kernel: unix_stream_recvmsg+0x8c/0xa0
May 7 19:47:12 virt-502 kernel: ? __pfx_unix_stream_read_actor+0x10/0x10
May 7 19:47:12 virt-502 kernel: sock_recvmsg+0xc0/0xd0
May 7 19:47:12 virt-502 kernel: sock_read_iter+0x93/0x100
May 7 19:47:12 virt-502 kernel: vfs_read+0x326/0x370
May 7 19:47:12 virt-502 kernel: ksys_read+0xb9/0xf0
May 7 19:47:12 virt-502 kernel: do_syscall_64+0x7d/0x160
May 7 19:47:12 virt-502 kernel: ? srso_return_thunk+0x5/0x5f
May 7 19:47:12 virt-502 kernel: ? do_syscall_64+0x89/0x160
May 7 19:47:12 virt-502 kernel: ? srso_return_thunk+0x5/0x5f
May 7 19:47:12 virt-502 kernel: ? sock_write_iter+0x190/0x1a0
May 7 19:47:12 virt-502 kernel: ? srso_return_thunk+0x5/0x5f
May 7 19:47:12 virt-502 kernel: ? vfs_write+0x3a6/0x450
May 7 19:47:12 virt-502 kernel: ? srso_return_thunk+0x5/0x5f
May 7 19:47:12 virt-502 kernel: ? syscall_exit_work+0xf3/0x120
May 7 19:47:12 virt-502 kernel: ? srso_return_thunk+0x5/0x5f
May 7 19:47:12 virt-502 kernel: ? syscall_exit_to_user_mode+0x32/0x190
May 7 19:47:12 virt-502 kernel: ? srso_return_thunk+0x5/0x5f
May 7 19:47:12 virt-502 kernel: ? do_syscall_64+0x89/0x160
May 7 19:47:12 virt-502 kernel: ? srso_return_thunk+0x5/0x5f
May 7 19:47:12 virt-502 kernel: ? srso_return_thunk+0x5/0x5f
May 7 19:47:12 virt-502 kernel: entry_SYSCALL_64_after_hwframe+0x76/0x7e
May 7 19:47:12 virt-502 kernel: RIP: 0033:0x7f83ebae4321
May 7 19:47:12 virt-502 kernel: RSP: 002b:00007ffe2707e868 EFLAGS: 00000246 ORIG_RAX: 0000000000000000
May 7 19:47:12 virt-502 kernel: RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f83ebae4321
May 7 19:47:12 virt-502 kernel: RDX: 0000000000000020 RSI: 0000563f60ad1160 RDI: 0000000000000003
May 7 19:47:12 virt-502 kernel: RBP: 00007ffe2707e940 R08: 0000000000000007 R09: 0000563f60ad2970
May 7 19:47:12 virt-502 kernel: R10: 0000563f60766010 R11: 0000000000000246 R12: 0000000000000003
May 7 19:47:12 virt-502 kernel: R13: 00007ffe2707e968 R14: 0000563f60ad1160 R15: 0000563f612e8950
May 7 19:47:12 virt-502 kernel: </TASK>

[root@virt-502 ~]# vgremove -vvvv -f devfile_VG > /tmp/vgremove 2>&1
[...]
20:46:30.511437 vgremove[3255] cache/lvmcache.c:2022 lvmcache /dev/sda: VG devfile_VG: set creation host to virt-502.cluster-qe.lab.eng.brq.redhat.com.
20:46:30.511446 vgremove[3255] cache/lvmcache.c:2040 lvmcache /dev/sda: VG devfile_VG: set lock_type to sanlock.
20:46:30.511455 vgremove[3255] format_text/text_label.c:566 Found metadata seqno 3 in mda1 on /dev/sda
20:46:30.511467 vgremove[3255] label/label.c:714 Processing data from device /dev/sdb 8:16 di 1.
20:46:30.511477 vgremove[3255] device/dev-io.c:96 /dev/sdb: using cached size 83886080 sectors
20:46:30.511529 vgremove[3255] device/dev-io.c:96 /dev/sdb: using cached size 83886080 sectors
20:46:30.511540 vgremove[3255] filters/filter-persistent.c:132 filter caching good /dev/sdb.
20:46:30.511553 vgremove[3255] label/label.c:312 Found label at sector 1 on /dev/sdb
20:46:30.511562 vgremove[3255] cache/lvmcache.c:2494 Found PVID oazhmxsUOvmMMOdcSThtQnDU8gvyt6S4 on /dev/sdb
20:46:30.511572 vgremove[3255] cache/lvmcache.c:1988 lvmcache /dev/sdb: now in VG #orphans_lvm2 #orphans_lvm2
20:46:30.511582 vgremove[3255] format_text/text_label.c:538 Scanning /dev/sdb mda1 summary.
20:46:30.511590 vgremove[3255] format_text/format-text.c:196 Reading mda header sector from /dev/sdb at 4096
20:46:30.511600 vgremove[3255] format_text/format-text.c:1550 Skipping read of already known VG metadata with matching mda checksum on /dev/sdb.
20:46:30.511609 vgremove[3255] format_text/format-text.c:1575 Found metadata summary on /dev/sdb at 8704 size 2090 for VG devfile_VG
20:46:30.511619 vgremove[3255] cache/lvmcache.c:1988 lvmcache /dev/sdb: now in VG devfile_VG GgdTfEePuFx9EQwB1SdXEd1cVX6Eu2Hz
20:46:30.511627 vgremove[3255] format_text/text_label.c:566 Found metadata seqno 3 in mda1 on /dev/sdb
20:46:30.511637 vgremove[3255] label/label.c:714 Processing data from device /dev/sdc 8:32 di 2.
20:46:30.511646 vgremove[3255] device/dev-io.c:96 /dev/sdc: using cached size 83886080 sectors
20:46:30.511690 vgremove[3255] device/dev-io.c:96 /dev/sdc: using cached size 83886080 sectors
20:46:30.511700 vgremove[3255] filters/filter-persistent.c:132 filter caching good /dev/sdc.
20:46:30.511711 vgremove[3255] label/label.c:312 Found label at sector 1 on /dev/sdc
20:46:30.511720 vgremove[3255] cache/lvmcache.c:2494 Found PVID PAQt8OmHG4NAiVBcZHbUNaEhgQDVu3RP on /dev/sdc
20:46:30.511729 vgremove[3255] cache/lvmcache.c:1988 lvmcache /dev/sdc: now in VG #orphans_lvm2 #orphans_lvm2
20:46:30.511739 vgremove[3255] format_text/text_label.c:538 Scanning /dev/sdc mda1 summary.
20:46:30.511748 vgremove[3255] format_text/format-text.c:196 Reading mda header sector from /dev/sdc at 4096
20:46:30.511757 vgremove[3255] format_text/format-text.c:1550 Skipping read of already known VG metadata with matching mda checksum on /dev/sdc.
20:46:30.511766 vgremove[3255] format_text/format-text.c:1575 Found metadata summary on /dev/sdc at 8704 size 2090 for VG devfile_VG
20:46:30.511775 vgremove[3255] cache/lvmcache.c:1988 lvmcache /dev/sdc: now in VG devfile_VG GgdTfEePuFx9EQwB1SdXEd1cVX6Eu2Hz
20:46:30.511784 vgremove[3255] format_text/text_label.c:566 Found metadata seqno 3 in mda1 on /dev/sdc
20:46:30.511793 vgremove[3255] label/label.c:714 Processing data from device /dev/sdd 8:48 di 3.
20:46:30.511803 vgremove[3255] device/dev-io.c:96 /dev/sdd: using cached size 83886080 sectors
20:46:30.511845 vgremove[3255] device/dev-io.c:96 /dev/sdd: using cached size 83886080 sectors
20:46:30.511855 vgremove[3255] filters/filter-persistent.c:132 filter caching good /dev/sdd.
20:46:30.511866 vgremove[3255] label/label.c:312 Found label at sector 1 on /dev/sdd
20:46:30.511875 vgremove[3255] cache/lvmcache.c:2494 Found PVID QvY5QFsjIXNUOrQlaWMPkUXVImFMp87G on /dev/sdd
20:46:30.511884 vgremove[3255] cache/lvmcache.c:1988 lvmcache /dev/sdd: now in VG #orphans_lvm2 #orphans_lvm2
20:46:30.511901 vgremove[3255] format_text/text_label.c:538 Scanning /dev/sdd mda1 summary.
20:46:30.511910 vgremove[3255] format_text/format-text.c:196 Reading mda header sector from /dev/sdd at 4096
20:46:30.511925 vgremove[3255] format_text/format-text.c:1550 Skipping read of already known VG metadata with matching mda checksum on /dev/sdd.
20:46:30.511934 vgremove[3255] format_text/format-text.c:1575 Found metadata summary on /dev/sdd at 8704 size 2090 for VG devfile_VG
20:46:30.511943 vgremove[3255] cache/lvmcache.c:1988 lvmcache /dev/sdd: now in VG devfile_VG GgdTfEePuFx9EQwB1SdXEd1cVX6Eu2Hz
20:46:30.511951 vgremove[3255] format_text/text_label.c:566 Found metadata seqno 3 in mda1 on /dev/sdd
20:46:30.511960 vgremove[3255] label/label.c:748 Scanned devices: read errors 0 process errors 0 failed 0
20:46:30.511969 vgremove[3255] device/device_id.c:3082 Validating devices file entries
20:46:30.511978 vgremove[3255] device/device_id.c:3085 Validating sys_wwid naa.6001405dcc98a832a8445a4b37bbec68 PVID eqBtxTobQQf3vg6XR1K2l0P04ZLtuHCx: initial match /dev/sda
20:46:30.511987 vgremove[3255] device/device_id.c:3085 Validating sys_wwid naa.6001405102d31bdec6d4757bdfc160d2 PVID oazhmxsUOvmMMOdcSThtQnDU8gvyt6S4: initial match /dev/sdb
20:46:30.511996 vgremove[3255] device/device_id.c:3085 Validating sys_wwid naa.6001405662db1719617493598c9e8fce PVID PAQt8OmHG4NAiVBcZHbUNaEhgQDVu3RP: initial match /dev/sdc
20:46:30.512005 vgremove[3255] device/device_id.c:3085 Validating sys_wwid naa.60014053f1a2c20eb304c9e84b8fa679 PVID QvY5QFsjIXNUOrQlaWMPkUXVImFMp87G: initial match /dev/sdd
20:46:30.512014 vgremove[3255] filters/filter-persistent.c:107 /dev/sda: filter cache using (cached good).
20:46:30.512023 vgremove[3255] device/device_id.c:3212 Validate sys_wwid naa.6001405dcc98a832a8445a4b37bbec68 PVID eqBtxTobQQf3vg6XR1K2l0P04ZLtuHCx on /dev/sda: correct
20:46:30.512032 vgremove[3255] filters/filter-persistent.c:107 /dev/sdb: filter cache using (cached good).
20:46:30.512040 vgremove[3255] device/device_id.c:3212 Validate sys_wwid naa.6001405102d31bdec6d4757bdfc160d2 PVID oazhmxsUOvmMMOdcSThtQnDU8gvyt6S4 on /dev/sdb: correct
20:46:30.512049 vgremove[3255] filters/filter-persistent.c:107 /dev/sdc: filter cache using (cached good).
20:46:30.512058 vgremove[3255] device/device_id.c:3212 Validate sys_wwid naa.6001405662db1719617493598c9e8fce PVID PAQt8OmHG4NAiVBcZHbUNaEhgQDVu3RP on /dev/sdc: correct
20:46:30.512067 vgremove[3255] filters/filter-persistent.c:107 /dev/sdd: filter cache using (cached good).
20:46:30.512075 vgremove[3255] device/device_id.c:3212 Validate sys_wwid naa.60014053f1a2c20eb304c9e84b8fa679 PVID QvY5QFsjIXNUOrQlaWMPkUXVImFMp87G on /dev/sdd: correct
20:46:30.512110 vgremove[3255] device/device_id.c:3537 Validated device ids: invalid=0, no update needed.
20:46:30.512122 vgremove[3255] cache/lvmcache.c:1677 lvmcache label scan done
20:46:30.512142 vgremove[3255] toollib.c:2640 Obtaining the complete list of VGs to process
20:46:30.512173 vgremove[3255] device_mapper/libdm-config.c:1073 report/output_format not found in config: defaulting to "basic"
20:46:30.512185 vgremove[3255] device_mapper/libdm-config.c:1172 log/report_command_log not found in config: defaulting to 0
20:46:30.512197 vgremove[3255] toollib.c:2336 Processing VG devfile_VG GgdTfE-ePuF-x9EQ-wB1S-dXEd-1cVX-6Eu2Hz
20:46:30.512206 vgremove[3255] locking/lvmlockd.c:2292 lockd VG devfile_VG ex
20:46:30.524028 vgremove[3255] locking/lvmlockd.c:179 lockd lock_vg result: 0
20:46:30.524067 vgremove[3255] misc/lvm-flock.c:228 Locking /run/lock/lvm/V_devfile_VG WB
20:46:30.524185 vgremove[3255] device_mapper/libdm-common.c:988 Preparing SELinux context for /run/lock/lvm/V_devfile_VG to system_u:object_r:lvm_lock_t:s0.
20:46:30.524243 vgremove[3255] misc/lvm-flock.c:113 _do_flock /run/lock/lvm/V_devfile_VG:aux WB
20:46:30.524300 vgremove[3255] misc/lvm-flock.c:113 _do_flock /run/lock/lvm/V_devfile_VG WB
20:46:30.524335 vgremove[3255] misc/lvm-flock.c:47 _undo_flock /run/lock/lvm/V_devfile_VG:aux
20:46:30.524368 vgremove[3255] device_mapper/libdm-common.c:991 Resetting SELinux context to default value.
20:46:30.524402 vgremove[3255] metadata/metadata.c:5015 Rescanning DM cache.
20:46:30.524413 vgremove[3255] activate/activate.c:499 Getting driver version
20:46:30.524437 vgremove[3255] device_mapper/ioctl/libdm-iface.c:2067 dm names UUID [ noopencount noflush ] [16384] (*1)
20:46:30.524464 vgremove[3255] device/dev-cache.c:1353 Preserving DM cache.
20:46:30.524474 vgremove[3255] metadata/metadata.c:4673 Reading VG devfile_VG GgdTfEePuFx9EQwB1SdXEd1cVX6Eu2Hz
20:46:30.524499 vgremove[3255] label/label.c:1858 reopen writable /dev/sda di 0 prev 6 fd 10
20:46:30.524519 vgremove[3255] label/label.c:1858 reopen writable /dev/sdb di 1 prev 7 fd 6
20:46:30.524540 vgremove[3255] label/label.c:1858 reopen writable /dev/sdc di 2 prev 8 fd 7
20:46:30.524560 vgremove[3255] label/label.c:1858 reopen writable /dev/sdd di 3 prev 9 fd 8
20:46:30.524573 vgremove[3255] format_text/format-text.c:196 Reading mda header sector from /dev/sda at 4096
20:46:30.525294 vgremove[3255] format_text/format-text.c:196 Reading mda header sector from /dev/sdb at 4096
20:46:30.525857 vgremove[3255] format_text/format-text.c:196 Reading mda header sector from /dev/sdc at 4096
20:46:30.526465 vgremove[3255] format_text/format-text.c:196 Reading mda header sector from /dev/sdd at 4096
20:46:30.529286 vgremove[3255] metadata/metadata.c:4642 Rescan skipped - unchanged offset 4608 checksum 784dfb2f.
20:46:30.529345 vgremove[3255] metadata/metadata.c:4823 Reading VG devfile_VG metadata from /dev/sda 4096
20:46:30.529357 vgremove[3255] format_text/format-text.c:196 Reading mda header sector from /dev/sda at 4096
20:46:30.529369 vgremove[3255] format_text/import.c:153 Reading metadata from /dev/sda at 8704 size 2090 (+0)
20:46:30.529409 vgremove[3255] metadata/vg.c:61 Allocated VG devfile_VG at 0x556053511850.
20:46:30.529454 vgremove[3255] format_text/import_vsn1.c:629 Importing logical volume lvmlock.
20:46:30.529469 vgremove[3255] format_text/import_vsn1.c:750 Logical volume devfile_VG/lvmlock is sanlock lv.
20:46:30.529500 vgremove[3255] format_text/format-text.c:442 Found metadata text at 8704 off 4608 size 2090 VG devfile_VG on /dev/sda
20:46:30.529511 vgremove[3255] metadata/metadata.c:4823 Reading VG devfile_VG metadata from /dev/sdb 4096
20:46:30.529522 vgremove[3255] format_text/format-text.c:196 Reading mda header sector from /dev/sdb at 4096
20:46:30.529570 vgremove[3255] format_text/import.c:153 Reading metadata from /dev/sdb at 8704 size 2090 (+0)
20:46:30.529590 vgremove[3255] format_text/import.c:173 Skipped parsing metadata on /dev/sdb
20:46:30.529618 vgremove[3255] format_text/format-text.c:442 Found metadata text at 8704 off 4608 size 2090 VG devfile_VG on /dev/sdb
20:46:30.529629 vgremove[3255] metadata/metadata.c:4823 Reading VG devfile_VG metadata from /dev/sdc 4096
20:46:30.529639 vgremove[3255] format_text/format-text.c:196 Reading mda header sector from /dev/sdc at 4096
20:46:30.529651 vgremove[3255] format_text/import.c:153 Reading metadata from /dev/sdc at 8704 size 2090 (+0)
20:46:30.529664 vgremove[3255] format_text/import.c:173 Skipped parsing metadata on /dev/sdc
20:46:30.529674 vgremove[3255] format_text/format-text.c:442 Found metadata text at 8704 off 4608 size 2090 VG devfile_VG on /dev/sdc
20:46:30.529685 vgremove[3255] metadata/metadata.c:4823 Reading VG devfile_VG metadata from /dev/sdd 4096
20:46:30.529695 vgremove[3255] format_text/format-text.c:196 Reading mda header sector from /dev/sdd at 4096
20:46:30.529706 vgremove[3255] format_text/import.c:153 Reading metadata from /dev/sdd at 8704 size 2090 (+0)
20:46:30.529719 vgremove[3255] format_text/import.c:173 Skipped parsing metadata on /dev/sdd
20:46:30.529729 vgremove[3255] format_text/format-text.c:442 Found metadata text at 8704 off 4608 size 2090 VG devfile_VG on /dev/sdd
20:46:30.529742 vgremove[3255] cache/lvmcache.c:2295 lvmcache_update_vg devfile_VG vginfo from metadata
20:46:30.529753 vgremove[3255] cache/lvmcache.c:2360 lvmcache_update_vg devfile_VG for /dev/sda
20:46:30.529786 vgremove[3255] cache/lvmcache.c:2360 lvmcache_update_vg devfile_VG for /dev/sdb
20:46:30.529796 vgremove[3255] cache/lvmcache.c:2360 lvmcache_update_vg devfile_VG for /dev/sdc
20:46:30.529807 vgremove[3255] cache/lvmcache.c:2360 lvmcache_update_vg devfile_VG for /dev/sdd
20:46:30.529821 vgremove[3255] device_mapper/libdm-config.c:1100 metadata/lvs_history_retention_time not found in config: defaulting to 0
20:46:30.529834 vgremove[3255] metadata/pv_manip.c:415 /dev/sda 0: 0 64: lvmlock(0:0)
20:46:30.529845 vgremove[3255] metadata/pv_manip.c:415 /dev/sda 1: 64 10175: NULL(0:0)
20:46:30.529855 vgremove[3255] metadata/pv_manip.c:415 /dev/sdb 0: 0 10239: NULL(0:0)
20:46:30.529866 vgremove[3255] metadata/pv_manip.c:415 /dev/sdc 0: 0 10239: NULL(0:0)
20:46:30.529877 vgremove[3255] metadata/pv_manip.c:415 /dev/sdd 0: 0 10239: NULL(0:0)
20:46:30.529891 vgremove[3255] device/dev-io.c:96 /dev/sda: using cached size 83886080 sectors
20:46:30.529902 vgremove[3255] device/dev-io.c:96 /dev/sdb: using cached size 83886080 sectors
20:46:30.529912 vgremove[3255] device/dev-io.c:96 /dev/sdc: using cached size 83886080 sectors
20:46:30.529922 vgremove[3255] device/dev-io.c:96 /dev/sdd: using cached size 83886080 sectors
20:46:30.529935 vgremove[3255] metadata/vg.c:61 Allocated VG devfile_VG at 0x556053521860.
20:46:30.529961 vgremove[3255] format_text/import_vsn1.c:629 Importing logical volume lvmlock.
20:46:30.529974 vgremove[3255] format_text/import_vsn1.c:750 Logical volume devfile_VG/lvmlock is sanlock lv.
20:46:30.529992 vgremove[3255] toollib.c:2374 Running command for VG devfile_VG GgdTfE-ePuF-x9EQ-wB1S-dXEd-1cVX-6Eu2Hz
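Reading the traces together: the kernel backtrace shows the vgremove process (pid 84027, spawned by qarshd[84026]) parked in unix_stream_data_wait/unix_stream_recvmsg, i.e. blocked in a read() on the lvmlockd unix socket waiting for a reply that never arrives, while the journal shows sanlock tearing the lockspace down and repeatedly sending SIGTERM to lvmlockd (the "s2 kill 83044 sig 15" loop). The -vvvv trace above ends right after the VG lock is taken ("lockd lock_vg result: 0" followed by "Running command for VG devfile_VG"), consistent with the command hanging in a subsequent exchange with lvmlockd. A few commands that may help confirm this state on a hung system (a sketch; exact output formats vary by version):

# Kernel-side view of the stuck process; expected to show unix_stream_data_wait.
cat /proc/$(pidof vgremove)/stack

# lvmlockd's view of its lockspaces and held locks, plus its debug buffer.
lvmlockctl --info
lvmlockctl --dump

# sanlock's view of the lockspace and its registered clients.
sanlock client status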
is cloned by: RHEL-94577 [RHEL9] vgremove hangs immediately after lockstart (Closed)
links to: RHBA-2025:151109 lvm2 update