Type: Bug
Resolution: Done-Errata
Priority: Minor
Affects Version: rhel-9.5
Fixed in Build: lvm2-2.03.32-1.el9
Severity: Low
Component: rhel-storage-lvm
Team: ssg_filesystems_storage_and_HA
Architecture: x86_64
If the user doesn't know what to do, the created VG is unusable and unremovable. It seems like a bug to let the user set themselves up like that.
In lvm's defense, it does tell the user afterward: "invalid sanlock host_id, set in lvmlocal.conf", which should be fairly clear. Still, it seems wrong for the vgcreate command to exit with a zero status even though its stdout and stderr mention the word "failed" twice and "invalid" once, and the new VG is left in a state where it can't be used, changed, or removed.
Operation in question:
[root@virt-007 ~]# vgcreate shared --shared /dev/sd[abcd]
Enabling sanlock global lock
Logical volume "lvmlock" created.
Volume group "shared" successfully created
VG shared start failed: invalid sanlock host_id, set in lvmlocal.conf
Failed to start locking
[root@virt-007 ~]# echo $?
0
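Until the exit code is fixed, the only way for a script to catch this is to scrape the command's output. A minimal sketch (not from the report; the message string is taken from the transcript above):

out=$(vgcreate shared --shared /dev/sd[abcd] 2>&1)
printf '%s\n' "$out"
# vgcreate exits 0 even when the lockspace start fails, so test the
# output for the failure message instead of trusting $?.
if printf '%s\n' "$out" | grep -q 'Failed to start locking'; then
    echo "vgcreate succeeded but the sanlock lockspace did not start" >&2
    exit 1
fi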
Here's the full run of commands:
[root@virt-007 ~]# systemctl start sanlock
[root@virt-007 ~]# systemctl status sanlock
● sanlock.service - Shared Storage Lease Manager
Loaded: loaded (/usr/lib/systemd/system/sanlock.service; disabled; preset: disabled)
Active: active (running) since Mon 2024-06-03 22:13:10 CEST; 6s ago
Process: 318218 ExecStart=/usr/sbin/sanlock daemon (code=exited, status=0/SUCCESS)
Main PID: 318221 (sanlock)
Tasks: 6 (limit: 25015)
Memory: 14.9M
CPU: 28ms
CGroup: /system.slice/sanlock.service
├─318221 /usr/sbin/sanlock daemon
└─318222 /usr/sbin/sanlock daemon
Jun 03 22:13:10 virt-007.cluster-qe.lab.eng.brq.redhat.com systemd[1]: Starting Shared Storage Lease Manager...
Jun 03 22:13:10 virt-007.cluster-qe.lab.eng.brq.redhat.com systemd[1]: Started Shared Storage Lease Manager.
Jun 03 22:13:10 virt-007.cluster-qe.lab.eng.brq.redhat.com sanlock[318221]: sanlock daemon started 3.9.1 host 10b32729-7345-4af9-b7c5-046230cb7f37.virt-007.cl (virt-007.cluster-qe.lab.eng.brq.redhat.com)
[root@virt-007 ~]# systemctl start lvmlockd
[root@virt-007 ~]# systemctl status lvmlockd
● lvmlockd.service - LVM lock daemon
Loaded: loaded (/usr/lib/systemd/system/lvmlockd.service; disabled; preset: disabled)
Active: active (running) since Mon 2024-06-03 22:13:32 CEST; 4s ago
Docs: man:lvmlockd(8)
Main PID: 318245 (lvmlockd)
Tasks: 3 (limit: 25015)
Memory: 2.7M
CPU: 38ms
CGroup: /system.slice/lvmlockd.service
└─318245 /usr/sbin/lvmlockd --foreground
Jun 03 22:13:32 virt-007.cluster-qe.lab.eng.brq.redhat.com systemd[1]: Starting LVM lock daemon...
Jun 03 22:13:32 virt-007.cluster-qe.lab.eng.brq.redhat.com lvmlockd[318245]: [D] creating /run/lvm/lvmlockd.socket
Jun 03 22:13:32 virt-007.cluster-qe.lab.eng.brq.redhat.com lvmlockd[318245]: 1717445612 lvmlockd started
Jun 03 22:13:32 virt-007.cluster-qe.lab.eng.brq.redhat.com systemd[1]: Started LVM lock daemon.
[root@virt-007 ~]# vgcreate shared --shared /dev/sd[abcd]
Enabling sanlock global lock
Logical volume "lvmlock" created.
Volume group "shared" successfully created
VG shared start failed: invalid sanlock host_id, set in lvmlocal.conf
Failed to start locking
[root@virt-007 ~]# echo $?
0
[root@virt-007 ~]# dmsetup ls
rhel_virt--007-root (253:0)
rhel_virt--007-swap (253:1)
shared-lvmlock (253:2)
[root@virt-007 ~]# vgremove -f shared
Global lock failed: check that global lockspace is started
[root@virt-007 ~]# vgchange --lock-start shared
Skipping global lock: lockspace not found or started
VG shared start failed: invalid sanlock host_id, set in lvmlocal.conf
[root@virt-007 ~]# vgremove --nolocking -f shared
Cannot free VG sanlock, lvmlockd is not in use.
[root@virt-007 ~]# vgremove --config 'global{use_lvmlockd=0}' shared
Cannot access VG shared with lock type sanlock that requires lvmlockd.
[root@virt-007 ~]# pvscan
Skipping global lock: lockspace not found or started
Reading VG shared without a lock.
PV /dev/sda VG shared lvm2 [<55.00 GiB / <54.75 GiB free]
PV /dev/sdb VG shared lvm2 [<55.00 GiB / <55.00 GiB free]
PV /dev/sdc VG shared lvm2 [<55.00 GiB / <55.00 GiB free]
PV /dev/sdd VG shared lvm2 [<55.00 GiB / <55.00 GiB free]
Total: 4 [219.98 GiB] / in use: 4 [219.98 GiB] / in no VG: 0 [0 ]
[root@virt-007 ~]# vgs
Skipping global lock: lockspace not found or started
Reading VG shared without a lock.
VG #PV #LV #SN Attr VSize VFree
shared 4 0 0 wz--ns 219.98g 219.73g
[root@virt-007 ~]# grep host_id /etc/lvm/lvmlocal.conf
# Configuration option local/host_id.
# The lvmlockd sanlock host_id.
# host_id = 0
After setting host_id in /etc/lvm/lvmlocal.conf:
[root@virt-007 ~]# grep host_id /etc/lvm/lvmlocal.conf
# Configuration option local/host_id.
# The lvmlockd sanlock host_id.
host_id = 1990
[root@virt-007 ~]# pvscan
Skipping global lock: lockspace not found or started
Reading VG shared without a lock.
PV /dev/sda VG shared lvm2 [<55.00 GiB / <54.75 GiB free]
PV /dev/sdb VG shared lvm2 [<55.00 GiB / <55.00 GiB free]
PV /dev/sdc VG shared lvm2 [<55.00 GiB / <55.00 GiB free]
PV /dev/sdd VG shared lvm2 [<55.00 GiB / <55.00 GiB free]
Total: 4 [219.98 GiB] / in use: 4 [219.98 GiB] / in no VG: 0 [0 ]
[root@virt-007 ~]# vgchange --lock-start shared
Skipping global lock: lockspace not found or started
VG shared starting sanlock lockspace
Starting locking. Waiting for sanlock may take 20 sec to 3 min...
[root@virt-007 ~]# vgremove shared
Volume group "shared" successfully removed
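For reference, the recovery boils down to the following sketch, reconstructed from the run above. The sed pattern is illustrative, and the host_id value 1990 is simply the one used in the run; any host_id unique among the hosts and within sanlock's valid range (1-2000) works, while the default of 0 is invalid:

# Give this host a valid sanlock host_id (1..2000); the default 0 is invalid.
sed -i 's/^[# ]*host_id *=.*/host_id = 1990/' /etc/lvm/lvmlocal.conf
# Start the VG's sanlock lockspace (may take 20 sec to 3 min)...
vgchange --lock-start shared
# ...after which the VG is usable, changeable, and removable again.
vgremove shared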
is blocked by:
RHEL-70452 [RHEL9.7] Rebase lvm2 to 2.03.29 or later (Closed)
is cloned by:
RHEL-89829 [RHEL10] "invalid sanlock host_id" leaves newly created VG unremovable (Closed)
links to:
RHBA-2025:150953 lvm2 update