Issue type: Bug
Resolution: Unresolved
Priority: Major
Affects version: rhos-17.1.3
Severity: Important
To Reproduce
Steps to reproduce the behavior:
VM hard reboot failed with the following error:
2025-06-09 18:15:42.152 2 ERROR nova.virt.libvirt.guest Traceback (most recent call last):
2025-06-09 18:15:42.152 2 ERROR nova.virt.libvirt.guest   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py", line 165, in launch
2025-06-09 18:15:42.152 2 ERROR nova.virt.libvirt.guest     return self._domain.createWithFlags(flags)
2025-06-09 18:15:42.152 2 ERROR nova.virt.libvirt.guest   File "/usr/lib/python3.9/site-packages/eventlet/tpool.py", line 190, in doit
2025-06-09 18:15:42.152 2 ERROR nova.virt.libvirt.guest     result = proxy_call(self._autowrap, f, *args, **kwargs)
2025-06-09 18:15:42.152 2 ERROR nova.virt.libvirt.guest   File "/usr/lib/python3.9/site-packages/eventlet/tpool.py", line 148, in proxy_call
2025-06-09 18:15:42.152 2 ERROR nova.virt.libvirt.guest     rv = execute(f, *args, **kwargs)
2025-06-09 18:15:42.152 2 ERROR nova.virt.libvirt.guest   File "/usr/lib/python3.9/site-packages/eventlet/tpool.py", line 129, in execute
2025-06-09 18:15:42.152 2 ERROR nova.virt.libvirt.guest     six.reraise(c, e, tb)
2025-06-09 18:15:42.152 2 ERROR nova.virt.libvirt.guest   File "/usr/lib/python3.9/site-packages/six.py", line 709, in reraise
2025-06-09 18:15:42.152 2 ERROR nova.virt.libvirt.guest     raise value
2025-06-09 18:15:42.152 2 ERROR nova.virt.libvirt.guest   File "/usr/lib/python3.9/site-packages/eventlet/tpool.py", line 83, in tworker
2025-06-09 18:15:42.152 2 ERROR nova.virt.libvirt.guest     rv = meth(*args, **kwargs)
2025-06-09 18:15:42.152 2 ERROR nova.virt.libvirt.guest   File "/usr/lib64/python3.9/site-packages/libvirt.py", line 1409, in createWithFlags
2025-06-09 18:15:42.152 2 ERROR nova.virt.libvirt.guest     raise libvirtError('virDomainCreateWithFlags() failed')
2025-06-09 18:15:42.152 2 ERROR nova.virt.libvirt.guest libvirt.libvirtError: Cannot access storage file '/dev/dm-15': No such file or directory
After reviewing the logs together with the storage support group, we found that the individual block devices behind dm-15 had been created before libvirt tried to boot the instance, but the multipath device itself was initialized later:
2025-06-09T18:15:43.261277414+00:00 stderr F 18433312.299913 | 3600601603d505d0095234768948752c2: adding map
2025-06-09T18:15:43.343939877+00:00 stderr F 18433312.382582 | 3600601603d505d0095234768948752c2: reload [0 41943040 multipath 1 queue_if_no_path 1 alua 2 1 round-robin 0 4 1 8:192 1 65:80 1 65:112 1 65:128 1 round-robin 0 4 1 65:32 1 65:48 1 65:64 1 65:96 1]
2025-06-09T18:15:43.356457843+00:00 stderr F 18433312.395109 | 3600601603d505d0095234768948752c2: already waiting for events on device
2025-06-09T18:15:43.356457843+00:00 stderr F 18433312.395152 | 3600601603d505d0095234768948752c2: devmap dm-15 registered
It looks like os-brick reported success earlier than it should have, so Nova tried to start the VM with a multipath device whose initialization had not yet completed.
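For diagnosis, the same ordering can be confirmed on the compute node with a small standard-library check (the dm node and WWID below are the ones from this report; run it repeatedly around the reboot to see when the node actually appears):

import glob
import os

DM_NODE = "/dev/dm-15"                          # node libvirt failed to open
WWID = "3600601603d505d0095234768948752c2"      # multipath WWID from the multipathd log

# The dm node and its /dev/mapper alias only exist once multipathd has
# registered the devmap, i.e. after "devmap dm-15 registered" is logged.
print(f"{DM_NODE} exists: {os.path.exists(DM_NODE)}")
for alias in glob.glob("/dev/mapper/*"):
    if os.path.realpath(alias) == os.path.realpath(DM_NODE):
        print(f"{alias} -> {DM_NODE}")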
RHOSP 17.1.3 is affected
Expected behavior
Nova should boot the libvirt VM only after its block devices are fully prepared (see the sketch below).
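A minimal sketch of that ordering, with hypothetical helper names (wait_for_device_node, launch_guest) standing in for the os-brick/Nova code paths; this is not the actual implementation, only an illustration of the guard whose absence causes the race:

import os
import time


def wait_for_device_node(path, timeout=30.0, interval=0.5):
    # Poll until the device node (e.g. /dev/dm-15) exists, or give up.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(path):
            return
        time.sleep(interval)
    raise RuntimeError(f"device node {path} did not appear within {timeout}s")


def launch_guest(device_path):
    # Stand-in for the libvirt launch step (guest.launch in Nova).
    print(f"launching guest with volume device {device_path}")


def hard_reboot(device_path="/dev/dm-15"):
    # Expected ordering: confirm the multipath node exists before asking
    # libvirt to create the domain, instead of relying solely on os-brick
    # having reported success.
    wait_for_device_node(device_path)
    launch_guest(device_path)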
Bug impact
Sporadic hard reboot failure
Known workaround
No workaround; the hard reboot can simply be retried, as sketched below.
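Since the failure is a race, a subsequent hard reboot can succeed once the multipath device has settled. A minimal retry sketch using the standard openstack CLI (the server name and retry counts are placeholders):

import subprocess
import time

SERVER = "my-instance"        # placeholder: instance name or UUID
MAX_ATTEMPTS = 3


def server_status(server):
    # Ask openstackclient for the current server status.
    out = subprocess.run(
        ["openstack", "server", "show", "-f", "value", "-c", "status", server],
        check=True, capture_output=True, text=True)
    return out.stdout.strip()


for attempt in range(1, MAX_ATTEMPTS + 1):
    subprocess.run(["openstack", "server", "reboot", "--hard", "--wait", SERVER],
                   check=False)
    if server_status(SERVER) == "ACTIVE":
        break
    print(f"hard reboot attempt {attempt} failed, retrying")
    time.sleep(10)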