-
Bug
-
Resolution: Unresolved
-
Major
-
None
-
None
-
None
-
2
-
False
-
-
False
-
?
-
openstack-watcher-10.0.1-18.0.20251203164734.c014f81.el9ost
-
rhos-workloads-evolution
-
None
-
-
-
-
Workload Evolution Sprint 13
-
1
-
Important
To Reproduce
Steps to reproduce the behavior:
- Create a zone migration audit with an input like the following:
{"storage_pools": [
    {"src_type": "lvmdriver-1", "dst_type": "test_3", "src_pool": "jgilaber-watcher-1@lvmdriver-1#lvmdriver-1", "dst_pool": "jgilaber-watcher-2@lvmdriver-2#lvmdriver-3"}
]}
In this example there are three storage hosts, "jgilaber-watcher-1@lvmdriver-1#lvmdriver-1", "jgilaber-watcher-2@lvmdriver-2#lvmdriver-3" and "jgilaber-watcher-3@lvmdriver-3#lvmdriver-3", and two volume types: "lvmdriver-1", associated with the "lvmdriver-1" volume_backend_name, and "test_3", associated with the "lvmdriver-3" volume_backend_name. With this configuration, a volume created with type "lvmdriver-1" can only be scheduled on "jgilaber-watcher-1@lvmdriver-1#lvmdriver-1". If we run the audit with the example input, the action plan contains one volume migrate action of type "retype", so the volume is retyped to "test_3" and migrated by cinder, but it is migrated to any host that satisfies the type constraints, in this case either "jgilaber-watcher-2@lvmdriver-2#lvmdriver-3" or "jgilaber-watcher-3@lvmdriver-3#lvmdriver-3". This does not depend on watcher but on the cinder scheduler and how it is configured, so the result of the audit may not be the desired state expressed by the user.
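For context, the cinder operation behind that "retype" action looks roughly like the sketch below (python-cinderclient; the keystone endpoint, credentials and volume ID are placeholders, not values from this report). retype() accepts no destination host, which is why the cinder scheduler, not the audit input, decides where the volume lands.

# Rough illustration of the cinder call behind the "retype" volume
# migrate action. Endpoint, credentials and volume ID are placeholders.
from cinderclient import client as cinder_client
from keystoneauth1 import loading, session

loader = loading.get_plugin_loader("password")
auth = loader.load_from_options(
    auth_url="http://keystone.example:5000/v3",
    username="admin", password="secret", project_name="admin",
    user_domain_id="default", project_domain_id="default",
)
cinder = cinder_client.Client("3", session=session.Session(auth=auth))

volume_id = "11111111-2222-3333-4444-555555555555"  # placeholder volume
# Retype to "test_3"; with the "on-demand" policy cinder migrates the
# volume because the current backend cannot serve the new type, but the
# destination backend is chosen by the cinder scheduler, not the caller.
cinder.volumes.retype(volume_id, "test_3", "on-demand")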
Expected behavior
- The user would expect the volumes on "jgilaber-watcher-1@lvmdriver-1#lvmdriver-1" with type "lvmdriver-1" to end up on "jgilaber-watcher-2@lvmdriver-2#lvmdriver-3" with type "test_3", but the strategy does not guarantee that. Instead, they can end up on any cinder host compatible with volume type "test_3".
Bug impact
- This bug could cause volumes to be migrated to a different host than the user expected, which could interfere with other operations, such as trying to mark the host for maintenance.
Known workaround
- A workaround is to run a second zone migration audit to ensure the volumes end up on the intended host; a verification sketch follows below.
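To tell whether that follow-up is actually needed, an operator can check which backend each retyped volume landed on, for example with a small python-cinderclient script like the one below (placeholder credentials; the expected pool is taken from the example input above).

# Verify where the retyped volumes ended up, so the operator knows
# whether a second audit or a manual migration is still needed.
# Keystone endpoint and credentials are placeholders.
from cinderclient import client as cinder_client
from keystoneauth1 import loading, session

loader = loading.get_plugin_loader("password")
auth = loader.load_from_options(
    auth_url="http://keystone.example:5000/v3",
    username="admin", password="secret", project_name="admin",
    user_domain_id="default", project_domain_id="default",
)
cinder = cinder_client.Client("3", session=session.Session(auth=auth))

expected_pool = "jgilaber-watcher-2@lvmdriver-2#lvmdriver-3"

for vol in cinder.volumes.list(search_opts={"all_tenants": 1}):
    # os-vol-host-attr:host is only exposed to admin users.
    host = getattr(vol, "os-vol-host-attr:host", None)
    if vol.volume_type == "test_3" and host != expected_pool:
        print(f"volume {vol.id} is on {host}, expected {expected_pool}")
        # Optionally move it explicitly instead of re-running the audit:
        # cinder.volumes.migrate_volume(vol, expected_pool, False, False)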