Type: Bug
Resolution: Unresolved
Priority: Blocker
Target Version: rhos-18.0.15
Team: rhos-workloads-compute
Doc Type: Known Issue
Doc Text Status: Done
Severity: Important
We are experiencing reconciliation issues with all Nova API, scheduler, and conductor pods whenever any pod in the RabbitMQ instance used for the Nova notification server fails or restarts. This makes the Nova services unstable, and it is critical because the failure or restart of a single rabbitmq_notification_server pod directly impacts the Nova service.
The customer can reproduce the issue on both of their clusters simply by deleting the rabbitmq_notification_server pod; a sketch of the steps follows, and the customer's own report is quoted after it.
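A minimal reproduction sketch, assuming the control plane runs in the "openstack" namespace and the notification RabbitMQ cluster is named "rabbitmq-notification-server" (both names are illustrative, not taken from the customer environment):

    # Hypothetical names: substitute your own namespace and the actual
    # pod name of the notification-server RabbitMQ cluster.
    oc get pods -n openstack | grep rabbitmq

    # Delete a single pod from the notification cluster.
    oc delete pod rabbitmq-notification-server-0 -n openstack

    # Observe that the Nova API, scheduler, and conductor pods are
    # redeployed rather than simply reconnecting.
    oc get pods -n openstack -w | grep nova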
"""
We have 2 environments with notificationBusInstance configured in the nova service. In both, when any of the rabbitmq pods from the notification cluster is deleted, it causes a redeployment of Nova. This does not happen for cell/global rabbitmq cluster pods.
To answer the question below:
- lastTransitionTime: "2026-01-26T14:49:21Z"
  message: OpenStackControlPlane Nova completed
  reason: Ready
  status: "True"
  type: OpenStackControlPlaneNovaReady
This transition happens after the rabbitmq pod deletion.
"""