- Bug
- Resolution: Duplicate
- rhos-18.0.5
- PIDONE 18.0.7
- Important
To Reproduce
Steps to reproduce the behavior:
Change the resource values in the RabbitMQ configuration, for example (see also the apply/observe sketch below):
  Limits:
    Cpu:     26
    Memory:  32Gi
  Requests:
    Cpu:     4
    Memory:  8
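As an illustrative sketch only, such a change could be applied by editing the control plane CR and watching the RabbitMQ pods roll out; the namespace "openstack" and the OpenStackControlPlane name "controlplane" below are assumptions, not values from this report:
  # Assumed names: namespace "openstack", OpenStackControlPlane CR "controlplane".
  # Edit the RabbitMQ resource limits/requests in the CR, then watch the roll-out.
  oc -n openstack edit openstackcontrolplane controlplane
  oc -n openstack get pods -w | grep rabbitmq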
From https://issues.redhat.com/browse/OSPRH-14136
We noticed that after changing the RabbitMQ configuration and applying it, once the pods have been rolled out, RabbitMQ is no longer reachable by at least the nova pods (notably nova-conductor and/or nova-scheduler). We can reproduce this every time a RabbitMQ configuration change requires a pod roll-out.
The workaround is to log into the RabbitMQ pods, issue rabbitmqctl stop_app on all pods, then run rabbitmqctl start_app (sometimes multiple times).
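A minimal shell sketch of that workaround, assuming the RabbitMQ pods follow the usual rabbitmq-server-N naming and run in the "openstack" namespace (both assumptions):
  # Stop the RabbitMQ application on every node first...
  for pod in rabbitmq-server-0 rabbitmq-server-1 rabbitmq-server-2; do
    oc -n openstack exec "$pod" -- rabbitmqctl stop_app
  done
  # ...then start it again on each node; per the report, start_app may need
  # to be repeated until the node comes back.
  for pod in rabbitmq-server-0 rabbitmq-server-1 rabbitmq-server-2; do
    oc -n openstack exec "$pod" -- rabbitmqctl start_app
  done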
Device Info (please complete the following information):
- RHOSO Version: 18.0.5
Bug impact
- nova services are unhealthy.
Known workaround
- manually restart RabbitMQ inside the pods (rabbitmqctl stop_app / start_app, as sketched above)
Additional context
- duplicates
  - OSPRH-10790 Cut in service availability during update and unable to create vm after update (Closed)
- is duplicated by
  - OSPRH-14136 nova-operator created 8 dysfunctional nova-scheduler pods (Closed)