Bug
Resolution: Unresolved
Normal
rhos-18.0.10 FR 3
Moderate
To Reproduce
Steps to reproduce the behavior:
1. Don't apply the NAD for the ctlplane network for some reason (e.g. a typo).
2. Observe the nova-cell-metadata pods in CrashLoopBackOff.
3. Check the nova-operator logs and find:
2025-08-27T10:06:34.967Z INFO network-attachment-definition ctlplane not found {"controller": "novaconductor", "controllerGroup": "nova.openstack.org", "controllerKind": "NovaConductor", "Nova Conductor": {"name":"nova-cell3-conductor","namespace":"openstack"}, "namespace": "openstack", "name": "nova-cell3-conductor", "reconcileID": "729b2116-6ea5-44f0-85bb-bcbd1b9f97e4"}
Expected behavior
This log entry should be emitted at least at WARNING or ERROR level; at INFO it is easy to overlook. So please raise the log level, at least for important networks that really need to be present.
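For illustration only, a minimal Go sketch of one possible shape for this change, assuming the usual controller-runtime NAD lookup pattern; this is not the actual nova-operator code, and the function name ensureNAD and the required flag are made up for this example:

package controllers

import (
	"context"
	"fmt"

	networkv1 "github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/apis/k8s.cni.cncf.io/v1"
	k8serrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/log"
)

// ensureNAD is a sketch, not the operator's real helper: it looks up a
// NetworkAttachmentDefinition and, for networks marked as required, turns a
// missing NAD into an error instead of the easy-to-miss INFO message.
func ensureNAD(ctx context.Context, c client.Client, namespace, name string, required bool) (*networkv1.NetworkAttachmentDefinition, error) {
	l := log.FromContext(ctx)
	nad := &networkv1.NetworkAttachmentDefinition{}
	err := c.Get(ctx, types.NamespacedName{Namespace: namespace, Name: name}, nad)
	if k8serrors.IsNotFound(err) {
		if required {
			// Surfacing an error makes the failure visible at ERROR level
			// in the operator log instead of a plain INFO line.
			return nil, fmt.Errorf("required network-attachment-definition %s/%s not found", namespace, name)
		}
		// Current behavior for optional networks: the INFO message shown above.
		l.Info(fmt.Sprintf("network-attachment-definition %s not found", name))
		return nil, nil
	}
	if err != nil {
		return nil, err
	}
	return nad, nil
}

Returning an error for a required network also causes controller-runtime to log the reconcile failure at ERROR level and requeue, so the lookup is retried once the NAD is applied.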
This affects multiple operators; see this code search on GitHub:
As discussed with abays@redhat.com.
This was encountered during a real-world deployment at a customer site.
This also results in a misleading error message further down the line in nova:
2025-08-27 08:59:04.973 10 ERROR nova.utils sqlalchemy.exc.ProgrammingError: (pymysql.err.ProgrammingError) (1146, "Table 'nova_cell1.services' doesn't exist")
2025-08-27 08:59:04.973 10 ERROR nova.utils [SQL: SELECT services.`binary` AS services_binary, min(services.version) AS min_1
2025-08-27 08:59:04.973 10 ERROR nova.utils FROM services
2025-08-27 08:59:04.973 10 ERROR nova.utils WHERE services.`binary` IN (%(binary_1_1)s) AND services.deleted = %(deleted_1)s AND services.forced_down = false GROUP BY services.`binary`]
2025-08-27 08:59:04.973 10 ERROR nova.utils [parameters: {'deleted_1': 0, 'binary_1_1': 'nova-compute'}]
2025-08-27 08:59:04.973 10 ERROR nova.utils (Background on this error at: https://sqlalche.me/e/14/f405)
I can create a different issue for the above, if needed.