-
Bug
-
Resolution: Done-Errata
-
Normal
-
rhos-18.0.4
-
None
-
1
-
False
-
-
False
-
?
-
octavia-operator-container-1.0.7-8
-
None
-
-
Bug Fix
-
Done
-
-
-
VANS-010
-
1
-
Important
To Reproduce
Steps to reproduce the behavior:
Deploy RHOSO with Octavia and enable the multi-AZ management network:
spec:
  octavia:
    template:
      lbMgmtNetwork:
        availabilityZoneCIDRs:
          az1: 172.34.0.0/16
          az2: 172.44.0.0/16
        createDefaultLbMgmtNetwork: false
The CIDRs of the AZs are passed to the octavia-healthmanager pods via environment variables:
$ oc get daemonsets.apps octavia-healthmanager -o yaml | grep -A1 MGMT_CIDR
  - name: MGMT_CIDR
    value: 172.24.0.0/16
  - name: MGMT_CIDR0
    value: 172.34.0.0/16
  - name: MGMT_CIDR1
    value: 172.44.0.0/16
The issue is that the order of those env vars may differ in each reconciliation loop; we may also get:
$ oc get daemonsets.apps octavia-healthmanager -o yaml | grep -A1 MGMT_CIDR
  - name: MGMT_CIDR
    value: 172.24.0.0/16
  - name: MGMT_CIDR0
    value: 172.44.0.0/16
  - name: MGMT_CIDR1
    value: 172.34.0.0/16
When the order changes, the input parameters of the DaemonSet change and the pods are recreated.
This behavior is not 100% reproducible and occurs randomly; it can trigger an infinite loop of pod recreation.
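The reordering is consistent with the per-AZ env vars being built by iterating over a Go map (the availabilityZoneCIDRs field), whose iteration order is randomized. A minimal sketch, using a hypothetical helper rather than the actual octavia-operator code, of how sorting the AZ names first keeps the generated list stable across reconciliation loops:

// Minimal sketch, not the actual octavia-operator implementation: build the
// per-AZ MGMT_CIDR<n> env vars in a deterministic order by sorting the AZ
// names before assigning indices, instead of relying on map iteration order.
package main

import (
	"fmt"
	"sort"

	corev1 "k8s.io/api/core/v1"
)

// buildAZCidrEnvVars is a hypothetical helper name.
func buildAZCidrEnvVars(azCIDRs map[string]string) []corev1.EnvVar {
	names := make([]string, 0, len(azCIDRs))
	for name := range azCIDRs {
		names = append(names, name)
	}
	sort.Strings(names) // stable order across reconciliation loops

	envVars := make([]corev1.EnvVar, 0, len(names))
	for i, name := range names {
		envVars = append(envVars, corev1.EnvVar{
			Name:  fmt.Sprintf("MGMT_CIDR%d", i),
			Value: azCIDRs[name],
		})
	}
	return envVars
}

func main() {
	azCIDRs := map[string]string{"az1": "172.34.0.0/16", "az2": "172.44.0.0/16"}
	for _, e := range buildAZCidrEnvVars(azCIDRs) {
		fmt.Printf("%s=%s\n", e.Name, e.Value)
	}
}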
Expected behavior
- the input parameters should be stable, and the DaemonSet should only be updated when necessary (see the sketch below)
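A minimal sketch of such a guard, assuming a hypothetical helper around the standard Kubernetes semantic-equality check rather than the actual operator code:

// Minimal sketch, not the actual octavia-operator implementation: only
// rewrite the DaemonSet when the desired pod template really differs from
// the deployed one, so a stable env var order results in no rollout.
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	"k8s.io/apimachinery/pkg/api/equality"
)

// updateNeeded is a hypothetical helper name.
func updateNeeded(existing, desired *appsv1.DaemonSet) bool {
	return !equality.Semantic.DeepEqual(existing.Spec.Template, desired.Spec.Template)
}

func main() {
	existing := &appsv1.DaemonSet{}
	desired := &appsv1.DaemonSet{}
	// Identical templates: no update is issued, so the pods are left alone.
	fmt.Println("update needed:", updateNeeded(existing, desired))
}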
Bug impact
- octavia-healthmanager pods may be restarted randomly, making Octavia unusable in DCN mode
Known workaround
- none
Note
- Octavia DCN is not officially supported in 18.0.4
- links to
-
RHBA-2025:146727 Release of containers for RHOSO OpenStack Podified operator