Bug
Resolution: Not a Bug
Undefined
None
4.14, 4.17
Important
None
False
Description of problem:
In a new installation, or when a new node is added, if the clusterNetwork is configured with 2 subnets, pods in one subnet cannot communicate with pods in the other subnet until the ovnkube-controller container is restarted.
Version-Release number of selected component (if applicable):
reproduced in 4.14.31 and 4.14.42
How reproducible:
Always (100%)
Steps to Reproduce:
1. Install a cluster with 2 subnets in the clusterNetwork (see the example install-config snippet below)
2. Test connectivity between pods residing on different CIDRs
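For reference, a minimal install-config networking stanza with two clusterNetwork subnets looks roughly like the following. The CIDRs and hostPrefix values are illustrative, not taken from the affected cluster; per the linked bugs below, the hostPrefix should be kept identical across entries of the same IP family:

    networking:
      networkType: OVNKubernetes
      clusterNetwork:
      # two pod subnets of the same IP family; keep hostPrefix identical across them
      - cidr: 10.128.0.0/14
        hostPrefix: 23
      - cidr: 10.132.0.0/14
        hostPrefix: 23
      serviceNetwork:
      - 172.30.0.0/16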
Actual results:
When a cluster is installed or a new node is created, pods on different CIDRs cannot communicate until ovnkube-controller is manually restarted.
Expected results:
The clusterNetwork should work without manually restarting the ovnkube-controller container.
Additional info:
The identified workaround is simply restarting the ovnkube-controller (see the example command below), but for customers using the cluster autoscaler this workaround is not sustainable.
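For a single affected node, one way to restart the ovnkube-controller container is to delete the corresponding ovnkube-node pod and let the DaemonSet recreate it. This is a sketch that assumes the standard app=ovnkube-node pod label; substitute the affected node's name:

    # delete the ovnkube-node pod running on the affected node; the DaemonSet recreates it
    oc -n openshift-ovn-kubernetes delete pod -l app=ovnkube-node --field-selector spec.nodeName=<node-name>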
The only relevant error visible in the ovnkube-controller logs when the clusterNetwork doesn't work is the following:
W1217 09:13:09.431454 3934 node_tracker.go:233] Failed to get node host CIDRs for [worker-0: k8s.ovn.org/host-cidrs annotation not found for node "worker-0"
This error is not visible after manually restarting the container.
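To check whether the annotation mentioned in the log message is present on the node (worker-0, as in the log above), something like the following can be used:

    # print the k8s.ovn.org/host-cidrs annotation if it exists on the node
    oc get node worker-0 -o yaml | grep 'k8s.ovn.org/host-cidrs'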
- is documented by
  OCPBUGS-47532: Please add a note that the network hostPrefix should be kept the same when there are multiple networks at installation (Closed)
- is related to
  OCPBUGS-48089: Installer should fail if multiple clusterNetwork CIDRs for the same IP family have different hostPrefix (ON_QA)
- links to