Type: Bug
Resolution: Not a Bug
Priority: Undefined
Affects Version: 4.16.0
Category: Quality / Stability / Reliability
Severity: Important
Description of problem:
During the live migration from OpenShift SDN to OVN-Kubernetes, after the MTU migration phase has completed and while the cluster nodes are being migrated to the new CNI plugin (i.e. during the second MCO rollout), nodes already migrated to OVN-Kubernetes cannot communicate from their host network with pods on nodes that remain on openshift-sdn, even when the project has a *correct* networkpolicy allowing the traffic from the host network.
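For illustration, a minimal "allow-from-host" policy of the kind described above could look like the sketch below (the namespace and policy names are hypothetical, and the exact policy used during reproduction may differ; the policy-group.network.openshift.io/host-network namespace label is the reserved selector that OVN-Kubernetes provides for host-network traffic):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-host
  namespace: test-project            # hypothetical application project
spec:
  podSelector: {}                    # select all pods in the project
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          policy-group.network.openshift.io/host-network: ""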
Version-Release number of selected component (if applicable):
4.16.40
How reproducible:
Always
Steps to Reproduce:
1. Create an application project with an "allow-from-host" networkpolicy that allows traffic from the host network, similar to the policy sketched in the description above (optionally, double-check that the policy works as expected before migrating).
2. Start the live migration (see the patch command sketched after this list).
3. Let the MTU migration phase complete as expected.
4. Once the CNI node migration phase has started (2nd MCO rollout) and some nodes are migrated while others are not, use a pod disruption budget to halt the MCO rollout, so that the cluster is stuck with a mix of migrated and non-migrated nodes (see the PDB sketch after this list). Make sure that some application pods remained on a non-migrated node.
5. Test connectivity from the host network of both migrated and non-migrated nodes to the application pods (see the example commands after this list).
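For step 2, the live migration is initiated by patching the cluster network configuration, roughly as in the 4.16 limited live migration procedure:

oc patch Network.config.openshift.io cluster --type='merge' --patch '{"metadata":{"annotations":{"network.openshift.io/network-type-migration":""}},"spec":{"networkType":"OVNKubernetes"}}'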
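For step 4, a PodDisruptionBudget with maxUnavailable: 0 that selects a pod running on a not-yet-migrated node prevents that node from being drained, which freezes the MCO rollout; a sketch with hypothetical names and labels:

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: halt-mco-rollout             # hypothetical name
  namespace: test-project
spec:
  maxUnavailable: 0                  # eviction of the selected pods is never allowed
  selector:
    matchLabels:
      app: test-app                  # hypothetical label on the application pods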
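For step 5, connectivity can be exercised from the host network of each node with oc debug, whose debug pod runs on the host network (pod IP, port, and node names below are placeholders):

oc get pods -n test-project -o wide                    # note each pod's IP and node
oc debug node/<ovn-migrated-node> -- curl -m 5 http://<pod-ip-on-sdn-node>:8080
oc debug node/<sdn-node> -- curl -m 5 http://<pod-ip-on-sdn-node>:8080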
Actual results:
Host-network-to-pod connectivity only works from openshift-sdn nodes to pods on openshift-sdn nodes (even when source and destination nodes are different). From the host network of nodes already migrated to OVN-Kubernetes, traffic to pods on openshift-sdn nodes fails.
Expected results:
Communication from the host network to pods should keep working during the live migration even when the source node and the destination pod's node are running different CNI plugins.
Additional info:
IMPORTANT: This is *NOT* https://issues.redhat.com/browse/OCPBUGS-42605 (or any of its backports). That bug was fixed in 4.16.23, whereas this issue is reproducible in 4.16.40, and the situation is not exactly the same.
More information in internal comments.