Type: Bug
Resolution: Done-Errata
Priority: Undefined
Version: 4.14
Severity: Important
Description of problem:
This issue was found while analyzing bug https://issues.redhat.com/browse/OCPBUGS-19817.
Version-Release number of selected component (if applicable):
4.15.0-0.ci-2023-09-25-165744
How reproducible:
Every time.
Steps to Reproduce:
The cluster is an IPsec cluster with the NS extension and the ipsec service enabled.
1. Enable east-west IPsec and wait for the cluster to settle.
2. Disable IPsec and wait for the cluster to settle.
You will observe that the ipsec pods are deleted (see the command sketch below).
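For reference, a minimal sketch of how east-west IPsec can be toggled via the networks.operator.openshift.io cluster object, assuming the 4.15 mode-based ipsecConfig API (the exact field layout may differ on this CI build):

  # Step 1: enable east-west IPsec
  oc patch networks.operator.openshift.io cluster --type=merge \
    -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"ipsecConfig":{"mode":"Full"}}}}}'

  # Step 2: disable IPsec
  oc patch networks.operator.openshift.io cluster --type=merge \
    -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"ipsecConfig":{"mode":"Disabled"}}}}}'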
Actual results:
No ipsec pods remain after IPsec is disabled.
Expected results:
The pods should stay. See https://github.com/openshift/cluster-network-operator/blob/master/pkg/network/ovn_kubernetes.go#L314:

  // If IPsec is enabled for the first time, we start the daemonset. If it is
  // disabled after that, we do not stop the daemonset but only stop IPsec.
  //
  // TODO: We need to do this as, by default, we maintain IPsec state on the
  // node in order to maintain encrypted connectivity in the case of upgrades.
  // If we only unrender the IPsec daemonset, we will be unable to cleanup
  // the IPsec state on the node and the traffic will continue to be
  // encrypted.
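In other words, once IPsec has ever been enabled, the operator should keep rendering the daemonset even after IPsec is disabled, so it can still clean up IPsec state on the nodes. A minimal Go sketch of that decision; the names (shouldRenderIPsecDaemonSet, ipsecEnabled, ipsecWasEverEnabled) are hypothetical, not the actual cluster-network-operator identifiers:

  package main

  import "fmt"

  // shouldRenderIPsecDaemonSet reports whether the IPsec daemonset should be
  // rendered. Per the comment quoted above, it stays rendered once IPsec has
  // ever been enabled, even after IPsec itself is turned off.
  func shouldRenderIPsecDaemonSet(ipsecEnabled, ipsecWasEverEnabled bool) bool {
  	return ipsecEnabled || ipsecWasEverEnabled
  }

  func main() {
  	// After disabling IPsec on a cluster where it was previously enabled,
  	// the daemonset should still be rendered; the reported bug is that the
  	// pods are deleted instead.
  	fmt.Println(shouldRenderIPsecDaemonSet(false, true)) // true
  }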
Additional info:
blocks
- OCPBUGS-19955 when disabling ipsec, ds pods are deleted (Closed)
clones
- OCPBUGS-19817 The traffic between worker node and external host got broken after delete ipsec-host pods (Closed)
is cloned by
- OCPBUGS-19955 when disabling ipsec, ds pods are deleted (Closed)
links to
- RHEA-2023:7198 rpm