      Description of problem:

      When IPsec is disabled, the ovn-ipsec daemonset pods are deleted instead of being left running. The issue was found while analyzing bug https://issues.redhat.com/browse/OCPBUGS-19817.
      
      

      Version-Release number of selected component (if applicable):

      4.15.0-0.ci-2023-09-25-165744
      
      

      How reproducible:

      Every time.
      

      Steps to Reproduce:

      The cluster is an IPsec cluster with the NS extension and the ipsec service enabled.
      1. Enable E-W (east-west) IPsec and wait for the cluster to settle.
      2. Disable IPsec and wait for the cluster to settle (example oc patch commands are sketched below).

      You'll observe that the ovn-ipsec pods are deleted.
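
      For reference, a minimal sketch of steps 1-2 as oc patch commands. The disable command matches the one used in the verification comment below; the empty-object form used here to enable IPsec is an assumption based on the pre-4.15 ipsecConfig API and may differ on newer releases.

      # Step 1: enable east-west IPsec (assumed empty-object form of ipsecConfig)
      % oc patch networks.operator.openshift.io cluster --type=merge -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"ipsecConfig":{}}}}}'
      # Step 2: disable IPsec again (same patch as in the verification comment)
      % oc patch networks.operator.openshift.io cluster --type=merge -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"ipsecConfig":null}}}}'
      # Wait for the ovnkube-node rollout to settle after each change
      % oc -n openshift-ovn-kubernetes rollout status daemonset/ovnkube-node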
      
      

      Actual results:

      No ovn-ipsec pods remain in the openshift-ovn-kubernetes namespace.
      

      Expected results:

      The ovn-ipsec pods should stay; only IPsec itself should be disabled. See https://github.com/openshift/cluster-network-operator/blob/master/pkg/network/ovn_kubernetes.go#L314:
      	// If IPsec is enabled for the first time, we start the daemonset. If it is
      	// disabled after that, we do not stop the daemonset but only stop IPsec.
      	//
      	// TODO: We need to do this as, by default, we maintain IPsec state on the
      	// node in order to maintain encrypted connectivity in the case of upgrades.
      	// If we only unrender the IPsec daemonset, we will be unable to cleanup
      	// the IPsec state on the node and the traffic will continue to be
      	// encrypted.
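
      A quick way to check the expected state (a sketch: the pod names and the nb_global query are taken from the verification comment below; <ovnkube-node-pod> is a placeholder for any ovnkube-node pod name):

      # The ovn-ipsec pods should still be listed after IPsec is disabled
      % oc get pods -n openshift-ovn-kubernetes | grep ovn-ipsec
      # ...while IPsec itself is off in the OVN northbound database (expect "false")
      % oc rsh -n openshift-ovn-kubernetes <ovnkube-node-pod> ovn-nbctl --no-leader-only get nb_global . ipsec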
      
      

      Additional info:

      
      

            [OCPBUGS-19918] when disabling ipsec, ds pods are deleted

            Errata Tool added a comment -

            Since the problem described in this issue should be resolved in a recent advisory, it has been closed.

            For information on the advisory (Critical: OpenShift Container Platform 4.15.0 bug fix and security update), and where to find the updated files, follow the link below.

            If the solution does not work for you, open a new bug report.
            https://access.redhat.com/errata/RHSA-2023:7198


            The docs team is preparing the bug text for the 4.15 release notes. Based on the fix and affects version, this bug needs to be included in the release notes. Please update your issue by 2/12.

            Set the Release Note Type to Bug Fix and provide the Release Note Text in the following format:

            Cause: What actions or circumstances cause this bug to present.
            Consequence: What happens when the bug presents.
            Fix: What was done to fix the bug.
            Result: Bug doesn’t present anymore.

            If your bug was actually found and fixed in 4.15 or should be internal only, set the Release Note Type to Release Note Not Required.


            Huiran Wang added a comment -

            Verified in 4.15.0-0.nightly-2023-10-05-021229

            The cluster is an IPsec cluster with the NS extension and the ipsec service enabled.
            1. Enable E-W IPsec and wait for the cluster to settle.
            2. Disable IPsec and wait for the cluster to settle.
             % oc patch networks.operator.openshift.io cluster --type=merge -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"ipsecConfig":null}}}}' 
            network.operator.openshift.io/cluster patched
            % oc -n openshift-ovn-kubernetes rollout status daemonset/ovnkube-node
            Waiting for daemon set "ovnkube-node" rollout to finish: 3 out of 6 new pods have been updated...
            Waiting for daemon set "ovnkube-node" rollout to finish: 3 out of 6 new pods have been updated...
            Waiting for daemon set "ovnkube-node" rollout to finish: 4 out of 6 new pods have been updated...
            Waiting for daemon set "ovnkube-node" rollout to finish: 4 out of 6 new pods have been updated...
            Waiting for daemon set "ovnkube-node" rollout to finish: 4 out of 6 new pods have been updated...
            Waiting for daemon set "ovnkube-node" rollout to finish: 5 out of 6 new pods have been updated...
            Waiting for daemon set "ovnkube-node" rollout to finish: 5 out of 6 new pods have been updated...
            Waiting for daemon set "ovnkube-node" rollout to finish: 5 out of 6 new pods have been updated...
            Waiting for daemon set "ovnkube-node" rollout to finish: 5 of 6 updated pods are available...
            daemon set "ovnkube-node" successfully rolled out
            % oc rsh -n openshift-ovn-kubernetes ovnkube-node-6457z 
            Defaulted container "ovn-controller" out of: ovn-controller, ovn-acl-logging, kube-rbac-proxy-node, kube-rbac-proxy-ovn-metrics, northd, nbdb, sbdb, ovnkube-controller, kubecfg-setup (init)
            sh-5.1# ovn-nbctl --no-leader-only get nb_global . ipsec
            false
            The ipsec pods are not removed:
             % oc get pods -n openshift-ovn-kubernetes
            NAME                                     READY   STATUS    RESTARTS        AGE
            ovn-ipsec-containerized-ctwsw            1/1     Running   3 (18s ago)     71m
            ovn-ipsec-containerized-gzxjf            1/1     Running   2               64m
            ovn-ipsec-containerized-jvdsw            1/1     Running   4 (18s ago)     71m
            ovn-ipsec-containerized-t4l6x            1/1     Running   1               63m
            ovn-ipsec-containerized-vw94n            1/1     Running   1               63m
            ovn-ipsec-containerized-z4k7j            1/1     Running   5 (18s ago)     71m
            ovn-ipsec-host-52ks2                     1/1     Running   0               71m
            ovn-ipsec-host-64d8m                     1/1     Running   5 (2m23s ago)   63m
            ovn-ipsec-host-9qnk4                     1/1     Running   6 (2m38s ago)   64m
            ovn-ipsec-host-nwgg5                     1/1     Running   4 (118s ago)    63m
            ovn-ipsec-host-x2f4j                     1/1     Running   0               71m
            ovn-ipsec-host-zpt2r                     1/1     Running   0               71m
            ovnkube-control-plane-6fb56dfbb9-nw57s   2/2     Running   0               71m
            ovnkube-control-plane-6fb56dfbb9-s8bvb   2/2     Running   0               71m
            ovnkube-control-plane-6fb56dfbb9-sszgm   2/2     Running   1 (61m ago)     71m
            ovnkube-node-6457z                       8/8     Running   0               8m47s
            ovnkube-node-gplg7                       8/8     Running   0               15m
            ovnkube-node-jlj9t                       8/8     Running   0               13m
            ovnkube-node-kwqxn                       8/8     Running   0               10m
            ovnkube-node-qm8wp                       8/8     Running   0               17m
            ovnkube-node-sp7xs                       8/8     Running   0               12m
            
            

