
      This is a clone of issue OCPBUGS-19918. The following is the description of the original issue:

      Description of problem:

      This issue was found while analyzing bug https://issues.redhat.com/browse/OCPBUGS-19817.

      Version-Release number of selected component (if applicable):

      4.15.0-0.ci-2023-09-25-165744
      
      

      How reproducible:

      Every time.

      Steps to Reproduce:

      The cluster is an IPsec cluster with the NS (north-south) extension and the ipsec service enabled.
      1.  Enable E-W (east-west) IPsec and wait for the cluster to settle.
      2.  Disable IPsec and wait for the cluster to settle.
      
      You'll observe that the IPsec pods are deleted (example commands are sketched below).
      
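      For reference, here is a sketch of the oc commands for these two steps. The disable patch and the rollout check are taken from the verification comment below; the enable patch is an assumption based on the standard ipsecConfig field, not something recorded in this bug:
      
          # Step 1 (assumed form): enable east-west IPsec by adding an empty ipsecConfig
          oc patch networks.operator.openshift.io/cluster --type=merge \
            -p='{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"ipsecConfig":{}}}}}'
      
          # Step 2: disable IPsec again (same patch as in the verification below)
          oc patch networks.operator.openshift.io/cluster --type=json \
            -p='[{"op":"remove", "path":"/spec/defaultNetwork/ovnKubernetesConfig/ipsecConfig"}]'
      
          # Between steps, wait for the rollout to settle
          oc -n openshift-ovn-kubernetes rollout status daemonset/ovnkube-node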
      

      Actual results:

      No IPsec pods remain; the ovn-ipsec daemonset pods are deleted.
      

      Expected results:

      The IPsec pods should stay. See the render logic at https://github.com/openshift/cluster-network-operator/blob/master/pkg/network/ovn_kubernetes.go#L314:
      	// If IPsec is enabled for the first time, we start the daemonset. If it is
      	// disabled after that, we do not stop the daemonset but only stop IPsec.
      	//
      	// TODO: We need to do this as, by default, we maintain IPsec state on the
      	// node in order to maintain encrypted connectivity in the case of upgrades.
      	// If we only unrender the IPsec daemonset, we will be unable to cleanup
      	// the IPsec state on the node and the traffic will continue to be
      	// encrypted.
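
      The intent of this comment can be summed up in a minimal Go sketch. The names here (ipsecState, shouldRenderIPsecDaemonSet) are hypothetical illustrations, not the actual cluster-network-operator code:
      
      	package network
      
      	// ipsecState is a hypothetical summary of the two inputs the
      	// comment above cares about.
      	type ipsecState struct {
      		enabledInConfig   bool // ipsecConfig is present in the operator config
      		daemonSetDeployed bool // the ovn-ipsec daemonset already exists
      	}
      
      	// shouldRenderIPsecDaemonSet keeps rendering the daemonset once it
      	// has been deployed, even after IPsec is disabled in the config, so
      	// the daemonset can later clean up IPsec state on the nodes rather
      	// than leave traffic encrypted.
      	func shouldRenderIPsecDaemonSet(s ipsecState) bool {
      		return s.enabledInConfig || s.daemonSetDeployed
      	}
      
      The bug is that this rule was not honored: disabling IPsec unrendered the daemonset and the pods were deleted.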
      
      

      Additional info:

      
      

            [OCPBUGS-19955] When disabling IPsec, daemonset pods are deleted

            Errata Tool added a comment -

            Since the problem described in this issue should be resolved in a recent advisory, it has been closed.

            For information on the advisory (Important: OpenShift Container Platform 4.14.0 bug fix and security update), and where to find the updated files, follow the link below.

            If the solution does not work for you, open a new bug report.
            https://access.redhat.com/errata/RHSA-2023:5006


            Huiran Wang added a comment -

            Verified in 4.14.0-0.nightly-2023-10-05-195247

            The cluster is an IPsec cluster with the NS extension and the ipsec service enabled.
            1.  Enable E-W IPsec and wait for the cluster to settle.
            2.  Disable IPsec and wait for the cluster to settle; the IPsec pods were not removed.
            
             oc patch networks.operator.openshift.io/cluster --type=json \
              -p='[{"op":"remove", "path":"/spec/defaultNetwork/ovnKubernetesConfig/ipsecConfig"}]'
            network.operator.openshift.io/cluster patched
            % oc -n openshift-ovn-kubernetes rollout status daemonset/ovnkube-node
            Waiting for daemon set "ovnkube-node" rollout to finish: 1 out of 6 new pods have been updated...
            Waiting for daemon set "ovnkube-node" rollout to finish: 1 out of 6 new pods have been updated...
            Waiting for daemon set "ovnkube-node" rollout to finish: 2 out of 6 new pods have been updated...
            Waiting for daemon set "ovnkube-node" rollout to finish: 2 out of 6 new pods have been updated...
            Waiting for daemon set "ovnkube-node" rollout to finish: 3 out of 6 new pods have been updated...
            Waiting for daemon set "ovnkube-node" rollout to finish: 3 out of 6 new pods have been updated...
            Waiting for daemon set "ovnkube-node" rollout to finish: 4 out of 6 new pods have been updated...
            Waiting for daemon set "ovnkube-node" rollout to finish: 4 out of 6 new pods have been updated...
            Waiting for daemon set "ovnkube-node" rollout to finish: 5 out of 6 new pods have been updated...
            Waiting for daemon set "ovnkube-node" rollout to finish: 5 out of 6 new pods have been updated...
            Waiting for daemon set "ovnkube-node" rollout to finish: 5 of 6 updated pods are available...
            daemon set "ovnkube-node" successfully rolled out
             % oc rsh -n openshift-ovn-kubernetes ovnkube-node-9vg76 
            Defaulted container "ovn-controller" out of: ovn-controller, ovn-acl-logging, kube-rbac-proxy-node, kube-rbac-proxy-ovn-metrics, northd, nbdb, sbdb, ovnkube-controller, kubecfg-setup (init)
            sh-5.1# ovn-nbctl --no-leader-only get nb_global . ipsec
            false
             % oc get pods -n openshift-ovn-kubernetes                      
            NAME                                     READY   STATUS    RESTARTS        AGE
            ovn-ipsec-containerized-62rt9            1/1     Running   2               83m
            ovn-ipsec-containerized-7hckw            1/1     Running   2 (87s ago)     93m
            ovn-ipsec-containerized-8kml5            1/1     Running   4 (27s ago)     93m
            ovn-ipsec-containerized-cb86h            1/1     Running   4 (86s ago)     93m
            ovn-ipsec-containerized-ct997            1/1     Running   2               83m
            ovn-ipsec-containerized-wdmkq            1/1     Running   2               83m
            ovn-ipsec-host-97w9r                     1/1     Running   5 (77s ago)     83m
            ovn-ipsec-host-9qnmj                     1/1     Running   0               93m
            ovn-ipsec-host-gk286                     1/1     Running   2               83m
            ovn-ipsec-host-k8k4q                     1/1     Running   0               93m
            ovn-ipsec-host-njvgh                     1/1     Running   3 (2m23s ago)   83m
            ovn-ipsec-host-pxp8r                     1/1     Running   0               93m
            ovnkube-control-plane-85f96b444b-cfkkc   2/2     Running   0               93m
            ovnkube-control-plane-85f96b444b-hq95g   2/2     Running   0               93m
            ovnkube-control-plane-85f96b444b-vkk2j   2/2     Running   1 (84m ago)     93m
            ovnkube-node-9vg76                       8/8     Running   0               5m16s
            ovnkube-node-drmwx                       8/8     Running   0               8m40s
            ovnkube-node-m5jcl                       8/8     Running   0               10m
            ovnkube-node-p45r9                       8/8     Running   0               13m
            ovnkube-node-xxbmw                       8/8     Running   0               6m58s
            ovnkube-node-znl5d                       8/8     Running   0               12m
            


            Hi Yuval Kashtan,

            Bugs cannot be moved to Verified without first providing a Release Note Type ("Bug Fix" or "No Doc Update"), and for type "Bug Fix" the Release Note Text must also be provided. Please populate the necessary fields before moving the bug to Verified.

