OCPBUGS-44669

OVN control plane is down after power cycle of cluster with localnet


    • Type: Bug
    • Resolution: Duplicate
    • Priority: Undefined
    • Affects Version: 4.17

      Description of problem:

      Configure a localnet on the cluster as follows:
      
      apiVersion: nmstate.io/v1
      kind: NodeNetworkConfigurationPolicy
      metadata:
        name: mapping 
      spec:
        desiredState:
          ovn:
            bridge-mappings:
            - localnet: localnet-home
              bridge: br-ex
              state: present
      
      apiVersion: k8s.cni.cncf.io/v1
      kind: NetworkAttachmentDefinition
      metadata:
        name: home
        namespace: homelab
      spec:
        config: |2
          {
                  "cniVersion": "0.3.1", 
                  "name": "localnet-home", 
                  "type": "ovn-k8s-cni-overlay",
                  "topology": "localnet",
                  "mtu": 1500,
                  "netAttachDefName": "homelab/home"
          }
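      
      As a quick sanity check (not part of the reproducer), the applied mapping can be verified on a node, e.g. via "oc debug node/<node>" and "chroot /host"; the localnet entry should show up in the OVS bridge mappings, roughly like:
      
      # ovs-vsctl get Open_vSwitch . external_ids:ovn-bridge-mappings
      "localnet-home:br-ex,physnet:br-ex"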
      
      Power cycle the cluster.
      
      The OVN control plane is down!
      
      openshift-ovn-kubernetes                           ovnkube-control-plane-858464464d-65mcr                       1/2     CrashLoopBackOff           6 (81s ago)      3d17h
      openshift-ovn-kubernetes                           ovnkube-control-plane-858464464d-tb74v                       1/2     CrashLoopBackOff           6 (30s ago)      3d17h
      
      
      $ oc logs -n openshift-ovn-kubernetes ovnkube-control-plane-858464464d-65mcr
      Defaulted container "kube-rbac-proxy" out of: kube-rbac-proxy, ovnkube-cluster-manager
      2024-11-17T23:24:14+00:00 INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy
      W1117 23:24:15.135490       1 deprecated.go:66] 
      ==== Removed Flag Warning ======================
      
      
      logtostderr is removed in the k8s upstream and has no effect any more.
      
      
      ===============================================
      		
      I1117 23:24:15.135869       1 kube-rbac-proxy.go:233] Valid token audiences: 
      I1117 23:24:15.135923       1 kube-rbac-proxy.go:347] Reading certificate files
      I1117 23:24:15.136469       1 kube-rbac-proxy.go:395] Starting TCP socket on :9108
      I1117 23:24:15.136667       1 kube-rbac-proxy.go:402] Listening securely on :9108
      I1117 23:26:10.601170       1 log.go:245] http: proxy error: dial tcp 127.0.0.1:29108: connect: connection refused
      I1117 23:26:13.024815       1 log.go:245] http: proxy error: dial tcp 127.0.0.1:29108: connect: connection refused
      I1117 23:26:40.591442       1 log.go:245] http: proxy error: dial tcp 127.0.0.1:29108: connect: connection refused
      I1117 23:26:43.020509       1 log.go:245] http: proxy error: dial tcp 127.0.0.1:29108: connect: connection refused
      I1117 23:27:10.590915       1 log.go:245] http: proxy error: dial tcp 127.0.0.1:29108: connect: connection refused
      I1117 23:27:13.020779       1 log.go:245] http: proxy error: dial tcp 127.0.0.1:29108: connect: connection refused
      
      # oc logs -n openshift-ovn-kubernetes ovnkube-control-plane-858464464d-65mcr -c ovnkube-cluster-manager
      + [[ -f /env/_master ]]
      + ovn_v4_join_subnet_opt=
      + [[ '' != '' ]]
      + ovn_v6_join_subnet_opt=
      + [[ '' != '' ]]
      + ovn_v4_transit_switch_subnet_opt=
      + [[ '' != '' ]]
      + ovn_v6_transit_switch_subnet_opt=
      + [[ '' != '' ]]
      + dns_name_resolver_enabled_flag=
      + [[ false == \t\r\u\e ]]
      + persistent_ips_enabled_flag=
      + [[ false == \t\r\u\e ]]
      + network_segmentation_enabled_flag=
      + multi_network_enabled_flag=
      + [[ false == \t\r\u\e ]]
      ++ date '+%m%d %H:%M:%S.%N'
      + echo 'I1117 23:27:10.715764799 - ovnkube-control-plane - start ovnkube --init-cluster-manager control-1.home.arpa'
      I1117 23:27:10.715764799 - ovnkube-control-plane - start ovnkube --init-cluster-manager control-1.home.arpa
      + exec /usr/bin/ovnkube --enable-interconnect --init-cluster-manager control-1.home.arpa --config-file=/run/ovnkube-config/ovnkube.conf --loglevel 4 --metrics-bind-address 127.0.0.1:29108 --metrics-enable-pprof --metrics-enable-config-duration
      I1117 23:27:10.735113       1 config.go:2200] Parsed config file /run/ovnkube-config/ovnkube.conf
      I1117 23:27:10.735197       1 config.go:2201] Parsed config: {Default:{MTU:1400 RoutableMTU:0 ConntrackZone:64000 HostMasqConntrackZone:0 OVNMasqConntrackZone:0 HostNodePortConntrackZone:0 ReassemblyConntrackZone:0 EncapType:geneve EncapIP: EncapPort:6081 InactivityProbe:100000 OpenFlowProbe:180 OfctrlWaitBeforeClear:0 MonitorAll:true OVSDBTxnTimeout:1m40s LFlowCacheEnable:true LFlowCacheLimit:0 LFlowCacheLimitKb:1048576 RawClusterSubnets:10.128.0.0/14/23 ClusterSubnets:[] EnableUDPAggregation:true Zone:global} Logging:{File: CNIFile: LibovsdbFile:/var/log/ovnkube/libovsdb.log Level:4 LogFileMaxSize:100 LogFileMaxBackups:5 LogFileMaxAge:0 ACLLoggingRateLimit:20} Monitoring:{RawNetFlowTargets: RawSFlowTargets: RawIPFIXTargets: NetFlowTargets:[] SFlowTargets:[] IPFIXTargets:[]} IPFIX:{Sampling:400 CacheActiveTimeout:60 CacheMaxFlows:0} CNI:{ConfDir:/etc/cni/net.d Plugin:ovn-k8s-cni-overlay} OVNKubernetesFeature:{EnableAdminNetworkPolicy:true EnableEgressIP:true EgressIPReachabiltyTotalTimeout:1 EnableEgressFirewall:true EnableEgressQoS:true EnableEgressService:true EgressIPNodeHealthCheckPort:9107 EnableMultiNetwork:true EnableNetworkSegmentation:false EnableMultiNetworkPolicy:false EnableStatelessNetPol:false EnableInterconnect:false EnableMultiExternalGateway:true EnablePersistentIPs:false EnableDNSNameResolver:false EnableServiceTemplateSupport:false} Kubernetes:{BootstrapKubeconfig: CertDir: CertDuration:10m0s Kubeconfig: CACert: CAData:[] APIServer:https://api-int.home.arpa:6443 Token: TokenFile: CompatServiceCIDR: RawServiceCIDRs:172.30.0.0/16 ServiceCIDRs:[] OVNConfigNamespace:openshift-ovn-kubernetes OVNEmptyLbEvents:false PodIP: RawNoHostSubnetNodes: NoHostSubnetNodes:<nil> HostNetworkNamespace:openshift-host-network PlatformType:None HealthzBindAddress:0.0.0.0:10256 CompatMetricsBindAddress: CompatOVNMetricsBindAddress: CompatMetricsEnablePprof:false DNSServiceNamespace:openshift-dns DNSServiceName:dns-default} Metrics:{BindAddress: OVNMetricsBindAddress: ExportOVSMetrics:false EnablePprof:false NodeServerPrivKey: NodeServerCert: EnableConfigDuration:false EnableScaleMetrics:false} OvnNorth:{Address: PrivKey: Cert: CACert: CertCommonName: Scheme: ElectionTimer:0 northbound:false exec:<nil>} OvnSouth:{Address: PrivKey: Cert: CACert: CertCommonName: Scheme: ElectionTimer:0 northbound:false exec:<nil>} Gateway:{Mode:shared Interface: EgressGWInterface: NextHop: VLANID:0 NodeportEnable:true DisableSNATMultipleGWs:false V4JoinSubnet:100.64.0.0/16 V6JoinSubnet:fd98::/64 V4MasqueradeSubnet:169.254.169.0/29 V6MasqueradeSubnet:fd69::/125 MasqueradeIPs:{V4OVNMasqueradeIP:169.254.169.1 V6OVNMasqueradeIP:fd69::1 V4HostMasqueradeIP:169.254.169.2 V6HostMasqueradeIP:fd69::2 V4HostETPLocalMasqueradeIP:169.254.169.3 V6HostETPLocalMasqueradeIP:fd69::3 V4DummyNextHopMasqueradeIP:169.254.169.4 V6DummyNextHopMasqueradeIP:fd69::4 V4OVNServiceHairpinMasqueradeIP:169.254.169.5 V6OVNServiceHairpinMasqueradeIP:fd69::5} DisablePacketMTUCheck:false RouterSubnet: SingleNode:false DisableForwarding:false AllowNoUplink:false} MasterHA:{ElectionLeaseDuration:137 ElectionRenewDeadline:107 ElectionRetryPeriod:26} ClusterMgrHA:{ElectionLeaseDuration:137 ElectionRenewDeadline:107 ElectionRetryPeriod:26} HybridOverlay:{Enabled:false RawClusterSubnets: ClusterSubnets:[] VXLANPort:4789} OvnKubeNode:{Mode:full DPResourceDeviceIdsMap:map[] MgmtPortNetdev: MgmtPortDPResourceName:} ClusterManager:{V4TransitSwitchSubnet:100.88.0.0/16 V6TransitSwitchSubnet:fd97::/64}}
      I1117 23:27:10.736268       1 metrics.go:532] Starting metrics server at address "127.0.0.1:29108"
      I1117 23:27:10.736272       1 leaderelection.go:250] attempting to acquire leader lease openshift-ovn-kubernetes/ovn-kubernetes-master...
      I1117 23:27:10.756736       1 leaderelection.go:260] successfully acquired lease openshift-ovn-kubernetes/ovn-kubernetes-master
      I1117 23:27:10.757050       1 ovnkube.go:387] Won leader election; in active mode
      I1117 23:27:10.757443       1 secondary_network_cluster_manager.go:40] Creating secondary network cluster manager
      I1117 23:27:10.757518       1 egressservice_cluster.go:97] Setting up event handlers for Egress Services
      I1117 23:27:10.757547       1 event.go:377] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-ovn-kubernetes", Name:"ovn-kubernetes-master", UID:"c03b5275-d7ee-4af9-994a-a13e78df149e", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"2637494", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ovnkube-control-plane-858464464d-65mcr became leader
      I1117 23:27:10.757837       1 clustermanager.go:146] Starting the cluster manager
      I1117 23:27:10.757844       1 factory.go:426] Starting watch factory
      I1117 23:27:10.757913       1 reflector.go:296] Starting reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160
      I1117 23:27:10.758267       1 reflector.go:332] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:160
      I1117 23:27:10.758012       1 reflector.go:296] Starting reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160
      I1117 23:27:10.758484       1 reflector.go:332] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:160
      I1117 23:27:10.758029       1 reflector.go:296] Starting reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160
      I1117 23:27:10.758582       1 reflector.go:332] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:160
      I1117 23:27:10.758039       1 reflector.go:296] Starting reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160
      I1117 23:27:10.758689       1 reflector.go:332] Listing and watching *v1.EndpointSlice from k8s.io/client-go/informers/factory.go:160
      I1117 23:27:10.767175       1 reflector.go:359] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
      I1117 23:27:10.768690       1 reflector.go:359] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
      I1117 23:27:10.769848       1 reflector.go:359] Caches populated for *v1.EndpointSlice from k8s.io/client-go/informers/factory.go:160
      I1117 23:27:10.819081       1 reflector.go:359] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:160
      I1117 23:27:10.858030       1 reflector.go:296] Starting reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140
      I1117 23:27:10.858091       1 reflector.go:332] Listing and watching *v1.EgressIP from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140
      I1117 23:27:10.861925       1 reflector.go:359] Caches populated for *v1.EgressIP from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140
      I1117 23:27:10.958897       1 reflector.go:296] Starting reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140
      I1117 23:27:10.958954       1 reflector.go:332] Listing and watching *v1.EgressFirewall from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140
      I1117 23:27:10.960875       1 reflector.go:359] Caches populated for *v1.EgressFirewall from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140
      I1117 23:27:11.059077       1 reflector.go:296] Starting reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140
      I1117 23:27:11.059092       1 reflector.go:332] Listing and watching *v1.EgressQoS from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140
      I1117 23:27:11.060896       1 reflector.go:359] Caches populated for *v1.EgressQoS from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140
      I1117 23:27:11.159427       1 reflector.go:296] Starting reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140
      I1117 23:27:11.159445       1 reflector.go:332] Listing and watching *v1.EgressService from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140
      I1117 23:27:11.161292       1 reflector.go:359] Caches populated for *v1.EgressService from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140
      I1117 23:27:11.259624       1 reflector.go:296] Starting reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140
      I1117 23:27:11.259640       1 reflector.go:332] Listing and watching *v1.AdminPolicyBasedExternalRoute from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140
      I1117 23:27:11.261134       1 reflector.go:359] Caches populated for *v1.AdminPolicyBasedExternalRoute from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140
      I1117 23:27:11.360651       1 reflector.go:296] Starting reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117
      I1117 23:27:11.360672       1 reflector.go:332] Listing and watching *v1.NetworkAttachmentDefinition from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117
      I1117 23:27:11.362722       1 reflector.go:359] Caches populated for *v1.NetworkAttachmentDefinition from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117
      I1117 23:27:11.461754       1 node_allocator.go:441] Expected 1 subnets on node control-2.home.arpa, found 1: [10.129.0.0/23]
      I1117 23:27:11.461851       1 node_allocator.go:455] Valid subnet 10.129.0.0/23 allocated on node control-2.home.arpa
      I1117 23:27:11.461865       1 node_allocator.go:477] Allowed existing subnets [10.129.0.0/23] on node control-2.home.arpa
      I1117 23:27:11.461958       1 node_allocator.go:441] Expected 1 subnets on node control-3.home.arpa, found 1: [10.130.0.0/23]
      I1117 23:27:11.461982       1 node_allocator.go:455] Valid subnet 10.130.0.0/23 allocated on node control-3.home.arpa
      I1117 23:27:11.461994       1 node_allocator.go:477] Allowed existing subnets [10.130.0.0/23] on node control-3.home.arpa
      I1117 23:27:11.462093       1 node_allocator.go:441] Expected 1 subnets on node green.home.arpa, found 1: [10.129.2.0/23]
      I1117 23:27:11.462104       1 node_allocator.go:455] Valid subnet 10.129.2.0/23 allocated on node green.home.arpa
      I1117 23:27:11.462112       1 node_allocator.go:477] Allowed existing subnets [10.129.2.0/23] on node green.home.arpa
      I1117 23:27:11.462182       1 node_allocator.go:441] Expected 1 subnets on node violet.home.arpa, found 1: [10.130.2.0/23]
      I1117 23:27:11.462192       1 node_allocator.go:455] Valid subnet 10.130.2.0/23 allocated on node violet.home.arpa
      I1117 23:27:11.462198       1 node_allocator.go:477] Allowed existing subnets [10.130.2.0/23] on node violet.home.arpa
      I1117 23:27:11.462281       1 node_allocator.go:441] Expected 1 subnets on node white.home.arpa, found 1: [10.131.2.0/23]
      I1117 23:27:11.462304       1 node_allocator.go:455] Valid subnet 10.131.2.0/23 allocated on node white.home.arpa
      I1117 23:27:11.462315       1 node_allocator.go:477] Allowed existing subnets [10.131.2.0/23] on node white.home.arpa
      I1117 23:27:11.462412       1 node_allocator.go:441] Expected 1 subnets on node yellow.home.arpa, found 1: [10.128.2.0/23]
      I1117 23:27:11.462421       1 node_allocator.go:455] Valid subnet 10.128.2.0/23 allocated on node yellow.home.arpa
      I1117 23:27:11.462427       1 node_allocator.go:477] Allowed existing subnets [10.128.2.0/23] on node yellow.home.arpa
      I1117 23:27:11.462520       1 node_allocator.go:441] Expected 1 subnets on node black.home.arpa, found 1: [10.131.0.0/23]
      I1117 23:27:11.462529       1 node_allocator.go:455] Valid subnet 10.131.0.0/23 allocated on node black.home.arpa
      I1117 23:27:11.462557       1 node_allocator.go:477] Allowed existing subnets [10.131.0.0/23] on node black.home.arpa
      I1117 23:27:11.462655       1 node_allocator.go:441] Expected 1 subnets on node control-1.home.arpa, found 1: [10.128.0.0/23]
      I1117 23:27:11.462664       1 node_allocator.go:455] Valid subnet 10.128.0.0/23 allocated on node control-1.home.arpa
      I1117 23:27:11.462670       1 node_allocator.go:477] Allowed existing subnets [10.128.0.0/23] on node control-1.home.arpa
      I1117 23:27:11.462694       1 zone_cluster_controller.go:204] Node control-3.home.arpa has the id 4 set
      I1117 23:27:11.462703       1 zone_cluster_controller.go:204] Node green.home.arpa has the id 7 set
      I1117 23:27:11.462706       1 zone_cluster_controller.go:204] Node violet.home.arpa has the id 8 set
      I1117 23:27:11.462709       1 zone_cluster_controller.go:204] Node white.home.arpa has the id 9 set
      I1117 23:27:11.462711       1 zone_cluster_controller.go:204] Node yellow.home.arpa has the id 6 set
      I1117 23:27:11.462714       1 zone_cluster_controller.go:204] Node black.home.arpa has the id 5 set
      I1117 23:27:11.462716       1 zone_cluster_controller.go:204] Node control-1.home.arpa has the id 2 set
      I1117 23:27:11.462720       1 zone_cluster_controller.go:204] Node control-2.home.arpa has the id 3 set
      I1117 23:27:11.462736       1 kube.go:128] Setting annotations map[k8s.ovn.org/node-id:4 k8s.ovn.org/node-transit-switch-port-ifaddr:{"ipv4":"100.88.0.4/16"}] on node control-3.home.arpa
      I1117 23:27:11.471773       1 kube.go:128] Setting annotations map[k8s.ovn.org/node-id:7 k8s.ovn.org/node-transit-switch-port-ifaddr:{"ipv4":"100.88.0.7/16"}] on node green.home.arpa
      I1117 23:27:11.490323       1 kube.go:128] Setting annotations map[k8s.ovn.org/node-id:8 k8s.ovn.org/node-transit-switch-port-ifaddr:{"ipv4":"100.88.0.8/16"}] on node violet.home.arpa
      I1117 23:27:11.508863       1 kube.go:128] Setting annotations map[k8s.ovn.org/node-id:9 k8s.ovn.org/node-transit-switch-port-ifaddr:{"ipv4":"100.88.0.9/16"}] on node white.home.arpa
      I1117 23:27:11.524331       1 kube.go:128] Setting annotations map[k8s.ovn.org/node-id:6 k8s.ovn.org/node-transit-switch-port-ifaddr:{"ipv4":"100.88.0.6/16"}] on node yellow.home.arpa
      I1117 23:27:11.541201       1 kube.go:128] Setting annotations map[k8s.ovn.org/node-id:5 k8s.ovn.org/node-transit-switch-port-ifaddr:{"ipv4":"100.88.0.5/16"}] on node black.home.arpa
      I1117 23:27:11.552835       1 kube.go:128] Setting annotations map[k8s.ovn.org/node-id:2 k8s.ovn.org/node-transit-switch-port-ifaddr:{"ipv4":"100.88.0.2/16"}] on node control-1.home.arpa
      I1117 23:27:11.561032       1 kube.go:128] Setting annotations map[k8s.ovn.org/node-id:3 k8s.ovn.org/node-transit-switch-port-ifaddr:{"ipv4":"100.88.0.3/16"}] on node control-2.home.arpa
      I1117 23:27:11.567680       1 secondary_network_cluster_manager.go:67] Starting secondary network cluster manager
      I1117 23:27:11.567765       1 controller.go:103] Adding controller [cluster-manager NAD controller] event handlers
      I1117 23:27:11.567794       1 shared_informer.go:313] Waiting for caches to sync for [cluster-manager NAD controller]
      I1117 23:27:11.567803       1 shared_informer.go:320] Caches are synced for [cluster-manager NAD controller]
      I1117 23:27:11.567941       1 controller.go:127] Starting controller [cluster-manager NAD controller] with 1 workers
      I1117 23:27:11.567999       1 network_manager.go:180] [cluster-manager network manager]: syncing all networks
      I1117 23:27:11.568012       1 network_manager.go:111] [cluster-manager network manager]: finished syncing network localnet-home, took 3.903µs
      I1117 23:27:11.568030       1 ovnkube.go:580] Stopping ovnkube...
      I1117 23:27:11.568069       1 network_attach_def_controller.go:160] [cluster-manager NAD controller]: finished syncing NAD homelab/home, took 97.091µs
      I1117 23:27:11.568087       1 reflector.go:302] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117
      I1117 23:27:11.568118       1 reflector.go:302] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140
      I1117 23:27:11.568148       1 reflector.go:302] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160
      I1117 23:27:11.568183       1 reflector.go:302] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160
      I1117 23:27:11.568201       1 factory.go:542] Stopping watch factory
      I1117 23:27:11.568201       1 reflector.go:302] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160
      I1117 23:27:11.568224       1 reflector.go:302] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140
      I1117 23:27:11.568255       1 reflector.go:302] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140
      I1117 23:27:11.568292       1 reflector.go:302] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140
      I1117 23:27:11.568302       1 reflector.go:302] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160
      I1117 23:27:11.568358       1 reflector.go:302] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140
      I1117 23:27:11.568378       1 ovnkube.go:584] Stopped ovnkube
      E1117 23:27:11.568387       1 ovnkube.go:389] failed to run ovnkube: failed to start cluster manager: initial sync failed: failed to sync network localnet-home: [cluster-manager network manager]: failed to create network localnet-home: no cluster network controller to manage topology
      I1117 23:27:11.568409       1 metrics.go:552] Stopping metrics server at address "127.0.0.1:29108"
      I1117 23:27:11.571482       1 ovnkube.go:396] No longer leader; exiting
      
      
      From the log above, the cluster manager wins the leader election, starts its initial sync of the localnet-home network, and immediately exits with "failed to create network localnet-home: no cluster network controller to manage topology". Once I remove the NAD/NNCP, the control plane comes back up (until the next cluster reboot).
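      
      The workaround amounts to deleting the two objects defined above (a minimal sketch, using the resource names from this report):
      
      $ oc delete net-attach-def home -n homelab
      $ oc delete nncp mapping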
      
      So far I am hitting this only in my own cluster, but I am raising it as Urgent because it may affect customers and the side effects, an OVN control plane that stays down after a power cycle, can be severe.
      
      

      Version-Release number of selected component (if applicable):

      OCP 4.17.3
      CNV 4.17.0

      How reproducible:

      Always
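      
      Condensed (assuming the two manifests above are saved, separated by ---, in a file named localnet.yaml, a name chosen here for illustration):
      
      $ oc apply -f localnet.yaml
      # power cycle all cluster nodes, then after boot:
      $ oc get pods -n openshift-ovn-kubernetes | grep ovnkube-control-plane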

        People: Petr Horacek, Germano Veit Michel, Nir Rozen, Shikha Jhala