Red Hat Advanced Cluster Management / ACM-6579

[ACM 2.8.z/Submariner 0.15] Submariner operator fails on OCP 4.14


    • Sprints: Submariner Sprint 2023-9, Submariner Sprint 2023-10, Submariner Sprint 2023-11, Submariner Sprint 2023-12, Submariner Sprint 2023-13
    • Priority: Important

      Description of problem:

      On an OCP 4.14 cluster, the Submariner operator fails and goes into a CrashLoopBackOff state.

      submariner-operator-cf6dc84b9-dhxvf               0/1     CrashLoopBackOff   22 (5m1s ago)   93m    10.129.2.64   compute-1         <none>           <none>

      Version-Release number of selected component (if applicable):

      OCP: 4.14.0-0.nightly-2023-07-26-001154

      ODF: 4.14.0-84

      ACM: 2.8.0

      Submariner: 0.15.1+0.1688118533.p

      How reproducible:

      1/1

      Steps to Reproduce:

      1. Deploy 3 OCP 4.14 clusters
      2. Install RHACM on the hub cluster
      3. Using the RHACM console, import the other two clusters and connect them using the Submariner add-ons with globalnet enabled
      4. Deploy ODF 4.14 on both managed clusters
      5. Observe the submariner operator pod status

      Actual results:

      The Submariner operator is in a CrashLoopBackOff state.

      The RHACM console shows the Multicluster network status as Degraded.

      Expected results:

      The Submariner operator should not fail on OCP 4.14.

      Additional info:

      Submariner operator pod logs from one of the managed clusters:

      2023-07-26T11:34:30.162Z INF ..e-source/app/main.go:94 cmd                  Starting submariner-operator
      2023-07-26T11:34:30.162Z INF ..e-source/app/main.go:67 cmd                  Go Version: go1.20.4
      2023-07-26T11:34:30.162Z INF ..e-source/app/main.go:68 cmd                  Go OS/Arch: linux/amd64
      2023-07-26T11:34:30.162Z INF ..e-source/app/main.go:69 cmd                  Submariner operator version: devel
      2023-07-26T11:34:30.163Z INF ..lib/leader/leader.go:96 leader               Trying to become the leader.
      panic: runtime error: invalid memory address or nil pointer dereference
      [signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x132d7f0]
       
      goroutine 1 [running]:
      k8s.io/client-go/discovery.convertAPIResource(...)
              /remote-source/app/vendor/k8s.io/client-go/discovery/aggregated_discovery.go:114
      k8s.io/client-go/discovery.convertAPIGroup({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0006123d8, 0x15}, {0x0, 0x0}, {0x0, 0x0}, ...}, ...})
              /remote-source/app/vendor/k8s.io/client-go/discovery/aggregated_discovery.go:95 +0x6f0
      k8s.io/client-go/discovery.SplitGroupsAndResources({{{0xc00013c3c0, 0x15}, {0xc0000563a0, 0x1b}}, {{0x0, 0x0}, {0x0, 0x0}, {0x0, 0x0}, ...}, ...})
              /remote-source/app/vendor/k8s.io/client-go/discovery/aggregated_discovery.go:49 +0x125
      k8s.io/client-go/discovery.(*DiscoveryClient).downloadAPIs(0xc0000eace0?)
              /remote-source/app/vendor/k8s.io/client-go/discovery/discovery_client.go:328 +0x3de
      k8s.io/client-go/discovery.(*DiscoveryClient).GroupsAndMaybeResources(0xc0000eb110?)
              /remote-source/app/vendor/k8s.io/client-go/discovery/discovery_client.go:203 +0x65
      k8s.io/client-go/discovery.ServerGroupsAndResources({0x1cb42e0, 0xc0006457a0})
              /remote-source/app/vendor/k8s.io/client-go/discovery/discovery_client.go:413 +0x59
      k8s.io/client-go/discovery.(*DiscoveryClient).ServerGroupsAndResources.func1()
              /remote-source/app/vendor/k8s.io/client-go/discovery/discovery_client.go:376 +0x25
      k8s.io/client-go/discovery.withRetries(0x2, 0xc0000e3128)
              /remote-source/app/vendor/k8s.io/client-go/discovery/discovery_client.go:651 +0x71
      k8s.io/client-go/discovery.(*DiscoveryClient).ServerGroupsAndResources(0x0?)
              /remote-source/app/vendor/k8s.io/client-go/discovery/discovery_client.go:375 +0x3a
      k8s.io/client-go/restmapper.GetAPIGroupResources({0x1cb42e0?, 0xc0006457a0?})
              /remote-source/app/vendor/k8s.io/client-go/restmapper/discovery.go:148 +0x42
      sigs.k8s.io/controller-runtime/pkg/client/apiutil.NewDynamicRESTMapper.func1()
              /remote-source/app/vendor/sigs.k8s.io/controller-runtime/pkg/client/apiutil/dynamicrestmapper.go:94 +0x25
      sigs.k8s.io/controller-runtime/pkg/client/apiutil.(*dynamicRESTMapper).setStaticMapper(...)
              /remote-source/app/vendor/sigs.k8s.io/controller-runtime/pkg/client/apiutil/dynamicrestmapper.go:130
      sigs.k8s.io/controller-runtime/pkg/client/apiutil.NewDynamicRESTMapper(0xc0002e66c0?, {0x0, 0x0, 0x1a20cb4?})
              /remote-source/app/vendor/sigs.k8s.io/controller-runtime/pkg/client/apiutil/dynamicrestmapper.go:110 +0x182
      sigs.k8s.io/controller-runtime/pkg/client.newClient(0xc0002e66c0?, {0x0?, {0x0?, 0x0?}, {0x1?, 0xfb?}})
              /remote-source/app/vendor/sigs.k8s.io/controller-runtime/pkg/client/client.go:109 +0x1d1
      sigs.k8s.io/controller-runtime/pkg/client.New(...)
              /remote-source/app/vendor/sigs.k8s.io/controller-runtime/pkg/client/client.go:77
      github.com/operator-framework/operator-lib/leader.(*Config).setDefaults(0xc0005038f0)
              /remote-source/app/vendor/github.com/operator-framework/operator-lib/leader/leader.go:65 +0x45
      github.com/operator-framework/operator-lib/leader.Become({0x1cadc00, 0xc000046048}, {0x1a270e4, 0x18}, {0x0, 0x0, 0x0?})
              /remote-source/app/vendor/github.com/operator-framework/operator-lib/leader/leader.go:106 +0x108
      main.main()
              /remote-source/app/main.go:113 +0x295
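
      The trace shows the panic originating in client-go's aggregated discovery conversion (convertAPIResource via convertAPIGroup), triggered while the operator's leader-election code performed discovery against the 4.14 API server. A minimal sketch of this failure mode follows, using hypothetical stand-in types rather than client-go's actual definitions: a conversion that blindly dereferences an optional pointer field panics with the same runtime error when the field is nil, while a defensive version reports the entry instead.

      ```go
      package main

      import "fmt"

      // Stand-in types, simplified for illustration; not client-go's real structs.
      type GroupVersionKind struct{ Group, Version, Kind string }

      type APIResourceDiscovery struct {
      	Resource     string
      	ResponseKind *GroupVersionKind // optional: may be nil in a discovery response
      }

      type APIResource struct {
      	Name string
      	Kind string
      }

      // convertUnchecked mirrors the buggy pattern: a blind pointer dereference.
      func convertUnchecked(in APIResourceDiscovery) APIResource {
      	// Panics with "invalid memory address or nil pointer dereference"
      	// when in.ResponseKind is nil.
      	return APIResource{Name: in.Resource, Kind: in.ResponseKind.Kind}
      }

      // convertChecked is the defensive variant: nil entries become an error
      // instead of a process-killing panic.
      func convertChecked(in APIResourceDiscovery) (APIResource, error) {
      	if in.ResponseKind == nil {
      		return APIResource{}, fmt.Errorf("resource %q has no response kind", in.Resource)
      	}
      	return APIResource{Name: in.Resource, Kind: in.ResponseKind.Kind}, nil
      }

      func main() {
      	bad := APIResourceDiscovery{Resource: "clusters"} // ResponseKind left nil

      	if _, err := convertChecked(bad); err != nil {
      		fmt.Println("skipped:", err)
      	}

      	// Show that the unchecked path panics; recover so the demo exits cleanly.
      	defer func() {
      		if r := recover(); r != nil {
      			fmt.Println("unchecked conversion panicked:", r)
      		}
      	}()
      	_ = convertUnchecked(bad)
      }
      ```

      Since the dereference happens inside vendored client-go code, the practical fix for the operator is a dependency update rather than a code change in Submariner itself.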

              Stephen Kitt (skitt@redhat.com)
              Sidhant Agrawal (sagrawal@redhat.com)
              Maxim Babushkin