Issue Type: Bug
Resolution: Done-Errata
Priority: Major
Affects Version: CNV v4.14.0
Severity: High
Description of the problem:
Pods have recently started failing at random with the following panic:
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x138b6d0]

goroutine 1 [running]:
k8s.io/client-go/discovery.convertAPIResource(...)
        /remote-source/app/vendor/k8s.io/client-go/discovery/aggregated_discovery.go:88
k8s.io/client-go/discovery.convertAPIGroup({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc00055b290, 0x15}, {0x0, 0x0}, {0x0, 0x0}, ...}, ...})
        /remote-source/app/vendor/k8s.io/client-go/discovery/aggregated_discovery.go:69 +0x570
k8s.io/client-go/discovery.SplitGroupsAndResources({{{0xc000060cf0, 0x15}, {0xc0004ee460, 0x1b}}, {{0x0, 0x0}, {0x0, 0x0}, {0x0, 0x0}, ...}, ...})
        /remote-source/app/vendor/k8s.io/client-go/discovery/aggregated_discovery.go:35 +0x118
k8s.io/client-go/discovery.(*DiscoveryClient).downloadAPIs(0x1295ef9?)
        /remote-source/app/vendor/k8s.io/client-go/discovery/discovery_client.go:310 +0x47c
k8s.io/client-go/discovery.(*DiscoveryClient).GroupsAndMaybeResources(0x138f93f?)
        /remote-source/app/vendor/k8s.io/client-go/discovery/discovery_client.go:198 +0x5c
k8s.io/client-go/discovery.ServerGroupsAndResources({0x2546540, 0xc00077f710})
        /remote-source/app/vendor/k8s.io/client-go/discovery/discovery_client.go:392 +0x59
k8s.io/client-go/discovery.(*DiscoveryClient).ServerGroupsAndResources.func1()
        /remote-source/app/vendor/k8s.io/client-go/discovery/discovery_client.go:356 +0x25
k8s.io/client-go/discovery.withRetries(0x2, 0xc00041b0e0)
        /remote-source/app/vendor/k8s.io/client-go/discovery/discovery_client.go:621 +0x71
k8s.io/client-go/discovery.(*DiscoveryClient).ServerGroupsAndResources(0x0?)
        /remote-source/app/vendor/k8s.io/client-go/discovery/discovery_client.go:355 +0x3a
k8s.io/client-go/restmapper.GetAPIGroupResources({0x2546540?, 0xc00077f710?})
        /remote-source/app/vendor/k8s.io/client-go/restmapper/discovery.go:148 +0x42
sigs.k8s.io/controller-runtime/pkg/client/apiutil.NewDynamicRESTMapper.func1()
        /remote-source/app/vendor/sigs.k8s.io/controller-runtime/pkg/client/apiutil/dynamicrestmapper.go:94 +0x25
sigs.k8s.io/controller-runtime/pkg/client/apiutil.(*dynamicRESTMapper).setStaticMapper(...)
        /remote-source/app/vendor/sigs.k8s.io/controller-runtime/pkg/client/apiutil/dynamicrestmapper.go:130
sigs.k8s.io/controller-runtime/pkg/client/apiutil.NewDynamicRESTMapper(0xc00067f5c0?, {0x0, 0x0, 0x1be1c0e?})
        /remote-source/app/vendor/sigs.k8s.io/controller-runtime/pkg/client/apiutil/dynamicrestmapper.go:110 +0x182
sigs.k8s.io/controller-runtime/pkg/cluster.setOptionsDefaults.func1(0x20ee140?)
        /remote-source/app/vendor/sigs.k8s.io/controller-runtime/pkg/cluster/cluster.go:217 +0x25
sigs.k8s.io/controller-runtime/pkg/cluster.New(0xc00076ab40, {0xc00067fa80, 0x1, 0x0?})
        /remote-source/app/vendor/sigs.k8s.io/controller-runtime/pkg/cluster/cluster.go:159 +0x18d
sigs.k8s.io/controller-runtime/pkg/manager.New(_, {0x0, 0x0, 0x0, {{0x2541498, 0xc00047fcc0}, 0x0}, 0x1, {0x2198da2, 0x6}, ...})
        /remote-source/app/vendor/sigs.k8s.io/controller-runtime/pkg/manager/manager.go:351 +0xf9
main.main()
        /remote-source/app/cmd/mtq-operator/mtq-operator.go:73 +0x418
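For context on the top frame (aggregated_discovery.go:88): the aggregated discovery response types carry a ResponseKind pointer per resource, and an unguarded dereference of that pointer is one plausible way to hit the SIGSEGV above when an aggregated API server is unavailable or returns a partially populated entry. The following is a minimal, self-contained sketch of that failure mode, assuming the k8s.io/api/apidiscovery/v2beta1 types; it is an illustration, not the vendored client-go code, and the guarded variant only approximates the upstream fix.

package main

import (
	"fmt"

	apidiscoveryv2beta1 "k8s.io/api/apidiscovery/v2beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// convertResource mirrors the shape of the conversion helper named in the
// trace. ResponseKind is a pointer and may be nil for a resource served by
// an unavailable aggregated API, so dereferencing it without a guard
// panics with a nil pointer dereference. (Illustrative only.)
func convertResource(in apidiscoveryv2beta1.APIResourceDiscovery) metav1.APIResource {
	return metav1.APIResource{
		Name:       in.Resource,
		Namespaced: in.Scope == apidiscoveryv2beta1.ScopeNamespace,
		Group:      in.ResponseKind.Group, // panics when ResponseKind == nil
		Version:    in.ResponseKind.Version,
		Kind:       in.ResponseKind.Kind,
		Verbs:      in.Verbs,
	}
}

// convertResourceSafe is a guarded variant, roughly what patched client-go
// releases do: skip entries whose ResponseKind is nil instead of panicking.
func convertResourceSafe(in apidiscoveryv2beta1.APIResourceDiscovery) (metav1.APIResource, bool) {
	if in.ResponseKind == nil {
		return metav1.APIResource{}, false
	}
	return convertResource(in), true
}

func main() {
	// Hypothetical input resembling a partially populated discovery entry.
	broken := apidiscoveryv2beta1.APIResourceDiscovery{Resource: "examples"} // ResponseKind left nil
	if _, ok := convertResourceSafe(broken); !ok {
		fmt.Println("skipping discovery entry with nil ResponseKind")
	}
}

Because the mtq-operator builds its controller-runtime manager at startup (the manager.New and setStaticMapper frames above), a single such discovery failure is enough to crash the pod rather than surface as a retryable error.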
Version-Release number of selected component (if applicable):
4.14.0
How reproducible: Unclear; the failures appear to be random.
Steps to reproduce: No clear steps. Install the latest downstream (D/S) CNV version. The issue seems to occur after a few hours while the cluster is idle, but this has not been confirmed.
Links to: RHEA-2024:125986 OpenShift Virtualization 4.14.3 Images