Bug
Resolution: Done
Critical
OADP 1.1.1
None
False
False
oadp-velero-container-1.1.1-18
ToDo
0
Very Likely
0
None
Unset
Unknown
No
Description of problem:
After creating multiple default VolumeSnapshotClass or StorageClass resources, the Velero pod crashes with a nil pointer dereference when running a Data Mover backup.
Version-Release number of selected component (if applicable):
OADP 1.1.1
Volsync 0.5.1
How reproducible:
Always
Steps to Reproduce:
1. Create multiple default StorageClass or VolumeSnapshotClass resources (see the example commands after these steps).
2. Create a backup with the Data Mover enabled.
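For example, one way to end up with more than one default class is to annotate a second class as default; the class names below are placeholders, not taken from the original report:
$ oc annotate storageclass <second-storage-class> storageclass.kubernetes.io/is-default-class="true"
$ oc annotate volumesnapshotclass <second-snapshot-class> snapshot.storage.kubernetes.io/is-default-class="true"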
Actual results:
The Velero pod crashes with a nil pointer dereference, and the in-progress backup is marked Failed after the pod restarts.
Expected results:
The Velero pod should not crash. If more than one default StorageClass or VolumeSnapshotClass exists, the backup should fail with a clear validation error instead.
Additional info:
2022/10/17 12:39:00 error failed to wait for VolumeSnapshotBackups to be completed: volumesnapshotbackup vsb-4r2fh has failed status
time="2022-10-17T12:39:00Z" level=error msg="volumesnapshotbackup vsb-4r2fh has failed status" backup=openshift-adp/test-datamover logSource="/remote-source/velero/app/pkg/controller/backup_controller.go:669"
E1017 12:39:01.010379 1 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 1764 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic({0x1cf7e00?, 0x32c9640})
    /remote-source/velero/deps/gomod/pkg/mod/k8s.io/apimachinery@v0.23.0/pkg/util/runtime/runtime.go:74 +0x86
k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc00102c0c0?})
    /remote-source/velero/deps/gomod/pkg/mod/k8s.io/apimachinery@v0.23.0/pkg/util/runtime/runtime.go:48 +0x75
panic({0x1cf7e00, 0x32c9640})
    /usr/lib/golang/src/runtime/panic.go:884 +0x212
github.com/vmware-tanzu/velero/pkg/datamover.DeleteTempVSClass({0xc000ea29a0?, 0x2?}, {0x2362d00, 0xc0007ca7b0}, 0xc000640960)
    /remote-source/velero/app/pkg/datamover/datamover.go:139 +0xf5
github.com/vmware-tanzu/velero/pkg/controller.(*backupController).runBackup(0xc0001e3b80, 0xc0009b80d0)
    /remote-source/velero/app/pkg/controller/backup_controller.go:673 +0xfdb
github.com/vmware-tanzu/velero/pkg/controller.(*backupController).processBackup(0xc0001e3b80, {0xc0011ba440, 0x1c})
    /remote-source/velero/app/pkg/controller/backup_controller.go:295 +0x75c
github.com/vmware-tanzu/velero/pkg/controller.(*genericController).processNextWorkItem(0xc000788720)
    /remote-source/velero/app/pkg/controller/generic_controller.go:132 +0xeb
github.com/vmware-tanzu/velero/pkg/controller.(*genericController).runWorker(0xc000834ea8?)
    /remote-source/velero/app/pkg/controller/generic_controller.go:119 +0x25
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000834f82?)
    /remote-source/velero/deps/gomod/pkg/mod/k8s.io/apimachinery@v0.23.0/pkg/util/wait/wait.go:155 +0x3e
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000834fd0?, {0x235ce00, 0xc00087eb40}, 0x1, 0xc000e56300)
    /remote-source/velero/deps/gomod/pkg/mod/k8s.io/apimachinery@v0.23.0/pkg/util/wait/wait.go:156 +0xb6
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000bc4780?, 0x3b9aca00, 0x0, 0x5?, 0x0?)
    /remote-source/velero/deps/gomod/pkg/mod/k8s.io/apimachinery@v0.23.0/pkg/util/wait/wait.go:133 +0x89
k8s.io/apimachinery/pkg/util/wait.Until(...)
    /remote-source/velero/deps/gomod/pkg/mod/k8s.io/apimachinery@v0.23.0/pkg/util/wait/wait.go:90
github.com/vmware-tanzu/velero/pkg/controller.(*genericController).Run.func2()
    /remote-source/velero/app/pkg/controller/generic_controller.go:92 +0x6e
created by github.com/vmware-tanzu/velero/pkg/controller.(*genericController).Run
    /remote-source/velero/app/pkg/controller/generic_controller.go:91 +0x45a
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
    panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x20 pc=0x1a2f675]
goroutine 1764 [running]:
k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc00102c0c0?})
    /remote-source/velero/deps/gomod/pkg/mod/k8s.io/apimachinery@v0.23.0/pkg/util/runtime/runtime.go:55 +0xd7
panic({0x1cf7e00, 0x32c9640})
    /usr/lib/golang/src/runtime/panic.go:884 +0x212
github.com/vmware-tanzu/velero/pkg/datamover.DeleteTempVSClass({0xc000ea29a0?, 0x2?}, {0x2362d00, 0xc0007ca7b0}, 0xc000640960)
    /remote-source/velero/app/pkg/datamover/datamover.go:139 +0xf5
github.com/vmware-tanzu/velero/pkg/controller.(*backupController).runBackup(0xc0001e3b80, 0xc0009b80d0)
    /remote-source/velero/app/pkg/controller/backup_controller.go:673 +0xfdb
github.com/vmware-tanzu/velero/pkg/controller.(*backupController).processBackup(0xc0001e3b80, {0xc0011ba440, 0x1c})
    /remote-source/velero/app/pkg/controller/backup_controller.go:295 +0x75c
github.com/vmware-tanzu/velero/pkg/controller.(*genericController).processNextWorkItem(0xc000788720)
    /remote-source/velero/app/pkg/controller/generic_controller.go:132 +0xeb
github.com/vmware-tanzu/velero/pkg/controller.(*genericController).runWorker(0xc000834ea8?)
    /remote-source/velero/app/pkg/controller/generic_controller.go:119 +0x25
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000834f82?)
    /remote-source/velero/deps/gomod/pkg/mod/k8s.io/apimachinery@v0.23.0/pkg/util/wait/wait.go:155 +0x3e
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000834fd0?, {0x235ce00, 0xc00087eb40}, 0x1, 0xc000e56300)
    /remote-source/velero/deps/gomod/pkg/mod/k8s.io/apimachinery@v0.23.0/pkg/util/wait/wait.go:156 +0xb6
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000bc4780?, 0x3b9aca00, 0x0, 0x5?, 0x0?)
    /remote-source/velero/deps/gomod/pkg/mod/k8s.io/apimachinery@v0.23.0/pkg/util/wait/wait.go:133 +0x89
k8s.io/apimachinery/pkg/util/wait.Until(...)
    /remote-source/velero/deps/gomod/pkg/mod/k8s.io/apimachinery@v0.23.0/pkg/util/wait/wait.go:90
github.com/vmware-tanzu/velero/pkg/controller.(*genericController).Run.func2()
    /remote-source/velero/app/pkg/controller/generic_controller.go:92 +0x6e
created by github.com/vmware-tanzu/velero/pkg/controller.(*genericController).Run
    /remote-source/velero/app/pkg/controller/generic_controller.go:91 +0x45a
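The panic above is from the Velero pod log; after the pod restarts, it can still be retrieved from the previous container instance, for example (pod name is a placeholder):
$ oc -n openshift-adp logs <velero-pod> --previous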
$ oc get backup -o yaml
spec:
  csiSnapshotTimeout: 10m0s
  defaultVolumesToRestic: false
  hooks: {}
  includedNamespaces:
  - oadp-812
  storageLocation: ts-1
  ttl: 720h0m0s
status:
  completionTimestamp: "2022-10-17T12:39:16Z"
  expiration: "2022-11-16T12:38:00Z"
  failureReason: get a backup with status "InProgress" during the server starting, mark it as "Failed"
  formatVersion: 1.1.0
  phase: Failed
  progress:
    itemsBackedUp: 46
    totalItems: 46
  startTimestamp: "2022-10-17T12:38:00Z"
  version: 1
$ oc get vsb -o yaml
status:
  conditions:
  - lastTransitionTime: "2022-10-17T12:38:51Z"
    message: cannot have more than one default storageClass
    reason: Error
    status: "False"
    type: Reconciled
  phase: Failed
  sourcePVCData: {}
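The Reconciled condition message above points at the root cause. To confirm which classes are currently marked as default, something like the following can be used (illustrative; the grep context size is arbitrary):
$ oc get storageclass
$ oc get volumesnapshotclass -o yaml | grep -B 5 'is-default-class: "true"'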
blocks
OADP-858 [RedHat QE] Verify bug OADP-612 - Data mover Backup & Restore needs to fail if a validation check fails - Release Pending
OADP-859 [IBM QE-Z] Verify bug OADP-612 - Data mover Backup & Restore needs to fail if a validation check fails - Release Pending
OADP-866 [IBM QE-P] Verify Bug OADP-612 - Data mover Backup & Restore needs to fail if a validation check fails - Release Pending
links to
Since the problem described in this issue should be resolved in a recent advisory, it has been closed.
For information on the advisory, and where to find the updated files, follow the link below.
If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHSA-2022:8634