Bug · Resolution: Unresolved · Normal · OADP 1.4.4 · Incidents & Support · ToDo · Moderate · Very Likely
Description of problem:
The customer experienced a panic during a running backup, which caused the Velero pod to restart.
Version-Release number of selected component (if applicable):
OADP Operator version 1.4.4
velero version
Client:
    Version: v1.14.1-OADP
    Git commit: -
Server:
    Version: v1.14.1-OADP
How reproducible:
During a Velero backup, we hit the following panic:
time="2025-07-21T21:33:47Z" level=info msg="Validating BackupStorageLocation" backup-storage-location=openshift-adp/lhind-dpa-4 controller=backup-storage-location logSource="/remote-source/velero/app/pkg/controller/backup_storage_location_controller.go:141"
time="2025-07-21T21:33:47Z" level=info msg="BackupStorageLocations is valid, marking as available" backup-storage-location=openshift-adp/lhind-dpa-4 controller=backup-storage-location logSource="/remote-source/velero/app/pkg/controller/backup_storage_location_controller.go:126"
E0721 21:33:47.401693 1 runtime.go:77] Observed a panic: sync: negative WaitGroup counter
goroutine 37438364 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic({0x26db1c0, 0x3161f00})
/remote-source/velero/deps/gomod/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/runtime/runtime.go:75 +0x85
k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc00125e8c0?})
/remote-source/velero/deps/gomod/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/runtime/runtime.go:49 +0x6b
panic({0x26db1c0?, 0x3161f00?})
/usr/lib/golang/src/runtime/panic.go:770 +0x132
sync.(*WaitGroup).Add(0xc00314bd50?, 0xc002864870?)
/usr/lib/golang/src/sync/waitgroup.go:62 +0xd8
sync.(*WaitGroup).Done(...)
/usr/lib/golang/src/sync/waitgroup.go:87
github.com/vmware-tanzu/velero/pkg/podvolume.newBackupper.func1({0x40c6f2?, 0xc0037bc060?}, {0x2c674c0?, 0xc003f138c8})
/remote-source/velero/app/pkg/podvolume/backupper.go:137 +0x175
k8s.io/client-go/tools/cache.ResourceEventHandlerFuncs.OnUpdate(...)
/remote-source/velero/deps/gomod/pkg/mod/k8s.io/client-go@v0.29.0/tools/cache/controller.go:246
k8s.io/client-go/tools/cache.(*processorListener).run.func1()
/remote-source/velero/deps/gomod/pkg/mod/k8s.io/client-go@v0.29.0/tools/cache/shared_informer.go:970 +0xea
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
/remote-source/velero/deps/gomod/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000d9bf70, {0x316b4a0, 0xc0026c3a40}, 0x1, 0xc002ea6780)
/remote-source/velero/deps/gomod/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0017eff70, 0x3b9aca00, 0x0, 0x1, 0xc002ea6780)
/remote-source/velero/deps/gomod/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
/remote-source/velero/deps/gomod/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:161
k8s.io/client-go/tools/cache.(*processorListener).run(0xc0008e1cb0)
/remote-source/velero/deps/gomod/pkg/mod/k8s.io/client-go@v0.29.0/tools/cache/shared_informer.go:966 +0x69
k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1()
/remote-source/velero/deps/gomod/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/wait.go:72 +0x52
created by k8s.io/apimachinery/pkg/util/wait.(*Group).Start in goroutine 396
/remote-source/velero/deps/gomod/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/wait.go:70 +0x73
panic: sync: negative WaitGroup counter [recovered]
panic: sync: negative WaitGroup counter
goroutine 37438364 [running]:
k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc00125e8c0?})
/remote-source/velero/deps/gomod/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/runtime/runtime.go:56 +0xcd
panic({0x26db1c0?, 0x3161f00?})
/usr/lib/golang/src/runtime/panic.go:770 +0x132
sync.(*WaitGroup).Add(0xc00314bd50?, 0xc002864870?)
/usr/lib/golang/src/sync/waitgroup.go:62 +0xd8
sync.(*WaitGroup).Done(...)
/usr/lib/golang/src/sync/waitgroup.go:87
github.com/vmware-tanzu/velero/pkg/podvolume.newBackupper.func1({0x40c6f2?, 0xc0037bc060?}, {0x2c674c0?, 0xc003f138c8})
/remote-source/velero/app/pkg/podvolume/backupper.go:137 +0x175
k8s.io/client-go/tools/cache.ResourceEventHandlerFuncs.OnUpdate(...)
/remote-source/velero/deps/gomod/pkg/mod/k8s.io/client-go@v0.29.0/tools/cache/controller.go:246
k8s.io/client-go/tools/cache.(*processorListener).run.func1()
/remote-source/velero/deps/gomod/pkg/mod/k8s.io/client-go@v0.29.0/tools/cache/shared_informer.go:970 +0xea
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
/remote-source/velero/deps/gomod/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000d9bf70, {0x316b4a0, 0xc0026c3a40}, 0x1, 0xc002ea6780)
/remote-source/velero/deps/gomod/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0017eff70, 0x3b9aca00, 0x0, 0x1, 0xc002ea6780)
/remote-source/velero/deps/gomod/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
/remote-source/velero/deps/gomod/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:161
k8s.io/client-go/tools/cache.(*processorListener).run(0xc0008e1cb0)
/remote-source/velero/deps/gomod/pkg/mod/k8s.io/client-go@v0.29.0/tools/cache/shared_informer.go:966 +0x69
k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1()
/remote-source/velero/deps/gomod/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/wait.go:72 +0x52
created by k8s.io/apimachinery/pkg/util/wait.(*Group).Start in goroutine 396
/remote-source/velero/deps/gomod/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/wait.go:70 +0x73
Actual results:
Velero panicked during a running backup ("sync: negative WaitGroup counter"), which caused the Velero pod to restart.
Expected results:
No panic; the backup runs to completion without the Velero pod restarting.