Bug | Resolution: Unresolved | Critical | Quality / Stability / Reliability | ToDo | Important | Very Likely
Description of problem:
Velero 1.14 and above implement LoadAffinity as an array, but only the first element is accepted.
If multiple expressions are required, they must all be added to a single matchExpressions array.
This causes datamover Pods to be scheduled onto unexpected nodes during backup.
Version-Release number of selected component (if applicable):
1.5.0 oadp-operator
How reproducible:
See attached logs for the example DPAs, configmaps, and node-agent logs.
This config works; both expressions are picked up and end up in the datamover Pods:
loadAffinity:
- nodeSelector:
    matchExpressions:
    - key: kubernetes.io/os
      operator: In
      values:
      - linux
    - key: kubernetes.io/arch
      operator: In
      values:
      - amd64
{"level":"info","logSource":"/remote-source/velero/app/pkg/cmd/cli/nodeagent/server.go:322","msg":"Using customized loadAffinity \u0026{{map[] [
{kubernetes.io/os In [linux]}]}}","time":"2025-07-25T19:55:59Z"}This is allowed by the second nodeSelector is ignored.
nodeAgent:
  enable: true
  loadAffinity:
  - nodeSelector:
      matchExpressions:
      - key: kubernetes.io/os
        operator: In
        values:
        - linux
  - nodeSelector:
      matchExpressions:
      - key: kubernetes.io/arch
        operator: In
        values:
        - amd64
{"level":"info","logSource":"/remote-source/velero/app/pkg/cmd/cli/nodeagent/server.go:322","msg":"Using customized loadAffinity \u0026{{map[] [{kubernetes.io/os In [linux]}
{kubernetes.io/arch In [amd64]}]}}","time":"2025-07-25T19:59:13Z"}
Steps to Reproduce:
1. Create the DPAs from the attached logs.
2. Run backups.
3. The requiredDuringSchedulingIgnoredDuringExecution fields of the datamover Pods will lack the additional expressions when loadAffinity is written as an array of nodeSelectors instead of a single nodeSelector with an array of matchExpressions (see the sketch after this list).
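For illustration, here is a sketch of roughly what the datamover Pod affinity ends up containing with the two-nodeSelector config above, assuming the rendered Pod nodeAffinity mirrors what the node-agent log reports; the exact field layout is an assumption, not copied from a generated Pod:
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        # Only the first nodeSelector survives; the kubernetes.io/arch
        # expression from the second entry is dropped, so the Pod can
        # still land on non-amd64 nodes.
        - key: kubernetes.io/os
          operator: In
          values:
          - linux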
Actual results:
Datamovers get scheduled onto unexpected nodes.
Expected results:
Datamovers should only be scheduled onto nodes that match all nodeSelector objects.
Additional info:
On clusters that require node selectors for the backup to succeed, this is a breaking issue when it is hit.
The only reason this is not critical severity is that there is a workaround: consolidating the individual nodeSelectors into a single nodeSelector has the same expressive power (see the sketch below).
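For reference, a minimal sketch of the consolidated workaround, equivalent to the first working config above; the layout under nodeAgent follows the DPA snippets in this report:
nodeAgent:
  enable: true
  loadAffinity:
  - nodeSelector:
      matchExpressions:
      # All required constraints live in one matchExpressions array, so they
      # are all part of the first (and only) loadAffinity element.
      - key: kubernetes.io/os
        operator: In
        values:
        - linux
      - key: kubernetes.io/arch
        operator: In
        values:
        - amd64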
The offending line in Velero is pkg/cmd/cli/nodeagent/server.go line 290, as of Velero 1.14 development.
The line number will change in Velero 1.15, Velero 1.16, and main.