- Bug
- Resolution: Unresolved
- Normal
- None
- odf-4.21
Description of problem - Provide a detailed description of the issue encountered, including logs/command-output snippets and screenshots if the issue is observed in the UI:
- This is a request for a mechanism to put labels on related Ramen resources that do not need to be backed up, so they can be skipped during the Cloud Pak for Data backup & restore.
- VolSync jobs and PVCs for CephFS do not need to be backed up and restored.
- These jobs are temporary, which means they can come and go while the user is taking a backup (~30 min - 1 h), and those jobs will be included in the resource backup by mistake.
- What is worse, these jobs can block our cpd-cli offline backup (linked issue: https://github.ibm.com/PrivateCloud-analytics/Zen/issues/45438).
- We have two labels that could be useful for excluding those resources: icpdsupport/ignore-on-nd-backup: "true" and velero.io/exclude-from-backup: "true".
- Another consideration is that the resources shown in the output below might also need to be excluded from backup/restore:
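As a sketch of what this request asks for, the temporary VolSync resources could carry both exclusion labels in their metadata. The Job name below is illustrative, and the assumption that Ramen would stamp these labels on the mover Job and its PVC at creation time is exactly the mechanism being requested here, not existing behavior:

```yaml
# Hypothetical example: labels Ramen could add to a temporary
# VolSync mover Job so that backup tooling skips it.
apiVersion: batch/v1
kind: Job
metadata:
  name: volsync-src-example-mover        # illustrative name, not from the cluster above
  namespace: cpd-ins-1906
  labels:
    # Tells the Cloud Pak for Data backup tooling to ignore this
    # resource during non-disruptive backups
    icpdsupport/ignore-on-nd-backup: "true"
    # Standard Velero label: resources carrying it are excluded
    # from Velero backups
    velero.io/exclude-from-backup: "true"
```

The same pair of labels would need to be applied to the temporary PVCs created for the CephFS replication sources, since both resource kinds can appear mid-backup.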
```
[10 Feb 2026@14:31:40CST] default/api-rdr-arft-svl-site-03-cp-fyre-ibm-com:6443/kube:admin/default ~/test/rdr/5.3.0 ❯ oc -n cpd-ins-1906 get rgs,replicationsources.volsync.backube,vgr,vr,submariners.submariner.io
NAME LAST SYNC DURATION NEXT SYNC SOURCE LAST SYNC START
replicationgroupsource.ramendr.openshift.io/cpd-ins-1906-fc9865bc0ff7f6b98ac3129019091c03 2026-02-10T20:30:56Z 56.851412694s 2026-02-10T20:35:00Z {"matchLabels":{"ramendr.openshift.io/consistency-group":"cpd-ins-1906-fc9865bc0ff7f6b98ac3129019091c03"}}
NAME SOURCE LAST SYNC DURATION NEXT SYNC
replicationsource.volsync.backube/activelogs-c-db2oltp-1770393560266235-db2u-mln-0 vs-activelogs-c-db2oltp-1770393560266235-db2u-mln-0 2026-02-10T20:30:46Z 13.385669951s
replicationsource.volsync.backube/c-db2oltp-1770393560266235-archivelogs vs-c-db2oltp-1770393560266235-archivelogs 2026-02-10T20:30:47Z 14.408940095s
replicationsource.volsync.backube/c-db2oltp-1770393560266235-backup vs-c-db2oltp-1770393560266235-backup 2026-02-10T20:30:55Z 22.46769369s
replicationsource.volsync.backube/c-db2oltp-1770393560266235-meta vs-c-db2oltp-1770393560266235-meta 2026-02-10T20:30:46Z 13.52024974s
replicationsource.volsync.backube/data-c-db2oltp-1770393560266235-db2u-mln-0 vs-data-c-db2oltp-1770393560266235-db2u-mln-0 2026-02-10T20:30:45Z 12.401053304s
replicationsource.volsync.backube/data-ibm-dmc-1770394151183561-rediscp-server-0 vs-data-ibm-dmc-1770394151183561-rediscp-server-0 2026-02-10T20:30:46Z 13.48604467s
replicationsource.volsync.backube/data-ibm-dmc-1770394151183561-rediscp-server-1 vs-data-ibm-dmc-1770394151183561-rediscp-server-1 2026-02-10T20:30:46Z 13.385787359s
replicationsource.volsync.backube/ibm-dmc-1770394151183561-data vs-ibm-dmc-1770394151183561-data 2026-02-10T20:30:52Z 19.482170311s
replicationsource.volsync.backube/tempts-c-db2oltp-1770393560266235-db2u-0 vs-tempts-c-db2oltp-1770393560266235-db2u-0 2026-02-10T20:30:55Z 22.536246587s
NAME VOLUMEGROUPREPLICATIONCLASS VOLUMEGROUPREPLICATIONCONTENT DESIREDSTATE CURRENTSTATE AGE
volumegroupreplication.replication.storage.openshift.io/vgr-48cc84f712b8dcb1f9eac4434bdb5c46-cpd-1906 rbd-volumegroupreplicationclass-1625360775-1602718344 vgrcontent-b4616b04-0569-4f14-aac9-ceeb7b0def4d primary Primary 23h
NAME AGE VOLUMEREPLICATIONCLASS SOURCEKIND SOURCENAME DESIREDSTATE CURRENTSTATE
volumereplication.replication.storage.openshift.io/vr-b4616b04-0569-4f14-aac9-ceeb7b0def4d 23h rbd-volumereplicationclass-1625360775 VolumeGroupReplication vgr-48cc84f712b8dcb1f9eac4434bdb5c46-cpd-1906 primary Primary
```
The OCP platform infrastructure and deployment type (AWS, Bare Metal, VMware, etc. Please clarify if it is platform agnostic deployment), (IPI/UPI): Platform agnostic
The ODF deployment type (Internal, External, Internal-Attached (LSO), Multicluster, DR, Provider, etc): Internal
The version of all relevant components (OCP, ODF, RHCS, ACM whichever is applicable):
OCP 4.21
ODF: 4.21.0-96.rohan
Does this issue impact your ability to continue to work with the product?
yes
Is there any workaround available to the best of your knowledge?
Not for the temporary jobs and PVCs created by VolSync.
Can this issue be reproduced? If so, please provide the hit rate
yes
Can this issue be reproduced from the UI?
yes
If this is a regression, please provide more details to justify this:
no
Steps to Reproduce:
1.
2.
3.
The exact date and time when the issue was observed, including timezone details:
Actual results:
Expected results:
Logs collected and log location:
Additional info: