- Bug
- Resolution: Unresolved
- Undefined
- None
- odf-4.19
Description of problem - Provide a detailed description of the issue encountered, including logs/command-output snippets and screenshots if the issue is observed in the UI:
```
oc get drpc -n openshift-dr-ops rdr-testapp-2 -o yaml
apiVersion: ramendr.openshift.io/v1alpha1
kind: DRPlacementControl
metadata:
  annotations:
    drplacementcontrol.ramendr.openshift.io/app-namespace: openshift-dr-ops
    drplacementcontrol.ramendr.openshift.io/is-cg-enabled: "true"
    drplacementcontrol.ramendr.openshift.io/last-app-deployment-cluster: cow-1
  creationTimestamp: "2025-11-08T12:35:59Z"
  finalizers:
  - drpc.ramendr.openshift.io/finalizer
  generation: 2
  labels:
    cluster.open-cluster-management.io/backup: ramen
  name: rdr-testapp-2
  namespace: openshift-dr-ops
  ownerReferences:
  - apiVersion: cluster.open-cluster-management.io/v1beta1
    blockOwnerDeletion: true
    controller: true
    kind: Placement
    name: rdr-testapp-2-placement-1
    uid: 8b7904c2-e149-42cc-9e7f-eed6e8d19477
  resourceVersion: "24051592"
  uid: 31d9f57f-9a94-42d0-bcf7-2877c258b256
spec:
  drPolicyRef:
    apiVersion: ramendr.openshift.io/v1alpha1
    kind: DRPolicy
    name: rdr-all-storages-5m
  kubeObjectProtection:
    captureInterval: 5m0s
    recipeParameters:
      ALL_NAMESPACES:
      - testapp-2
    recipeRef:
      name: test-rdr
      namespace: testapp-2
  placementRef:
    apiVersion: cluster.open-cluster-management.io/v1beta1
    kind: Placement
    name: rdr-testapp-2-placement-1
    namespace: openshift-dr-ops
  preferredCluster: cow-1
  protectedNamespaces:
  - testapp-2
  pvcSelector: {}
status:
  actionStartTime: "2025-11-08T12:36:28Z"
  conditions:
  - lastTransitionTime: "2025-11-08T12:35:59Z"
    message: Initial deployment completed
    observedGeneration: 2
    reason: Deployed
    status: "True"
    type: Available
  - lastTransitionTime: "2025-11-08T12:35:59Z"
    message: Ready
    observedGeneration: 2
    reason: Success
    status: "True"
    type: PeerReady
  - lastTransitionTime: "2025-11-08T12:35:59Z"
    message: VolumeReplicationGroup (openshift-dr-ops/rdr-testapp-2) on cluster cow-1
      is not reporting any status about workload resources readiness, retrying till
      ClusterDataReady condition is met
    observedGeneration: 2
    reason: Unknown
    status: Unknown
    type: Protected
  lastUpdateTime: "2025-11-13T13:59:20Z"
  observedGeneration: 2
  phase: Deployed
  preferredDecision:
    clusterName: cow-1
    clusterNamespace: cow-1
  progression: SettingUpVolSyncDest
  resourceConditions:
    conditions:
    - lastTransitionTime: "2025-11-08T12:35:59Z"
      message: 'Failed to process list of PVCs to protect: failed to find replicationClass
        matching peerClass for PVC testapp-2/db2-postgres-cluster-1'
      observedGeneration: 1
      reason: Error
      status: "False"
      type: DataReady
    - lastTransitionTime: "2025-11-08T12:35:59Z"
      message: Initializing VolumeReplicationGroup
      observedGeneration: 1
      reason: Initializing
      status: Unknown
      type: DataProtected
    - lastTransitionTime: "2025-11-08T12:35:59Z"
      message: Initializing VolumeReplicationGroup
      observedGeneration: 1
      reason: Initializing
      status: Unknown
      type: ClusterDataReady
    - lastTransitionTime: "2025-11-08T12:35:59Z"
      message: Initializing VolumeReplicationGroup
      observedGeneration: 1
      reason: Initializing
      status: Unknown
      type: ClusterDataProtected
    - lastTransitionTime: "2025-11-08T12:35:59Z"
      message: Initializing VolumeReplicationGroup
      observedGeneration: 1
      reason: Initializing
      status: Unknown
      type: KubeObjectsReady
    - lastTransitionTime: "2025-11-08T12:35:59Z"
      message: Initializing VolumeReplicationGroup
      observedGeneration: 1
      reason: Initializing
      status: Unknown
      type: NoClusterDataConflict
    resourceMeta:
      generation: 1
      kind: VolumeReplicationGroup
      name: rdr-testapp-2
      namespace: openshift-dr-ops
      resourceVersion: "14315502"
```
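The failing condition is the VolumeReplicationGroup's `DataReady` ("failed to find replicationClass matching peerClass for PVC testapp-2/db2-postgres-cluster-1"). A sketch of how this could be narrowed down on the managed cluster (resource names come from the report; the `--context` names are assumptions about the local kubeconfig):

```shell
# On the managed cluster cow-1 (context name is an assumption):
# inspect the VRG that the DRPC created for the workload.
oc --context cow-1 get vrg -n openshift-dr-ops rdr-testapp-2 -o yaml

# List the replication classes Ramen can match against. The DataReady
# error suggests none of them carries a peerClass matching the failing PVC.
oc --context cow-1 get volumereplicationclass
oc --context cow-1 get volumegroupreplicationclass

# Check which storage class the failing PVC actually uses.
oc --context cow-1 get pvc -n testapp-2 db2-postgres-cluster-1 \
  -o jsonpath='{.spec.storageClassName}'
```

Comparing the PVC's storage class against the classes' parameters/labels should show whether the mismatch is in the classes or in the PVC.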
I have one ACM hub cluster (cow) and two managed clusters: cow-1 and cow-2.
VolumeGroupReplicationClasses are already present on both managed clusters.
The protected application has volume consistency group enabled.
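One way to confirm the classes really exist and are labeled consistently on both managed clusters (a sketch; the context names cow-1/cow-2 are assumptions about the local kubeconfig):

```shell
# Compare replication classes across both managed clusters.
for ctx in cow-1 cow-2; do
  echo "=== $ctx ==="
  oc --context "$ctx" get volumegroupreplicationclass --show-labels
  oc --context "$ctx" get volumereplicationclass --show-labels
done
```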
The OCP platform infrastructure and deployment type (AWS, Bare Metal, VMware, etc. Please clarify if it is platform agnostic deployment), (IPI/UPI):
IBM Fyre cluster
The ODF deployment type (Internal, External, Internal-Attached (LSO), Multicluster, DR, Provider, etc):
RDR
The version of all relevant components (OCP, ODF, RHCS, ACM whichever is applicable):
OCP 4.19
Does this issue impact your ability to continue to work with the product?
Yes, I cannot test replication with IBM Cloud Pak for Data.
Is there any workaround available to the best of your knowledge?
No
Can this issue be reproduced? If so, please provide the hit rate
Yes, consistently reproducible.
Can this issue be reproduced from the UI?
Yes
If this is a regression, please provide more details to justify this:
Steps to Reproduce:
1. Create a test application with one RBD-backed and one CephFS-backed PVC.
2. Protect the application with an identical recipe and check the "enable consistency group" option.
3.
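For step 1, a minimal test application could look like the following (a sketch, not the exact workload used in the report; the storage class names are the ODF internal-mode defaults and may differ on this cluster):

```shell
# Minimal sketch of step 1: one RBD-backed and one CephFS-backed PVC.
# Storage class names are the default ODF internal-mode ones (assumption).
oc apply -n testapp-2 -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: testapp-rbd-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
  storageClassName: ocs-storagecluster-ceph-rbd
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: testapp-cephfs-pvc
spec:
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 1Gi
  storageClassName: ocs-storagecluster-cephfs
EOF
```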
The exact date and time when the issue was observed, including timezone details:
8 Nov 2025, 07:35 EST
Actual results:
The DRPC has remained in a critical state for days.
Expected results:
The DRPC should become healthy.
Logs collected and log location:
odf-must-gather-2025-11-11-13-15-53_site-1.tar.gz
https://ibm.box.com/s/2kutpv72x7l6eu2ub5xd3rlwujln1mdo (please let me know if you cannot access this; the must-gather seems to be too large to upload here)
Additional info: