Type: Bug
Resolution: Unresolved
Affects Version: CNV v4.17.1
Severity: High
Description of problem:
On an ODF Regional DR setup, we use OADP to back up all the k8s resources matching the label in use. We take the backups, fail the cluster, and fail over the DR-protected workload to the peer cluster in the DR relationship, which automatically restores the backups and makes the workload accessible again. When we tried this use case, the VM for busybox-workloads-1 was successfully restored, but the VM for busybox-workloads-2 did not come up. Both are the same CNV workload created from the same template; the only difference is that busybox-workloads-1 has a cloned PVC while busybox-workloads-2 has a snapshot-restored PVC. These are namespace-scoped resources, so we do not expect a conflict from having multiple such VMs in different namespaces.
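For context, the snapshot-restored PVC in busybox-workloads-2 was created roughly as below. Resource names, the requested size, and the storage/snapshot class names are illustrative, not necessarily the exact ones used in the test:

oc apply -f - <<'EOF'
# Snapshot the source PVC (names and classes are illustrative).
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: busybox-snap
  namespace: busybox-workloads-2
spec:
  volumeSnapshotClassName: ocs-storagecluster-rbdplugin-snapclass
  source:
    persistentVolumeClaimName: busybox-pvc
EOF

oc apply -f - <<'EOF'
# Restore the snapshot as a new PVC for the recreated VM to consume.
# The requested size must be at least the size of the source PVC.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: busybox-pvc-restored
  namespace: busybox-workloads-2
spec:
  storageClassName: ocs-storagecluster-ceph-rbd
  volumeMode: Block
  accessModes:
    - ReadWriteMany
  dataSource:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: busybox-snap
  resources:
    requests:
      storage: 1Gi
EOF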
Version-Release number of selected component (if applicable):
OCP 4.17.0-0.nightly-2024-10-20-231827
ODF 4.17.0-126
ACM 2.12.0-DOWNSTREAM-2024-10-18-21-57-41
OpenShift Virtualization 4.17.1-19
Submariner 0.19 (unreleased downstream image 846949)
ceph version 18.2.1-229.el9cp (ef652b206f2487adfc86613646a4cac946f6b4e0) reef (stable)
OADP 1.4.1
OpenShift GitOps 1.14.0
VolSync 0.10.1
How reproducible:
Not sure yet
Steps to Reproduce:
1. On an ODF Regional DR setup, deploy a CNV workload as a discovered app using the data volume template from https://github.com/RamenDR/ocm-ramen-samples/tree/main/workloads/kubevirt/vm-dvt/odr-regional.
2. Create a snapshot of the PVC and restore it as a new PVC.
3. Delete the workload, except for the DataVolume and the PVC.
4. Recreate the workload so that it consumes the existing snapshot-restored PVC; the VM should use this PVC rather than create a new one.
5. DR protect the workload by applying a unique label to the required resources: VM, DataVolume, PVC, and Secret (see the CLI sketch after this list).
6. During DR protection, ensure backups are taken every 5 minutes.
7. After a few successful backups and data syncs between the ODF clusters via snapshot-based rbd-mirroring, bring down the primary cluster where the workload is deployed and fail over the workload to the other ODF cluster in the DR relationship.
8. The failover automatically restores the backup from the NooBaa bucket.
9. Verify that the VM, the workload pod created by the VM, the Secret, the DataVolume, and the PVC are successfully restored and running on the surviving ODF cluster.
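A sketch of steps 5 and 7 in CLI form. Resource names, the namespace, and the label key/value are illustrative; for discovered apps the DRPC may live in a different namespace (e.g. ramen-ops), and the failover can also be triggered from the ACM console:

# Step 5: apply a unique label to each resource to be DR protected.
for r in vm/busybox-vm dv/busybox-dv pvc/busybox-pvc-restored secret/busybox-secret; do
  oc label "$r" -n busybox-workloads-2 appname=busybox-workloads-2
done

# Step 7: trigger the failover by setting the DRPlacementControl action.
oc patch drpc busybox-drpc -n ramen-ops --type merge \
  -p '{"spec":{"action":"Failover","failoverCluster":"<secondary-cluster>"}}'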
Actual results:
[RDR] VM restore failed with a "failed to allocate requested mac address" error during failover of the CNV discovered app.
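The error message suggests the restored VM spec carries an explicit macAddress that the MAC pool manager (kubemacpool) on the failover cluster declines to allocate, e.g. because that MAC is already in use there. One way to check which MAC the restored VM is requesting (VM name illustrative):

oc get vm busybox-vm -n busybox-workloads-2 \
  -o jsonpath='{.spec.template.spec.domain.devices.interfaces[*].macAddress}{"\n"}'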
Expected results:
VM restoration should be successful.
Additional info:
Relevant context and logs are provided in https://redhat-internal.slack.com/archives/C019X3PEF2B/p1729600307597239