- Bug
- Resolution: Unresolved
- Critical
- odf-4.17
Description of problem (please be as detailed as possible and provide log
snippets):
[RDR] When CephFS imperative apps are failed over, the DRPC does not wait for the Cleaning Up phase and moves directly to Completed.
Version of all relevant components (if applicable):
OCP version:- 4.17.0-0.nightly-2024-09-05-034724
ODF version:- 4.17.0-92
CEPH version:- ceph version 19.1.0-42.el9cp (03ae7f7ffec5e7796d2808064c4766b35c4b5ffb) squid (rc)
ACM version:- 2.12.0-62
SUBMARINER version:- v0.19.0
VOLSYNC version:- volsync-product.v0.10.0
OADP version:- 1.4.0
VOLSYNC method:- destinationCopyMethod: Direct
Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?
Is there any workaround available to the best of your knowledge?
Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
Is this issue reproducible?
Yes
Can this issue be reproduced from the UI?
If this is a regression, please provide more details to justify this:
Steps to Reproduce:
1. Deploy RDR.
2. Enable CephFS protection through the ramen ConfigMap (see the sketch after this list).
3. Deploy and protect the CephFS workload.
4. Fail over the workload to the failover cluster.
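For step 2, a minimal sketch of what enabling CephFS (VolSync-based) protection typically involves; the ConfigMap name, namespace, and the volSync.disabled key are assumptions here and may differ between releases:
$ oc edit configmap ramen-hub-operator-config -n openshift-operators
  # in the embedded ramen_manager_config.yaml, make sure VolSync is not disabled:
  #   volSync:
  #     disabled: false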
Actual results:
$drpcow -w
NAMESPACE NAME AGE PREFERREDCLUSTER FAILOVERCLUSTER DESIREDSTATE CURRENTSTATE PROGRESSION START TIME DURATION PEER READY
openshift-dr-ops app-imp-1-cephfs 3d prsurve-c1 prsurve-vm-d Failover FailedOver Completed 2024-09-09T09:29:46Z 2m56.469398502s True
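$drpcow above looks like a local alias; a hedged equivalent that prints the same columns (they come from the DRPC CRD's printer columns) is:
$ oc get drpc -n openshift-dr-ops -o wide -w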
- Pod list from the failover cluster:
$ pods
NAME READY STATUS RESTARTS AGE
dd-io-1-65cb46675f-svtdq 1/1 Running 0 3d
dd-io-2-846549744b-dqrsn 1/1 Running 0 3d
dd-io-3-56d765db6-pbrrc 1/1 Running 0 3d
dd-io-4-55c5dbfbf8-9fwjv 1/1 Running 0 3d
dd-io-5-c97445b66-999t2 1/1 Running 0 3d
dd-io-6-658954f474-prrt7 1/1 Running 0 3d
dd-io-7-7d68676f86-jlc4v 1/1 Running 0 3d
volsync-rsync-tls-src-dd-io-pvc-1-bmwzz 0/1 Error 0 3m11s
volsync-rsync-tls-src-dd-io-pvc-1-k62xm 1/1 Running 0 113s
volsync-rsync-tls-src-dd-io-pvc-2-g8g64 0/1 Error 0 3m12s
volsync-rsync-tls-src-dd-io-pvc-2-p2n99 1/1 Running 0 113s
volsync-rsync-tls-src-dd-io-pvc-3-x4sv4 0/1 Error 0 3m12s
volsync-rsync-tls-src-dd-io-pvc-3-xmkc4 1/1 Running 0 113s
volsync-rsync-tls-src-dd-io-pvc-5-hxjz6 0/1 Error 0 3m12s
volsync-rsync-tls-src-dd-io-pvc-5-t8p2r 1/1 Running 0 113s
volsync-rsync-tls-src-dd-io-pvc-6-99scd 0/1 Error 0 3m12s
volsync-rsync-tls-src-dd-io-pvc-6-csl8t 1/1 Running 0 113s
volsync-rsync-tls-src-dd-io-pvc-7-9ngd5 1/1 Running 0 113s
volsync-rsync-tls-src-dd-io-pvc-7-bcjs6 0/1 Error 0 3m12s
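The VolSync source pods above keep erroring and getting recreated; a hedged way to inspect why (the pod name is taken from the listing above, the workload namespace is a placeholder):
$ oc logs volsync-rsync-tls-src-dd-io-pvc-1-bmwzz -n <workload-namespace>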
- Pod list from the failed-over cluster:
$ pods
NAME READY STATUS RESTARTS AGE
dd-io-1-65cb46675f-z589h 1/1 Running 0 8m4s
dd-io-2-846549744b-hwnr5 1/1 Running 0 8m3s
dd-io-3-56d765db6-4d5fm 1/1 Running 0 8m3s
dd-io-4-55c5dbfbf8-7tgqb 1/1 Running 0 8m3s
dd-io-5-c97445b66-z9hvl 1/1 Running 0 8m3s
dd-io-6-658954f474-4j2d6 1/1 Running 0 8m3s
dd-io-7-7d68676f86-8gnnm 1/1 Running 0 8m3s
volsync-rsync-tls-src-dd-io-pvc-1-dmkv4 0/1 Error 0 76s
volsync-rsync-tls-src-dd-io-pvc-2-5m94h 0/1 Error 0 115s
volsync-rsync-tls-src-dd-io-pvc-2-l9kzg 1/1 Running 0 36s
volsync-rsync-tls-src-dd-io-pvc-3-2pfs7 0/1 Error 0 76s
volsync-rsync-tls-src-dd-io-pvc-4-ltx4d 0/1 Error 0 115s
volsync-rsync-tls-src-dd-io-pvc-4-tr4xr 1/1 Running 0 36s
volsync-rsync-tls-src-dd-io-pvc-5-zk9r9 0/1 Error 0 76s
volsync-rsync-tls-src-dd-io-pvc-6-4vjk9 0/1 Error 0 115s
volsync-rsync-tls-src-dd-io-pvc-6-qhcm9 1/1 Running 0 36s
volsync-rsync-tls-src-dd-io-pvc-7-bpzk8 0/1 Error 0 76s
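To check whether cleanup ever ran on the cluster the workload was failed over from, the VolumeReplicationGroup there can be inspected; a sketch only, assuming the VRG sits in the workload namespace and should report Secondary once failover cleanup completes:
$ oc get volumereplicationgroups -A
$ oc get volumereplicationgroups <vrg-name> -n <workload-namespace> -o jsonpath='{.status.state}{"\n"}'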
Expected results:
The DRPC should wait for workload cleanup on the cluster it failed over from before marking the Progression as Completed.
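A hedged way to watch the relevant DRPC fields directly (field paths assume the ramen DRPlacementControl status schema); Progression should not report Completed while the workload pods are still running on the old cluster:
$ oc get drpc app-imp-1-cephfs -n openshift-dr-ops -o jsonpath='{.status.phase}{" "}{.status.progression}{"\n"}'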
Additional info: