Type: Bug
Resolution: Done-Errata
Priority: Undefined
Fix Version: Submariner 0.17.0
Sprints: Submariner Sprint 2024-20, Submariner Sprint 2024-21
Severity: Important
Description of problem:
Version-Release number of selected component (if applicable):
OCP 4.15.0-0.nightly-2024-03-12-010512
ACM 2.10.0-DOWNSTREAM-2024-03-14-14-53-38
ODF 4.15.0-158
Submariner brew.registry.redhat.io/rh-osbs/iib:684361
VolSync 0.8.1
ceph version 17.2.6-196.el9cp (cbbf2cfb549196ca18c0c9caff9124d83ed681a4) quincy (stable)
How reproducible:
Steps to Reproduce:
1. On a regional DR setup, deploy multiple CephFS workloads of both appset (push method) and subscription types on C1.
2. Run IOs for a few hours. Once data sync is progressing well, relocate all of the workloads to C2 and, during the relocate operation, reboot one of the worker nodes of C2 (the preferred cluster): keep it off for 2-3 minutes and then bring it back online.
3. Check the relocate status and ensure data sync resumes for all the relocated workloads (see the example commands below).
4. Repeat steps 2 and 3 a couple of times and ensure data sync resumes after each successful relocate operation with a node reboot.
Refer https://bugzilla.redhat.com/show_bug.cgi?id=2270064 for more details.
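The following is a rough sketch of the checks used in step 3, assuming the standard Ramen DRPlacementControl (drpc) and VolSync ReplicationSource resources that back RDR CephFS workloads; exact namespaces and resource names will vary per setup:

  # On the hub cluster: relocate phase/progression for each workload
  oc get drpc -A -o wide

  # On the cluster currently hosting the workloads: last sync time per CephFS PVC
  oc get replicationsource -A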
Actual results: [RDR] Submariner connectivity issue hinders cleanup and data sync for CephFS workloads.
Expected results: Submariner connectivity should remain functional so that cleanup and data sync between the managed clusters can proceed.
Additional info:
Slack thread: https://redhat-internal.slack.com/archives/C0134E73VH6/p1710824151445639
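The Submariner side of the expected results can be sanity-checked with subctl; this is a sketch assuming subctl is installed and the kubeconfig points at each managed cluster in turn:

  # Gateway-to-gateway connection status between the managed clusters
  subctl show connections

  # Broader connectivity and health diagnostics
  subctl diagnose all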
Links to: RHBA-2024:130604 RHBA: Submariner 0.17.1 - bug fix and enhancement update