Bug
Resolution: Unresolved
Undefined
None
odf-4.18
Description of problem - Provide a detailed description of the issue encountered, including logs/command-output snippets and screenshots if the issue is observed in the UI:
The OCP platform infrastructure and deployment type (AWS, Bare Metal, VMware, etc. Please clarify if it is platform agnostic deployment), (IPI/UPI):
The ODF deployment type (Internal, External, Internal-Attached (LSO), Multicluster, DR, Provider, etc):
The version of all relevant components (OCP, ODF, RHCS, ACM whichever is applicable):
OCP 4.18.0-0.nightly-2024-11-07-215008
ODF 4.18.0-49.stable
ACM 2.12.0-DOWNSTREAM-2024-10-30-03-41-01
Does this issue impact your ability to continue to work with the product?
No
Is there any workaround available to the best of your knowledge?
No
Can this issue be reproduced? If so, please provide the hit rate:
Yes, 100%
Can this issue be reproduced from the UI? Yes
If this is a regression, please provide more details to justify this:
Steps to Reproduce:
1. On an RDR setup, when the sync isn't progressing or when the lastGroupSyncTime is 3x or more behind the sync interval, try to relocate the app via the ACM console.
2. Check the message under the warning alert "Inconsistent data on target cluster".
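For reference, the "3x or more behind the sync interval" precondition from step 1 can be sketched as a small check. This is a hypothetical helper for illustration only, not part of the product; it assumes the lastGroupSyncTime value is an RFC 3339 timestamp (as reported in DRPC status) and that the sync interval is known in minutes.

```python
from datetime import datetime, timedelta, timezone

def sync_delayed(last_group_sync_time: str, sync_interval_minutes: int, now=None) -> bool:
    """Return True when the last successful group sync lags 3x or more
    behind the scheduling interval (hypothetical helper, illustration only)."""
    now = now or datetime.now(timezone.utc)
    # Parse an RFC 3339 timestamp such as "2024-11-08T00:00:00Z".
    last = datetime.fromisoformat(last_group_sync_time.replace("Z", "+00:00"))
    return now - last >= timedelta(minutes=3 * sync_interval_minutes)
```

With a 5-minute sync interval, a lastGroupSyncTime 20 minutes old meets the condition, while one 10 minutes old does not.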
The exact date and time when the issue was observed, including timezone details:
Actual results: The current message (below) applies only to the Failover operation, as it explicitly uses the word "failover".
Inconsistent data on target cluster
The target cluster's volumes contain data inconsistencies caused by synchronization delays. Performing the failover could lead to data loss. Refer to the corresponding VolumeSynchronizationDelay OpenShift alert(s) for more information.
Expected results: Either rephrase the message so it suits both the Failover and Relocate operations, or change it only in the Relocate modal and keep it unchanged for Failover.
Logs collected and log location:
Additional info: