Data Foundation Bugs / DFBUGS-866

[RDR] When an incorrect target cluster is selected for failover/relocate operations, the existing warning alert "Inconsistent data on target cluster" doesn't go away


    • Fix build: 4.18.0-88
    • Release Note Type: Bug Fix
    • Severity: Moderate
    • Release Note Text: Fixed an issue where the "Inconsistent data on target cluster" warning was not resetting when selecting a different target cluster during failover/relocate operations. Now, the warning alert is refreshed correctly when changing the target cluster for Subscription apps, and it no longer persists unnecessarily when failover/relocation is triggered for discovered applications.
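      For reference, the odf-console UI is a React/TypeScript application, so a fix of this shape usually means re-deriving the alert state whenever the target-cluster selection changes, rather than computing it once when the modal opens. Below is a minimal sketch of that pattern; the component and prop names (TargetClusterAlert, isDataConsistent) are illustrative, not the actual odf-console identifiers:

      import * as React from 'react';

      // Hypothetical props; names are illustrative, not the real odf-console API.
      type TargetClusterAlertProps = {
        targetCluster: string; // currently selected target cluster
        isDataConsistent: (cluster: string) => boolean; // assumed staleness check
      };

      const TargetClusterAlert: React.FC<TargetClusterAlertProps> = ({
        targetCluster,
        isDataConsistent,
      }) => {
        const [showAlert, setShowAlert] = React.useState(false);

        // Re-evaluate the warning every time the selection changes. This is the
        // reset the release note describes: the alert no longer sticks to the
        // previously selected cluster.
        React.useEffect(() => {
          setShowAlert(!isDataConsistent(targetCluster));
        }, [targetCluster, isDataConsistent]);

        return showAlert ? (
          <div role="alert">Inconsistent data on target cluster</div>
        ) : null;
      };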

      Description of problem - Provide a detailed description of the issue encountered, including logs/command-output snippets and screenshots if the issue is observed in the UI:

      The OCP platform infrastructure and deployment type (AWS, Bare Metal, VMware, etc.; please clarify if it is a platform-agnostic deployment) (IPI/UPI):

      The ODF deployment type (Internal, External, Internal-Attached (LSO), Multicluster, DR, Provider, etc.):

      The version of all relevant components (OCP, ODF, RHCS, ACM whichever is applicable):

      OCP 4.18.0-0.nightly-2024-11-07-215008

      ODF 4.18.0-49.stable

      ACM 2.12.0-DOWNSTREAM-2024-10-30-03-41-01

      Does this issue impact your ability to continue to work with the product?

      No

      Is there any workaround available to the best of your knowledge?

      No

      Can this issue be reproduced? If so, please provide the hit rate

      Yes, 100%

      Can this issue be reproduced from the UI? Yes

      If this is a regression, please provide more details to justify this:

      Steps to Reproduce:

      1. On an RDR setup, deploy RBD- or CephFS-based Subscription workloads.

      2. Bring the primary cluster down or perform an operation to disrupt data sync.

      3. Wait for 3x the sync interval, then open the Failover/Relocate modal and fill in all the parameters to perform an action on the workload. Do not initiate the operation. You should see a warning alert "Inconsistent data on target cluster" on this page.

      Now change the "Target cluster" selection to point it to the same cluster where the workload is currently running. The warning alert "Inconsistent data on target cluster" still exists and doesn't go away. (A sketch of the staleness rule that likely drives this alert follows.)
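      For context on step 3, this alert is driven by how recently the workload's data was synced to the target cluster. The sketch below shows one plausible staleness check, assuming the 3x-sync-interval threshold implied by the reproduction steps; the function and field names (isTargetDataStale, lastGroupSyncTime) are hypothetical and the product's exact rule may differ:

      // Hypothetical helper: treats replicated data on the target cluster as
      // stale once the last sync is older than 3x the DR policy's sync interval.
      const SYNC_THRESHOLD_MULTIPLIER = 3;

      const isTargetDataStale = (
        lastGroupSyncTime: string | undefined, // e.g. a DRPlacementControl status timestamp
        schedulingIntervalMinutes: number, // e.g. 5 for a "5m" DR policy
      ): boolean => {
        if (!lastGroupSyncTime) return true; // never synced: treat as stale
        const elapsedMs = Date.now() - new Date(lastGroupSyncTime).getTime();
        const thresholdMs =
          SYNC_THRESHOLD_MULTIPLIER * schedulingIntervalMinutes * 60 * 1000;
        return elapsedMs > thresholdMs;
      };

      // Example: a workload last synced 20 minutes ago on a 5m interval is stale.
      const twentyMinAgo = new Date(Date.now() - 20 * 60 * 1000).toISOString();
      console.log(isTargetDataStale(twentyMinAgo, 5)); // true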

      The exact date and time when the issue was observed, including timezone details:

      Actual results: [RDR] When an incorrect target cluster is selected for failover/relocate operations, the existing warning alert "Inconsistent data on target cluster" doesn't go away.

      Expected results: When an incorrect target cluster is selected for the Subscription workloads, the existing warning alert "Inconsistent data on target cluster" shouldn't be shown, as the cluster selection itself is invalid and the Failover/Relocate operation cannot be performed on it. The warning alert in this case can confuse the user. A sketch of this expected evaluation order follows.
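      A minimal sketch of the expected evaluation order, assuming the modal can compare the selected target against the cluster the workload currently runs on; all names here (shouldShowInconsistentDataAlert, ClusterSelection) are hypothetical:

      // An invalid target selection (the cluster the workload already runs on)
      // should short-circuit the staleness warning entirely, since failover or
      // relocate cannot be performed against the current primary anyway.
      type ClusterSelection = {
        target: string;
        currentPrimary: string; // cluster the workload currently runs on
      };

      const shouldShowInconsistentDataAlert = (
        selection: ClusterSelection,
        isStale: (cluster: string) => boolean,
      ): boolean => {
        // Invalid selection: suppress the warning rather than confuse the user.
        if (selection.target === selection.currentPrimary) return false;
        return isStale(selection.target);
      };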

       

      Logs collected and log location:

      Additional info:

              Assignee: Timothy Asir Jeyasingh (tjeyasin@redhat.com)
              Reporter: Aman Agrawal (amagrawa@redhat.com)