OpenShift API for Data Protection
OADP-3070

DataMover - datadownloads resources aren't spread evenly across the nodes


    • Type: Bug
    • Resolution: Unresolved
    • Priority: Major
    • Affects Versions: OADP 1.4.0, OADP 1.3.0
    • Component: data-mover

      Description of problem:

      Follow-up to bug OADP-2681: https://issues.redhat.com/browse/OADP-2681

      In that bug, the datauploads resources were fixed so that they are spread as evenly as possible across the nodes. Backup time improved significantly, from 50 min to 25 min (tested with 100 PVs with different data usage).

      The datadownloads resources need the same improvement: with a shorter datadownloads queue per worker, the restore time should be faster.

      Attachment: OADP_1.3.0-Bug2681.txt

      Version-Release number of selected component (if applicable):

      OCP 4.12.9

      ODF 4.12.9-rhodf
      OADP 1.3.0-138

      How reproducible:

       

      Steps to Reproduce:
      1. Create a namespace with 100 PVs using different data usage (mixed namespace)
      2. Run datamover backup
      3. Delete the namespace with the 100 PVs
      4. Run datamover restore
      5. Monitor the datadownloads resources (oc get datadownloads)
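
      One way to monitor the spread in step 5 is to count DataDownload CRs per node. A minimal sketch, assuming the CR records its node in `.status.node` (as Velero's data-mover CRDs do) and that OADP runs in the `openshift-adp` namespace; adjust both for your deployment:

      ```shell
      # Count DataDownload CRs per node and sort busiest-first.
      # Assumption: each DataDownload records its node in .status.node;
      # change the jsonpath if your CRD version differs.
      oc get datadownloads -n openshift-adp \
        -o jsonpath='{range .items[*]}{.status.node}{"\n"}{end}' \
        | sort | uniq -c | sort -rn
      ```

      The output has the same per-worker shape as the counts shown under "Additional info" below.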

       

      Actual results:

      Datadownloads are not spread evenly across the workers.

      Expected results:

      Datadownloads should be spread as evenly as possible across the workers.

      Additional info:

      Example of the datadownloads spread per worker (two samples):
      worker000-r640 : 29
      worker001-r640 : 54
      worker002-r640 : 13
      worker003-r640 : 3
      worker004-r640 : 0
      worker005-r640 : 1

       

      worker000-r640 : 30
      worker001-r640 : 52
      worker002-r640 : 14
      worker003-r640 : 3
      worker004-r640 : 0
      worker005-r640 : 1

      For comparison, the datauploads spread before the OADP-2681 fix:
      worker000-r640 : 22
      worker001-r640 : 0
      worker002-r640 : 39
      worker003-r640 : 0
      worker004-r640 : 39
      worker005-r640 : 0

       

      The datauploads spread after the fix:

      worker000-r640 : 19
      worker001-r640 : 20
      worker002-r640 : 21
      worker003-r640 : 15
      worker004-r640 : 12
      worker005-r640 : 13

       

            spampatt@redhat.com Shubham Pampattiwar
            dvaanunu@redhat.com David Vaanunu
            Amos Mastbaum
            Votes: 0
            Watchers: 5