OpenShift API for Data Protection: OADP-4514

OADP Performance test request for DM block volume VMs

    • Type: Task
    • Resolution: Unresolved
    • Priority: Critical
    • Affects Version: OADP 1.4.1
    • Component: velero
    • Status: ToDo
    • Labels: Customer Escalated, Customer Facing

      Based on the initial findings in https://access.redhat.com/support/cases/#/case/03826642, Engineering is requesting the following two performance test scenarios.

      1. OCP, ODF, OpenShift-Virt, OADP-1.4.x.

      • A single namespace with the following:
      • 30-50 VMs with two volumes attached
        • ODF volumes
        • Block Volume-1: 30-60 GB of densely populated data
        • Block Volume-2: 100-150 GB of sparsely populated data
      • ODF S3 endpoint on the same or a remote cluster

      2. OCP, ODF, OpenShift-Virt, OADP-1.4.x

      • A single namespace with the following:
      • 30-50 VMs with two volumes attached (see the data-population sketch below)
        • ODF volumes
        • Block Volume-1: 30-60 GB of densely populated data
        • Block Volume-2: 100-150 GB of sparsely populated data
      • Remote S3 bucket that is not ODF, MCG, or NooBaa. Engineering can provide a bucket if needed.
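
      For both scenarios, the dense and sparse volume fills can be approximated from inside each guest. A minimal sketch, assuming the two block volumes appear in the guest as /dev/vdb and /dev/vdc; the device names, chunk sizes, and fill ratios are illustrative, not from this ticket:

        #!/usr/bin/env bash
        # Populate the attached block volumes for the backup test.
        # /dev/urandom keeps the data incompressible, so kopia cannot
        # shrink it through compression or zero-block dedup.

        # Block Volume-1: densely populated, ~40 GB written end to end.
        sudo dd if=/dev/urandom of=/dev/vdb bs=1M count=40960 status=progress

        # Block Volume-2: sparsely populated, ~20 GB in 5 chunks spread across
        # the first ~84 GB of the device; seek= skips the gaps in between.
        for i in 0 1 2 3 4; do
            sudo dd if=/dev/urandom of=/dev/vdc bs=1M count=4096 seek=$((i * 20480)) status=progress
        done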

      Engineering is requesting performance data, including time to complete, and tuning recommendations.

      FYI: so far the customer also has issues on OADP 1.3.2. Debugging is in progress and no bugs have been opened yet.

      Customer Related Details:

      • Ally Bank has namespaces with more VMs than the above scenarios
      • Ally Bank backs up VMs off cluster daily in an 8-hour window
      • Ally Bank is utilizing DataMover, not filesystem backups.
      • Ally Bank is, or will be, using multiple concurrent DataUploads configured per the node-agent concurrency documentation (see the sketch below): https://velero.io/docs/main/node-agent-concurrency/
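
      A minimal sketch of that configuration, assuming the ConfigMap-based loadConcurrency mechanism from the linked Velero docs; the ConfigMap name, namespace, and concurrency values below are illustrative, and how the node-agent picks the ConfigMap up can vary by OADP/Velero version. Save the following as node-agent-config.json:

        {
            "loadConcurrency": {
                "globalConfig": 4,
                "perNodeConfig": [
                    {
                        "nodeSelector": {
                            "matchLabels": { "kubernetes.io/hostname": "node1" }
                        },
                        "number": 8
                    }
                ]
            }
        }

      Then create it in the OADP install namespace (openshift-adp here; upstream Velero uses velero):

        kubectl create configmap node-agent-config -n openshift-adp \
            --from-file=node-agent-config.json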

      Workflow requirements:

      • Do not clean up or remove backups during the iterations.
      • Collect the performance stats from the first DM backup (a collection sketch follows this list).
        • Size the data in the backup kopia repository (Example: aws s3 ls --summarize --recursive --human-readable s3://cvpbucketuswest2/velero/kopia/cirros-test/)
        • Calculate the time for each DataUpload to complete (completionTimestamp - creationTimestamp)
      • Add data to all the VMs
        • perhaps 1 GB
      • Execute another DM backup
        • Collect the same size and timing stats
      • Add data to all the VMs
        • perhaps another 1 GB for consistency
      • Execute the DM backup again
        • Collect the same size and timing stats
      • Add a much larger set of data, 10 GB
      • Execute the DM backup again
        • Collect the same size and timing stats
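
      A minimal collection sketch per iteration, assuming jq is installed, the DataUpload CRs live in the openshift-adp namespace, and the bucket path is the example from above:

        #!/usr/bin/env bash
        NS=openshift-adp
        BUCKET=s3://cvpbucketuswest2/velero/kopia/cirros-test/

        # Kopia repository size: the --summarize totals are the last two lines.
        aws s3 ls --summarize --recursive --human-readable "$BUCKET" | tail -n 2

        # Per-DataUpload duration in seconds (completionTimestamp - creationTimestamp).
        kubectl get datauploads -n "$NS" -o json | jq -r '
            .items[]
            | select(.status.completionTimestamp != null)
            | "\(.metadata.name)\t\((.status.completionTimestamp | fromdateiso8601) - (.metadata.creationTimestamp | fromdateiso8601))s"'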

       

            Assignee: David Vaanunu (dvaanunu@redhat.com)
            Reporter: Wes Hayutin (wnstb)