
CNV-31962: Performance continuous mass migration - test fix in CNV 4.13.3


    • Type: Story
    • Resolution: Obsolete
    • Priority: Critical
    • Component: CNV Perf/Scale

      Clone of https://issues.redhat.com/browse/CNV-29264, testing the fix in CNV 4.13.3.

      Test the following scenario:

      • Install a 6-node cluster with OCS block storage
      • Verify the VMs are configured with io='native'
      • Create and start 1500 small Cirros VMs on the 6 worker nodes
      • Create and start 6 Windows VMs
      • Create 15 namespaces with 100 VMs in each one
      • Create the following resource quotas for the Cirros VMs (see the ResourceQuota sketch after this list):
          • requests.memory 64M
          • requests.cpu 100m
          • requests.storage 1Gi
      • Create the following resource quotas for the Windows VMs:
          • requests.memory 1Gi
          • requests.cpu 1
          • requests.storage 16Gi
      • Create a workload inside 100 VMs
      • Create a workload on the VMs across the 6 worker nodes
      • Check that all VMs are alive (SSH connection)
      • Run continuous mass migration for 12 hours with MCP rollouts, as in bug 2207682 (see the migration sketch after this list)
      • Record system metrics and export them to the performance team's Grafana server
      • Check that the general exit criteria passed successfully:
          • No critical alerts in Prometheus during the test (see the Prometheus alert check sketch after this list)
          • VM scheduling is well balanced across the nodes
          • No pod crashes/restarts
          • System resource utilization is within range and no degradation was noticed
          • All VMs stay alive during the migrations; no crashes or blue screens were noticed
          • The storage system state is healthy
      • For all live migration tests (see the VMIM/VMI verification sketch after this list):
          • All VMIM objects were created
          • The status of each VMIM is finished successfully (Phase: Succeeded)
          • The VMI migration state finished successfully, with no errors and an end timestamp
          • The VMI node changed
          • The VMI age stayed the same (live migrated)
      • Check that all VMs are alive (SSH connection) and that uptime indicates the VMs were not rebooted (see the SSH/uptime sketch after this list)
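
      The ResourceQuota sketch referenced above, as a minimal example using the Python kubernetes client. It assumes the quotas are namespace-scoped ResourceQuota objects and that the Cirros namespaces follow a cirros-ns-<i> naming pattern; the namespace pattern and quota name are illustrative assumptions, while the hard values are the ones listed in the scenario.

      from kubernetes import client, config

      config.load_kube_config()
      core = client.CoreV1Api()

      # Values copied from the scenario above; namespace and quota names are assumed.
      CIRROS_HARD = {
          "requests.memory": "64M",
          "requests.cpu": "100m",
          "requests.storage": "1Gi",
      }

      for i in range(1, 16):  # 15 Cirros namespaces, 100 VMs in each
          quota = client.V1ResourceQuota(
              metadata=client.V1ObjectMeta(name="cirros-vm-quota"),
              spec=client.V1ResourceQuotaSpec(hard=CIRROS_HARD),
          )
          core.create_namespaced_resource_quota(namespace=f"cirros-ns-{i}", body=quota)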
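
      The migration sketch referenced above: the continuous mass migration can be driven by repeatedly creating VirtualMachineInstanceMigration (VMIM) objects through the CustomObjectsApi. The helper below is a sketch only; the actual tooling is described in the linked test plan.

      from kubernetes import client, config

      config.load_kube_config()
      crd = client.CustomObjectsApi()

      def migrate_vmi(namespace: str, vmi_name: str) -> dict:
          # Ask KubeVirt to live-migrate one VMI by creating a VMIM object for it.
          body = {
              "apiVersion": "kubevirt.io/v1",
              "kind": "VirtualMachineInstanceMigration",
              "metadata": {"generateName": f"{vmi_name}-migration-"},
              "spec": {"vmiName": vmi_name},
          }
          return crd.create_namespaced_custom_object(
              group="kubevirt.io", version="v1", namespace=namespace,
              plural="virtualmachineinstancemigrations", body=body,
          )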
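
      The Prometheus alert check sketch referenced above, for the "no critical alerts" exit criterion. The route URL and bearer token are placeholders to be filled in from the cluster under test.

      import requests

      PROM_URL = "https://<prometheus-k8s route>"  # placeholder, e.g. the route in openshift-monitoring
      TOKEN = "<bearer token>"                     # placeholder, e.g. from `oc whoami -t`

      resp = requests.get(
          f"{PROM_URL}/api/v1/alerts",
          headers={"Authorization": f"Bearer {TOKEN}"},
          verify=False,  # self-signed certs on the test cluster
          timeout=30,
      )
      resp.raise_for_status()
      critical = [
          a for a in resp.json()["data"]["alerts"]
          if a["labels"].get("severity") == "critical" and a["state"] == "firing"
      ]
      assert not critical, f"critical alerts firing: {[a['labels']['alertname'] for a in critical]}"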
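
      The VMIM/VMI verification sketch referenced above, covering the per-migration exit criteria: VMIM phase Succeeded, VMI migration state completed with an end timestamp and no error, and the VMI moved to a different node. The node_before value is assumed to have been recorded before the migration started.

      from kubernetes import client, config

      config.load_kube_config()
      crd = client.CustomObjectsApi()

      def check_migration(namespace: str, vmim_name: str, vmi_name: str, node_before: str) -> None:
          vmim = crd.get_namespaced_custom_object(
              group="kubevirt.io", version="v1", namespace=namespace,
              plural="virtualmachineinstancemigrations", name=vmim_name,
          )
          assert vmim["status"]["phase"] == "Succeeded", vmim["status"]

          vmi = crd.get_namespaced_custom_object(
              group="kubevirt.io", version="v1", namespace=namespace,
              plural="virtualmachineinstances", name=vmi_name,
          )
          state = vmi["status"]["migrationState"]
          assert state.get("completed") and not state.get("failed")
          assert state.get("endTimestamp")
          # The VMI must have moved to another node while staying the same object (age unchanged).
          assert vmi["status"]["nodeName"] != node_before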
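
      The SSH/uptime sketch referenced above, checking that every VM answers over SSH and that guest uptime only grows across the run (a drop would mean the guest rebooted). The address list and guest user are illustrative assumptions.

      import subprocess

      VM_ADDRESSES = ["10.0.0.11", "10.0.0.12"]  # assumed: collected from each VMI status
      SSH_USER = "cirros"                        # assumed guest user

      def uptime_seconds(address: str) -> float:
          # Raises if the VM is not reachable over SSH.
          out = subprocess.run(
              ["ssh", "-o", "ConnectTimeout=10", "-o", "StrictHostKeyChecking=no",
               f"{SSH_USER}@{address}", "cat /proc/uptime"],
              capture_output=True, text=True, check=True, timeout=30,
          )
          return float(out.stdout.split()[0])

      before = {addr: uptime_seconds(addr) for addr in VM_ADDRESSES}
      # ... run the 12-hour continuous mass migration ...
      for addr in VM_ADDRESSES:
          # Uptime must keep increasing; a drop means the guest rebooted during migration.
          assert uptime_seconds(addr) > before[addr], f"{addr} appears to have rebooted"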

      Full test plan:

      https://docs.google.com/document/d/1dK5XPgU8fooMONmMGIlAh_LWAySBHue_wMo63XOwIO0/edit?usp=sharing

       
