THREESCALE-10984

Perf test 2.15 - "smallest" 1M SKU - multi profile


    • Sprint: RHOAM Sprint 56, RHOAM Sprint 57

      WHAT
      Replicate, as closely as possible, the performance testing done in 2.8 so that the results can be compared with 2.15.

      HOW

      1. Run a 6-hour sustained-rate test at 12 rps with the large profile
      2. Run a 6-hour sustained-rate test at 12 rps with the simple profile
      3. Run a 1-hour peak-rate test at 48 rps with the large profile
      4. Run a 1-hour peak-rate test at 48 rps with the simple profile

      Note: rps here means users_per_second. A request-volume estimate for each test is sketched below.
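
      A quick back-of-the-envelope estimate of the request volume each test should generate, useful when cross-checking the product analytics afterwards. The 1:1 mapping of users_per_second to requests per second is an assumption; adjust it if a user iteration issues more than one request:

        # Rough expected request totals per test, assuming each simulated user
        # issues exactly one request per second (rps == users_per_second).
        TESTS = {
            "sustained-large":  {"rps": 12, "hours": 6},
            "sustained-simple": {"rps": 12, "hours": 6},
            "peak-large":       {"rps": 48, "hours": 1},
            "peak-simple":      {"rps": 48, "hours": 1},
        }

        for name, params in TESTS.items():
            total = params["rps"] * params["hours"] * 3600
            print(f"{name}: ~{total:,} requests")
        # sustained tests: ~259,200 requests each; peak tests: ~172,800 each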

      Record the parameters used in Hyperfoil and attach the HTML report to this JIRA.
      We can also consider switching to Locust if the reasons are strong enough (a minimal example is sketched below).
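
      If Locust is evaluated, a minimal locustfile along these lines could reproduce the sustained-rate profile. This is only a sketch under assumptions: the target path and user_key are placeholders, and constant_pacing(1) assumes one request per simulated user per second so that the user count equals the target rps:

        # locustfile.py -- minimal sketch; /echo-api/ and user_key are placeholders
        from locust import HttpUser, constant_pacing, task

        class ApiUser(HttpUser):
            # aim for one task iteration per second per simulated user
            wait_time = constant_pacing(1)

            @task
            def call_backend(self):
                self.client.get("/echo-api/", params={"user_key": "REPLACE_ME"})

      Run it headless for the 6-hour sustained test, e.g. locust -f locustfile.py --headless -u 12 -r 12 -t 6h --host https://<apicast-production-route> --html sustained-large.html (flag names may vary by Locust version).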

      VERIFY / OBSERVE
      For each test ensure that:

      • Resources allocated are in line with the SKU/Core/Daily API Requests recommendations and previous test runs.
      • Observe total CPU and memory usage across the three scaling pods: backend-worker, backend-listener, and apicast-production (see the query sketch after this list).
      • Compare the totals to the 2.8 totals.
      • Ensure that 3scale remains healthy during the tests.
      • No alerts firing, no pods crashing, and no errors reported by 3scale (the product's analytics are more accurate than the testing tool's).
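
      A sketch of how the per-pod CPU totals could be pulled from the cluster's Prometheus during a run; the Prometheus route, token handling, and pod-name patterns are assumptions to adapt to the actual monitoring stack:

        # Query cluster Prometheus for CPU usage of the three scaling deployments.
        import requests

        PROM_URL = "https://prometheus.example.com"  # placeholder route
        TOKEN = "REPLACE_ME"                         # e.g. output of `oc whoami -t`

        query = (
            "sum by (pod) (rate(container_cpu_usage_seconds_total{"
            'pod=~"backend-worker.*|backend-listener.*|apicast-production.*"}[5m]))'
        )

        resp = requests.get(
            f"{PROM_URL}/api/v1/query",
            params={"query": query},
            headers={"Authorization": f"Bearer {TOKEN}"},
        )
        resp.raise_for_status()
        for item in resp.json()["data"]["result"]:
            print(item["metric"]["pod"], item["value"][1], "cores")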

      References
      See the test results here, along with the extrapolated spreadsheet of resource usage.

            People: Valery Mogilevsky (vmogilev_rhmi), Brian Gallagher (bgallagh@redhat.com)