MGDAPI-4210

K04 - [DESTRUCTIVE] - performance - Run performance test against RHOAM (3scale + user SSO)

    • Resolution: Obsolete

      Origin: tests/performance/k04-run-performance-test-against-rhoam-3scale-user-sso.md

      Description

      Run performance tests against 3scale + user SSO to validate the advertised load. The time estimate does not include cluster provisioning.

      Prerequisites

      • oc CLI v4.3
      • ocm CLI installed locally
      • jq v1.6 installed locally
      • Python environment (python 3.x, pip, pipenv)
      • RHOAM cluster ready
        • all the alerts should be green
        • all the automated tests should pass
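
      A quick sanity check that the local tooling is in place (assuming the CLIs are on your PATH):

      oc version --client
      ocm version
      jq --version
      python3 --version && pip --version && pipenv --version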

      Steps

      1. Login via oc as a user with cluster-admin role (kubeadmin):
      oc login --token=<TOKEN> --server=https://api.<CLUSTER_NAME>.s1.devshift.org:6443
      
      2. Make sure nobody else is using the cluster while the test case is performed, so that the performance test results are not affected by any unrelated workload.
      3. Create a customer-like application as customer-admin01 (or another user from the dedicated-admin group):
      oc new-project httpbin
      oc new-app jsmadis/httpbin
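
      Before continuing, it may be worth confirming the application came up (the exact resource type created by oc new-app can differ between oc versions, so adjust the rollout command if needed):

      oc -n httpbin rollout status deployment/httpbin
      oc -n httpbin get pods,svc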
      
      4. In terminal window #2, run the alerts-during-perf-testing script to capture alerts pending/firing during the performance test run.
      5. Configure rate limiting to allow enough requests per minute (see the sketch below):
        • go to the redhat-rhmi-operator namespace
        • open the sku-limits-managed-api-service ConfigMap
        • edit the value of requests_per_unit
        • wait for the redeploy of the ratelimit pods in the redhat-rhmi-marin3r namespace
          • should happen automatically within a few minutes

      Note: This is not possible for installations done via the addon flow, since Hive would revert your modifications
      to whatever is set in the sku-limits.yaml.j2 file in the Managed Tenants repository.
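
      A minimal sketch of that ConfigMap change with oc (the exact requests_per_unit value depends on the load being validated):

      oc -n redhat-rhmi-operator get configmap sku-limits-managed-api-service -o yaml
      oc -n redhat-rhmi-operator edit configmap sku-limits-managed-api-service
      oc -n redhat-rhmi-marin3r get pods -w    # wait for the ratelimit pods to redeploy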

      6. In terminal window #2, run the following script for alert watching
      7. Run the performance test suite

      How to do this is described in MGDAPI-238 and in Austin's doc; use the trepel fork. To validate the advertised load, use the rhsso_tokens benchmark, which is set to have 10% of login-flow requests. You will need to adjust maxSessions (~6000), usersPerSec (~25 to validate the 20M load), duration, maxDuration, and http.sharedConnections (~1200).
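
      Once the benchmark is adjusted, a typical way to start it from the Hyperfoil CLI looks like the following (rhsso_tokens.yaml is an illustrative file name; use the benchmark file from the fork above):

      bin/cli.sh
      connect <hyperfoil-url-without-protocol> -p <port-8090-is-default>
      upload rhsso_tokens.yaml
      run rhsso_tokens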

      8. Create a perf-test-start-time.txt file as described in the capture_resource_metrics script (see the sketch below)
        • the actual performance test run doesn't start immediately; the performance test suite first creates various 3scale (Product, Backend, Application, Application Plan) and SSO (realm, client, users) entities
        • it is best to track the Hyperfoil controller log to get the exact time when the rampUp phase starts
        • create the file in the directory where the script resides
      9. Create a perf-test-end-time.txt file for the capture_resource_metrics script
        • create the file in the directory where the script resides
        • to get the exact time, track the Hyperfoil controller log
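
      A minimal sketch of creating the marker files, assuming the script accepts plain UTC timestamps (check capture_resource_metrics for the exact format it expects):

      cd <directory-where-capture_resource_metrics-resides>
      date -u '+%Y-%m-%d %H:%M:%S' > perf-test-start-time.txt
      date -u '+%Y-%m-%d %H:%M:%S' > perf-test-end-time.txt   # run this one after the test finishes
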
      10. Collect the data about the performance test run
        • from alerts-during-perf-testing
        • from Hyperfoil
          • install Hyperfoil locally
          • bin/cli.sh
          • connect <hyperfoil-url-without-protocol> -p <port-8090-is-default>
          • runs # to see all the runs
          • status <your-run-name>
          • stats <your-run-name>
          • export -f json -d . <your-run-name> # to export the data
          • use the report tool to generate HTML out of the exported data
        • review alerts based on the outcome of the script for alert watching
          • there should be no alerts firing for the 20M benchmark
        • eye review of various Grafana Dashboards, see this guide on how to do it
        • use the capture_resource_metrics script to get the data
        • add a new column about the run to the Load Testing spreadsheet
          • fill in the first few rows with all the relevant information about the benchmark used
          • the rest of the rows should be filled in based on the capture_resource_metrics script
          • add any additional info (e.g. links to the Hyperfoil report, Grafana Dashboard snapshots, etc.)
      11. Analyse the results
        • compare with the previous runs
      12. Attach the spreadsheet to the JIRA ticket
        • store the Hyperfoil report in Google Drive
        • store the Grafana Dashboard snapshot(s) there too if needed

      General guidelines for testing

      https://github.com/integr8ly/integreatly-operator/tree/master/test-cases/common/general-guidelines.md
