Type: Story
Resolution: Unresolved
Priority: Major
RHDH Install 3286
Summary
Implement a local test runner infrastructure that enables developers to run e2e tests locally against any Kubernetes cluster (OCP, AKS, EKS, GKE), supporting both containerized test execution and headed browser mode for debugging.
Description
Running e2e tests currently requires the full CI pipeline, which makes debugging test failures slow and difficult. Developers cannot easily test PR images before merging, run tests with a visible browser for debugging, or reproduce CI failures locally.
This feature adds a local test runner that allows developers to deploy RHDH and run e2e tests against any cluster they are logged into. It supports OpenShift (OCP), Azure AKS, AWS EKS, Google GKE, and OSD-GCP clusters. Tests can run either in a container (matching the CI environment) or locally in headed mode with a visible browser for debugging.
The runner provides an interactive CLI for selecting job type, image repository, image tag, and run mode. It handles Vault authentication, secret management, and cluster access automatically.
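A hypothetical session is sketched below to illustrate the intended flow. The script name, prompt wording, and defaults are assumptions for illustration only, not the final interface:

  $ ./run-e2e-local.sh
  ? Job type (Helm / Operator / Upgrade / Auth Providers / custom): Helm
  ? Image repository (community / Red Hat / custom): community
  ? Image tag (next / latest / PR-specific / custom): next
  ? Run mode (container / deploy-only + headed): container
  Verifying image tag exists on quay.io... ok
  Authenticating to Vault and fetching secrets... ok
  Deploying RHDH to the current cluster context... ok
  Running e2e tests in container...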
Acceptance Criteria
- Developer can deploy RHDH to any cluster they are logged into (OCP, AKS, EKS, GKE, OSD-GCP)
- Developer can select job type via interactive prompts (Helm, Operator, Upgrade, Auth Providers, or custom)
- Developer can select image repository (community, Red Hat, or custom)
- Developer can select image tag (next, latest, PR-specific, or custom)
- Developer can run tests in a container that mirrors the CI environment
- Developer can deploy only (skipping the containerized test run) and then run tests locally in headed mode for debugging
- Configuration is persisted between runs for quick iteration
- Image existence is verified on quay.io before deployment begins (see the shell sketch after this list)
- Secrets are fetched from Vault securely and not stored on disk
- Early crash detection fails fast if pods enter CrashLoopBackOff state
- Container drops into an interactive shell on failure for debugging
- Documentation covers usage and examples for all supported clusters
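The image-verification, secret-handling, and crash-detection criteria above could look roughly like the following. This is a minimal shell sketch; the quay.io repository, Vault secret path, and namespace are placeholders, not the values the runner will actually use:

  # Verify the tag exists on quay.io before deploying (public Quay API, requires jq).
  REPO="rhdh-community/rhdh"   # placeholder repository
  TAG="next"                   # placeholder tag
  COUNT=$(curl -fsSL "https://quay.io/api/v1/repository/${REPO}/tag/?specificTag=${TAG}" | jq '.tags | length')
  [ "${COUNT:-0}" -gt 0 ] || { echo "Image tag ${TAG} not found in quay.io/${REPO}"; exit 1; }

  # Fetch a secret from Vault into an environment variable only; nothing is written to disk.
  export RHDH_TEST_SECRET=$(vault kv get -field=value secret/placeholder/path)

  # Early crash detection: fail fast if any pod in the namespace hits CrashLoopBackOff.
  NAMESPACE="rhdh-e2e"         # placeholder namespace
  if oc get pods -n "${NAMESPACE}" \
       -o jsonpath='{range .items[*]}{.status.containerStatuses[*].state.waiting.reason}{"\n"}{end}' \
       | grep -q CrashLoopBackOff; then
    echo "A pod is in CrashLoopBackOff; aborting early"
    exit 1
  fi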
Prerequisites
- Podman installed and running
- oc or kubectl CLI installed and logged into target cluster
- Vault CLI installed with access to OpenShift CI vault
- jq installed
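A quick way to sanity-check these prerequisites before a run (illustrative; it assumes oc rather than kubectl, and whatever cluster and Vault the developer is already logged into):

  for cmd in podman oc vault jq; do
    command -v "${cmd}" >/dev/null 2>&1 || { echo "Missing required tool: ${cmd}"; exit 1; }
  done
  podman info >/dev/null || { echo "Podman is installed but not running"; exit 1; }
  oc whoami --show-server                        # prints the API server of the logged-in cluster
  vault token lookup >/dev/null || echo "Not logged into Vault yet"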