Bug
Resolution: Unresolved
4.19
Moderate
Description of problem
CI is failing for our TechPreview feature because of test failures such as the following:
{ fail [github.com/openshift/origin/test/extended/quota/clusterquota.go:129]: unexpected error: configmaps "testvvc28" is forbidden: exceeded quota: overall-e2e-test-crq-v875j, requested: configmaps=1, used: configmaps=6, limited: configmaps=6 Ginkgo exit error 1: exit with code 1}
This particular failure comes from https://prow.ci.openshift.org/view/gs/test-platform-results/pr-logs/pull/29644/pull-ci-openshift-origin-main-e2e-gcp-ovn-techpreview/1909670847979720704. Search.ci has other similar failures.
Version-Release number of selected component (if applicable)
I have seen this in 4.19 CI jobs.
How reproducible
This is showing up on multiple techpreview CI jobs. Presently, search.ci shows the following stats for the past two days:
pull-ci-openshift-origin-main-e2e-aws-ovn-single-node-techpreview (all) - 18 runs, 78% failed, 21% of failures match = 17% impact
pull-ci-openshift-origin-main-e2e-gcp-ovn-techpreview (all) - 26 runs, 77% failed, 30% of failures match = 23% impact
pull-ci-openshift-origin-main-e2e-metal-ipi-ovn-dualstack-bgp-techpreview (all) - 61 runs, 62% failed, 5% of failures match = 3% impact
If I narrow the search to "techpreview" jobs and extend the time period to include the past 14 days, I get the following:
pull-ci-openshift-origin-main-e2e-aws-ovn-single-node-techpreview (all) - 36 runs, 81% failed, 34% of failures match = 28% impact
pull-ci-openshift-origin-main-e2e-gcp-ovn-techpreview (all) - 71 runs, 85% failed, 23% of failures match = 20% impact
pull-ci-openshift-origin-main-e2e-metal-ipi-ovn-dualstack-bgp-techpreview (all) - 196 runs, 68% failed, 4% of failures match = 3% impact
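For context, search.ci's "impact" figure is the share of all runs that both failed and matched the search. The sketch below reconstructs the first job's numbers from assumed raw counts (18 runs, 14 failed, 3 matching); the counts are inferred from the reported percentages, not taken from search.ci itself.

```python
# Sketch of how search.ci's percentages relate to raw run counts.
# The counts (runs=18, failed=14, matching=3) are an assumption inferred
# from the reported "18 runs, 78% failed, 21% of failures match = 17% impact".

def impact_stats(runs: int, failed: int, matching: int) -> tuple[int, int, int]:
    """Return (percent failed, percent of failures matching, percent impact)."""
    pct_failed = round(failed / runs * 100)    # failures as a share of all runs
    pct_match = round(matching / failed * 100)  # matches as a share of failures
    pct_impact = round(matching / runs * 100)   # matches as a share of all runs
    return pct_failed, pct_match, pct_impact

print(impact_stats(runs=18, failed=14, matching=3))  # → (78, 21, 17)
```

This explains why a job can have a high failure rate but a modest impact: impact only counts the failures attributable to this specific test.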
The oldest failure that I found was from 8 days ago.
Steps to Reproduce
1. Post a PR and have bad luck.
2. Check search.ci.
Actual results
CI fails.
Expected results
CI passes, or fails on some other test failure.
Additional info
On further investigation, I believe the issue happens because our feature creates ConfigMaps in all namespaces, which violates the ClusterResourceQuota test's expectation that nothing else is consuming the quota while it runs.
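For illustration, a ClusterResourceQuota of the kind the test's error message implies (a hard limit of 6 ConfigMaps across the selected namespaces) might look like the sketch below. The object name, selector, and annotation value are illustrative assumptions, not the test's actual spec; the test generates names like overall-e2e-test-crq-v875j at runtime.

```yaml
# Hedged sketch of a ClusterResourceQuota matching the failure's symptoms;
# the selector annotation is an assumption for illustration only.
apiVersion: quota.openshift.io/v1
kind: ClusterResourceQuota
metadata:
  name: overall-e2e-test-crq
spec:
  quota:
    hard:
      configmaps: "6"   # matches "limited: configmaps=6" in the error
  selector:
    annotations:
      openshift.io/requester: e2e-test-user
```

Because the quota is aggregated across every namespace the selector matches, a feature that drops a ConfigMap into all namespaces consumes the budget before the test creates its own objects, producing the "exceeded quota" error above.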
is related to:
OSSM-9076 Stop gateway instance from creating ConfigMaps everywhere (Closed)