-
Bug
-
Resolution: Cannot Reproduce
-
Critical
-
None
-
4.14.0
-
None
-
No
-
Approved
-
False
-
Description of problem:
The DeploymentConfig API appears to still be served even when the DeploymentConfig capability is not enabled. See [~trking]'s message here: https://redhat-internal.slack.com/archives/C032HSVS71T/p1696990707730369?thread_ts=1696989256.074269&cid=C032HSVS71T. The controllers may not be running, but the openshift-apiserver appears to still be serving the DeploymentConfig API, which it should not be.
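For context, a minimal probe of this behavior (not part of the original report; the program below and its use of client-go's discovery client are illustrative assumptions) asks the apiserver's discovery endpoint whether apps.openshift.io/v1 still lists deploymentconfigs. On a cluster with the capability disabled it should report the resource as not served; per the behavior above it currently reports it as available:

package main

import (
	"fmt"
	"os"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the same kubeconfig that "oc" would use ($KUBECONFIG).
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Ask discovery what apps.openshift.io/v1 serves.
	resources, err := dc.ServerResourcesForGroupVersion("apps.openshift.io/v1")
	if apierrors.IsNotFound(err) {
		fmt.Println("apps.openshift.io/v1 is not served (expected with the capability disabled)")
		return
	}
	if err != nil {
		panic(err)
	}
	for _, r := range resources.APIResources {
		if r.Name == "deploymentconfigs" {
			fmt.Println("deploymentconfigs is still served (the behavior this bug reports)")
			return
		}
	}
	fmt.Println("apps.openshift.io/v1 is served, but without deploymentconfigs")
}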
Version-Release number of selected component (if applicable):
4.14.0
How reproducible:
Always
Steps to Reproduce:
1. Install a cluster without the DeploymentConfig capability enabled.
2. Run "oc get deploymentconfig".
Actual results:
The command returns "No resources found in $current namespace", i.e. the DeploymentConfig API is still being served.
Expected results:
It should fail with: error: the server doesn't have a resource type "deploymentconfig"
There is also metric-related logic that needs to be made conditional on whether the capability is enabled:
https://github.com/openshift/openshift-state-metrics/blob/774cb2ff4b9e21c452650643528c6fa190c7885a/pkg/collectors/deployment_config.go#L106
because the list-watcher in that code will fail when the DeploymentConfig API endpoint does not exist.
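A rough sketch of the gating that implies, assuming the collector registration can be wrapped in a helper (registerDeploymentConfigCollectorIfServed and the register callback below are hypothetical stand-ins, not the actual openshift-state-metrics API): check discovery for the deploymentconfigs resource first, and skip the collector, and therefore its list/watch, when the API is not served.

import (
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/client-go/discovery"
	"k8s.io/client-go/rest"
)

// registerDeploymentConfigCollectorIfServed registers the DeploymentConfig
// collector only when apps.openshift.io/v1 actually serves deploymentconfigs,
// so its list/watch never targets a missing endpoint. "register" stands in
// for the real registration done in pkg/collectors/deployment_config.go.
func registerDeploymentConfigCollectorIfServed(cfg *rest.Config, register func()) error {
	client, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		return err
	}
	resources, err := client.ServerResourcesForGroupVersion("apps.openshift.io/v1")
	if err != nil {
		if apierrors.IsNotFound(err) {
			// Capability disabled: skip the collector entirely.
			return nil
		}
		return err
	}
	for _, r := range resources.APIResources {
		if r.Name == "deploymentconfigs" {
			register()
			return nil
		}
	}
	return nil
}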
This is a blocker bug because if a customer installs a 4.14.0 cluster with the DeploymentConfig capability disabled and then creates some DCs, those DCs will be stored in etcd despite no controller acting on them. If they later upgrade to a 4.14.z where the issue is fixed, those etcd objects will be orphaned/stuck in etcd with no clean way to access or remove them.