Bug
Resolution: Unresolved
Critical
4.16.z
Quality / Stability / Reliability
Important
Production
Description of problem:
The kube-apiserver cluster operator is in a degraded state:
$ oc get co kube-apiserver
NAME             VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
kube-apiserver   4.16.30   True        False         True       109d    ConfigObservationDegraded: secret "initial-kube-apiserver-server-ca" not found
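For reference, the full Degraded condition can also be pulled straight out of the clusteroperator status with a standard jsonpath query (shown here as a diagnostic aid, not output captured from this cluster):
$ oc get co kube-apiserver -o jsonpath='{.status.conditions[?(@.type=="Degraded")].message}{"\n"}'
$ oc get co kube-apiserver -o jsonpath='{.status.conditions[?(@.type=="Degraded")].lastTransitionTime}{"\n"}'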
A secret with that name does not exist in the openshift-config namespace:
$ oc -n openshift-config get secret initial-kube-apiserver-server-ca
Error from server (NotFound): secrets "initial-kube-apiserver-server-ca" not found
A configmap with that name does exist in the openshift-config namespace:
$ oc -n openshift-config get configmap initial-kube-apiserver-server-ca
NAME                               DATA   AGE
initial-kube-apiserver-server-ca   1      109d
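To make the secret-vs-configmap mismatch visible in one step, both resource types can be listed together (a quick check; on this cluster the expected result is a single configmap line and no secret line):
$ oc -n openshift-config get secrets,configmaps | grep initial-kube-apiserver-server-ca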
We have tried recreating the kube-apiserver-operator pod, but it is still logging the same error:
E1007 18:42:42.550863 1 base_controller.go:268] ConfigObserver reconciliation failed: secret "initial-kube-apiserver-server-ca" not found
The configmap exists and is readable, but the error refers to a secret, not a configmap.
E1007 18:42:48.586856 1 base_controller.go:268] ConfigObserver reconciliation failed: secret "initial-kube-apiserver-server-ca" not found
I1007 18:42:48.592590 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"c3c0ddf9-499c-48f4-a027-3cced7f9dd0b", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Degraded message changed from "ConfigObservationDegraded: secret \"initial-kube-apiserver-server-ca\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, configmaps: bound-sa-token-signing-certs-47,config-47,etcd-serving-ca-47,kube-apiserver-audit-policies-47,kube-apiserver-cert-syncer-kubeconfig-47,kube-apiserver-pod-47,kubelet-serving-ca-47,sa-token-signing-certs-47]" to "ConfigObservationDegraded: secret \"initial-kube-apiserver-server-ca\" not found"
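For reference, restarting the operator and re-checking its logs can be done roughly like this (deployment name and namespace taken from the event above; a sketch, not a verified workaround):
$ oc -n openshift-kube-apiserver-operator rollout restart deployment/kube-apiserver-operator
$ oc -n openshift-kube-apiserver-operator logs deployment/kube-apiserver-operator | grep initial-kube-apiserver-server-ca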
So there is a controller called ConfigObserver that is trying to read this config and failing. The ConfigObserver controller/reconciler does not live in the openshift-kube-apiserver-operator code itself; it is imported from https://github.com/openshift/library-go/tree/master/pkg/operator/configobserver, and the openshift-kube-apiserver-operator is simply reporting the error it gets back from it.
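Assuming the usual library-go layout, the config observation controller writes its result into spec.observedConfig of the kubeapiserver/cluster operator resource and sets a ConfigObservationDegraded condition there, which is then rolled up into the clusteroperator message. Inspecting that resource directly may show where the observation stops (a diagnostic sketch only):
$ oc get kubeapiserver cluster -o jsonpath='{.status.conditions[?(@.type=="ConfigObservationDegraded")].message}{"\n"}'
$ oc get kubeapiserver cluster -o yaml | grep -A 20 'observedConfig:'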
We assume the root of the issue is an OCP bug in the ConfigObserver: either it cannot read a configmap that does exist, or it expects a secret rather than a configmap and therefore fails, since there is no secret in openshift-config named initial-kube-apiserver-server-ca (only a configmap).
It's likely that this is something specific to the customer's environment since we're not seeing this in other clusters.
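If it helps with triage, the relevant state can be captured with the standard inspect tooling (listed only as a pointer; resource and namespace names are taken from the output above):
$ oc adm inspect clusteroperator/kube-apiserver
$ oc adm inspect ns/openshift-kube-apiserver-operator ns/openshift-config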