Type: Bug
Resolution: Not a Bug
Priority: Normal
Version: 4.17.z
Severity: Moderate
Description of problem:
A customer reported seeing old "serving-cert-*" Secrets in the openshift-kube-controller-manager namespace:
serving-cert-64   kubernetes.io/tls   2   334d
serving-cert-65   kubernetes.io/tls   2   334d
serving-cert-66   kubernetes.io/tls   2   334d
serving-cert-67   kubernetes.io/tls   2   271d
serving-cert-68   kubernetes.io/tls   2   271d
serving-cert-82   kubernetes.io/tls   2   71d
serving-cert-83   kubernetes.io/tls   2   8d
serving-cert-84   kubernetes.io/tls   2   8d
serving-cert-85   kubernetes.io/tls   2   8d
serving-cert-86   kubernetes.io/tls   2   8d
The customer is aware of OCPBUGS-42468 and OCPBUGS-49343. However, in this case the number of leftover Secrets is greater than 6, and there is no "lastFailedRevision" entry in the kubeapiserver status:
latestAvailableRevision: 393
latestAvailableRevisionReason: ""
nodeStatuses:
- currentRevision: 393
  nodeName: example-bvxvb-master-2
- currentRevision: 393
  nodeName: example-bvxvb-master-1
- currentRevision: 393
  nodeName: example-bvxvb-master-0
Can these Secrets be deleted? Is it a bug that these are not cleaned up?
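The absence of a failed rollout can be checked directly against the kubeapiserver operator status. A minimal sketch, using the status snippet quoted above as sample input (on a live cluster the equivalent input would come from `oc get kubeapiserver cluster -o yaml`):

```shell
# Sample data copied from the kubeapiserver status in this report; on a
# live cluster, replace this heredoc-style variable with the output of:
#   oc get kubeapiserver cluster -o yaml
status='latestAvailableRevision: 393
latestAvailableRevisionReason: ""
nodeStatuses:
- currentRevision: 393
  nodeName: example-bvxvb-master-2
- currentRevision: 393
  nodeName: example-bvxvb-master-1
- currentRevision: 393
  nodeName: example-bvxvb-master-0'

# No "lastFailedRevision" line means no node is stuck on a failed revision,
# so the failure mode from OCPBUGS-42468/OCPBUGS-49343 does not apply here.
if printf '%s\n' "$status" | grep -q 'lastFailedRevision'; then
  echo "failed revision present"
else
  echo "no failed revision recorded"
fi
```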
Version-Release number of selected component (if applicable):
OpenShift Container Platform 4.17.45
How reproducible:
Only observed on the customer's cluster
Steps to Reproduce:
- Observe a running cluster
- Review "serving-cert-" Secrets in the openshift-kube-controller-manager namespace
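The review step can be sketched as follows: from the Secret names, determine which revisions fall outside the 5 most recent and are therefore candidates the pruner would normally have removed. Sample names are copied from this report; on a live cluster the list would come from `oc -n openshift-kube-controller-manager get secrets -o name`. The `head -n -5` form assumes GNU coreutils:

```shell
# Sample Secret names copied from the customer's cluster in this report.
secrets='serving-cert-64
serving-cert-65
serving-cert-66
serving-cert-67
serving-cert-68
serving-cert-82
serving-cert-83
serving-cert-84
serving-cert-85
serving-cert-86'

# Sort numerically on the revision suffix (3rd '-'-separated field) and
# drop the newest 5; what remains is what the pruner should have removed.
printf '%s\n' "$secrets" | sort -t- -k3,3n | head -n -5
```

Against the sample data this prints serving-cert-64 through serving-cert-68, matching the stale Secrets the customer is asking about.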
Actual results:
In addition to the 5 most recent "serving-cert-*" Secrets, older Secrets also exist
Expected results:
Only the 5 most recent "serving-cert-*" Secrets exist
Additional info:
- must-gather is available in Support Case 04365354
- A very similar issue is described in OCPBUGS-42468 and OCPBUGS-49343
- relates to OCPBUGS-49343 "On some clusters, old "serving-cert-*" Secrets in openshift-kube-controller-manager are not cleaned up" (Closed)