Bug
Resolution: Can't Do
4.16.z
Quality / Stability / Reliability
Important
Description of problem:
The kube-apiserver operator has been stuck in Progressing for more than 48 hours:

    lastTransitionTime: "2025-04-26T08:32:37Z"
    message: 'EncryptionMigrationControllerProgressing: migrating resources to a new write key: [core/configmaps core/secrets]'
    reason: EncryptionMigrationController_Migrating
    status: "True"
    type: Progressing

There is no indication that the migration is still making progress.

Actions taken so far (see the command sketch below):
- We tried to refresh the kube-apiserver-operator pod by scaling it down and up - did not help.
- We tried to force a new revision of the kube-apiserver in case some old state was not being cleared - did not help.

We see throttling of requests/responses from the kube-apiserver operator pod. The cluster looks fine from the etcd point of view, but from the kube-apiserver side some requests are taking longer than they should.

Earlier issues were reported for similar problems:
- https://access.redhat.com/solutions/6515171 - however, there are no errors or failing webhooks, although some additional apiservices return 503 errors to requests sent from the kube-apiserver.
- https://access.redhat.com/solutions/7062880 - issue with time sync; waiting for confirmation whether the issue disappeared after a chrony restart.

However, both of those issues were opened against older versions, hence opening a new bug.
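For reference, a minimal sketch of the checks and the two remediation attempts described above, assuming a standard OCP 4.16 layout (operator deployment kube-apiserver-operator in the openshift-kube-apiserver-operator namespace, encryption key secrets named encryption-key-openshift-kube-apiserver-<N> in openshift-config-managed). The migrated-resources/migrated-timestamp annotation names are an assumption based on the library-go encryption controllers and should be verified on the cluster:

    # Inspect the stuck Progressing condition on the cluster operator and the operator resource
    oc get clusteroperator kube-apiserver -o yaml
    oc get kubeapiserver cluster -o yaml | grep -B2 -A4 EncryptionMigration

    # Check the encryption key secrets; a completed migration should be recorded on the
    # current write-key secret via annotations such as (assumed names)
    # encryption.apiserver.operator.openshift.io/migrated-resources and .../migrated-timestamp
    oc get secrets -n openshift-config-managed | grep encryption-key-openshift-kube-apiserver
    oc get secret -n openshift-config-managed encryption-key-openshift-kube-apiserver-<N> -o yaml

    # Look for client-side throttling in the operator logs
    oc logs -n openshift-kube-apiserver-operator deployment/kube-apiserver-operator | grep -i throttl

    # List apiservices that are not Available (the 503 responses mentioned above)
    oc get apiservice | grep False

    # Remediation attempt 1: refresh the operator pod by scaling it down and up
    oc -n openshift-kube-apiserver-operator scale deployment/kube-apiserver-operator --replicas=0
    oc -n openshift-kube-apiserver-operator scale deployment/kube-apiserver-operator --replicas=1

    # Remediation attempt 2: force a new kube-apiserver revision
    oc patch kubeapiserver cluster --type=merge \
      -p '{"spec":{"forceRedeploymentReason":"clear stale encryption migration state"}}'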
Version-Release number of selected component (if applicable):
OCP 4.16.24
How reproducible:
n/a - not reproducible locally; persistent on the customer cluster
Steps to Reproduce:
1.
2.
3.
Actual results:
Expected results:
Additional info: