Bug
Resolution: Done-Errata
Priority: Minor
Affects Version: 4.15
Severity: Moderate
Release Note Type: Release Note Not Required
Description of problem:
The cluster-api ValidatingWebhookConfiguration manifest's metadata.name was renamed between 4.15.0-ec.1 and 4.15.0-ec.2:
$ oc adm release extract --to ec.1 quay.io/openshift-release-dev/ocp-release:4.15.0-ec.1-x86_64
$ oc adm release extract --to ec.2 quay.io/openshift-release-dev/ocp-release:4.15.0-ec.2-x86_64
$ yaml2json <ec.1/0000_30_cluster-api_10_webhooks.yaml | jq -r .metadata.name
validating-webhook-configuration
$ yaml2json <ec.2/0000_30_cluster-api_10_webhooks.yaml | jq -r .metadata.name
cluster-capi-operator
The presence of the stale config then breaks updates across the gap: the operator tries to act on resources that are still guarded by the old webhook config, despite there no longer being anything serving the hooks it points at. Or something like that. In any case, the cluster-api ClusterOperator goes Degraded=True on SyncingFailed with {{Failed to resync for operator: 4.15.0-ec.2 because &{%!e(string=unable to reconcile CoreProvider: unable to create or update CoreProvider: Internal error occurred: failed calling webhook "vcoreprovider.operator.cluster.x-k8s.io": failed to call webhook: the server could not find the requested resource)}}} until the old ValidatingWebhookConfiguration is deleted; after that deletion, the ClusterOperator recovers.
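For reference, a minimal recovery sketch, assuming cluster-admin access on a stuck cluster; the stale object's name comes from the ec.1 manifest extracted above:
$ oc get clusteroperator cluster-api  # confirm Degraded=True on SyncingFailed
$ oc get validatingwebhookconfiguration validating-webhook-configuration  # the stale ec.1 config
$ oc delete validatingwebhookconfiguration validating-webhook-configuration  # after this, the ClusterOperator recovers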
Version-Release number of selected component (if applicable):
4.15.0-ec.2.
How reproducible:
Untested, but I'd guess 100%.
Steps to Reproduce:
1. Install a tech-preview 4.15.0-ec.1 cluster.
2. Request an update to 4.15.0-ec.2.
3. Wait an hour or so, watching the cluster-api ClusterOperator as sketched below.
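A sketch of one way to watch for the failure during step 3; the condition type matches what is reported above:
$ oc get clusteroperator cluster-api -o jsonpath='{.status.conditions[?(@.type=="Degraded")].status}{"\n"}'  # flips to True once the sync fails
$ oc get clusterversion version  # update progress stalls while cluster-api is degraded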
Actual results:
The cluster-api ClusterOperator is Degraded=True, blocking further progress in the ClusterVersion update.
Expected results:
ClusterVersion update happily completes.
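A quick way to confirm the expected outcome on the same cluster:
$ oc adm upgrade  # should report the cluster is up to date at 4.15.0-ec.2
$ oc get clusteroperators  # no entries should show DEGRADED=True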