Bug
Resolution: Unresolved
Affects Version: 4.20.z
Component area: Quality / Stability / Reliability
Severity: Important
Description of problem:
During cluster provisioning using ProvisioningRequest, cluster configuration fails with a timeout. This appears to be related to the .status.extensions.clusterDetails.nonCompliantAt field being set at the same time as clusterProvisionStartedAt, so the configuration timeout starts counting as soon as cluster provisioning starts, not when the configuration phase actually begins.
Version-Release number of selected component (if applicable):
quay.io/redhat-user-workloads/telco-5g-tenant/o-cloud-manager-fbc-4-20@sha256:824e3f7edd9f5808086ed6af2b9fb417c24b8162782db7a548ac62f22cf8f1e0
How reproducible:
Consistently, even after a hub reinstall (TBD: may not occur when the policy generator template is not present).
Steps to Reproduce:
1. Have a policy generator with the clustertemplates.clcm.openshift.io/templates annotation added
2. Create a ProvisioningRequest with a template listed in the annotation on the policies
3. Wait for the ClusterInstance to take longer than the cluster configuration timeout to complete provisioning. The ProvisioningRequest fails fatally and stops being reconciled.
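For step 1, a sketch of where the annotation would sit on a PolicyGenerator manifest. The annotation key is taken from this report; the resource name and the annotation's value format are assumptions for illustration only:

```yaml
# Hypothetical PolicyGenerator metadata; only the annotation key
# (clustertemplates.clcm.openshift.io/templates) comes from this report.
apiVersion: policy.open-cluster-management.io/v1
kind: PolicyGenerator
metadata:
  name: example-policygen
  annotations:
    clustertemplates.clcm.openshift.io/templates: "example-clustertemplate.v1-0-0"
```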
Actual results:
The ClusterInstance finishes and all policies become compliant, but the ProvisioningRequest reports that cluster configuration timed out, with a lastTransitionTime earlier than when provisioning finished.
Expected results:
Cluster configuration continues and completes successfully.
Additional info:
- blocks: OCPBUGS-64703 cluster configuration times out too early (Closed)
- is cloned by: OCPBUGS-64703 cluster configuration times out too early (Closed)
- is related to: OCPBUGS-63592 ProvisioningRequest reports status fulfilled with outdated policies (Closed)