Bug
Resolution: Unresolved
Priority: Normal
Severity: Moderate
Description of problem:
failed to reconcile cluster_network_provisioned hs-mc cluster:
E0205 08:23:12.411922 1 logger.go:188] gitlab.cee.redhat.com/service/osd-fleet-manager/internal/osdfm/internal/workers.processClusterCreation: failed to reconcile cluster_network_provisioned hs-mc cluster cuhht82papvtf5aj9llg status: failed to create cluster for request 'cuhht82papvtf5aj9llg': failed to create cluster: failed to create OCM cluster: FLEET-MGMT-9: status is 400, identifier is '400', code is 'CLUSTERS-MGMT-400', at '2025-02-05T08:23:12Z' and operation identifier is '54d3c5e9-6eed-4e46-92a8-1d542c4e3dce': Failed to assume role with ARN 'arn:aws:iam::112795846264:role/Installer': operation error STS: AssumeRole, https response error StatusCode: 403, RequestID: 33674cee-ef33-4335-9778-32a5fc2a5cfb, api error AccessDenied: User: arn:aws:sts::896164604406:assumed-role/RH-Managed-OpenShift-Installer/OCM is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam::112795846264:role/Installer
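The `AccessDenied` on `sts:AssumeRole` above means the `Installer` role in account 112795846264 does not trust the calling principal. A minimal sketch (hypothetical helper, not part of osd-fleet-manager) of the check AWS performs, useful for inspecting a suspect trust policy document offline:

```python
import json

# Hypothetical helper: returns True if caller_role_arn is listed as an
# allowed AWS principal for sts:AssumeRole in a role's trust policy.
# Note: trust policies name the underlying IAM role
# (arn:aws:iam::<acct>:role/<name>), not the temporary STS session ARN
# (arn:aws:sts::<acct>:assumed-role/<name>/<session>) shown in the error.
def principal_allowed(trust_policy: dict, caller_role_arn: str) -> bool:
    for stmt in trust_policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        if "sts:AssumeRole" not in actions:
            continue
        principals = stmt.get("Principal", {}).get("AWS", [])
        if isinstance(principals, str):
            principals = [principals]
        if caller_role_arn in principals:
            return True
    return False

if __name__ == "__main__":
    # Example trust policy resembling the one the Installer role would need.
    policy = json.loads("""{
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::896164604406:role/RH-Managed-OpenShift-Installer"},
        "Action": "sts:AssumeRole"
      }]
    }""")
    print(principal_allowed(
        policy, "arn:aws:iam::896164604406:role/RH-Managed-OpenShift-Installer"))
```

If the check fails for the installer's role ARN, the trust policy (or its propagation after account provisioning) is the first place to look.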
Version-Release number of selected component (if applicable):
N/A
How reproducible:
Appears randomly. It happened yesterday while testing the SC autoscaler in the qe-testing sector in integration. I removed the failed cluster, and the autoscaling manager created another MC that hit the same error. After a config change (and redeployment of the qe-testing pod), the issue was gone.
Steps to Reproduce:
- ...
Actual results:
Management cluster creation fails: OCM cannot assume the 'arn:aws:iam::112795846264:role/Installer' role (sts:AssumeRole returns 403 AccessDenied), and the cluster is never provisioned.
Expected results:
OCM assumes the Installer role successfully and the management cluster is created.
Additional info:
Is cloned by: ACM-17648 - failed to create multiregion cloud trail (New)