Type: Bug
Resolution: Done
Priority: Critical
Affects Version: 4.20.0
Impact: Quality / Stability / Reliability
Description of problem:
When installing any cluster with the Assisted Installer, one of the control-plane (CP) nodes acts as the bootstrap. Only after the remaining CP nodes have formed a full control plane (running all of the required pods) does the bootstrap node install itself as a regular CP node. For TNF (two-node with fencing), this means a full control plane has to be formed by a single CP node, the one that is not the bootstrap, and that never happens.
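For illustration, the stalled state can be watched from the non-bootstrap CP node while the installation runs. A minimal sketch, assuming SSH access as the standard "core" user; the node name is hypothetical:

  # SSH to the non-bootstrap control-plane node (hostname is hypothetical)
  $ ssh core@master-1.example.com

  # Static pod manifests handed to the kubelet so far; a fully formed control
  # plane would include etcd, kube-apiserver, kube-controller-manager and
  # kube-scheduler
  $ ls /etc/kubernetes/manifests/

  # Control-plane containers actually running on the node
  $ sudo crictl ps --name 'etcd|kube-apiserver|kube-controller-manager|kube-scheduler'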
Version-Release number of selected component (if applicable):
4.20.0
How reproducible:
Start an installation of a TNF cluster with one of the CP nodes acting as the bootstrap.
Steps to Reproduce:
1. Start an installation of a TNF cluster with one of the CP nodes acting as the bootstrap (see the sketch below for confirming which host was picked as the bootstrap).
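To confirm which host the Assisted Installer picked as the bootstrap, the assisted-service REST API reports a bootstrap flag on each host. A hedged sketch; the service URL, bearer token, and cluster ID are placeholders, and authentication details depend on the deployment:

  # List the cluster's hosts and show each host's role and bootstrap flag
  $ curl -s -H "Authorization: Bearer $TOKEN" \
      "$AI_URL/api/assisted-install/v2/clusters/$CLUSTER_ID/hosts" \
      | jq '.[] | {hostname: .requested_hostname, role, bootstrap, status}'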
Actual results:
Cluster installation is stuck. The state of the control-plane pods on the non-bootstrap CP node:
1. The scheduler is running.
2. The etcd pod is running and ready; sometimes it becomes the leader and sometimes it does not.
3. The installer pod for kube-controller-manager is failing because a ConfigMap named client-ca is missing in the openshift-kube-controller-manager namespace.
4. There is nothing for kube-apiserver: neither an installer pod nor a static pod manifest.
Each observation can be checked as sketched below.
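A hedged sketch for checking each observation from the non-bootstrap CP node; the container names, the kubeconfig path, and the availability of the oc binary on the node are assumptions that may differ per environment:

  # 1. The scheduler container
  $ sudo crictl ps --name kube-scheduler

  # 2. etcd health and leadership via the etcdctl sidecar container
  #    (the "IS LEADER" column shows whether this member currently leads)
  $ ETCDCTL=$(sudo crictl ps -q --name etcdctl | head -n1)
  $ sudo crictl exec "$ETCDCTL" etcdctl endpoint status -w table

  # 3. The failing kube-controller-manager installer container and its logs
  $ sudo crictl ps -a --name installer
  $ sudo crictl logs "$(sudo crictl ps -aq --name installer | head -n1)"

  # While the bootstrap node is up it still serves the cluster API, so the
  # missing ConfigMap can be queried through it (kubeconfig path is an assumption)
  $ oc --kubeconfig /etc/kubernetes/kubeconfig \
      -n openshift-kube-controller-manager get configmap client-ca

  # 4. kube-apiserver: neither a static manifest nor a container is present
  $ ls /etc/kubernetes/manifests/
  $ sudo crictl ps -a --name kube-apiserver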
Expected results:
Cluster installs successfully.
Additional info:
https://redhat-internal.slack.com/archives/C07ABRBBDK3/p1749566749032899