Bug
Resolution: Cannot Reproduce
Normal
4.11
Quality / Stability / Reliability
Moderate
Description of problem:
Customer is using RHACM to add new nodes to existing clusters. Clusters at this customer are typically mixed, with some nodes running on vSphere (master nodes, infra nodes) and some being bare metal hosts.
In the past, the customer added bare metal hosts using the methods outlined in the documentation and would now like to use RHACM instead. However, when the customer tried this, the bare metal host was apparently configured as a vSphere node, so the kubelet did not start correctly:
Mar 23 14:27:44 host.example.com hyperkube[480577]: W0323 14:27:44.380125 480577 plugins.go:132] WARNING: vsphere built-in cloud provider is now deprecated. The vSphere provider is deprecated and >
Mar 23 14:27:44 host.example.com hyperkube[480577]: E0323 14:27:44.380426 480577 vsphere.go:523] Failed to get uuid. err: failed to match Prefix, UUID read from the file is GQRCXM3
Mar 23 14:27:44 host.example.com hyperkube[480577]: Error: failed to run Kubelet: could not init cloud provider "vsphere": failed to match Prefix, UUID read from the file is GQRCXM3
Mar 23 14:27:44 host.example.com hyperkube[480577]: Usage:
Mar 23 14:27:44 host.example.com hyperkube[480577]: kubelet [flags]
When setting up new bare metal hosts, the vSphere cloud provider integration obviously should not be configured.
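A minimal sketch (not the actual kubelet source; the prefix and helper name are assumptions) of the UUID prefix check that produces the error above: the vSphere cloud provider reads the machine's DMI serial and expects the "VMware-" prefix that VMware guests report, so a bare metal hardware serial such as "GQRCXM3" fails the check and the kubelet exits.

```python
VSPHERE_SERIAL_PREFIX = "VMware-"  # assumption: the prefix VMware guests report in their DMI serial


def match_prefix(raw_serial: str) -> str:
    """Mimic the failing check: accept only serials that carry the VMware
    guest prefix, and fail with the same style of message otherwise."""
    serial = raw_serial.strip()
    if not serial.startswith(VSPHERE_SERIAL_PREFIX):
        raise ValueError(f"failed to match Prefix, UUID read from the file is {serial}")
    return serial[len(VSPHERE_SERIAL_PREFIX):]
```

On a VMware VM the check passes and the remainder of the serial is used as the node UUID; on a bare metal host it raises, which matches the kubelet error in the log.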
Version-Release number of selected component (if applicable):
RHACM 2.8.0
How reproducible:
The customer switched back to their original method for creating clusters, so the issue cannot be reproduced at the moment.
Steps to Reproduce:
1. Create a vSphere-based cluster
2. Import this cluster into RHACM
3. Add a new bare metal host to this cluster
Actual results:
The node fails to start with the above error message.
Expected results:
The node is added and starts using the bare metal method, without the vSphere cloud provider configured.
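A hedged sketch of the expected behaviour (hypothetical helper, not RHACM or OpenShift code): the cloud provider for a new host would be chosen from the host's own platform rather than inherited from the vSphere-based cluster.

```python
def cloud_provider_for(product_serial: str) -> str:
    """Hypothetical per-node decision: only hosts whose DMI serial carries
    the VMware guest prefix get the vSphere cloud provider; bare metal
    hosts get an empty string, i.e. no external cloud provider."""
    if product_serial.strip().startswith("VMware-"):
        return "vsphere"
    return ""
```

With this kind of per-node decision, a bare metal serial like the one in the log would simply result in the kubelet starting without any cloud provider, instead of failing the vSphere UUID check.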
Additional info:
- Customer raised this issue in March 2023 but did not provide the logs until this month. The environment where the issue appeared is no longer available.