Bug
Resolution: Done
Normal
Description of problem:
When machines on the remote cluster are in the Failed phase, no condition is surfaced in the MachinePool or ClusterDeployment status, so from the hive side there is no way to tell that the remote machines have hit a problem.
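On the remote cluster the failure details are available on the Machine objects themselves; for example, assuming the standard machine.openshift.io/v1beta1 status fields (phase, errorReason, errorMessage), with <machine-name> as a placeholder:
$ oc get machine <machine-name> -n openshift-machine-api -o jsonpath='{.status.phase}{"\n"}{.status.errorReason}{"\n"}{.status.errorMessage}{"\n"}'
None of this information is propagated back to the hive-side resources.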
Version-Release number of selected component (if applicable):
How reproducible:
Always
Steps to Reproduce:
1. Install a cluster via hive
2. Manually terminate a worker instance from the cloud platform console; this will put the corresponding machine into the Failed phase
3. Log in to the target (remote) cluster and check the machines:
$ oc get machines -n openshift-machine-api
NAME                                         PHASE     TYPE        REGION      ZONE         AGE
lwanhive0427-6zljw-master-0                  Running   m4.xlarge   us-east-2   us-east-2a   4h40m
lwanhive0427-6zljw-master-1                  Running   m4.xlarge   us-east-2   us-east-2b   4h40m
lwanhive0427-6zljw-master-2                  Running   m4.xlarge   us-east-2   us-east-2c   4h40m
lwanhive0427-6zljw-worker-us-east-2a-msrgf   Failed    m4.xlarge   us-east-2   us-east-2a   4h29m
lwanhive0427-6zljw-worker-us-east-2b-kv8fn   Failed    m4.xlarge   us-east-2   us-east-2b   4h29m
lwanhive0427-6zljw-worker-us-east-2c-cpmdc   Failed    m4.xlarge   us-east-2   us-east-2c   83m
4. Go back to the hive cluster and check the MachinePool and ClusterDeployment status (example commands below)
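For example, the conditions can be inspected with the following commands, where the names in angle brackets are placeholders for the actual ClusterDeployment namespace and resource names:
$ oc get machinepool <cd-name>-worker -n <cd-namespace> -o jsonpath='{.status.conditions}'
$ oc get clusterdeployment <cd-name> -n <cd-namespace> -o jsonpath='{.status.conditions}'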
Actual results:
The MachinePool and ClusterDeployment statuses report that everything is working well; the Failed remote machines are not reflected in any condition.
Expected results:
A condition in the MachinePool or ClusterDeployment status should indicate that the remote machines are in an abnormal state.
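For illustration, a minimal sketch of what such a condition might look like in the MachinePool status; the condition type MachinesHealthy, reason, and message shown here are hypothetical, not an existing hive API:
$ oc get machinepool <cd-name>-worker -n <cd-namespace> -o yaml
...
status:
  conditions:
  # Hypothetical condition; the type/reason names are illustrative only
  - type: MachinesHealthy
    status: "False"
    reason: MachinesInFailedPhase
    message: 3 worker machines are in Failed phase on the remote cluster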
Additional info: