Bug
Resolution: Done
Medium
OCP 4.11
Known Issue
Done
Kata Sprint #213, Kata Sprint #214, Kata Sprint #218, Kata Sprint #243
Impact
Status information in the KataConfig CR is not updated during installation when none of the cluster's nodes are in the worker machine config pool.
Description
When the cluster has nodes in the master machine config pool but none in the worker pool, the operator
does not update the status information in the KataConfig CR. This is due to a bug in the way we
update the nodes: the node selector is hardcoded to the worker role label (node-role.kubernetes.io/worker).
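For illustration only, here is a minimal client-go sketch of the failure mode, with one possible fallback alongside it. This is not the operator's actual code; the function layout, the fallback to the master role, and the use of clientcmd.RecommendedHomeFile are assumptions.

```go
// Sketch (not the operator's real code): a node list filtered by the
// hardcoded worker role label comes back empty on a cluster whose nodes
// only carry the master role, so there is nothing to report status for.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the local kubeconfig; error handling trimmed for brevity.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Buggy behaviour: status reporting always filters by the worker role,
	// which matches no nodes on a master-only cluster.
	workers, _ := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{
		LabelSelector: "node-role.kubernetes.io/worker=",
	})

	// One possible direction for a fix (assumption, not the chosen design):
	// fall back to the master role, or honour the KataConfig node selector,
	// when the worker pool is empty.
	nodes := workers.Items
	if len(nodes) == 0 {
		masters, _ := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{
			LabelSelector: "node-role.kubernetes.io/master=",
		})
		nodes = masters.Items
	}
	fmt.Printf("nodes considered for status reporting: %d\n", len(nodes))
}
```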
Env
This was seen by a customer on a 4.8.10 cluster.
Example helpful info:
Cluster version: 4.8.10
Operator version: 1.0.0 (more recent versions have the same bug)
Container type:
Polarion test case ID:
Test steps
This is not easy to reproduce because the machine config pools have to be changed:
1. Create a cluster and install the operator.
2. Take all nodes out of the worker machine config pool and remove the worker label from the nodes.
3. Create a KataConfig; do not set a custom node selector.
4. Watch the KataConfig during the installation: its status is never updated, yet the installation still goes through and the runtime is installed on the master nodes (see the watcher sketch below this list).
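To make step 4 easy to verify, a rough watcher using the Kubernetes dynamic client is sketched below. It assumes the KataConfig CRD is cluster-scoped under the kataconfiguration.openshift.io group, that the served version is v1 (adjust on older operator builds), and that the CR is named example-kataconfig; none of these names come from the ticket. With the bug present it keeps printing the same empty status for the whole installation.

```go
// Rough helper to dump the KataConfig .status periodically so a missing
// status update is easy to spot. Group/version/name are assumptions.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	dyn := dynamic.NewForConfigOrDie(cfg)

	gvr := schema.GroupVersionResource{
		Group:    "kataconfiguration.openshift.io",
		Version:  "v1", // assumption: change if the installed CRD serves a different version
		Resource: "kataconfigs",
	}

	for {
		kc, err := dyn.Resource(gvr).Get(context.TODO(), "example-kataconfig", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// With the bug present this prints the same (empty) status for the
		// entire installation, even though the runtime does get installed.
		fmt.Printf("%s status: %v\n", time.Now().Format(time.RFC3339), kc.Object["status"])
		time.Sleep(30 * time.Second)
	}
}
```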
Expected result
The status should be updated as usual, showing nodes in progress, completed, failed, and so on.
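As a purely hypothetical sketch of the bookkeeping the status is expected to reflect, the snippet below buckets nodes into in-progress/completed/failed without relying on the worker role label. The type and field names are invented and do not correspond to the actual KataConfig API.

```go
// Hypothetical sketch only: shows role-agnostic bucketing of nodes into the
// in-progress / completed / failed groups the status is expected to report.
package main

import "fmt"

type nodeState int

const (
	stateInProgress nodeState = iota
	stateCompleted
	stateFailed
)

// progress groups node names by installation state, regardless of whether
// the nodes carry the worker or the master role.
type progress struct {
	InProgress, Completed, Failed []string
}

func bucket(states map[string]nodeState) progress {
	var p progress
	for node, s := range states {
		switch s {
		case stateCompleted:
			p.Completed = append(p.Completed, node)
		case stateFailed:
			p.Failed = append(p.Failed, node)
		default:
			p.InProgress = append(p.InProgress, node)
		}
	}
	return p
}

func main() {
	p := bucket(map[string]nodeState{
		"master-0": stateCompleted,
		"master-1": stateCompleted,
		"master-2": stateInProgress,
	})
	fmt.Printf("in progress: %v, completed: %v, failed: %v\n",
		p.InProgress, p.Completed, p.Failed)
}
```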
Actual result
No status updates can be seen.