Kata Sprint #213, Kata Sprint #214, Kata Sprint #218
The operator does not update status information in the KataConfig CR during installation when the cluster has nodes in the master machine config pool but none in the worker pool. This is caused by a bug in the node update logic: the operator filters nodes with a hardcoded worker node-role selector (node-role.kubernetes.io/worker), which matches no nodes on such a cluster.
This was seen by a customer on a 4.8.10 cluster.
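The effect of the hardcoded selector can be sketched directly against such a cluster. The label key below is the standard upstream Kubernetes node-role label; the exact selector string inside the operator is an assumption based on the description above:

```shell
# On a cluster whose nodes carry only the master role, a selector
# pinned to the worker role matches nothing, so the operator has no
# nodes whose installation progress it would report:
oc get nodes -l node-role.kubernetes.io/worker
# oc typically reports "No resources found" here
```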
Environment:
Cluster version: 4.8.10
Operator version: 1.0.0 (more recent versions have the same bug)
Polarion test case ID:
This is not easy to reproduce because the machine config pools have to be modified. Create a cluster and install the operator, then:
1. Remove all nodes from the worker machine config pool and remove the worker label from those nodes.
2. Create a KataConfig without a custom node selector.
3. Watch the KataConfig during installation: its status is never updated, yet the installation still goes through and the runtime is installed on the master nodes.
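The steps above can be sketched with oc commands. The node names and the KataConfig name are illustrative, and the CR fields should be checked against the operator version in use:

```shell
# Step 1: remove the worker role label from each node (names are
# hypothetical) so the worker machine config pool ends up empty.
# The trailing dash on the label key removes the label.
for node in worker-0 worker-1; do
  oc label node "$node" node-role.kubernetes.io/worker-
done

# Step 2: create a KataConfig with no custom node selector.
oc apply -f - <<EOF
apiVersion: kataconfiguration.openshift.io/v1
kind: KataConfig
metadata:
  name: example-kataconfig
EOF

# Step 3: watch the CR during installation; its status stays empty
# even though the runtime is being installed on the master nodes.
oc get kataconfig example-kataconfig -o yaml -w
```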
Expected: the status is updated as usual, showing nodes as in progress, completed, or failed.
Actual: no status updates can be seen.