Type: Story
Resolution: Unresolved
After team discussion, the decided route is as follows:
Booting directly into a custom MCP should be possible, but it is not currently permitted by openshift/kubernetes. To work around this, and to reduce the number of updates a node goes through, the process will be as follows:
Using an initial node annotation, the MCS will pass all annotations, including the "signal", to the MCD which, upon first boot, will see this annotation and pass it along to the non-bootstrap MCD and the MCC.
Once the cluster has a way to access this information, we can either tell the MCD not to apply a config, or tell the MCC not to send a config to a specific node, until that config matches the annotation name.
Once everything matches, we continue with business as usual.
The specifics of the annotation name and of the MCC/MCD "stopping" are still up in the air, but they will involve halting the config application/update process.
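To make the gating concrete, here is a minimal sketch in Go of the kind of check the MCD (or MCC) could run. The annotation key and helper name are assumptions for illustration, since the real annotation name is still undecided; the `rendered-<pool>-<hash>` naming follows the MCO's existing rendered-config convention.

```go
// Minimal sketch only; the real key, wiring, and client types would
// live in the MCD/MCC codebase.
package main

import (
	"fmt"
	"strings"
)

// desiredPoolAnnotation is a HYPOTHETICAL annotation name standing in
// for the "signal" described above; the actual name is still undecided.
const desiredPoolAnnotation = "machineconfiguration.openshift.io/desired-custom-pool"

// shouldHoldConfig reports whether the MCD should hold off applying a
// config (or the MCC should hold off sending one): the node carries the
// signal annotation, and the offered rendered config does not yet
// belong to that custom pool.
func shouldHoldConfig(nodeAnnotations map[string]string, renderedConfig string) bool {
	pool, ok := nodeAnnotations[desiredPoolAnnotation]
	if !ok || pool == "" {
		return false // no signal: business as usual
	}
	// Rendered configs are conventionally named "rendered-<pool>-<hash>".
	return !strings.HasPrefix(renderedConfig, "rendered-"+pool+"-")
}

func main() {
	annos := map[string]string{desiredPoolAnnotation: "infra"}
	fmt.Println(shouldHoldConfig(annos, "rendered-worker-abc123")) // true: hold
	fmt.Println(shouldHoldConfig(annos, "rendered-infra-def456"))  // false: proceed
}
```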
In-depth process being investigated:
1) The user adds an MC-level annotation indicating the node should drop into the custom pool.
2) Firstboot happens as expected, since this process does not appear to change or apply any node annotations.
3) When syncMachineConfigPools and/or add/updateNode rolls around, the MCC reads the annotations. We detect the halt signal by reading all `node-role` labels; if there is one that is NOT worker or master, we add that custom role with the specific name and then remove the node-role label so we do not repeat this process (a sketch of this scan follows the list).
4) This should be enough to stave off the node first falling into the worker pool and then, after a few reboots/updates, falling into the custom pool.
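The label scan in step 3 could look something like the following Go sketch. It operates on a plain label map for self-containment; in the real MCC this would run against a corev1.Node object, and the function name is hypothetical.

```go
// Minimal sketch of the step-3 scan over node-role labels.
package main

import (
	"fmt"
	"strings"
)

const nodeRolePrefix = "node-role.kubernetes.io/"

// extractCustomRole finds a node-role label that is neither worker nor
// master, returns that role, and deletes the label from the map so the
// scan does not fire again on the next sync.
func extractCustomRole(labels map[string]string) (string, bool) {
	for key := range labels {
		if !strings.HasPrefix(key, nodeRolePrefix) {
			continue
		}
		role := strings.TrimPrefix(key, nodeRolePrefix)
		if role == "worker" || role == "master" {
			continue
		}
		delete(labels, key) // remove so we do not repeat this process
		return role, true
	}
	return "", false
}

func main() {
	labels := map[string]string{
		nodeRolePrefix + "worker": "",
		nodeRolePrefix + "infra":  "",
	}
	if role, ok := extractCustomRole(labels); ok {
		fmt.Printf("custom role %q found; remaining labels: %v\n", role, labels)
	}
}
```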
is incorporated by: OCPSTRAT-958 Boot new workers directly into custom pool configuration (New)
relates to: MCO-205 [Spike] Preventing custom pool race conditions (To Do)
links to: