Story
Resolution: Done
nodeSelectors and tolerations can be used by consumers, e.g. to ensure hive workloads don't get scheduled to ARM nodes.
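For concreteness, the kind of constraint a consumer might set on the hive-operator Deployment could look something like this (the arch/os keys are standard Kubernetes node labels; the toleration key is made up for the example):

```go
package example

import corev1 "k8s.io/api/core/v1"

// Hypothetical scheduling constraints a consumer might put on the hive-operator
// Deployment so hive workloads only land on linux/amd64 nodes and tolerate a
// (made-up) infra taint.
var (
	hiveNodeSelector = map[string]string{
		"kubernetes.io/arch": "amd64",
		"kubernetes.io/os":   "linux",
	}
	hiveTolerations = []corev1.Toleration{{
		Key:      "hive.example.com/infra",
		Operator: corev1.TolerationOpExists,
		Effect:   corev1.TaintEffectNoSchedule,
	}}
)
```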
We cleverly copy nodeSelector and tolerations from the hive-operator Deployment into all the other controllers (hive-controllers, hive-clustersync, hiveadmission).
We don't copy them into the Jobs we create for imageset, provision, or uninstall.
This means those guys can end up on the wrong nodes, as is being seen in the OpenShift CI environment today.
We create those Jobs from hive-controllers. I would say we could have hive-operator stuff the nodeSelector and tolerations into environment variables that we could just read... but since they're complex types, we would have to encode the values (probably JSON=>base64) and decode them on the other side – le yuck.
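To make that objection concrete, the env var route would look roughly like this on both sides (purely illustrative; none of these names exist in hive):

```go
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// The rejected idea: hive-operator would serialize its tolerations into an env
// var value, and hive-controllers would have to decode them on the other side.
func main() {
	tolerations := []corev1.Toleration{{Key: "infra", Operator: corev1.TolerationOpExists}}

	// Operator side: JSON-encode, then base64-encode for the env var value.
	raw, _ := json.Marshal(tolerations)
	envValue := base64.StdEncoding.EncodeToString(raw)

	// Controllers side: undo both layers to get the tolerations back.
	decoded, _ := base64.StdEncoding.DecodeString(envValue)
	var roundTripped []corev1.Toleration
	_ = json.Unmarshal(decoded, &roundTripped)

	fmt.Println(envValue, roundTripped)
}
```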
It would work to copy the fields from any of the hive Deployments, StatefulSets, ReplicaSets, or Pods. Since anything but the last would entail twiddling RBAC, let's go with Pod.
- We can use the downward API trick (an env var populated from a fieldRef to metadata.name) in the hive-controllers manifest to get the pod name into an environment variable.
- Add a util func with logic similar to what's in hive-operator to pull that pod manifest (sketched after this list).
- Add logic in the appropriate controllers (clusterdeployment for imageset; clusterprovision for provision; clusterdeprovision for uninstall) to call that util func, extract the nodeSelector and tolerations from the result, and stuff them into their reconciler structs (so we don't have to look them up on each reconcile).
- Add logic into the funcs that generate the Jobs to inject those values appropriately.
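Rough sketch of how those pieces could hang together (the function names, the POD_NAME/POD_NAMESPACE env vars, and the client wiring are all assumptions for illustration, not existing hive code):

```go
package scheduling

import (
	"context"
	"fmt"
	"os"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// In the hive-controllers manifest, the downward API would expose the pod's own
// identity, e.g. an env var built from a fieldRef (shown here as the Go struct
// equivalent of the YAML):
//
//	corev1.EnvVar{
//		Name:      "POD_NAME",
//		ValueFrom: &corev1.EnvVarSource{FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"}},
//	}

// GetOwnPodScheduling (hypothetical util func) fetches the pod this process is
// running in and returns its nodeSelector and tolerations. Reconcilers would
// call this once at startup and cache the results on their structs.
func GetOwnPodScheduling(ctx context.Context, c client.Client) (map[string]string, []corev1.Toleration, error) {
	name := os.Getenv("POD_NAME")
	namespace := os.Getenv("POD_NAMESPACE")
	if name == "" || namespace == "" {
		return nil, nil, fmt.Errorf("POD_NAME and POD_NAMESPACE must be set via the downward API")
	}
	pod := &corev1.Pod{}
	if err := c.Get(ctx, types.NamespacedName{Namespace: namespace, Name: name}, pod); err != nil {
		return nil, nil, err
	}
	return pod.Spec.NodeSelector, pod.Spec.Tolerations, nil
}

// applyScheduling copies the cached values into a generated Job (imageset,
// provision, or uninstall) so its pods land on the same nodes as the controllers.
func applyScheduling(job *batchv1.Job, nodeSelector map[string]string, tolerations []corev1.Toleration) {
	job.Spec.Template.Spec.NodeSelector = nodeSelector
	job.Spec.Template.Spec.Tolerations = tolerations
}
```

POD_NAMESPACE could be populated the same way via a fieldRef to metadata.namespace, or we could reuse however the controllers already determine their own namespace.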
- is related to: HIVE-2487 Hive Operator won't run on non-linux/amd64 by default (Closed)