Feature Request
Resolution: Unresolved
Customers who are deploying minimal clusters with the intention of scaling up later are inadvertently creating clusters that are categorised as single-node. Single-node clusters get different versions of alerts with regard to control plane load, which creates false-positive alerts when the cluster is later scaled up.
This behaviour is clearly detailed in our documentation [1]:
"In the Infrastructure API, the infrastructureTopology status expresses the expectations for infrastructure services that do not run on control plane nodes, usually indicated by a node selector for a role value other than master. The controlPlaneTopology status expresses the expectations for Operands that normally run on control plane nodes."
"When the worker replica count is 1, the infrastructureTopology is set to SingleReplica. Otherwise, it is set to HighlyAvailable."
but this does not stop users from deploying clusters as follows:
~~~
$ omc -n kube-system get cm cluster-config-v1 -o yaml
...
compute:
- architecture: amd64
  hyperthreading: Enabled
  name: worker
  replicas: 1 <==
controlPlane:
  architecture: amd64
  hyperthreading: Enabled
  name: master
  replicas: 3 <==
~~~
which then results in support cases such as "Suspected false positive alert for `ExtremelyHighIndividualControlPlaneCPU`" [2].
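The mismatch is visible directly in the Infrastructure API status on such a cluster. A quick check (illustrative; the output shown is what the documented behaviour quoted above implies for a 3-master/1-worker deployment):
~~~
$ oc get infrastructure cluster -o jsonpath='{.status.controlPlaneTopology}{"\n"}{.status.infrastructureTopology}{"\n"}'
HighlyAvailable
SingleReplica
~~~
Note that `controlPlaneTopology` remains `HighlyAvailable` here, which is part of why the bug in [3] below questions keying SNO detection off `infrastructureTopology`.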
This RFE is a request that a warning be added to the console output of the installer:
"Do you really mean to specify just one worker node?"
Customers should be specifying either zero worker nodes or two or more.
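Until such a warning exists in the installer, a pre-flight check along these lines could catch the pattern before installation (a minimal sketch, assuming yq v4 is available and install-config.yaml is the installer input in the current directory):
~~~
# Pre-flight: warn when the compute pool requests exactly one worker replica
WORKERS=$(yq '.compute[] | select(.name == "worker") | .replicas' install-config.yaml)
[ "$WORKERS" -eq 1 ] && echo "WARNING: did you really mean to specify just one worker node?"
~~~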
There is also a bug raised questioning whether "SNO detection should be on `controlPlaneTopology` rather than `infrastructureTopology`" [3].
[1] - https://docs.openshift.com/container-platform/4.14/operators/operator_sdk/osdk-ha-sno.html