Type: Bug
Resolution: Won't Do
Priority: Undefined
Affects Version: 4.11
Impact: Quality / Stability / Reliability
Sprints: ETCD Sprint 231, ETCD Sprint 232, ETCD Sprint 234
Description of problem:
On the latest 4.11 nightly builds, I am seeing HighOverallControlPlaneMemory alerts firing on the same master instance types that worked fine on earlier builds, which makes me wonder whether a new performance regression was introduced. I have seen the alert on AWS and Nutanix so far; I have not tried other cloud providers recently. The actual alert reported is: "Given three control plane nodes, the overall memory utilization may only be about 2/3 of all available capacity. This is because if a single control plane node fails, the kube-apiserver and etcd may be slow to respond. To fix this, increase memory of the control plane nodes."
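For triage, control plane memory can be eyeballed directly while the alert is firing. A minimal sketch, assuming cluster-admin access and the standard master node-role label:

# Current memory usage on the control plane nodes (requires cluster metrics to be available)
oc adm top nodes -l node-role.kubernetes.io/master=

# Capacity and allocatable memory on the same nodes, for comparison against the ~2/3 utilization threshold
oc describe nodes -l node-role.kubernetes.io/master= | grep -A 7 Capacity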
Version-Release number of selected component (if applicable):
Kustomize Version: v4.5.4
Server Version: 4.11.0-0.nightly-2022-12-01-040657
Kubernetes Version: v1.24.6+5658434
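These fields match what `oc version` prints against a live cluster; a sketch, assuming a logged-in `oc` client:

# Prints client, Kustomize, server, and Kubernetes versions
oc version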
How reproducible:
Run kube-burner cluster-density tests on a Nutanix cluster (I tried with 40 worker nodes) or an AWS cluster (I tried with 30 worker nodes) with the latest 4.11 nightly.
Steps to Reproduce:
1. Create an AWS or Nutanix cluster with 30 to 40 worker nodes using the latest 4.11 nightly.
2. Run the kube-burner cluster-density test (see the sketch below).
3. Observe the alerts reported.
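A minimal sketch of step 2, assuming kube-burner is installed and a cluster-density workload config is available locally; the config filename and UUID are placeholders, not the exact invocation used for this report:

# Run the cluster-density workload against the current KUBECONFIG context
kube-burner init -c cluster-density.yml --uuid "$(uuidgen)"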
Actual results:
HighOverallControlPlaneMemory alerts are reported
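To confirm the alert is firing, the in-cluster Prometheus API can be queried; a sketch, assuming the default openshift-monitoring pod naming and that curl is available in the prometheus container:

# List active alerts and filter for the one in question
oc -n openshift-monitoring exec prometheus-k8s-0 -c prometheus -- \
  curl -s http://localhost:9090/api/v1/alerts | grep -o HighOverallControlPlaneMemory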
Expected results:
No HighOverallControlPlaneMemory alerts are expected.
Additional info:
is related to: OCPBUGS-18431 "Alert HighOverallControlPlaneMemory firing on 4.11+" (Closed)