Type: Bug
Resolution: Unresolved
Priority: Major
Affects Version/s: 4.18.z, 4.19.z, 4.20.0
Fix Version/s: None
When master nodes or kube-apiservers are taken offline (e.g. for a MachineConfig update or a revision rollout), a single kube-apiserver ends up holding the majority of the long-lived and live connections.
After all three masters are back online, that single kube-apiserver continues to receive the majority of the live API connections, driving that master node's CPU to 100%.
Restarting the kube-apiserver pod resolves the issue.
The expectation is that once all three masters are up, the live API connections would be rebalanced across the three master nodes.
Looking for assistance in determining why the live connections are not evenly redistributed across the kube-apiservers once quorum is reestablished.
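A rough way to confirm the imbalance is to compare node CPU with the per-instance in-flight request gauge. This is only a diagnostic sketch; it assumes the standard openshift-kube-apiserver pod labels and that curl is available inside the kube-apiserver container:

    # Compare CPU across the control-plane nodes
    oc adm top nodes -l node-role.kubernetes.io/master

    # Dump the in-flight request gauge from each kube-apiserver instance;
    # on an affected cluster one instance should show a much higher count
    TOKEN=$(oc whoami -t)
    for pod in $(oc get pods -n openshift-kube-apiserver -l app=openshift-kube-apiserver -o name); do
      echo "== $pod =="
      oc exec -n openshift-kube-apiserver "$pod" -c kube-apiserver -- \
        curl -sk -H "Authorization: Bearer $TOKEN" https://localhost:6443/metrics \
        | grep '^apiserver_current_inflight_requests'
    done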
Issue Links:
- blocks: OCPBUGS-60121 uneven distribution of kube api traffic (Closed)
- is cloned by: OCPBUGS-60121 uneven distribution of kube api traffic (Closed)
- relates to: OCPSTRAT-2096 Add support for goaway-chance in the Kube API Server Operator (Closed)
- links to
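For reference, OCPSTRAT-2096 (linked above) tracks exposing the upstream kube-apiserver --goaway-chance flag, which makes the server randomly send HTTP/2 GOAWAY frames so long-lived clients reconnect through the load balancer and spread back out across instances. A minimal sketch of experimenting with it via an unsupported override follows; the 0.001 value is an arbitrary example, and unsupportedConfigOverrides is not a supported production configuration:

    # Hedged example: set goaway-chance on the kube-apiserver through an
    # unsupported override; 0.001 means roughly 0.1% of requests receive
    # a GOAWAY, forcing those clients to re-establish their connections
    oc patch kubeapiserver cluster --type=merge -p \
      '{"spec":{"unsupportedConfigOverrides":{"apiServerArguments":{"goaway-chance":["0.001"]}}}}'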