Type: Bug
Priority: Major
Severity: Important
Resolution: Done
Affects Versions: 4.18.z, 4.19.z, 4.20.0
Impact: Quality / Stability / Reliability
Release Note Type: Enhancement
This is a clone of issue OCPBUGS-43521. The following is the description of the original issue:
—
When master nodes or kube-apiservers are taken offline (for example, during a MachineConfig update or a revision rollout), a single kube-apiserver ends up holding the majority of the long-lived and live connections.
After all three masters are back online, that single kube-apiserver continues to receive the majority of live API connections, driving its master node's CPU to 100%.
Restarting the kube-apiserver pod resolves the issue.
The expectation is that once all three masters are up, the live API connections would be rebalanced across the three master nodes.
We are looking for assistance in determining why the live connections are not evenly redistributed across the kube-apiservers once quorum is reestablished.
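One likely contributor to the pinning: Kubernetes clients speak HTTP/2 and reuse a single TCP connection for all requests, so every request keeps landing on whichever apiserver that connection was originally dialed to, and nothing rebalances it until the connection closes. Upstream kube-apiserver exposes a --goaway-chance flag that randomly sends HTTP/2 GOAWAY frames to clients for exactly this reason. The Go sketch below is illustrative only (it assumes a reachable cluster and a kubeconfig at the default path, neither of which comes from this report); it uses net/http/httptrace to show the same connection being reused across requests:

```go
// Sketch: observe HTTP/2 connection reuse in a Kubernetes client.
// Assumptions (not from this bug report): a reachable cluster and a
// kubeconfig at ~/.kube/config.
package main

import (
	"context"
	"fmt"
	"net/http/httptrace"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// httptrace reports, per request, which connection served it and
	// whether that connection was reused rather than freshly dialed.
	trace := &httptrace.ClientTrace{
		GotConn: func(info httptrace.GotConnInfo) {
			fmt.Printf("apiserver=%v reused=%v\n", info.Conn.RemoteAddr(), info.Reused)
		},
	}
	ctx := httptrace.WithClientTrace(context.Background(), trace)

	for i := 0; i < 5; i++ {
		if _, err := client.CoreV1().Namespaces().List(ctx, metav1.ListOptions{Limit: 1}); err != nil {
			panic(err)
		}
	}
}
```

If the assumptions hold, the output shows the same remote address with reused=true on every request after the first, which matches the pinning behavior described above: the client never re-dials, so the load balancer never gets a chance to spread traffic back across the recovered masters.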
Links:
- blocks: OCPBUGS-61039 uneven distribution of kube api traffic (Closed)
- clones: OCPBUGS-43521 uneven distribution of kube api traffic (Verified)
- is blocked by: OCPBUGS-43521 uneven distribution of kube api traffic (Verified)
- is cloned by: OCPBUGS-61039 uneven distribution of kube api traffic (Closed)
- links to