Type: Bug
Resolution: Done-Errata
Priority: Undefined
Affects Version: 4.19.0
It appears that all upgrade jobs now have interval charts showing three separate bars of mass disruption during the kube-apiserver upgrade.
We suspect this kind of disruption is actually expected, given that it is measured from localhost as each master is updated.
However, we can't leave this disruption in the charts: it looks quite alarming and will be an eternal pain point for anyone examining them.
There is precedent for shutting down disruption monitors while nodes are updating, specifically in the in-cluster disruption monitoring. I believe this was done by watching EndpointSlices, somewhere around https://github.com/openshift/origin/blob/main/pkg/monitortests/network/disruptionpodnetwork/monitortest.go, to determine when to shut down and restart the monitor. Doing the same while the apiserver is rolling out would be one option.
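As a rough illustration of that first option, here is a minimal, hypothetical Go sketch that watches the EndpointSlices behind the default/kubernetes Service and pauses a disruption sampler while any apiserver endpoint is reported not ready. The pauseSampler/resumeSampler hooks and the overall wiring are assumptions for illustration only; this is not the existing code path in openshift/origin.

```go
// Sketch: pause disruption sampling while the kube-apiserver endpoints are churning.
package main

import (
	"context"
	"log"

	discoveryv1 "k8s.io/api/discovery/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// Assumed hooks into whatever records disruption intervals.
func pauseSampler()  { log.Println("pausing disruption sampler during apiserver rollout") }
func resumeSampler() { log.Println("resuming disruption sampler") }

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	ctx := context.Background()
	// Assumption: the EndpointSlices for the default/kubernetes Service carry the
	// label kubernetes.io/service-name=kubernetes and churn during an apiserver rollout.
	w, err := client.DiscoveryV1().EndpointSlices("default").Watch(ctx, metav1.ListOptions{
		LabelSelector: "kubernetes.io/service-name=kubernetes",
	})
	if err != nil {
		log.Fatal(err)
	}
	defer w.Stop()

	paused := false
	for ev := range w.ResultChan() {
		slice, ok := ev.Object.(*discoveryv1.EndpointSlice)
		if !ok {
			continue
		}
		allReady := true
		for _, ep := range slice.Endpoints {
			if ep.Conditions.Ready == nil || !*ep.Conditions.Ready {
				allReady = false
			}
		}
		switch {
		case !allReady && !paused:
			paused = true
			pauseSampler()
		case allReady && paused:
			paused = false
			resumeSampler()
		}
	}
	// A real implementation would also re-establish the watch when it closes.
}
```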
Another option would be to alter or omit these intervals when they are returned by the monitortest, if they overlap with a Progressing interval (see the sketch after Abu's note below).
Abu points out on Slack: "if we want to skip the ones that overlap with a rollout, one thing we need to keep in mind is that the rollout interval for an apiserver takes into account [termination start ... termination end], but it should be [termination start ... termination end (old instance) ... ready to accept requests (new instance)]".
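A minimal sketch of the second option, assuming a simplified Interval type rather than the actual monitorapi types in openshift/origin: intervals that overlap a rollout window are dropped, and the window deliberately runs from termination start of the old instance to readiness of the new instance, per Abu's note.

```go
// Sketch: filter out disruption intervals that overlap an apiserver rollout window.
package main

import (
	"fmt"
	"time"
)

// Interval is a simplified stand-in for a monitored disruption interval.
type Interval struct {
	Message  string
	From, To time.Time
}

// RolloutWindow covers one apiserver instance being replaced: it starts when the
// old instance begins terminating and ends when the new instance accepts requests.
type RolloutWindow struct {
	TerminationStart time.Time
	NewInstanceReady time.Time
}

func overlaps(i Interval, w RolloutWindow) bool {
	return i.From.Before(w.NewInstanceReady) && i.To.After(w.TerminationStart)
}

// filterDisruption keeps only intervals that fall outside every rollout window.
func filterDisruption(intervals []Interval, windows []RolloutWindow) []Interval {
	var kept []Interval
	for _, i := range intervals {
		drop := false
		for _, w := range windows {
			if overlaps(i, w) {
				drop = true
				break
			}
		}
		if !drop {
			kept = append(kept, i)
		}
	}
	return kept
}

func main() {
	now := time.Now()
	window := RolloutWindow{TerminationStart: now, NewInstanceReady: now.Add(3 * time.Minute)}
	intervals := []Interval{
		{Message: "localhost disruption during rollout", From: now.Add(time.Minute), To: now.Add(2 * time.Minute)},
		{Message: "unrelated disruption", From: now.Add(10 * time.Minute), To: now.Add(11 * time.Minute)},
	}
	for _, i := range filterDisruption(intervals, []RolloutWindow{window}) {
		fmt.Println("kept:", i.Message)
	}
}
```

Whether overlapping intervals should be dropped entirely or merely re-labeled as expected disruption is an open design choice; the sketch only shows where the overlap check would sit.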
Not critical for the 4.19 release.
Issue links:
- blocks: OCPBUGS-59868 New disruption monitoring reporting 3 bars of disruption during kube-apiserver progressing (New)
- is cloned by: OCPBUGS-59868 New disruption monitoring reporting 3 bars of disruption during kube-apiserver progressing (New)
- links to: RHBA-2025:12341 OpenShift Container Platform 4.19.7 bug fix update