- Bug
- Resolution: Done
- Critical
- None
- 4.13, 4.12
- Critical
- None
- Approved
- False
-
- N/A
- Bug Fix
- Done
Exposed by the fact that the periodic-ci-openshift-release-master-nightly-4.12-e2e-metal-ipi-sdn-serial-ipv4 job has been at a 0% pass rate for at least the past two weeks, across approximately 65 runs.
Testgrid shows that this job started failing in a very consistent way on Oct 25th at about 8am UTC: https://testgrid.k8s.io/redhat-openshift-ocp-release-4.12-informing#periodic-ci-openshift-release-master-nightly-4.12-e2e-metal-ipi-sdn-serial-ipv4
Six disruption tests fail, all with alarming consistency, virtually always claiming exactly 8s of disruption against a maximum allowed of 1s.
On top of that, the openshift-tests "[sig-arch] events should not repeat pathologically" test fails with an odd signature:
{ 6 events happened too frequently
event happened 35 times, something is wrong: node/master-2 - reason/NodeHasNoDiskPressure roles/control-plane,master Node master-2 status is now: NodeHasNoDiskPressure
event happened 35 times, something is wrong: node/master-2 - reason/NodeHasSufficientMemory roles/control-plane,master Node master-2 status is now: NodeHasSufficientMemory
event happened 35 times, something is wrong: node/master-2 - reason/NodeHasSufficientPID roles/control-plane,master Node master-2 status is now: NodeHasSufficientPID
event happened 35 times, something is wrong: node/master-1 - reason/NodeHasNoDiskPressure roles/control-plane,master Node master-1 status is now: NodeHasNoDiskPressure
event happened 35 times, something is wrong: node/master-1 - reason/NodeHasSufficientMemory roles/control-plane,master Node master-1 status is now: NodeHasSufficientMemory
event happened 35 times, something is wrong: node/master-1 - reason/NodeHasSufficientPID roles/control-plane,master Node master-1 status is now: NodeHasSufficientPID}
The two types of tests started failing at exactly the same time, and the disruption measurements are bizarrely consistent: every single time we see precisely 8s for kube-api, cache-kube-api, openshift-api, cache-openshift-api, oauth-api, and cache-oauth-api. It is always these six backends, and it always appears to be exactly 8 seconds. I cannot stress enough how strange this is; it almost implies that something is happening on a very consistent schedule.
Occasionally these are accompanied by 1-2s of disruption on the new-connections variants of those backends, but not always.
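To make the "always exactly 8s" pattern checkable across runs rather than eyeballed, here is a minimal sketch that totals disruption per backend from a run's interval data; the file name and the backend/from/to field names are assumptions about the artifact layout, not a documented schema:
#!/usr/bin/env python3
# Hedged sketch: sum observed disruption per backend from a hypothetical
# intervals dump. The field names (backend/from/to) and file are assumptions.
import json
import sys
from collections import defaultdict
from datetime import datetime

def parse(ts: str) -> datetime:
    # Timestamps assumed to be RFC3339, e.g. "2022-11-10T10:38:35Z".
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

totals = defaultdict(float)
with open(sys.argv[1]) as f:  # e.g. disruption-intervals.json (assumed name)
    for item in json.load(f):
        totals[item["backend"]] += (parse(item["to"]) - parse(item["from"])).total_seconds()

for backend, seconds in sorted(totals.items()):
    print(f"{backend}: {seconds:.0f}s of disruption")
Running this over several job runs would confirm whether the six backends really land on 8s every time or whether the value drifts.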
It looks like all of the disruption consistently happens within two very long tests:
4s within: [sig-network] services when running openshift ipv4 cluster ensures external ip policy is configured correctly on the cluster [Serial] [Suite:openshift/conformance/serial]
4s within: [sig-network] services when running openshift ipv4 cluster on bare metal [apigroup:config.openshift.io] ensures external auto assign cidr is configured correctly on the cluster [Serial] [Suite:openshift/conformance/serial]
Both tests appear to have been running prior to Oct 25, so I don't think this is a matter of new tests breaking something or getting unskipped. Both tests also always pass, yet they appear to be impacting the cluster.
The masters going NotReady also appears to fall within those same two tests, though it does not line up directly with when we measure disruption; bear in mind there is a 40s delay before a node is reported NotReady.
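To check whether the disruption windows (and the NotReady windows) really fall inside those two tests, one option is to treat both the test runs and the disruption as (start, end) intervals and test for overlap. A minimal sketch, with hand-entered placeholder times rather than values taken from the run:
from datetime import datetime

def overlaps(a, b):
    # True if interval a = (start, end) overlaps interval b.
    return a[0] < b[1] and b[0] < a[1]

t = datetime.fromisoformat

# Placeholder intervals; in practice these would come from the run's
# test timeline and its disruption intervals.
tests = {
    "external ip policy": (t("2022-11-10T10:37:00"), t("2022-11-10T10:40:00")),
    "external auto assign cidr": (t("2022-11-10T10:41:00"), t("2022-11-10T10:44:00")),
}
disruptions = [(t("2022-11-10T10:38:12"), t("2022-11-10T10:38:20"))]

for name, test_iv in tests.items():
    for d in disruptions:
        if overlaps(test_iv, d):
            print(f"disruption {d[0].time()}-{d[1].time()} falls within: {name}")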
Focusing on https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/periodic-ci-openshift-release-master-nightly-4.12-e2e-metal-ipi-sdn-serial-ipv4/1590640492373086208 where the above are from:
Two of the three master nodes appear to go NodeNotReady a couple of times throughout the run, as visible in the spyglass chart under the node state row on the left. master-0 does not appear there, but it does exist. (I suspect it holds the leader lease and is therefore the node reporting the others going NotReady.)
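If the cluster (or an events dump captured before teardown) is available, a rough way to confirm how often and on which masters this happened is to count NodeNotReady events from `oc get events -A -o json`; the file name below is an assumption:
# Rough sketch: count NodeNotReady events per node from an events dump
# produced by `oc get events -A -o json > events.json` (file name assumed).
import json
from collections import Counter

with open("events.json") as f:
    events = json.load(f)["items"]

notready = Counter()
for ev in events:
    if ev.get("reason") == "NodeNotReady":
        # involvedObject.name is the node for node lifecycle events.
        notready[ev["involvedObject"]["name"]] += ev.get("count", 1)

for node, n in notready.most_common():
    print(f"{node}: NodeNotReady x{n}")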
From master-0's logs in the must-gather (these lines come from the node lifecycle controller) we can see one of these examples, where it reports that master-2 has not checked in:
2022-11-10T10:38:35.874090961Z I1110 10:38:35.873975 1 node_lifecycle_controller.go:1137] node master-2 hasn't been updated for 40.00700561s. Last Ready is: &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-10 10:36:10 +0000 UTC,LastTransitionTime:2022-11-10 10:29:11 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,}
2022-11-10T10:38:35.874090961Z I1110 10:38:35.874056 1 node_lifecycle_controller.go:1137] node master-2 hasn't been updated for 40.007097549s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-10 10:36:10 +0000 UTC,LastTransitionTime:2022-11-10 10:29:11 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,}
2022-11-10T10:38:35.874090961Z I1110 10:38:35.874067 1 node_lifecycle_controller.go:1137] node master-2 hasn't been updated for 40.007110285s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-10 10:36:10 +0000 UTC,LastTransitionTime:2022-11-10 10:29:11 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,}
2022-11-10T10:38:35.874090961Z I1110 10:38:35.874076 1 node_lifecycle_controller.go:1137] node master-2 hasn't been updated for 40.007119541s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-10 10:36:10 +0000 UTC,LastTransitionTime:2022-11-10 10:29:11 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,}
2022-11-10T10:38:35.881749410Z I1110 10:38:35.881705 1 controller_utils.go:181] "Recording status change event message for node" status="NodeNotReady" node="master-2"
2022-11-10T10:38:35.881749410Z I1110 10:38:35.881733 1 controller_utils.go:120] "Update ready status of pods on node" node="master-2"
2022-11-10T10:38:35.881820988Z I1110 10:38:35.881799 1 controller_utils.go:138] "Updating ready status of pod to false" pod="metal3-b7b69fdbb-rfbdj"
2022-11-10T10:38:35.881893234Z I1110 10:38:35.881858 1 topologycache.go:179] Ignoring node master-2 because it has an excluded label
2022-11-10T10:38:35.881893234Z W1110 10:38:35.881886 1 topologycache.go:199] Can't get CPU or zone information for worker-0 node
2022-11-10T10:38:35.881903023Z I1110 10:38:35.881892 1 topologycache.go:215] Insufficient node info for topology hints (0 zones, %!s(int64=0) CPU, false)
2022-11-10T10:38:35.881932172Z I1110 10:38:35.881917 1 controller.go:271] Node changes detected, triggering a full node sync on all loadbalancer services
2022-11-10T10:38:35.882290428Z I1110 10:38:35.882270 1 event.go:294] "Event occurred" object="master-2" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node master-2 status is now: NodeNotReady"
Now to master-2's kubelet log around that time: 40 seconds earlier puts us at 10:37:55, so we'd be looking for something odd around there.
A few potential lines:
Nov 10 10:37:55.232537 master-2 kubenswrapper[1930]: I1110 10:37:55.232495 1930 patch_prober.go:29] interesting pod/kube-controller-manager-guard-master-2 container/guard namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.111.22:10257/healthz\": dial tcp 192.168.111.22:10257: connect: connection refused" start-of-body=
Nov 10 10:37:55.232537 master-2 kubenswrapper[1930]: I1110 10:37:55.232549 1930 prober.go:114] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" podUID=8be2c6c1-f8f6-4bf0-b26d-53ce487354bd containerName="guard" probeResult=failure output="Get \"https://192.168.111.22:10257/healthz\": dial tcp 192.168.111.22:10257: connect: connection refused"
Nov 10 10:38:12.238273 master-2 kubenswrapper[1930]: E1110 10:38:12.238229 1930 controller.go:187] failed to update lease, error: Put "https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-2?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Nov 10 10:38:13.034109 master-2 kubenswrapper[1930]: E1110 10:38:13.034077 1930 kubelet_node_status.go:487] "Error updating node status, will retry" err="error getting node \"master-2\": Get \"https://api-int.ostest.test.metalkube.org:6443/api/v1/nodes/master-2?resourceVersion=0&timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
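To pull lines like these out of the journal in a repeatable way, here is a small sketch that prints everything in a window ending at the NodeNotReady timestamp and starting a bit more than the 40s grace period earlier; the log file path is a placeholder, not the actual must-gather layout:
# Sketch: print kubelet journal lines inside a time window around the
# suspected outage. Assumes journalctl-style "MMM dd HH:MM:SS" prefixes
# in a flat text file; the path below is a placeholder.
from datetime import datetime, timedelta

NOT_READY = datetime(2022, 11, 10, 10, 38, 35)
WINDOW_START = NOT_READY - timedelta(seconds=40 + 20)  # grace period + padding

def line_time(line: str):
    try:
        # e.g. "Nov 10 10:37:55.232537 master-2 kubenswrapper[1930]: ..."
        return datetime.strptime("2022 " + line[:15], "%Y %b %d %H:%M:%S")
    except ValueError:
        return None  # line without a leading timestamp

with open("master-2-kubelet.log") as f:  # placeholder path
    for line in f:
        ts = line_time(line)
        if ts and WINDOW_START <= ts <= NOT_READY:
            print(line.rstrip())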
At 10:38:40 all kinds of master-2 watches time out with messages like:
Nov 10 10:38:40.244399 master-2 kubenswrapper[1930]: W1110 10:38:40.244272 1930 reflector.go:347] object-"openshift-oauth-apiserver"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
And then suddenly we're back online:
Nov 10 10:38:40.252149 master-2 kubenswrapper[1930]: I1110 10:38:40.252131 1930 kubelet_node_status.go:590] "Recording event message for node" node="master-2" event="NodeHasSufficientMemory"
Nov 10 10:38:40.252149 master-2 kubenswrapper[1930]: I1110 10:38:40.252156 1930 kubelet_node_status.go:590] "Recording event message for node" node="master-2" event="NodeHasNoDiskPressure"
Nov 10 10:38:40.252268 master-2 kubenswrapper[1930]: I1110 10:38:40.252165 1930 kubelet_node_status.go:590] "Recording event message for node" node="master-2" event="NodeHasSufficientPID"
Nov 10 10:38:40.252268 master-2 kubenswrapper[1930]: I1110 10:38:40.252177 1930 kubelet_node_status.go:590] "Recording event message for node" node="master-2" event="NodeReady"
Nov 10 10:38:47.904430 master-2 kubenswrapper[1930]: I1110 10:38:47.904373 1930 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-2"
Nov 10 10:38:47.904842 master-2 kubenswrapper[1930]: I1110 10:38:47.904662 1930 kubelet.go:2229] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-2"
Nov 10 10:38:47.907900 master-2 kubenswrapper[1930]: I1110 10:38:47.907872 1930 kubelet.go:2229] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-2"
Nov 10 10:38:48.431448 master-2 kubenswrapper[1930]: I1110 10:38:48.431414 1930 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-2"
Nov 10 10:38:54.764069 master-2 kubenswrapper[1930]: I1110 10:38:54.764029 1930 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" status=Running
Nov 10 10:38:54.764069 master-2 kubenswrapper[1930]: I1110 10:38:54.764059 1930 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-master-2" status=Running
Nov 10 10:38:54.764069 master-2 kubenswrapper[1930]: I1110 10:38:54.764077 1930 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-master-2" status=Running
Nov 10 10:38:54.764069 master-2 kubenswrapper[1930]: I1110 10:38:54.764086 1930 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-master-2" status=Running
Nov 10 10:38:54.764492 master-2 kubenswrapper[1930]: I1110 10:38:54.764106 1930 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-master-2" status=Running
Nov 10 10:38:54.764492 master-2 kubenswrapper[1930]: I1110 10:38:54.764113 1930 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" status=Running
Also curious:
Nov 10 10:37:50.318237 master-2 ovs-vswitchd[1324]: ovs|00251|connmgr|INFO|br0<->unix#468: 2 flow_mods in the last 0 s (2 deletes)
Nov 10 10:37:50.342965 master-2 ovs-vswitchd[1324]: ovs|00252|connmgr|INFO|br0<->unix#471: 4 flow_mods in the last 0 s (4 deletes)
Nov 10 10:37:50.364271 master-2 ovs-vswitchd[1324]: ovs|00253|bridge|INFO|bridge br0: deleted interface vethcb8d36e6 on port 41
Nov 10 10:37:53.579562 master-2 NetworkManager[1336]: <info> [1668076673.5795] dhcp4 (enp2s0): state changed new lease, address=192.168.111.22
These look like they could be related to the tests that these problems appear to coincide with.
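One way to test that hunch across the whole run is to line the NetworkManager dhcp4 lease renewals up against the kubelet's subsequent API errors and see whether the renewals consistently lead them by a few seconds; a minimal correlation sketch, again assuming flat journal text files with journalctl-style timestamps and placeholder paths:
# Sketch: list dhcp4 lease renewals and the first kubelet API error that
# follows each one within 60s. Paths and the 60s window are assumptions.
from datetime import datetime, timedelta

def ts(line: str) -> datetime:
    return datetime.strptime("2022 " + line[:15], "%Y %b %d %H:%M:%S")

def grep(path, needle):
    hits = []
    with open(path) as f:
        for line in f:
            if needle in line:
                try:
                    hits.append((ts(line), line.rstrip()))
                except ValueError:
                    pass  # skip lines without a leading timestamp
    return hits

leases = grep("master-2-journal.log", "dhcp4 (enp2s0): state changed new lease")
errors = grep("master-2-kubelet.log", "failed to update lease")

for when, _ in leases:
    followers = [e for e in errors if when <= e[0] <= when + timedelta(seconds=60)]
    status = followers[0][1] if followers else "no kubelet API error within 60s"
    print(f"{when}: dhcp renewal -> {status}")
If every NotReady window is preceded by a DHCP renewal or OVS flow deletion by a few seconds, that would point strongly at the network churn from these tests rather than at the tests' assertions themselves.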
- blocks
- OCPBUGS-3874 masters repeatedly losing connection to API and going NotReady
- Closed
- is cloned by
- OCPBUGS-3874 masters repeatedly losing connection to API and going NotReady
- Closed
- links to