- Bug
- Resolution: Unresolved
- Major
- None
- 4.18.0
- Important
- Yes
- Rejected
- False
Description of problem:
During deletion of many UDNs we see ovnkube-node become unavailable.
We run this nightly in Prow, so here are the artifacts from the run - https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/test-platform-results/logs/periodic-ci-openshift-qe-ocp-qe-perfscale-ci-main-aws-4.18-nightly-x86-qe-perfscale-aws-ovn-small-udn-density-l3/1850719438769229824/
time="2024-10-28 04:28:31" level=warning msg="🚨 alert at 2024-10-28T04:26:54Z: '10.0.101.84:9103/ovnkube-node-hh5r7 ovnkube-node down'" file="alert_manager.go:217"
time="2024-10-28 04:28:31" level=warning msg="🚨 alert at 2024-10-28T04:26:54Z: '10.0.121.246:9103/ovnkube-node-c25ml ovnkube-node down'" file="alert_manager.go:217"
time="2024-10-28 04:28:31" level=warning msg="🚨 alert at 2024-10-28T04:26:54Z: '10.0.33.146:9103/ovnkube-node-phprm ovnkube-node down'" file="alert_manager.go:217"
time="2024-10-28 04:28:31" level=warning msg="🚨 alert at 2024-10-28T04:26:54Z: '10.0.35.211:9103/ovnkube-node-8xcxf ovnkube-node down'" file="alert_manager.go:217"
time="2024-10-28 04:28:31" level=warning msg="🚨 alert at 2024-10-28T04:26:54Z: '10.0.63.195:9103/ovnkube-node-lfkvr ovnkube-node down'" file="alert_manager.go:217"
time="2024-10-28 04:28:31" level=warning msg="🚨 alert at 2024-10-28T04:28:24Z: '10.0.81.49:9103/ovnkube-node-pf9lz ovnkube-node down'" file="alert_manager.go:217"
time="2024-10-28 04:28:31" level=warning msg="🚨 alert at 2024-10-28T04:26:54Z: '10.0.96.100:9103/ovnkube-node-zjhj9 ovnkube-node down'" file="alert_manager.go:217"
time="2024-10-28 04:28:31" level=warning msg="🚨 alert at 2024-10-28T04:26:54Z: '10.0.104.247:9105/ovnkube-node-29kx4 ovnkube-node down'" file="alert_manager.go:217"
time="2024-10-28 04:28:31" level=warning msg="🚨 alert at 2024-10-28T04:26:54Z: '10.0.12.234:9105/ovnkube-node-q2z8b ovnkube-node down'" file="alert_manager.go:217"
time="2024-10-28 04:28:31" level=warning msg="🚨 alert at 2024-10-28T04:26:54Z: '10.0.2.135:9105/ovnkube-node-7kjt9 ovnkube-node down'" file="alert_manager.go:217"
time="2024-10-28 04:28:31" level=warning msg="🚨 alert at 2024-10-28T04:26:54Z: '10.0.33.220:9105/ovnkube-node-lchw6 ovnkube-node down'" file="alert_manager.go:217"
Another execution which exhibited this - https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/test-platform-results/logs/periodic-ci-openshift-qe-ocp-qe-perfscale-ci-main-aws-4.18-nightly-x86-qe-perfscale-aws-ovn-small-udn-density-l3/1850357026966736896/artifacts/
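The alerts above come from the ovnkube-node metrics scrape targets on ports 9103 and 9105. As a rough aid for distinguishing "target unreachable" from a transient scrape failure, here is a minimal probe sketch; it assumes the endpoints serve plain HTTP on /metrics and that the node IPs from the alert messages are reachable from wherever this runs (neither is verified here).

```go
// Hypothetical reachability check for the ovnkube-node metrics endpoints
// referenced in the alerts above (ports 9103/9105). It only reports whether
// each endpoint answers an HTTP GET; it is not the alerting logic itself.
package main

import (
	"fmt"
	"net/http"
	"os"
	"time"
)

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: probe <nodeIP> [<nodeIP> ...]")
		os.Exit(1)
	}
	client := &http.Client{Timeout: 5 * time.Second}
	for _, ip := range os.Args[1:] {
		for _, port := range []string{"9103", "9105"} {
			url := fmt.Sprintf("http://%s:%s/metrics", ip, port)
			resp, err := client.Get(url)
			if err != nil {
				fmt.Printf("%s: DOWN (%v)\n", url, err)
				continue
			}
			resp.Body.Close()
			fmt.Printf("%s: %s\n", url, resp.Status)
		}
	}
}
```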
Version-Release number of selected component (if applicable):
How reproducible: 100%
Steps to Reproduce:
1. Run the kube-burner-ocp UDN workload.
2. After kube-burner finishes and cleans up, the issue reproduces (a sketch of a post-cleanup check follows below).
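As an illustration of the check implied by step 2, here is a minimal sketch that lists the ovnkube-node pods after the cleanup and flags any that are not Ready. The namespace "openshift-ovn-kubernetes", the label selector "app=ovnkube-node", and the default kubeconfig location are assumptions, not taken from this report.

```go
// Hypothetical post-cleanup check: flag ovnkube-node pods that are not Ready.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a kubeconfig at the default location (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Namespace and label selector are assumptions for a default OVN-K install.
	pods, err := client.CoreV1().Pods("openshift-ovn-kubernetes").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "app=ovnkube-node"})
	if err != nil {
		panic(err)
	}
	for _, pod := range pods.Items {
		ready := false
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		if !ready {
			fmt.Printf("NOT READY: %s on node %s (phase %s)\n",
				pod.Name, pod.Spec.NodeName, pod.Status.Phase)
		}
	}
}
```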
Actual results: After the UDN cleanup, ovnkube-node is reported down on multiple nodes (see the alerts above).
Expected results: ovnkube-node remains available throughout UDN creation and deletion.
Additional info:
Please fill in the following template while reporting a bug and provide as much relevant information as possible. Doing so will give us the best chance to find a prompt resolution.
Affected Platforms:
Is it an
- internal CI failure
- customer issue / SD
- internal RedHat testing failure
If it is an internal RedHat testing failure:
- Please share a kubeconfig or creds to a live cluster for the assignee to debug/troubleshoot along with reproducer steps (especially if it's a telco use case like ICNI, secondary bridges or BM+kubevirt).
If it is a CI failure:
- Did it happen in different CI lanes? If so please provide links to multiple failures with the same error instance
- Did it happen in both sdn and ovn jobs? If so please provide links to multiple failures with the same error instance
- Did it happen in other platforms (e.g. aws, azure, gcp, baremetal etc) ? If so please provide links to multiple failures with the same error instance
- When did the failure start happening? Please provide the UTC timestamp of the networking outage window from a sample failure run
- If it's a connectivity issue,
- What is the srcNode, srcIP and srcNamespace and srcPodName?
- What is the dstNode, dstIP and dstNamespace and dstPodName?
- What is the traffic path? (examples: pod2pod, pod2external, pod2svc, pod2Node, etc.)
If it is a customer / SD issue:
- Provide enough information in the bug description that Engineering doesn’t need to read the entire case history.
- Don’t presume that Engineering has access to Salesforce.
- Do presume that Engineering will access attachments through supportshell.
- Describe what each relevant attachment is intended to demonstrate (failed pods, log errors, OVS issues, etc).
- Referring to the attached must-gather, sosreport or other attachment, please provide the following details:
- If the issue is in a customer namespace then provide a namespace inspect.
- If it is a connectivity issue:
- What is the srcNode, srcNamespace, srcPodName and srcPodIP?
- What is the dstNode, dstNamespace, dstPodName and dstPodIP?
- What is the traffic path? (examples: pod2pod, pod2external, pod2svc, pod2Node, etc.)
- Please provide the UTC timestamp of the networking outage window from the must-gather
- Please provide tcpdump pcaps taken during the outage filtered based on the above provided src/dst IPs
- If it is not a connectivity issue:
- Describe the steps taken so far to analyze the logs from networking components (cluster-network-operator, OVNK, SDN, openvswitch, ovs-configure etc) and the actual component where the issue was seen based on the attached must-gather. Please attach snippets of relevant logs around the window when the problem happened, if any.
- When showing the results from commands, include the entire command in the output.
- For OCPBUGS in which the issue has been identified, label with “sbr-triaged”
- For OCPBUGS in which the issue has not been identified and needs Engineering help for root cause, label with “sbr-untriaged”
- Do not set the priority, that is owned by Engineering and will be set when the bug is evaluated
- Note: bugs that do not meet these minimum standards will be closed with label “SDN-Jira-template”
- For guidance on using this template please see OCPBUGS Template Training for Networking components