Bug
Resolution: Not a Bug
Priority: Major
Severity: Important
4.14
Description of problem:
When Nokia deletes a pod that has 6 VFs attached (PIM) in one namespace, alarms are raised in a different namespace running the same deployment of the same application, and they persist until the deleted pod is recreated.
The alarms are based on a layer 2 heartbeat and are triggered when a heartbeat takes longer than 500 ms.
Nokia has 5 SR-IOV deployments of the same application in this cluster because they are running load tests.
This happens even if the new pod is scheduled on a different worker than the original pod.
We have tried to reproduce the same issue in a lab but have not come close to it.
Some OVS warning messages in the journal make us think this could be caused by a shortage of reserved CPUs (currently 10):
ovs-vswitchd[2769]: ovs|00037|ovs_rcu(urcu4)|WARN|blocked 1000 ms waiting for main to quiesce
We asked Nokia to add 2 dedicated CPUs to the reserved set, and they still see the same OVS warnings during VF attach and detach operations.
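For triage it can help to quantify how often and for how long ovs-vswitchd reports this quiesce warning around the VF attach/detach window. The sketch below is only illustrative: it assumes the journal has been exported to a plain-text file (for example from the sosreport) and that the messages match the single line quoted above; the file name is a placeholder.

import re
import sys

# Matches lines such as:
# ovs-vswitchd[2769]: ovs|00037|ovs_rcu(urcu4)|WARN|blocked 1000 ms waiting for main to quiesce
PATTERN = re.compile(r"blocked (\d+) ms waiting for (\S+) to quiesce")

def summarize(journal_path: str) -> None:
    blocked_ms = []
    with open(journal_path, errors="replace") as journal:
        for line in journal:
            match = PATTERN.search(line)
            if match:
                blocked_ms.append(int(match.group(1)))
                print(line.rstrip())
    if blocked_ms:
        print(f"{len(blocked_ms)} quiesce warnings, max blocked {max(blocked_ms)} ms")
    else:
        print("no quiesce warnings found")

if __name__ == "__main__":
    summarize(sys.argv[1] if len(sys.argv) > 1 else "journal.txt")

Comparing the timestamps of these warnings against the pod deletion time should show whether the blocked ovs-vswitchd main thread lines up with the heartbeat misses reported below.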
Version-Release number of selected component (if applicable):
4.14.29
How reproducible:
Always reproducible in the customer environment.
Steps to Reproduce:
1. Create a pod with several SR-IOV definitions
2. Repeat the same in several other namespaces
3. Remove the original pod (a scripted version of these steps is sketched below)
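The steps above can be scripted with the kubernetes Python client, as in the following sketch. The namespace names, pod name, image and the sriov-net-1 to sriov-net-6 NetworkAttachmentDefinition names are placeholders rather than the customer's actual objects; the k8s.v1.cni.cncf.io/networks annotation is the standard Multus way of requesting the SR-IOV attachments, and the namespaces and network-attachment-definitions are assumed to exist already.

from kubernetes import client, config

NETWORKS = ",".join(f"sriov-net-{i}" for i in range(1, 7))  # 6 VF attachments (placeholder names)
NAMESPACES = [f"sriov-test-{i}" for i in range(1, 6)]       # 5 deployments of the same application

def make_pod(name: str) -> client.V1Pod:
    # Identical pod spec for every namespace, requesting all 6 SR-IOV networks.
    return client.V1Pod(
        metadata=client.V1ObjectMeta(
            name=name,
            annotations={"k8s.v1.cni.cncf.io/networks": NETWORKS},
        ),
        spec=client.V1PodSpec(
            containers=[
                client.V1Container(
                    name="app",
                    image="registry.example.com/pim:latest",  # placeholder image
                    command=["sleep", "infinity"],
                )
            ]
        ),
    )

if __name__ == "__main__":
    config.load_kube_config()
    core = client.CoreV1Api()
    # Steps 1 and 2: create the same pod in every namespace.
    for ns in NAMESPACES:
        core.create_namespaced_pod(namespace=ns, body=make_pod("pim-0"))
    # Step 3: delete the original pod and watch the heartbeats in the other namespaces.
    core.delete_namespaced_pod(name="pim-0", namespace=NAMESPACES[0])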
Actual results:
Heartbeats in a different namespace take longer than the expected 500 ms.
Here is the explanation from the customer:
"Internal-media (named also ‘backplane network’) is a layer 2 network used by PIM and MCM to exchange data. In official deployments, there are always two links between components, one is standby for redundancy.
Backplane is used between PIM and MCM (to send data for transcoding) and between PIMs (there are several cases where this communication is needed).
There is no direct connection between MCMs ( MCMs do not communicate with each other).
To test connectivity, each component sends heartbeat (HB) packets (and check the reply). Heartbeats are sent independently through each link. Because this is layer 2 network, appropriate MAC address is used to address the HB destination.
Heartbeats are sent every 500ms. HBs are sent through both links. The status of the link is checked every 600ms + random 0~20ms (MGW-27966).
HB message has sequence-id incremented by one for each sent message. The receiving side sends this sequence id back in the response. This sequence id is used to calculate lost packets. If 3 consecutive packets are lost, given link is marked as ‘failed’. At this moment alarm is triggered that single link is down. The second link is selected for communication (promote to Active). But if this link also has problems (3 or more HBs are lost), it is also marked as ‘failed’. At that time, alarm is triggered that both links are down.
It is enough to get one successful response to mark the link back ‘online’. "
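To make the failure condition concrete, here is a minimal model of the per-link logic described above (500 ms heartbeats, echoed sequence ids, 3 consecutive losses marking a link failed, a single success bringing it back online). It is only an illustration of the customer's description, not Nokia's actual implementation; the class and method names are invented for the sketch.

class LinkMonitor:
    HB_INTERVAL_MS = 500   # heartbeat sent every 500 ms
    MAX_LOST = 3           # 3 consecutive losses mark the link 'failed'

    def __init__(self, name: str):
        self.name = name
        self.next_seq = 0
        self.lost_in_a_row = 0
        self.failed = False

    def send_heartbeat(self) -> int:
        # Each HB carries a sequence id incremented by one.
        seq = self.next_seq
        self.next_seq += 1
        return seq

    def on_reply(self, echoed_seq: int) -> None:
        # One successful response is enough to mark the link back online.
        self.lost_in_a_row = 0
        if self.failed:
            self.failed = False
            print(f"{self.name}: link back online (seq {echoed_seq})")

    def on_timeout(self) -> None:
        # Called when no reply arrives for a sent heartbeat.
        self.lost_in_a_row += 1
        if self.lost_in_a_row >= self.MAX_LOST and not self.failed:
            self.failed = True
            print(f"{self.name}: 3 consecutive HBs lost, link failed, raise alarm")

With both links modeled this way, the first ‘failed’ transition corresponds to the single-link-down alarm and the second to the both-links-down alarm, which is what the customer sees during the VF detach/attach window whenever heartbeat replies are delayed past roughly 1.5 s (3 x 500 ms).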
Expected results:
VF detach/attach operations should complete smoothly, with no impact on other namespaces.
Additional info:
Please fill in the following template while reporting a bug and provide as much relevant information as possible. Doing so will give us the best chance to find a prompt resolution.
Affected Platforms:
Is it an
- internal CI failure
- customer issue / SD
- internal RedHat testing failure
This is a customer issue. Several tests were reproduced, with self-explanatory text files for each test available in supportshell.
We also have several sosreports and must-gathers.
If it is an internal RedHat testing failure:
- Please share a kubeconfig or creds to a live cluster for the assignee to debug/troubleshoot along with reproducer steps (especially if it's a telco use case like ICNI, secondary bridges or BM+kubevirt).
If it is a CI failure:
- Did it happen in different CI lanes? If so please provide links to multiple failures with the same error instance
- Did it happen in both sdn and ovn jobs? If so please provide links to multiple failures with the same error instance
- Did it happen in other platforms (e.g. aws, azure, gcp, baremetal etc) ? If so please provide links to multiple failures with the same error instance
- When did the failure start happening? Please provide the UTC timestamp of the networking outage window from a sample failure run
- If it's a connectivity issue,
- What is the srcNode, srcIP and srcNamespace and srcPodName?
- What is the dstNode, dstIP and dstNamespace and dstPodName?
- What is the traffic path? (examples: pod2pod? pod2external?, pod2svc? pod2Node? etc)
If it is a customer / SD issue:
- Provide enough information in the bug description that Engineering doesn’t need to read the entire case history.
- Don’t presume that Engineering has access to Salesforce.
- Do presume that Engineering will access attachments through supportshell.
- Describe what each relevant attachment is intended to demonstrate (failed pods, log errors, OVS issues, etc).
- Referring to the attached must-gather, sosreport or other attachment, please provide the following details:
- If the issue is in a customer namespace then provide a namespace inspect.
- If it is a connectivity issue:
- What is the srcNode, srcNamespace, srcPodName and srcPodIP?
- What is the dstNode, dstNamespace, dstPodName and dstPodIP?
- What is the traffic path? (examples: pod2pod? pod2external?, pod2svc? pod2Node? etc)
- Please provide the UTC timestamp of the networking outage window from the must-gather
- Please provide tcpdump pcaps taken during the outage filtered based on the above provided src/dst IPs
- If it is not a connectivity issue:
- Describe the steps taken so far to analyze the logs from networking components (cluster-network-operator, OVNK, SDN, openvswitch, ovs-configure etc) and the actual component where the issue was seen, based on the attached must-gather. Please attach snippets of the relevant logs around the window when the problem happened, if any.
- When showing the results from commands, include the entire command in the output.
- For OCPBUGS in which the issue has been identified, label with “sbr-triaged”
- For OCPBUGS in which the issue has not been identified and needs Engineering help for root cause, label with “sbr-untriaged”
- Do not set the priority, that is owned by Engineering and will be set when the bug is evaluated
- Note: bugs that do not meet these minimum standards will be closed with label “SDN-Jira-template”
- For guidance on using this template please see
OCPBUGS Template Training for Networking components