Bug
Resolution: Unresolved
Normal
4.18
Quality / Stability / Reliability
Description of problem:
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. Create an OCP 4.18 rc3 cluster on bare metal.
2. Enable CPU Manager and Topology Manager, and deploy NFD, the GPU Operator, and the SR-IOV Operator (see the configuration sketch after these steps).
Detailed steps can be found in the doc
https://docs.google.com/document/d/1zRJwh0VJ8LQ1SKa6ascvSEzmEpDO2kmfFoQ2_FDTLWo/edit?tab=t.0
3. Restart kubelet. The detailed steps can be found in the test case
https://polarion.engineering.redhat.com/polarion/#/project/OSE/workitem?id=OCP-30720
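A minimal sketch of steps 2 and 3, assuming a static CPU Manager policy, a single-numa-node Topology Manager policy, and a worker MachineConfigPool labeled custom-kubelet=cpumanager-enabled (the label, the policies, and the node name are assumptions; the authoritative steps are in the linked doc and test case):

# Assumption: enable CPU Manager and Topology Manager via a KubeletConfig
# targeting a labeled worker MachineConfigPool.
oc label machineconfigpool worker custom-kubelet=cpumanager-enabled
cat <<'EOF' | oc create -f -
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: cpumanager-enabled
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: cpumanager-enabled
  kubeletConfig:
    cpuManagerPolicy: static
    cpuManagerReconcilePeriod: 5s
    topologyManagerPolicy: single-numa-node
EOF

# Step 3: restart kubelet on the worker node under test (node name is a placeholder).
oc debug node/<worker-node> -- chroot /host systemctl restart kubelet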
Actual results:
After restarting kubelet, we fail to deploy pods with the error below. If we reboot the worker node, it fails to become Ready (stays NotReady) after the reboot.
It is not a hardware issue; the same test steps succeed on OCP 4.17.z on the same bare-metal servers.
': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Warning FailedCreatePodSandBox 15s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_pod4cpu12sriov4_default_09779815-73a4-4fa5-8d9b-e5b1081171c0_0(b1aae05ec01c352f480a3f43a9ff43531c2d187f918438637f22b355316b732b): error adding pod default_pod4cpu12sriov4 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"b1aae05ec01c352f480a3f43a9ff43531c2d187f918438637f22b355316b732b" Netns:"/var/run/netns/51fca26e-7189-4a51-bc76-051050acc2e8" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=default;K8S_POD_NAME=pod4cpu12sriov4;K8S_POD_INFRA_CONTAINER_ID=b1aae05ec01c352f480a3f43a9ff43531c2d187f918438637f22b355316b732b;K8S_POD_UID=09779815-73a4-4fa5-8d9b-e5b1081171c0" Path:"" ERRORED: error configuring pod [default/pod4cpu12sriov4] networking: [default/pod4cpu12sriov4/09779815-73a4-4fa5-8d9b-e5b1081171c0:ovn-kubernetes]: error adding container to network "ovn-kubernetes": CNI request failed with status 400: '[default/pod4cpu12sriov4 b1aae05ec01c352f480a3f43a9ff43531c2d187f918438637f22b355316b732b network default NAD default] [default/pod4cpu12sriov4 b1aae05ec01c352f480a3f43a9ff43531c2d187f918438637f22b355316b732b network default NAD default] failed to configure pod interface: failure in plugging pod interface: failed to run 'ovs-vsctl --timeout=30 --may-exist add-port br-int b1aae05ec01c352 other_config:transient=true -- set interface b1aae05ec01c352 external_ids:attached_mac=0a:58:0a:80:02:38 external_ids:iface-id=default_pod4cpu12sriov4 external_ids:iface-id-ver=09779815-73a4-4fa5-8d9b-e5b1081171c0 external_ids:sandbox=b1aae05ec01c352f480a3f43a9ff43531c2d187f918438637f22b355316b732b external_ids:ip_addresses=10.128.2.56/23 -- --if-exists remove interface b1aae05ec01c352 external_ids k8s.ovn.org/network -- --if-exists remove interface b1aae05ec01c352 external_ids k8s.ovn.org/nad': exit status 1 "2024-12-31T04:02:41Z|00002|ovsdb_idl|WARN|transaction error: {\"details\":\"/etc/openvswitch/conf.db: cannot truncate to length 1371807\",\"error\":\"I/O error\",\"io-error\":\"Permission denied\"}\novs-vsctl: transaction error: {\"details\":\"/etc/openvswitch/conf.db: cannot truncate to length 1371807\",\"error\":\"I/O error\",\"io-error\":\"Permission denied\"}\n" "" ' ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
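The error indicates that ovs-vsctl cannot write /etc/openvswitch/conf.db ("Permission denied" while truncating the database) after the kubelet restart. A minimal diagnostic sketch, assuming access to the affected node via oc debug (the node name is a placeholder; compare the ownership and SELinux label against a healthy OCP 4.17.z worker):

# Assumption: check ownership and SELinux context of the OVS database on the broken node.
oc debug node/<worker-node> -- chroot /host ls -lZ /etc/openvswitch/conf.db
# Assumption: look for recent SELinux AVC denials around the failure window.
oc debug node/<worker-node> -- chroot /host ausearch -m AVC -ts recent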
Expected results:
After restarting kubelet, we can deploy new pods.
Additional info:
Please fill in the following template while reporting a bug and provide as much relevant information as possible. Doing so will give us the best chance to find a prompt resolution.
Affected Platforms:
Is it an
- internal CI failure
- customer issue / SD
- internal RedHat testing failure
If it is an internal RedHat testing failure:
- Please share a kubeconfig or creds to a live cluster for the assignee to debug/troubleshoot along with reproducer steps (especially if it's a telco use case like ICNI, secondary bridges or BM+kubevirt).
If it is a CI failure:
- Did it happen in different CI lanes? If so please provide links to multiple failures with the same error instance
- Did it happen in both sdn and ovn jobs? If so please provide links to multiple failures with the same error instance
- Did it happen in other platforms (e.g. aws, azure, gcp, baremetal etc) ? If so please provide links to multiple failures with the same error instance
- When did the failure start happening? Please provide the UTC timestamp of the networking outage window from a sample failure run
- If it's a connectivity issue,
- What is the srcNode, srcIP and srcNamespace and srcPodName?
- What is the dstNode, dstIP and dstNamespace and dstPodName?
- What is the traffic path? (examples: pod2pod? pod2external?, pod2svc? pod2Node? etc)
If it is a customer / SD issue:
- Provide enough information in the bug description that Engineering doesn’t need to read the entire case history.
- Don’t presume that Engineering has access to Salesforce.
- Do presume that Engineering will access attachments through supportshell.
- Describe what each relevant attachment is intended to demonstrate (failed pods, log errors, OVS issues, etc).
- Referring to the attached must-gather, sosreport or other attachment, please provide the following details:
- If the issue is in a customer namespace then provide a namespace inspect.
- If it is a connectivity issue:
- What is the srcNode, srcNamespace, srcPodName and srcPodIP?
- What is the dstNode, dstNamespace, dstPodName and dstPodIP?
- What is the traffic path? (examples: pod2pod? pod2external?, pod2svc? pod2Node? etc)
- Please provide the UTC timestamp networking outage window from must-gather
- Please provide tcpdump pcaps taken during the outage filtered based on the above provided src/dst IPs
- If it is not a connectivity issue:
- Describe the steps taken so far to analyze the logs from networking components (cluster-network-operator, OVNK, SDN, openvswitch, ovs-configure etc) and the actual component where the issue was seen based on the attached must-gather. Please attach snippets of relevant logs around the window when the problem happened, if any.
- When showing the results from commands, include the entire command in the output.
- For OCPBUGS in which the issue has been identified, label with “sbr-triaged”
- For OCPBUGS in which the issue has not been identified and needs Engineering help for root cause, label with “sbr-untriaged”
- Do not set the priority; that is owned by Engineering and will be set when the bug is evaluated.
- Note: bugs that do not meet these minimum standards will be closed with label “SDN-Jira-template”
- For guidance on using this template please see
OCPBUGS Template Training for Networking components