Bug
Resolution: Duplicate
Major
4.19
Quality / Stability / Reliability
Important
Rejected
Description of problem:
On an IPI ppc64le cluster with a primary UserDefinedNetwork (both Layer3 and Layer2), a NodePort service with externalTrafficPolicy=Cluster is only reachable through nodes that host an endpoint pod; curl to nodeIP:nodePort on any other node times out. The cluster is in local gateway mode (routingViaHost: true).
Version-Release number of selected component (if applicable):
4.19.0-rc.5
How reproducible:
Always
Steps to Reproduce:
1. The UDN service test case failed on an IPI cluster (ppc64le) for both Layer3 and Layer2 UDNs:
failed: (4m18s) 2025-06-10T11:35:25 "[sig-networking] SDN udn services Author:huirwang-Critical-75942-Validate pod2Service/nodePortService for UDN(Layer3)
2. After troubleshooting, the case fails at the step that accesses the nodePort service through a node that does not host any endpoint pods, even though externalTrafficPolicy (ETP) is Cluster (a quick check of the policy and endpoint placement is sketched below).
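To confirm the traffic policy and which nodes actually host backends, a quick check (a sketch; it reuses the service/namespace names from the outputs below, and describe prints the NodeName of each endpoint):

# Traffic policy of the service (expected: Cluster)
oc get svc test-service -n e2e-test-udn-networking-udn-lztcr \
  -o jsonpath='{.spec.externalTrafficPolicy}{"\n"}'

# Endpoints and the nodes they are scheduled on
oc describe endpointslices -n e2e-test-udn-networking-udn-lztcr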
Environment information
1. UDN in the namespace e2e-test-udn-networking-udn-lztcr
% oc get UserDefinedNetwork -n e2e-test-udn-networking-udn-lztcr -o yaml
apiVersion: v1
items:
- apiVersion: k8s.ovn.org/v1
  kind: UserDefinedNetwork
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"k8s.ovn.org/v1","kind":"UserDefinedNetwork","metadata":{"annotations":{},"name":"udn-network-ss-75942","namespace":"e2e-test-udn-networking-udn-lztcr"},"spec":{"layer3":{"mtu":1400,"role":"Primary","subnets":[{"cidr":"10.150.0.0/16","hostSubnet":24}]},"topology":"Layer3"}}
    creationTimestamp: "2025-06-10T11:31:42Z"
    finalizers:
    - k8s.ovn.org/user-defined-network-protection
    generation: 1
    name: udn-network-ss-75942
    namespace: e2e-test-udn-networking-udn-lztcr
    resourceVersion: "463975"
    uid: 2909463c-cf29-4843-9772-0f5e615efa64
  spec:
    layer3:
      mtu: 1400
      role: Primary
      subnets:
      - cidr: 10.150.0.0/16
        hostSubnet: 24
    topology: Layer3
  status:
    conditions:
    - lastTransitionTime: "2025-06-10T11:31:42Z"
      message: NetworkAttachmentDefinition has been created
      reason: NetworkAttachmentDefinitionCreated
      status: "True"
      type: NetworkCreated
    - lastTransitionTime: "2025-06-10T11:31:42Z"
      message: Network allocation succeeded for all synced nodes.
      reason: NetworkAllocationSucceeded
      status: "True"
      type: NetworkAllocationSucceeded
kind: List
metadata:
  resourceVersion: ""
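For reference, a sketch of the manifest that produces the UDN above, reconstructed from its kubectl.kubernetes.io/last-applied-configuration annotation:

# Primary Layer3 UDN for the test namespace (values copied from the annotation above)
cat <<'EOF' | oc apply -f -
apiVersion: k8s.ovn.org/v1
kind: UserDefinedNetwork
metadata:
  name: udn-network-ss-75942
  namespace: e2e-test-udn-networking-udn-lztcr
spec:
  topology: Layer3
  layer3:
    role: Primary
    mtu: 1400
    subnets:
    - cidr: 10.150.0.0/16
      hostSubnet: 24
EOF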
2. NodePort service in the UDN namespace
% oc get svc -n e2e-test-udn-networking-udn-lztcr -o yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"name":"test-service"},"name":"test-service","namespace":"e2e-test-udn-networking-udn-lztcr"},"spec":{"externalTrafficPolicy":"","internalTrafficPolicy":"Cluster","ipFamilyPolicy":"SingleStack","ports":[{"name":"http","port":27017,"protocol":"TCP","targetPort":8080}],"selector":{"name":"hello-pod"},"type":"NodePort"}}
    creationTimestamp: "2025-06-10T11:34:46Z"
    labels:
      name: test-service
    name: test-service
    namespace: e2e-test-udn-networking-udn-lztcr
    resourceVersion: "464933"
    uid: fecac2bc-d17c-40cc-baf8-9b1c2b690cff
  spec:
    clusterIP: 172.30.250.140
    clusterIPs:
    - 172.30.250.140
    externalTrafficPolicy: Cluster
    internalTrafficPolicy: Cluster
    ipFamilies:
    - IPv4
    ipFamilyPolicy: SingleStack
    ports:
    - name: http
      nodePort: 31064
      port: 27017
      protocol: TCP
      targetPort: 8080
    selector:
      name: hello-pod
    sessionAffinity: None
    type: NodePort
  status:
    loadBalancer: {}
kind: List
metadata:
  resourceVersion: ""
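Likewise, a sketch of the Service manifest, reconstructed from its last-applied-configuration annotation (externalTrafficPolicy is left empty there, so it defaults to Cluster):

# NodePort service selecting the hello-pod backend (values copied from the annotation above)
cat <<'EOF' | oc apply -f -
apiVersion: v1
kind: Service
metadata:
  name: test-service
  namespace: e2e-test-udn-networking-udn-lztcr
  labels:
    name: test-service
spec:
  type: NodePort
  selector:
    name: hello-pod
  ipFamilyPolicy: SingleStack
  internalTrafficPolicy: Cluster
  ports:
  - name: http
    port: 27017
    targetPort: 8080
    protocol: TCP
EOF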
3. Pods in the UDN namespace; only hello-pod-1 is a backend pod of the NodePort service
% oc get pods -n e2e-test-udn-networking-udn-lztcr --show-labels
NAME          READY   STATUS    RESTARTS   AGE   LABELS
hello-pod-1   1/1     Running   0          67m   name=hello-pod
hello-pod-2   1/1     Running   0          66m   name=hello-pod-2
hello-pod-3   1/1     Running   0          66m   name=hello-pod-3
% oc get pods -n e2e-test-udn-networking-udn-lztcr -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP            NODE                              NOMINATED NODE   READINESS GATES
hello-pod-1   1/1     Running   0          67m   10.129.2.38   ipi415rc5syd-q87t8-worker-85snp   <none>           <none>
hello-pod-2   1/1     Running   0          67m   10.128.2.50   ipi415rc5syd-q87t8-worker-g6pmr   <none>           <none>
hello-pod-3   1/1     Running   0          66m   10.129.2.39   ipi415rc5syd-q87t8-worker-85snp   <none>           <none>
4. Check the nodes
% oc get nodes -o wide
NAME                              STATUS   ROLES                  AGE   VERSION   INTERNAL-IP    EXTERNAL-IP    OS-IMAGE                                                KERNEL-VERSION                  CONTAINER-RUNTIME
ipi415rc5syd-q87t8-master-0       Ready    control-plane,master   30h   v1.32.5   192.168.0.10   192.168.0.10   Red Hat Enterprise Linux CoreOS 9.6.20250530-0 (Plow)   5.14.0-570.19.1.el9_6.ppc64le   cri-o://1.32.4-2.rhaos4.19.git98d1c09.el9
ipi415rc5syd-q87t8-master-1       Ready    control-plane,master   30h   v1.32.5   192.168.0.11   192.168.0.11   Red Hat Enterprise Linux CoreOS 9.6.20250530-0 (Plow)   5.14.0-570.19.1.el9_6.ppc64le   cri-o://1.32.4-2.rhaos4.19.git98d1c09.el9
ipi415rc5syd-q87t8-master-2       Ready    control-plane,master   30h   v1.32.5   192.168.0.13   192.168.0.13   Red Hat Enterprise Linux CoreOS 9.6.20250530-0 (Plow)   5.14.0-570.19.1.el9_6.ppc64le   cri-o://1.32.4-2.rhaos4.19.git98d1c09.el9
ipi415rc5syd-q87t8-worker-85snp   Ready    worker                 29h   v1.32.5   192.168.0.14   192.168.0.14   Red Hat Enterprise Linux CoreOS 9.6.20250530-0 (Plow)   5.14.0-570.19.1.el9_6.ppc64le   cri-o://1.32.4-2.rhaos4.19.git98d1c09.el9
ipi415rc5syd-q87t8-worker-g6pmr   Ready    worker                 29h   v1.32.5   192.168.0.15   192.168.0.15   Red Hat Enterprise Linux CoreOS 9.6.20250530-0 (Plow)   5.14.0-570.19.1.el9_6.ppc64le   cri-o://1.32.4-2.rhaos4.19.git98d1c09.el9
ipi415rc5syd-q87t8-worker-ttf4p   Ready    worker                 29h   v1.32.5   192.168.0.16   192.168.0.16   Red Hat Enterprise Linux CoreOS 9.6.20250530-0 (Plow)   5.14.0-570.19.1.el9_6.ppc64le   cri-o://1.32.4-2.rhaos4.19.git98d1c09.el9
5. From another node, access nodeIP:nodePort where the target node has an endpoint pod deployed (works):
% oc debug node/ipi415rc5syd-q87t8-master-0
Starting pod/ipi415rc5syd-q87t8-master-0-debug-sn4d9 ...
To use host binaries, run `chroot /host`. Instead, if you need to access host namespaces, run `nsenter -a -t 1`.
Pod IP: 192.168.0.10
If you don't see a command prompt, try pressing enter.
sh-5.1# curl 192.168.0.14:31064
Hello OpenShift!
6. From another node, access nodeIP:nodePort where the target node does NOT have an endpoint pod deployed (times out):
% oc debug node/ipi415rc5syd-q87t8-master-0
Starting pod/ipi415rc5syd-q87t8-master-0-debug-5f8qp ...
To use host binaries, run `chroot /host`. Instead, if you need to access host namespaces, run `nsenter -a -t 1`.
Pod IP: 192.168.0.10
If you don't see a command prompt, try pressing enter.
sh-5.1# curl 192.168.0.16:31064 --connect-timeout 5
curl: (28) Connection timed out after 5000 milliseconds
sh-5.1# curl 192.168.0.15:31064 --connect-timeout 5
curl: (28) Connection timed out after 5001 milliseconds
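The two manual curls above can be repeated for every node with a small loop (a sketch; the debug node name and nodePort 31064 are taken from the outputs above, and each iteration starts a new debug pod, so it is slow):

# Probe <nodeIP>:31064 on every node's InternalIP from master-0
NODEPORT=31064
for ip in $(oc get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}'); do
  echo "== ${ip}:${NODEPORT} =="
  oc debug node/ipi415rc5syd-q87t8-master-0 -q -- \
    chroot /host curl -s --connect-timeout 5 "http://${ip}:${NODEPORT}" || echo "FAILED"
done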
The cluster is in local gateway (LGW) mode:
% oc get network.operator -o yaml | grep routingViaHost
      routingViaHost: true
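For context, routingViaHost is part of the OVN-Kubernetes gateway configuration on the cluster Network operator object; a sketch of how to read it directly and, if needed for comparison only, flip it to shared gateway mode (the patch triggers an OVN-Kubernetes rollout and is not part of the reproducer):

# Read the gateway mode from the operator config
oc get network.operator cluster \
  -o jsonpath='{.spec.defaultNetwork.ovnKubernetesConfig.gatewayConfig.routingViaHost}{"\n"}'

# Optional: switch to shared gateway mode for comparison
oc patch network.operator cluster --type=merge \
  -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"gatewayConfig":{"routingViaHost":false}}}}}'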
Actual results:
curl to nodeIP:nodePort times out (curl: (28)) when the target node does not host an endpoint pod, even though the service's externalTrafficPolicy is Cluster; it only succeeds against the node running the backend pod.
Expected results:
With externalTrafficPolicy=Cluster, nodeIP:nodePort should be reachable through every node, including nodes that do not host an endpoint pod.
Additional info:
Please fill in the following template while reporting a bug and provide as much relevant information as possible. Doing so will give us the best chance to find a prompt resolution.
Affected Platforms:
Is it an
- internal CI failure
- customer issue / SD
- internal RedHat testing failure
If it is an internal RedHat testing failure:
- Please share a kubeconfig or creds to a live cluster for the assignee to debug/troubleshoot along with reproducer steps (especially if it's a telco use case like ICNI, secondary bridges or BM+kubevirt).
If it is a CI failure:
- Did it happen in different CI lanes? If so please provide links to multiple failures with the same error instance
- Did it happen in both sdn and ovn jobs? If so please provide links to multiple failures with the same error instance
- Did it happen in other platforms (e.g. aws, azure, gcp, baremetal etc) ? If so please provide links to multiple failures with the same error instance
- When did the failure start happening? Please provide the UTC timestamp of the networking outage window from a sample failure run
- If it's a connectivity issue,
- What is the srcNode, srcIP and srcNamespace and srcPodName?
- What is the dstNode, dstIP and dstNamespace and dstPodName?
- What is the traffic path? (examples: pod2pod? pod2external?, pod2svc? pod2Node? etc)
If it is a customer / SD issue:
- Provide enough information in the bug description that Engineering doesn’t need to read the entire case history.
- Don’t presume that Engineering has access to Salesforce.
- Do presume that Engineering will access attachments through supportshell.
- Describe what each relevant attachment is intended to demonstrate (failed pods, log errors, OVS issues, etc).
- Referring to the attached must-gather, sosreport or other attachment, please provide the following details:
- If the issue is in a customer namespace then provide a namespace inspect.
- If it is a connectivity issue:
- What is the srcNode, srcNamespace, srcPodName and srcPodIP?
- What is the dstNode, dstNamespace, dstPodName and dstPodIP?
- What is the traffic path? (examples: pod2pod? pod2external?, pod2svc? pod2Node? etc)
- Please provide the UTC timestamp of the networking outage window from the must-gather
- Please provide tcpdump pcaps taken during the outage, filtered on the src/dst IPs provided above (a sample capture command is sketched near the end of this template)
- If it is not a connectivity issue:
- Describe the steps taken so far to analyze the logs from networking components (cluster-network-operator, OVNK, SDN, openvswitch, ovs-configure etc.) and the actual component where the issue was seen, based on the attached must-gather. Please attach snippets of the relevant logs around the window when the problem happened, if any.
- When showing the results from commands, include the entire command in the output.
- For OCPBUGS in which the issue has been identified, label with “sbr-triaged”
- For OCPBUGS in which the issue has not been identified and needs Engineering help for root cause, label with “sbr-untriaged”
- Do not set the priority, that is owned by Engineering and will be set when the bug is evaluated
- Note: bugs that do not meet these minimum standards will be closed with label “SDN-Jira-template”
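For the tcpdump request above, a sample capture command (a sketch; the node/IP/port placeholders and the br-ex interface are assumptions to be replaced with the actual src/dst details, and the resulting pcap should then be copied off the node):

# Capture on the destination node, filtered on the reported src/dst IPs and nodePort
oc debug node/<dstNode> -- chroot /host \
  timeout 120 tcpdump -i br-ex -nn -w /var/tmp/nodeport.pcap \
  'host <srcIP> and host <dstIP> and tcp port <nodePort>'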
- For guidance on using this template please see
OCPBUGS Template Training for Networking components