Bug
Resolution: Unresolved
Normal
None
4.18
Important
No
Rejected
False
Description of problem:
[Azure] External access to a LoadBalancer service fails for UDN
Version-Release number of selected component (if applicable):
4.18.0-0.nightly-arm64-2025-02-07-011241
How reproducible:
Always
Steps to Reproduce:
1. Ran the automated case; it failed on the Azure platform:
SDN udn services Author:huirwang-High-76014-Validate LoadBalancer service for UDN pods (Layer3/Layer2)
Checking the post-run environment:
% oc get svc -n e2e-test-udn-networking-udn-j95qd
NAME           TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)           AGE
test-service   LoadBalancer   172.30.14.134   130.131.176.3   27017:32619/TCP   3m9s

% oc get pods -n e2e-test-udn-networking-udn-j95qd
NAME        READY   STATUS    RESTARTS   AGE
hello-pod   1/1     Running   0          5m16s

% oc exec -n e2e-test-udn-networking-udn-j95qd hello-pod -- ip a show ovn-udn1
3: ovn-udn1@if38: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP group default
    link/ether 0a:58:0a:c8:00:04 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.200.0.4/24 brd 10.200.0.255 scope global ovn-udn1
       valid_lft forever preferred_lft forever
    inet6 fe80::858:aff:fec8:4/64 scope link
       valid_lft forever preferred_lft forever

From the pod, the LoadBalancer service can be accessed:

% oc rsh -n e2e-test-udn-networking-udn-j95qd hello-pod
~ $ curl 172.30.14.134:27017
Hello OpenShift!
~ $ curl -I 130.131.176.3:27017
HTTP/1.1 200 OK
X-Request-Port: 8080
Date: Fri, 07 Feb 2025 10:03:31 GMT
Content-Length: 17
Content-Type: text/plain; charset=utf-8

But from external (a laptop), the service cannot be accessed; the external IP can only be pinged:

% curl -I 130.131.176.3:27017 --connect-timeout 5
curl: (28) Failed to connect to 130.131.176.3 port 27017 after 5004 ms: Timeout was reached
% ping 130.131.176.3
PING 130.131.176.3 (130.131.176.3): 56 data bytes
64 bytes from 130.131.176.3: icmp_seq=0 ttl=105 time=243.855 ms

% oc get svc -n e2e-test-udn-networking-udn-j95qd -o yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"name":"test-service"},"name":"test-service","namespace":"e2e-test-udn-networking-udn-j95qd"},"spec":{"externalTrafficPolicy":"Cluster","internalTrafficPolicy":"Cluster","ipFamilyPolicy":"SingleStack","ports":[{"name":"http","port":27017,"protocol":"TCP","targetPort":8080}],"selector":{"name":"hello-pod"},"type":"LoadBalancer"}}
    creationTimestamp: "2025-02-07T09:58:20Z"
    finalizers:
    - service.kubernetes.io/load-balancer-cleanup
    labels:
      name: test-service
    name: test-service
    namespace: e2e-test-udn-networking-udn-j95qd
    resourceVersion: "46558"
    uid: dff1817d-8915-46ab-b12c-211a6c320dd1
  spec:
    allocateLoadBalancerNodePorts: true
    clusterIP: 172.30.14.134
    clusterIPs:
    - 172.30.14.134
    externalTrafficPolicy: Cluster
    internalTrafficPolicy: Cluster
    ipFamilies:
    - IPv4
    ipFamilyPolicy: SingleStack
    ports:
    - name: http
      nodePort: 32619
      port: 27017
      protocol: TCP
      targetPort: 8080
    selector:
      name: hello-pod
    sessionAffinity: None
    type: LoadBalancer
  status:
    loadBalancer:
      ingress:
      - ip: 130.131.176.3
        ipMode: VIP
kind: List
metadata:
  resourceVersion: ""
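For manual reproduction outside the automated suite, a minimal sketch: the Service can be recreated from the last-applied-configuration shown above. This assumes a pod labeled name=hello-pod is already running in a namespace with a Layer3/Layer2 UDN attached; the test namespace name is reused here as a placeholder.

# Recreate the test Service (spec copied from the last-applied-configuration
# annotation above). Assumes a pod labeled name=hello-pod already exists in
# the namespace and the namespace has a UDN attached.
cat <<'EOF' | oc apply -f -
apiVersion: v1
kind: Service
metadata:
  name: test-service
  namespace: e2e-test-udn-networking-udn-j95qd
  labels:
    name: test-service
spec:
  type: LoadBalancer
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilyPolicy: SingleStack
  ports:
  - name: http
    port: 27017
    protocol: TCP
    targetPort: 8080
  selector:
    name: hello-pod
EOF

# Once the cloud load balancer is provisioned, probe the external IP from
# outside the cluster:
EXTERNAL_IP=$(oc get svc test-service -n e2e-test-udn-networking-udn-j95qd \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -I --connect-timeout 5 "http://${EXTERNAL_IP}:27017"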
Actual results:
From outside the cluster, connections to the LoadBalancer external IP on port 27017 time out; only ping succeeds. From inside the cluster, both the ClusterIP and the external IP are reachable from the pod.
Expected results:
The LoadBalancer service for UDN pods should be reachable from outside the cluster on the exposed port.
Additional info:
Please fill in the following template while reporting a bug and provide as much relevant information as possible. Doing so will give us the best chance to find a prompt resolution.
Affected Platforms:
Is it an
- internal CI failure
- customer issue / SD
- internal RedHat testing failure
If it is an internal RedHat testing failure:
- Please share a kubeconfig or creds to a live cluster for the assignee to debug/troubleshoot, along with reproducer steps (especially if it's a telco use case like ICNI, secondary bridges or BM+kubevirt).
If it is a CI failure:
- Did it happen in different CI lanes? If so, please provide links to multiple failures with the same error instance
- Did it happen in both sdn and ovn jobs? If so, please provide links to multiple failures with the same error instance
- Did it happen on other platforms (e.g. aws, azure, gcp, baremetal, etc.)? If so, please provide links to multiple failures with the same error instance
- When did the failure start happening? Please provide the UTC timestamp of the networking outage window from a sample failure run
- If it's a connectivity issue (a probe sketch follows this section),
- What is the srcNode, srcIP and srcNamespace and srcPodName?
- What is the dstNode, dstIP and dstNamespace and dstPodName?
- What is the traffic path? (examples: pod2pod, pod2external, pod2svc, pod2Node, etc.)
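For the connectivity questions above, a minimal probe sketch for pinning down the outage window in UTC; DST_IP and DST_PORT are placeholders, with the example values taken from this bug's reproduction:

# Probe the destination once per second and log UTC timestamps; the outage
# window can then be read off the first and last FAIL lines.
# DST_IP/DST_PORT are placeholders - the values below are from this bug.
DST_IP=130.131.176.3
DST_PORT=27017
while true; do
  ts=$(date -u +%Y-%m-%dT%H:%M:%SZ)
  if curl -sS -o /dev/null --connect-timeout 3 "http://${DST_IP}:${DST_PORT}"; then
    echo "${ts} OK"
  else
    echo "${ts} FAIL"
  fi
  sleep 1
done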
If it is a customer / SD issue:
- Provide enough information in the bug description that Engineering doesn’t need to read the entire case history.
- Don’t presume that Engineering has access to Salesforce.
- Do presume that Engineering will access attachments through supportshell.
- Describe what each relevant attachment is intended to demonstrate (failed pods, log errors, OVS issues, etc).
- Referring to the attached must-gather, sosreport or other attachment, please provide the following details:
- If the issue is in a customer namespace, provide a namespace inspect (a command sketch follows this section).
- If it is a connectivity issue:
- What is the srcNode, srcNamespace, srcPodName and srcPodIP?
- What is the dstNode, dstNamespace, dstPodName and dstPodIP?
- What is the traffic path? (examples: pod2pod, pod2external, pod2svc, pod2Node, etc.)
- Please provide the UTC timestamp of the networking outage window from the must-gather
- Please provide tcpdump pcaps taken during the outage, filtered on the src/dst IPs provided above (a command sketch follows this section)
- If it is not a connectivity issue:
- Describe the steps taken so far to analyze the logs from networking components (cluster-network-operator, OVNK, SDN, openvswitch, ovs-configure, etc.) and the actual component where the issue was seen, based on the attached must-gather. Please attach snippets of relevant logs around the window when the problem happened, if any.
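A hedged sketch of the data-gathering commands referenced above (namespace inspect and filtered tcpdump); the namespace and IP values are placeholders, with examples reused from this bug's reproduction:

# Placeholders: replace with the affected namespace and the src/dst IPs
# identified above; the example values are from this bug.
NS=e2e-test-udn-networking-udn-j95qd
SRC_IP=10.200.0.4      # srcPodIP
DST_IP=130.131.176.3   # dstIP (LoadBalancer external IP)

# Namespace inspect for the affected namespace:
oc adm inspect "ns/${NS}" --dest-dir="inspect-${NS}"

# Packet capture during the outage window, filtered on the provided src/dst
# IPs (run on the relevant node, e.g. from a debug shell started with
# oc debug node/<node>):
tcpdump -i any -nn host "${SRC_IP}" and host "${DST_IP}" -w outage.pcap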
- When showing the results from commands, include the entire command in the output.
- For OCPBUGS in which the issue has been identified, label with “sbr-triaged”
- For OCPBUGS in which the issue has not been identified and needs Engineering help for root cause, label with “sbr-untriaged”
- Do not set the priority, that is owned by Engineering and will be set when the bug is evaluated
- Note: bugs that do not meet these minimum standards will be closed with label “SDN-Jira-template”
- For guidance on using this template please see
OCPBUGS Template Training for Networking components