Description of problem:
Traffic from an external client to a LoadBalancer service for UDN pods fails in local gateway mode; the same test passes in shared gateway mode.
Version-Release number of selected component (if applicable):
build 4.19.0-0.nightly, openshift/ovn-kubernetes#2357, openshift/api#1997
How reproducible:
Always
Steps to Reproduce:
1. Run the automated case on a GCP cluster; it passed in SGW (shared gateway) mode but failed in LGW (local gateway) mode. (A sketch of the objects the case creates follows these steps.)
In SGW mode:
passed: (3m40s) 2024-12-06T07:22:02 "[sig-networking] SDN udn services Author:huirwang-High-76014-Validate LoadBalancer service for UDN pods (Layer3/Layer2)"
1 pass, 0 skip (3m40s)
In LGW mode:
% oc get UserDefinedNetwork -n e2e-test-networking-udn-5tj4x -o yaml
apiVersion: v1
items:
- apiVersion: k8s.ovn.org/v1
  kind: UserDefinedNetwork
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"k8s.ovn.org/v1","kind":"UserDefinedNetwork","metadata":{"annotations":{},"name":"udn-network-l3-76014","namespace":"e2e-test-networking-udn-5tj4x"},"spec":{"layer3":{"mtu":1400,"role":"Primary","subnets":[{"cidr":"10.200.0.0/16","hostSubnet":24}]},"topology":"Layer3"}}
    creationTimestamp: "2024-12-06T07:43:08Z"
    finalizers:
    - k8s.ovn.org/user-defined-network-protection
    generation: 1
    name: udn-network-l3-76014
    namespace: e2e-test-networking-udn-5tj4x
    resourceVersion: "111522"
    uid: 4a08cd6b-0de7-4270-9340-6856820d0ee1
  spec:
    layer3:
      mtu: 1400
      role: Primary
      subnets:
      - cidr: 10.200.0.0/16
        hostSubnet: 24
    topology: Layer3
  status:
    conditions:
    - lastTransitionTime: "2024-12-06T07:43:08Z"
      message: Network allocation succeeded for all synced nodes.
      reason: NetworkAllocationSucceeded
      status: "True"
      type: NetworkAllocationSucceeded
    - lastTransitionTime: "2024-12-06T07:43:08Z"
      message: NetworkAttachmentDefinition has been created
      reason: NetworkAttachmentDefinitionReady
      status: "True"
      type: NetworkReady
kind: List
metadata:
  resourceVersion: ""

% oc get svc -n e2e-test-networking-udn-5tj4x -o yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"name":"test-service"},"name":"test-service","namespace":"e2e-test-networking-udn-5tj4x"},"spec":{"externalTrafficPolicy":"Cluster","internalTrafficPolicy":"Cluster","ipFamilyPolicy":"SingleStack","ports":[{"name":"http","port":27017,"protocol":"TCP","targetPort":8080}],"selector":{"name":"hello-pod"},"type":"LoadBalancer"}}
    creationTimestamp: "2024-12-06T07:44:32Z"
    finalizers:
    - service.kubernetes.io/load-balancer-cleanup
    labels:
      name: test-service
    name: test-service
    namespace: e2e-test-networking-udn-5tj4x
    resourceVersion: "112380"
    uid: 891f8c08-824a-4124-9837-c143fbe757c5
  spec:
    allocateLoadBalancerNodePorts: true
    clusterIP: 172.30.190.148
    clusterIPs:
    - 172.30.190.148
    externalTrafficPolicy: Cluster
    internalTrafficPolicy: Cluster
    ipFamilies:
    - IPv4
    ipFamilyPolicy: SingleStack
    ports:
    - name: http
      nodePort: 31038
      port: 27017
      protocol: TCP
      targetPort: 8080
    selector:
      name: hello-pod
    sessionAffinity: None
    type: LoadBalancer
  status:
    loadBalancer:
      ingress:
      - ip: 130.211.224.232
        ipMode: VIP
kind: List
metadata:
  resourceVersion: ""

% oc get pods hello-pod -n e2e-test-networking-udn-5tj4x --show-labels
NAME        READY   STATUS    RESTARTS   AGE   LABELS
hello-pod   1/1     Running   0          54m   name=hello-pod

% oc exec -n e2e-test-networking-udn-5tj4x hello-pod -- ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0@if96: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1360 qdisc noqueue state UP group default
    link/ether 0a:58:0a:81:02:13 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.129.2.19/23 brd 10.129.3.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::858:aff:fe81:213/64 scope link
       valid_lft forever preferred_lft forever
3: ovn-udn1@if97: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP group default
    link/ether 0a:58:0a:c8:00:04 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.200.0.4/24 brd 10.200.0.255 scope global ovn-udn1
       valid_lft forever preferred_lft forever
    inet6 fe80::858:aff:fec8:4/64 scope link
       valid_lft forever preferred_lft forever
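For reference, a minimal sketch of the objects the automated case creates, reconstructed from the last-applied-configuration annotations in the output above (the hello-pod backend pod is created separately by the test case):

% oc apply -f - <<'EOF'
# Sketch reconstructed from the last-applied-configuration annotations above.
apiVersion: k8s.ovn.org/v1
kind: UserDefinedNetwork
metadata:
  name: udn-network-l3-76014
  namespace: e2e-test-networking-udn-5tj4x
spec:
  topology: Layer3
  layer3:
    mtu: 1400
    role: Primary
    subnets:
    - cidr: 10.200.0.0/16
      hostSubnet: 24
---
apiVersion: v1
kind: Service
metadata:
  name: test-service
  namespace: e2e-test-networking-udn-5tj4x
  labels:
    name: test-service
spec:
  type: LoadBalancer
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilyPolicy: SingleStack
  selector:
    name: hello-pod
  ports:
  - name: http
    port: 27017
    protocol: TCP
    targetPort: 8080
EOF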
Actual results:
% curl 130.211.224.232:27017 --connect-timeout 30
curl: (28) Failed to connect to 130.211.224.232 port 27017 after 30006 ms: Timeout was reached
Created another test pod in the same namespace; the ClusterIP service can be accessed from it:
% oc rsh -n e2e-test-networking-udn-5tj4x hello-pod-1
~ $ curl 172.30.190.148:27017
Hello OpenShift!
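As an additional data point (not captured in the failing run), the NodePort path could be checked from outside the cluster in the same way; <node-external-ip> below is a placeholder for any node's external address, and 31038 is the nodePort allocated to the service above:

% curl <node-external-ip>:31038 --connect-timeout 30    # hypothetical check; replace <node-external-ip> with a real node address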
Expected results:
curl to the LoadBalancer external IP (130.211.224.232:27017) should return the application response in local gateway mode, the same as in shared gateway mode.
Additional info:
Please fill in the following template while reporting a bug and provide as much relevant information as possible. Doing so will give us the best chance to find a prompt resolution.
Affected Platforms:
Is it an
- internal CI failure
- customer issue / SD
- internal RedHat testing failure
If it is an internal RedHat testing failure:
- Please share a kubeconfig or creds to a live cluster for the assignee to debug/troubleshoot along with reproducer steps (especially if it's a telco use case like ICNI, secondary bridges, or BM+kubevirt).
If it is a CI failure:
- Did it happen in different CI lanes? If so please provide links to multiple failures with the same error instance
- Did it happen in both sdn and ovn jobs? If so please provide links to multiple failures with the same error instance
- Did it happen in other platforms (e.g. aws, azure, gcp, baremetal etc) ? If so please provide links to multiple failures with the same error instance
- When did the failure start happening? Please provide the UTC timestamp of the networking outage window from a sample failure run
- If it's a connectivity issue,
- What is the srcNode, srcIP and srcNamespace and srcPodName?
- What is the dstNode, dstIP and dstNamespace and dstPodName?
- What is the traffic path? (examples: pod2pod? pod2external?, pod2svc? pod2Node? etc)
If it is a customer / SD issue:
- Provide enough information in the bug description that Engineering doesn’t need to read the entire case history.
- Don’t presume that Engineering has access to Salesforce.
- Do presume that Engineering will access attachments through supportshell.
- Describe what each relevant attachment is intended to demonstrate (failed pods, log errors, OVS issues, etc).
- Referring to the attached must-gather, sosreport or other attachment, please provide the following details:
- If the issue is in a customer namespace then provide a namespace inspect.
- If it is a connectivity issue:
- What is the srcNode, srcNamespace, srcPodName and srcPodIP?
- What is the dstNode, dstNamespace, dstPodName and dstPodIP?
- What is the traffic path? (examples: pod2pod? pod2external?, pod2svc? pod2Node? etc)
- Please provide the UTC timestamp of the networking outage window from the must-gather
- Please provide tcpdump pcaps taken during the outage, filtered based on the src/dst IPs provided above (an example capture command is sketched after this list)
- If it is not a connectivity issue:
- Describe the steps taken so far to analyze the logs from networking components (cluster-network-operator, OVNK, SDN, openvswitch, ovs-configure etc) and the actual component where the issue was seen, based on the attached must-gather. Please attach snippets of relevant logs around the window when the problem happened, if any.
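As an illustration of the capture requested above, a command along these lines could be run on the affected node during the outage; the IPs are placeholders to be replaced with the actual src/dst details:

% tcpdump -i any -nn -w /tmp/outage.pcap host <srcPodIP> and host <dstPodIP>    # placeholder IPs; filter on the src/dst provided above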
- When showing the results from commands, include the entire command in the output.
- For OCPBUGS in which the issue has been identified, label with “sbr-triaged”
- For OCPBUGS in which the issue has not been identified and needs Engineering help for root cause, label with “sbr-untriaged”
- Do not set the priority, that is owned by Engineering and will be set when the bug is evaluated
- Note: bugs that do not meet these minimum standards will be closed with label “SDN-Jira-template”
- For guidance on using this template please see
OCPBUGS Template Training for Networking components