Bug | Resolution: Unresolved | Major | 4.20 | Quality / Stability / Reliability | Critical
Description of problem:
During 4.20 CI failure analysis, egressIP cases were found failing on some HyperShift CI jobs.

4.20 jobs:
https://qe-private-deck-ci.apps.ci.l2s4.p1.openshiftapps.com/view/gs/qe-private-deck/logs/periodic-ci-openshift-openshift-tests-private-release-4.20-amd64-nightly-aws-ipi-ovn-hypershift-guest-longduration-f14/1952312377118560256
(check test case 47019 or 47164)
https://qe-private-deck-ci.apps.ci.l2s4.p1.openshiftapps.com/view/gs/qe-private-deck/logs/periodic-ci-openshift-openshift-tests-private-release-4.20-amd64-nightly-aws-ipi-ovn-hypershift-guest-f14-destructive/1952430876063174656

The latest 4.19 job failed as well:
https://qe-private-deck-ci.apps.ci.l2s4.p1.openshiftapps.com/view/gs/qe-private-deck/logs/periodic-ci-openshift-openshift-tests-private-release-4.19-amd64-nightly-aws-ipi-ovn-hypershift-guest-f14-destructive/1952146995032363008

A previous 4.19 job passed:
https://qe-private-deck-ci.apps.ci.l2s4.p1.openshiftapps.com/view/gs/qe-private-deck/logs/periodic-ci-openshift-openshift-tests-private-release-4.19-amd64-nightly-aws-ipi-ovn-hypershift-guest-f14-destructive/1935839618792427520

The latest 4.18 job passed:
https://qe-private-deck-ci.apps.ci.l2s4.p1.openshiftapps.com/view/gs/qe-private-deck/logs/periodic-ci-openshift-openshift-tests-private-release-4.18-amd64-nightly-aws-ipi-ovn-hypershift-guest-f28-destructive/1945650098822189056

Version-Release number of selected component (if applicable):
4.20 and 4.19 nightly builds (see the job links above)

How reproducible:
Always

Steps to Reproduce:
The failure can be reproduced manually as follows.
1. Create an EgressIP object:
% oc get nodes
NAME                          STATUS   ROLES    AGE   VERSION
ip-10-0-14-155.ec2.internal   Ready    worker   90m   v1.33.2
ip-10-0-19-216.ec2.internal   Ready    worker   89m   v1.33.2
ip-10-0-33-96.ec2.internal    Ready    worker   90m   v1.33.2

% oc get egressip -o yaml
apiVersion: v1
items:
- apiVersion: k8s.ovn.org/v1
  kind: EgressIP
  metadata:
    annotations:
      k8s.ovn.org/egressip-mark: "50001"
    creationTimestamp: "2025-08-07T04:08:38Z"
    generation: 4
    name: egressip-47031
    resourceVersion: "30254"
    uid: a211a004-9396-4cfa-8822-b0690b86b37f
  spec:
    egressIPs:
    - 10.0.3.19
    namespaceSelector:
      matchLabels:
        org: qe
    podSelector:
      matchLabels:
        color: pink
  status:
    items:
    - egressIP: 10.0.3.19
      node: ip-10-0-14-155.ec2.internal
kind: List
metadata:
  resourceVersion: ""
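For reference, a minimal manifest consistent with the object shown above; the k8s.ovn.org/egressip-mark annotation and the status are populated by ovn-kubernetes, so only the spec fields are supplied. This is a sketch reconstructed from the output, not necessarily the exact file the test used:

# EgressIP selecting pods labeled color=pink in namespaces labeled org=qe
apiVersion: k8s.ovn.org/v1
kind: EgressIP
metadata:
  name: egressip-47031
spec:
  egressIPs:
  - 10.0.3.19            # egress IP from the cluster's machine network
  namespaceSelector:
    matchLabels:
      org: qe
  podSelector:
    matchLabels:
      color: pink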
2. Create the namespace "test" and a test pod in it:
% oc get pods -n test -o wide
NAME        READY   STATUS    RESTARTS   AGE   IP            NODE                          NOMINATED NODE   READINESS GATES
hello-pod   1/1     Running   0          26m   10.133.2.27   ip-10-0-19-216.ec2.internal   <none>           <none>
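A sketch of this setup, assuming any curl-capable image; the actual image used by the CI case is not shown here, so docker.io/curlimages/curl is a stand-in:

% oc create namespace test
% oc run hello-pod -n test --image=docker.io/curlimages/curl --command -- sleep infinity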
3. Before adding the labels to the namespace and pod, egress traffic works:
% oc rsh -n test hello-pod
~ $ curl www.google.com -I
HTTP/1.1 200 OK
Content-Type: text/html; charset=ISO-8859-1
Content-Security-Policy-Report-Only: object-src 'none';base-uri 'self';script-src 'nonce-SHeV_cAkA5NWHe-YXLRIxA' 'strict-dynamic' 'report-sample' 'unsafe-eval' 'unsafe-inline' https: http:;report-uri https://csp.withgoogle.com/csp/gws/other-hp
P3P: CP="This is not a P3P policy! See g.co/p3phelp for more info."
Date: Thu, 07 Aug 2025 04:27:09 GMT
Server: gws
X-XSS-Protection: 0
X-Frame-Options: SAMEORIGIN
Transfer-Encoding: chunked
Expires: Thu, 07 Aug 2025 04:27:09 GMT
Cache-Control: private
Set-Cookie: AEC=AVh_V2h9rIXMiLxc6g9Zn5D_Ss4cVgVXSnnLL39lR0szmE-Gd5FCIX687Do; expires=Tue, 03-Feb-2026 04:27:09 GMT; path=/; domain=.google.com; Secure; HttpOnly; SameSite=lax
Set-Cookie: NID=525=C5SDJjIzCvlhaytuh7UZEb_vxDW0LsLj0D9xID_z2hgg_8RPqyziJjmTnobivF69OBI0VUcGc_UCSyRmOi7cQ2kaPCZkM9mQF1ax-WAZ4lLDobMvwLu1nwWCLX9JPnDW-RrtofxaypffeaMeBAZVkXuUNP3DsL_XJeNzK5yr6Krbhg3r01vt_qwX5eFZ2MeM7Mwampug3ruVJUhntgE; expires=Fri, 06-Feb-2026 04:27:09 GMT; path=/; domain=.google.com; HttpOnly
4. After adding the labels, egress traffic is broken:
% oc label ns test org=qe
namespace/test labeled
% oc label pod hello-pod -n test color=pink
pod/hello-pod labeled
% oc rsh -n test hello-pod
~ $ curl www.google.com -I --connect-timeout 5
curl: (28) Failed to connect to www.google.com port 80 after 4928 ms: Operation timed out
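To narrow down where the traffic is dropped, one possible next step (a sketch; node names and IPs are taken from the outputs above, and the exact capture interface can vary) is to capture on the assigned egress node, ip-10-0-14-155.ec2.internal, while repeating the curl:

% oc debug node/ip-10-0-14-155.ec2.internal
sh-5.1# chroot /host
sh-5.1# tcpdump -i any -nn host 10.0.3.19 or host 10.133.2.27

If packets never leave with source 10.0.3.19, the SNAT on the egress node is suspect; if they leave but no replies return, the cloud-side assignment of the egress IP to the node's interface is the more likely culprit on AWS.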
Actual results:
After the namespace and pod are labeled to match the EgressIP selectors, curl from the pod to an external host times out (step 4 above).

Expected results:
Egress traffic from the selected pod keeps working, leaving the cluster with the egress IP 10.0.3.19 as its source.
Additional info:
Please fill in the following template while reporting a bug and provide as much relevant information as possible. Doing so will give us the best chance to find a prompt resolution.
Affected Platforms:
Is it an
- internal CI failure
- customer issue / SD
- internal RedHat testing failure
If it is an internal RedHat testing failure:
- Please share a kubeconfig or creds to a live cluster for the assignee to debug/troubleshoot along with reproducer steps (especially if it's a telco use case like ICNI, secondary bridges or BM+kubevirt).
If it is a CI failure:
- Did it happen in different CI lanes? If so please provide links to multiple failures with the same error instance
- Did it happen in both sdn and ovn jobs? If so please provide links to multiple failures with the same error instance
- Did it happen in other platforms (e.g. aws, azure, gcp, baremetal etc) ? If so please provide links to multiple failures with the same error instance
- When did the failure start happening? Please provide the UTC timestamp of the networking outage window from a sample failure run
- If it's a connectivity issue,
- What is the srcNode, srcIP and srcNamespace and srcPodName?
- What is the dstNode, dstIP and dstNamespace and dstPodName?
- What is the traffic path? (examples: pod2pod? pod2external?, pod2svc? pod2Node? etc)
If it is a customer / SD issue:
- Provide enough information in the bug description that Engineering doesn’t need to read the entire case history.
- Don’t presume that Engineering has access to Salesforce.
- Do presume that Engineering will access attachments through supportshell.
- Describe what each relevant attachment is intended to demonstrate (failed pods, log errors, OVS issues, etc).
- Referring to the attached must-gather, sosreport or other attachment, please provide the following details:
- If the issue is in a customer namespace then provide a namespace inspect.
- If it is a connectivity issue:
- What is the srcNode, srcNamespace, srcPodName and srcPodIP?
- What is the dstNode, dstNamespace, dstPodName and dstPodIP?
- What is the traffic path? (examples: pod2pod? pod2external?, pod2svc? pod2Node? etc)
- Please provide the UTC timestamp of the networking outage window from the must-gather
- Please provide tcpdump pcaps taken during the outage filtered based on the above provided src/dst IPs
- If it is not a connectivity issue:
- Describe the steps taken so far to analyze the logs from networking components (cluster-network-operator, OVNK, SDN, openvswitch, ovs-configure etc.) and the actual component where the issue was seen, based on the attached must-gather. Please attach snippets of relevant logs from around the window when the problem happened, if any.
- When showing the results from commands, include the entire command in the output.
- For OCPBUGS in which the issue has been identified, label with “sbr-triaged”
- For OCPBUGS in which the issue has not been identified and needs Engineering help for root cause, label with “sbr-untriaged”
- Do not set the priority, that is owned by Engineering and will be set when the bug is evaluated
- Note: bugs that do not meet these minimum standards will be closed with label “SDN-Jira-template”
- For guidance on using this template please see
OCPBUGS Template Training for Networking components
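For this bug specifically, a sketch of gathering what the template asks for (names taken from the reproduction above; oc adm inspect and oc adm must-gather are standard tooling, and the tcpdump sketch under step 4 covers the pcap request):

% oc adm inspect ns/test    # namespace inspect for the affected namespace
% oc adm must-gather        # cluster-wide must-gather, including networking components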
depends on:
- OCPBUGS-60144 Component Readiness: [Networking / ovn-kubernetes] [EgressIP] test regressed (New)