OCPBUGS-50558: Intermittent traffic drop after recreating the egressIP object



      Description of problem:
      After an egressIP object is deleted and recreated, egress traffic from the pods selected by it is intermittently broken, or leaves the cluster with the node IP instead of the egressIP, for several minutes before recovering.

      Version-Release number of selected component (if applicable):
      % oc get clusterversion
      NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
      version   4.18.0-0.nightly-2025-02-10-142434   True        False         5h24m   Cluster version is 4.18.0-0.nightly-2025-02-10-142434

      How reproducible:
      Sometimes

      Steps to Reproduce:
      We have an automated case, OCP-47163, which has failed in CI runs. The case ID 47163 can be searched for in the build log below.
      https://gcsweb-qe-private-deck-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/qe-private-deck/logs/periodic-ci-openshift-openshift-tests-private-release-4.18-amd64-nightly-vsphere-ipi-ovn-shared-to-local-gw-migration-f28-destructive/1888035757449285632/artifacts/vsphere-ipi-ovn-shared-to-local-gw-migration-f28-destructive/openshift-extended-test-disruptive/build-log.txt

      Example failing jobs:
      job: https://qe-private-deck-ci.apps.ci.l2s4.p1.openshiftapps.com/view/gs/qe-private-deck/logs/periodic-ci-openshift-openshift-tests-private-release-4.18-amd64-nightly-vsphere-ipi-ovn-shared-to-local-gw-migration-f28-destructive/1888035757449285632
      job: https://qe-private-deck-ci.apps.ci.l2s4.p1.openshiftapps.com/view/gs/qe-private-deck/logs/periodic-ci-openshift-openshift-tests-private-release-4.18-multi-nightly-aws-ipi-sno-etcd-encryption-amd-f28-destructive/1887267734719303680

      I also tried to reproduce this in a local environment. It is not easy to reproduce, but the issue can still be hit occasionally, e.g. 1 or 2 failed runs out of 5. After the same egressIP object was deleted and recreated, the egress traffic from the pod was sometimes broken entirely, and sometimes it left the cluster with the node IP instead of the egressIP. A rough manual reproduction sketch is below.
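
      A sketch of the manual reproduction flow, assuming the EgressIP spec shown further below is saved as a manifest file (egressip-47163.yaml is a hypothetical file name; the namespace, pod name and ping target are the ones from this report):

       # Label the namespace so it matches the EgressIP namespaceSelector (name=test)
       % oc label ns e2e-test-networking-6gxuhcma-p6rxv name=test
       # Create the EgressIP object and confirm an egress node is assigned
       % oc apply -f egressip-47163.yaml
       % oc get egressip egressip-47163
       # Delete and recreate the same object
       % oc delete egressip egressip-47163
       % oc apply -f egressip-47163.yaml
       # Re-check egress connectivity from a pod in the selected namespace
       % oc rsh -n e2e-test-networking-6gxuhcma-p6rxv hello-pod ping -c 5 34.160.111.145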

      Below is a result from a local run. After the run I kept the environment for checking: the egress traffic was broken, and then more than a couple of minutes later it started working again. I cannot say exactly how long it took to recover; the automated case polls for about 3 minutes without success, and after that it also took me some time to check manually.

       % oc rsh -n e2e-test-networking-6gxuhcma-p6rxv hello-pod   
      ~ $ ping 34.160.111.145
      PING 34.160.111.145 (34.160.111.145) 56(84) bytes of data.
      
      ^C
      --- 34.160.111.145 ping statistics ---
      48 packets transmitted, 0 received, 100% packet loss, time 48111ms
      
      ~ $ exit
      
      % oc get egressip -o yaml
      apiVersion: v1
      items:
      - apiVersion: k8s.ovn.org/v1
        kind: EgressIP
        metadata:
          annotations:
            k8s.ovn.org/egressip-mark: "50007"
          creationTimestamp: "2025-02-11T08:29:18Z"
          generation: 2
          name: egressip-47163
          resourceVersion: "97899"
          uid: 9aa37d79-5365-4fa9-9b94-b4e1b35f2057
        spec:
          egressIPs:
          - 10.0.137.215
          - 10.0.135.28
          namespaceSelector:
            matchLabels:
              name: test
        status:
          items:
          - egressIP: 10.0.137.215
            node: ip-10-0-129-158.us-east-2.compute.internal
      kind: List
      metadata:
        resourceVersion: ""
      
      % oc get ns e2e-test-networking-6gxuhcma-p6rxv --show-labels
      NAME                                 STATUS   AGE   LABELS
      e2e-test-networking-6gxuhcma-p6rxv   Active   46m   kubernetes.io/metadata.name=e2e-test-networking-6gxuhcma-p6rxv,name=test,pod-security.kubernetes.io/audit-version=latest,pod-security.kubernetes.io/audit=privileged,pod-security.kubernetes.io/enforce-version=latest,pod-security.kubernetes.io/enforce=privileged,pod-security.kubernetes.io/warn-version=latest,pod-security.kubernetes.io/warn=privileged
      
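      Ping only shows whether packets get through at all; to tell whether the traffic that does work leaves with the egressIP or has fallen back to the node IP, a capture on the assigned egress node can help. A rough sketch, assuming tcpdump is usable on the RHCOS host and filtering on the ping target above:

       # On the egress node currently holding 10.0.137.215, watch the source address
       # of packets sent to the external target while the pod pings it
       % oc debug node/ip-10-0-129-158.us-east-2.compute.internal -- chroot /host \
           timeout 30 tcpdump -nn -i any icmp and host 34.160.111.145
       # Correctly SNAT'ed traffic should show src 10.0.137.215 (the egressIP); packets
       # with the node IP as source, or no packets at all, match the two failure modes above
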
      

      Actual results:
      After the egressIP object is deleted and recreated, egress traffic from matching pods is sometimes dropped for several minutes, or egresses with the node IP instead of the egressIP.

      Expected results:
      Egress traffic from matching pods keeps using the egressIP, without interruption, after the egressIP object is recreated.

      Additional info:

      Please fill in the following template while reporting a bug and provide as much relevant information as possible. Doing so will give us the best chance to find a prompt resolution.

      Affected Platforms:

      Is it an

      1. internal CI failure
      2. customer issue / SD
      3. internal RedHat testing failure

      If it is an internal RedHat testing failure:

      • Please share a kubeconfig or creds to a live cluster for the assignee to debug/troubleshoot, along with reproducer steps (especially if it's a telco use case like ICNI, secondary bridges or BM+kubevirt).

      If it is a CI failure:

      • Did it happen in different CI lanes? If so please provide links to multiple failures with the same error instance
      • Did it happen in both sdn and ovn jobs? If so please provide links to multiple failures with the same error instance
      • Did it happen in other platforms (e.g. aws, azure, gcp, baremetal etc) ? If so please provide links to multiple failures with the same error instance
      • When did the failure start happening? Please provide the UTC timestamp of the networking outage window from a sample failure run
      • If it's a connectivity issue,
      • What is the srcNode, srcIP and srcNamespace and srcPodName?
      • What is the dstNode, dstIP and dstNamespace and dstPodName?
      • What is the traffic path? (examples: pod2pod? pod2external?, pod2svc? pod2Node? etc)

      If it is a customer / SD issue:

      • Provide enough information in the bug description that Engineering doesn’t need to read the entire case history.
      • Don’t presume that Engineering has access to Salesforce.
      • Do presume that Engineering will access attachments through supportshell.
      • Describe what each relevant attachment is intended to demonstrate (failed pods, log errors, OVS issues, etc).
      • Referring to the attached must-gather, sosreport or other attachment, please provide the following details:
        • If the issue is in a customer namespace then provide a namespace inspect.
        • If it is a connectivity issue:
          • What is the srcNode, srcNamespace, srcPodName and srcPodIP?
          • What is the dstNode, dstNamespace, dstPodName and dstPodIP?
          • What is the traffic path? (examples: pod2pod? pod2external?, pod2svc? pod2Node? etc)
          • Please provide the UTC timestamp networking outage window from must-gather
          • Please provide tcpdump pcaps taken during the outage, filtered on the src/dst IPs provided above (a sample capture command is sketched after this list)
        • If it is not a connectivity issue:
          • Describe the steps taken so far to analyze the logs from networking components (cluster-network-operator, OVNK, SDN, openvswitch, ovs-configure etc.) and the actual component where the issue was seen, based on the attached must-gather. Please attach snippets of relevant logs around the window when the problem happened, if any.
      • When showing the results from commands, include the entire command in the output.  
      • For OCPBUGS in which the issue has been identified, label with “sbr-triaged”
      • For OCPBUGS in which the issue has not been identified and needs Engineering help for root cause, label with “sbr-untriaged”
      • Do not set the priority, that is owned by Engineering and will be set when the bug is evaluated
      • Note: bugs that do not meet these minimum standards will be closed with label “SDN-Jira-template”
      • For guidance on using this template please see
        OCPBUGS Template Training for Networking components
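
      For this specific failure, the namespace inspect and the filtered capture requested above would look roughly like the following (the destination IP and node reuse the values from this report; the pcap path and --dest-dir are illustrative):

       # Gather the test namespace for inspection
       % oc adm inspect ns/e2e-test-networking-6gxuhcma-p6rxv --dest-dir=inspect.local
       # Capture a pcap on the egress node during the outage window, filtered on the
       # external destination used in the reproducer, for later analysis
       % oc debug node/ip-10-0-129-158.us-east-2.compute.internal -- chroot /host \
           timeout 120 tcpdump -nn -i any -w /var/tmp/egressip-outage.pcap host 34.160.111.145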
