OpenShift Bugs / OCPBUGS-39391

[Azure] Egress traffic broken when an egressIP is applied to a pod


    • Critical
    • Yes
    • Rejected
    • False
    • Service Delivery Architecture Overview
    • Previously, pods selected by an EgressIP and running on Azure self-managed clusters could communicate with the internet. For clusters newly installed with the latest release, the aforementioned pods will still be able to communicate with internal Azure endpoints but will not be able to communicate with the external internet. Upgraded clusters are not affected and the previous behavior is preserved.
    • Known Issue
    • In Progress

      Description of problem:

      Version-Release number of selected component (if applicable):
      4.17.0-0.nightly-2024-09-02-153841
      How reproducible:
      Always
      Steps to Reproduce:

      1. Create a normal Azure cluster
      Label one node as an egress node
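
      With OVN-Kubernetes, a node is made eligible to host egress IPs via the k8s.ovn.org/egress-assignable label; for example (node name taken from the egressIP status below):

      % oc label node huirwang-0903a-dzkzl-worker-westus-cm96g k8s.ovn.org/egress-assignable=""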

      2. Create an egressIP object
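
      A manifest matching the object shown below (a sketch reconstructed from the oc get output, applied with oc apply -f):

      apiVersion: k8s.ovn.org/v1
      kind: EgressIP
      metadata:
        name: egressip-2
      spec:
        egressIPs:
        - 10.0.128.100
        namespaceSelector:
          matchLabels:
            name: qe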

      % oc get egressip -o yaml
      apiVersion: v1
      items:
      - apiVersion: k8s.ovn.org/v1
        kind: EgressIP
        metadata:
          creationTimestamp: "2024-09-03T05:33:52Z"
          generation: 2
          name: egressip-2
          resourceVersion: "84798"
          uid: 0f7c942a-e3e6-4168-8590-a7af4179ac63
        spec:
          egressIPs:
          - 10.0.128.100
          namespaceSelector:
            matchLabels:
              name: qe
        status:
          items:
          - egressIP: 10.0.128.100
            node: huirwang-0903a-dzkzl-worker-westus-cm96g
      kind: List
      metadata:
        resourceVersion: ""
      

      3. Create a namespace and pod in it
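      For example (the image here is an assumption; any image that includes curl works):

      % oc create namespace test
      % oc run hello-pod -n test --image=quay.io/openshifttest/hello-pod --restart=Never
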
      % oc get pods -n test
      NAME        READY   STATUS    RESTARTS   AGE
      hello-pod   1/1     Running   0          4h14m

      Before applying the label to the namespace, the pod can access a public website:
      % oc rsh -n test hello-pod
      ~ $ curl www.google.com -I
      HTTP/1.1 200 OK
      Content-Type: text/html; charset=ISO-8859-1
      Content-Security-Policy-Report-Only: object-src 'none';base-uri 'self';script-src 'nonce-0D7rinwocCtsS5AiY15Hhg' 'strict-dynamic' 'report-sample' 'unsafe-eval' 'unsafe-inline' https: http:;report-uri https://csp.withgoogle.com/csp/gws/other-hp
      P3P: CP="This is not a P3P policy! See g.co/p3phelp for more info."
      Date: Tue, 03 Sep 2024 09:48:16 GMT
      Server: gws
      X-XSS-Protection: 0
      X-Frame-Options: SAMEORIGIN
      Transfer-Encoding: chunked
      Expires: Tue, 03 Sep 2024 09:48:16 GMT
      Cache-Control: private
      Set-Cookie: AEC=AVYB7coTbmQFhsTPmUjwH5qL436wLURNH1OdSkZv6gofOPZd9MZIrDcgRSU; expires=Sun, 02-Mar-2025 09:48:16 GMT; path=/; domain=.google.com; Secure; HttpOnly; SameSite=lax
      Set-Cookie: NID=517=j8fKRt8T0S2pX7B6sHfXmT0f24yAlnhiAaUkyCoAx-_VOf6dtw3TeeDO3HtfRER_i8uSw-D3yLDZXMdOAVcflthJywZa5e5yzxvGCps_5U0rfjBotg3UdNiDlWqN9AzxKB_xIZhpoUD7yJQm8HLN7wgQJPG8P_o4PlUjU9s4yn_FXhI_LeUR; expires=Wed, 05-Mar-2025 09:48:16 GMT; path=/; domain=.google.com; HttpOnly

      After applying the label name=qe to the namespace, which matches the egressIP object's namespaceSelector:
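
      For example:
      % oc label namespace test name=qe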

      Actual results:
      Egress traffic is broken:
      % oc rsh -n test hello-pod
      ~ $ curl www.google.com -I --connect-timeout 5
      curl: (28) Failed to connect to www.google.com port 80 after 2714 ms: Operation timed out

      Expected results:
      Egress traffic is not broken and the egressIP works as expected.

      Additional info:

      Please fill in the following template while reporting a bug and provide as much relevant information as possible. Doing so will give us the best chance to find a prompt resolution.

      Affected Platforms:

      Is it an

      1. internal CI failure
      2. customer issue / SD
      3. internal Red Hat testing failure

      If it is an internal Red Hat testing failure:

      • Please share a kubeconfig or creds to a live cluster for the assignee to debug/troubleshoot, along with reproducer steps (especially if it's a telco use case like ICNI, secondary bridges or BM+kubevirt).

      If it is a CI failure:

      • Did it happen in different CI lanes? If so please provide links to multiple failures with the same error instance
      • Did it happen in both sdn and ovn jobs? If so please provide links to multiple failures with the same error instance
      • Did it happen in other platforms (e.g. aws, azure, gcp, baremetal etc) ? If so please provide links to multiple failures with the same error instance
      • When did the failure start happening? Please provide the UTC timestamp of the networking outage window from a sample failure run
      • If it's a connectivity issue,
      • What is the srcNode, srcIP and srcNamespace and srcPodName?
      • What is the dstNode, dstIP and dstNamespace and dstPodName?
      • What is the traffic path? (examples: pod2pod? pod2external?, pod2svc? pod2Node? etc)

      If it is a customer / SD issue:

      • Provide enough information in the bug description that Engineering doesn’t need to read the entire case history.
      • Don’t presume that Engineering has access to Salesforce.
      • Do presume that Engineering will access attachments through supportshell.
      • Describe what each relevant attachment is intended to demonstrate (failed pods, log errors, OVS issues, etc).
      • Referring to the attached must-gather, sosreport or other attachment, please provide the following details:
        • If the issue is in a customer namespace then provide a namespace inspect.
        • If it is a connectivity issue:
          • What is the srcNode, srcNamespace, srcPodName and srcPodIP?
          • What is the dstNode, dstNamespace, dstPodName and dstPodIP?
          • What is the traffic path? (examples: pod2pod? pod2external?, pod2svc? pod2Node? etc)
          • Please provide the UTC timestamp networking outage window from must-gather
          • Please provide tcpdump pcaps taken during the outage filtered based on the above provided src/dst IPs
        • If it is not a connectivity issue:
          • Describe the steps taken so far to analyze the logs from networking components (cluster-network-operator, OVNK, SDN, openvswitch, ovs-configure etc) and the actual component where the issue was seen based on the attached must-gather. Please attach snippets of relevant logs around the window when the problem happened, if any.
      • When showing the results from commands, include the entire command in the output.  
      • For OCPBUGS in which the issue has been identified, label with “sbr-triaged”
      • For OCPBUGS in which the issue has not been identified and needs Engineering help for root cause, label with “sbr-untriaged”
      • Do not set the priority, that is owned by Engineering and will be set when the bug is evaluated
      • Note: bugs that do not meet these minimum standards will be closed with label “SDN-Jira-template”
      • For guidance on using this template please see
        OCPBUGS Template Training for Networking components

              Assignee: Martin Kennelly (mkennell@redhat.com)
              Reporter: Huiran Wang (huirwang)