OpenShift Bugs / OCPBUGS-38117

PodNetworkConnectivityCheck is reporting "i/o timeout"


    • 08/12 Not actionable with the info given. BQI: Poor - need more troubleshooting from support

      Description of problem:
      PodNetworkConnectivityCheck reports "i/o timeout" for pods that are running.

      ~~~
      apiVersion: controlplane.operator.openshift.io/v1alpha1
      kind: PodNetworkConnectivityCheck
      metadata:
        creationTimestamp: '2024-07-06T00:38:12Z'
        generation: 3
        managedFields:
        - apiVersion: controlplane.operator.openshift.io/v1alpha1
          fieldsType: FieldsV1
          fieldsV1:
            'f:metadata':

      failures:
      - latency: 10.000398687s
        message: >-
          network-check-target-lxworker133: failed to establish a TCP connection
          to 10.128.41.44:8080: dial tcp 10.128.41.44:8080: i/o timeout
        reason: TCPConnectError
        success: false
        time: '2024-07-25T13:47:36Z'
      - latency: 3.13174ms
        message: >-
          network-check-target-lxworker133: failed to establish a TCP connection
          to 10.128.83.6:8080: dial tcp 10.128.83.6:8080: connect: connection
          refused
        reason: TCPConnectError
        success: false
        time: '2024-07-06T01:32:52Z'
      - latency: 4.517517ms
        message: >-
          network-check-target-lxworker133: failed to establish a TCP connection
          to 10.128.83.7:8080: dial tcp 10.128.83.7:8080: connect: connection
          refused
        reason: TCPConnectError
        success: false
        time: '2024-07-06T00:52:12Z'
      - latency: 10.001391928s
        message: >-
          network-check-target-lxworker133: failed to establish a TCP connection
          to 10.128.83.7:8080: dial tcp 10.128.83.7:8080: i/o timeout
        reason: TCPConnectError
        success: false
        time: '2024-07-06T00:51:12Z'
      - latency: 2.842981ms
        message: >-
          network-check-target-lxworker133: failed to establish a TCP connection
          to 10.128.83.7:8080: dial tcp 10.128.83.7:8080: connect: connection
          refused
        reason: TCPConnectError
        success: false
        time: '2024-07-06T00:50:12Z'
      outages:
      - end: '2024-07-25T13:48:36Z'
        endLogs:
        - latency: 8.988653ms
          message: >-
            network-check-target-lxworker133: tcp connection to
            10.128.41.44:8080 succeeded
          reason: TCPConnect
          time: '2024-07-25T13:47:36Z'
      - end: '2024-07-06T01:33:52Z'
        endLogs:
        - latency: 25.499411ms
          message: >-
            network-check-target-lxworker133: tcp connection to
            10.128.41.44:8080 succeeded
          reason: TCPConnect
          success: true
          time: '2024-07-06T01:33:52Z'
        - latency: 3.13174ms
          message: >-
            network-check-target-lxworker133: failed to establish a TCP
            connection to 10.128.83.6:8080: dial tcp 10.128.83.6:8080: connect:
            connection refused
          reason: TCPConnectError
          success: false
          time: '2024-07-06T01:32:52Z'
        message: Connectivity restored after 1m0.001979942s
        start: '2024-07-06T01:32:52Z'
        startLogs:
        - latency: 3.13174ms
          message: >-
            network-check-target-lxworker133: failed to establish a TCP
            connection to 10.128.83.6:8080: dial tcp 10.128.83.6:8080: connect:
            connection refused
          reason: TCPConnectError
          success: false
          time: '2024-07-06T01:32:52Z'
      ...
      successes:
      - latency: 10.041793ms
        message: >-
          network-check-target-lxworker133: tcp connection to 10.128.41.44:8080
          succeeded
        reason: TCPConnect
        success: true
        time: '2024-08-02T15:24:36Z'
      - latency: 22.147413ms
        message: >-
          network-check-target-lxworker133: tcp connection to 10.128.41.44:8080
          succeeded
        reason: TCPConnect
        success: true
        time: '2024-08-02T15:23:36Z'
      - latency: 7.61857ms
        message: >-
          ...
      ~~~
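
      For reference, the status above can be pulled straight from the cluster. A minimal sketch, assuming these checks live in the default openshift-network-diagnostics namespace and that jq is available on the support workstation; <check-name> is a placeholder for the affected check:

      ~~~
      # List the PodNetworkConnectivityCheck objects created by the network diagnostics controller
      oc get podnetworkconnectivitycheck -n openshift-network-diagnostics

      # Dump only the failure entries of one check and filter for i/o timeouts
      oc get podnetworkconnectivitycheck <check-name> \
        -n openshift-network-diagnostics -o json \
        | jq '.status.failures[] | select(.message | contains("i/o timeout"))'
      ~~~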

      When checking the node's sosreport we were not able to find any relevant logs. The customer also replied: "We haven't found any problems on the node, except for this outage warning. We checked the logs of application pods running on this node and no errors are seen. On the other hand, CPU, memory and network consumption look normal."

      Since we do not see any problem on the nodes, nor any disconnections at the network level, what can be the cause of the issue?
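
      One way to narrow this down is to re-run the probe by hand around a failure window. A rough sketch, assuming the source is the network-check-source deployment in openshift-network-diagnostics, that a curl binary is available in that image (otherwise use a debug pod on the same node), and taking the target IP/port from the failure messages above; the 10s timeout matches the ~10s latencies seen in the failures:

      ~~~
      # Confirm the target pod behind 10.128.41.44:8080 was Running at the failure timestamp
      oc get pods -n openshift-network-diagnostics -o wide | grep network-check-target

      # Repeat the same TCP probe from the source pod with a 10s connect timeout
      oc -n openshift-network-diagnostics exec deploy/network-check-source -- \
        curl -sv --connect-timeout 10 -o /dev/null http://10.128.41.44:8080
      ~~~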

      Version-Release number of selected component (if applicable): 4.12.35

      How reproducible:

      Steps to Reproduce:

      1.

      2.

      3.

      Actual results: Getting "i/o timeout" errors in the PodNetworkConnectivityCheck (PNCC) status

      Expected results: The connectivity check should not report any errors, since we do not see any problem on the nodes or any disconnections at the network level.

      Additional info:

      Please fill in the following template while reporting a bug and provide as much relevant information as possible. Doing so will give us the best chance to find a prompt resolution.

      Affected Platforms:

      Is it an

      1. internal CI failure
      2. customer issue / SD
      3. internal Red Hat testing failure

      If it is an internal Red Hat testing failure:

      • Please share a kubeconfig or creds to a live cluster for the assignee to debug/troubleshoot along with reproducer steps (especially if it's a telco use case like ICNI, secondary bridges or BM+kubevirt).

      If it is a CI failure:

      • Did it happen in different CI lanes? If so please provide links to multiple failures with the same error instance
      • Did it happen in both sdn and ovn jobs? If so please provide links to multiple failures with the same error instance
      • Did it happen in other platforms (e.g. aws, azure, gcp, baremetal etc) ? If so please provide links to multiple failures with the same error instance
      • When did the failure start happening? Please provide the UTC timestamp of the networking outage window from a sample failure run
      • If it's a connectivity issue,
      • What is the srcNode, srcIP and srcNamespace and srcPodName?
      • What is the dstNode, dstIP and dstNamespace and dstPodName?
      • What is the traffic path? (examples: pod2pod? pod2external?, pod2svc? pod2Node? etc)

      If it is a customer / SD issue:

      • Provide enough information in the bug description that Engineering doesn’t need to read the entire case history.
      • Don’t presume that Engineering has access to Salesforce.
      • Do presume that Engineering will access attachments through supportshell.
      • Describe what each relevant attachment is intended to demonstrate (failed pods, log errors, OVS issues, etc).
      • Referring to the attached must-gather, sosreport or other attachment, please provide the following details:
        • If the issue is in a customer namespace then provide a namespace inspect.
        • If it is a connectivity issue:
          • What is the srcNode, srcNamespace, srcPodName and srcPodIP?
          • What is the dstNode, dstNamespace, dstPodName and dstPodIP?
          • What is the traffic path? (examples: pod2pod? pod2external?, pod2svc? pod2Node? etc)
          • Please provide the UTC timestamp networking outage window from must-gather
          • Please provide tcpdump pcaps taken during the outage filtered based on the above provided src/dst IPs (a sample capture command is sketched after this list)
        • If it is not a connectivity issue:
          • Describe the steps taken so far to analyze the logs from networking components (cluster-network-operator, OVNK, SDN, openvswitch, ovs-configure etc) and the actual component where the issue was seen based on the attached must-gather. Please attach snippets of relevant logs around the window when the problem happened, if any.
      • When showing the results from commands, include the entire command in the output.  
      • For OCPBUGS in which the issue has been identified, label with “sbr-triaged”
      • For OCPBUGS in which the issue has not been identified and needs Engineering help for root cause, label with “sbr-untriaged”
      • Do not set the priority, that is owned by Engineering and will be set when the bug is evaluated
      • Note: bugs that do not meet these minimum standards will be closed with label “SDN-Jira-template”
      • For guidance on using this template please see
        OCPBUGS Template Training for Networking components
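
      For the tcpdump pcaps requested above, a capture like the following, run on both the source and destination nodes during the outage window, is usually sufficient. This is only a sketch; the node name, IP and port are taken from this bug's failure messages and should be adjusted per occurrence:

      ~~~
      # Capture only the probe traffic on the node hosting the target pod (repeat on the source node)
      oc debug node/lxworker133 -- chroot /host \
        timeout 300 tcpdump -i any -nn -w /var/tmp/pncc-outage.pcap \
        'host 10.128.41.44 and port 8080'
      ~~~

      The resulting pcap can then be copied off the node and attached to the case.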

              mkennell@redhat.com Martin Kennelly
              hepatil Hemant Patil (Inactive)
              Anurag Saxena Anurag Saxena