OpenShift Bugs / OCPBUGS-54911

With route advertisement enabled, a pod cannot access a pod host port on the same node or on another node.

    • Release Note Type: Known Issue
    • Release Note Text: pod.spec.container.ports.hostPort not supported when network is BGP advertised

      Description of problem:

      A pod opens a port on the node (hostPort) that maps to its container port, with no Service fronting the pod. When that host port is accessed from another pod using the node IP and port number, curl fails with a connection timeout if route advertisement is enabled on the cluster. The failure occurs whether the client pod runs on the same node as the server pod or on a different node.

      Without route advertisement, the curl requests pass.
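The failing check is an ordinary TCP connect to nodeIP:hostPort. As a local, cluster-free sketch of what that check does (the httpserver pods run `python -m http.server`, so the same server class is used here; the loopback address and an ephemeral port are stand-ins for the node IP and the 30003 hostPort):

```python
# Local sketch of the connectivity check, not the cluster repro itself.
# The loopback address and ephemeral port stand in for nodeIP:hostPort.
import http.server
import socket
import threading

def can_connect(host, port, timeout=5):
    """Equivalent of `curl --connect-timeout 5 <host>:<port>`: a plain TCP
    connect. On the affected cluster this times out against nodeIP:30003."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Stand-in for the httpserver pod, which runs `python -m http.server`.
server = http.server.HTTPServer(("127.0.0.1", 0),  # port 0 = any free port
                                http.server.SimpleHTTPRequestHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

print(can_connect("127.0.0.1", port))  # True while the server is reachable

server.shutdown()
server.server_close()
```

With route advertisement enabled, the cluster-side equivalent of `can_connect` returns the timeout case even though the server pod is serving.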

      Version-Release number of selected component (if applicable):

      oc version
      Client Version: 4.15.9
      Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
      Server Version: 4.19.0-0.test-2025-04-10-063641-ci-ln-z4twiwt-latest
      Kubernetes Version: v1.32.3

      How reproducible:

      Always

      Steps to Reproduce:

      1. Set up an external router and peer with it, then create a route advertisement.

      oc get ra default -oyaml

      apiVersion: k8s.ovn.org/v1
      kind: RouteAdvertisements
      metadata:
        annotations:
          kubectl.kubernetes.io/last-applied-configuration: |
            {"apiVersion":"k8s.ovn.org/v1","kind":"RouteAdvertisements","metadata":{"annotations":{},"name":"default"},"spec":{"advertisements":["PodNetwork"],"networkSelector":{"matchLabels":{"k8s.ovn.org/default-network":""}}}}
        creationTimestamp: "2025-04-10T08:37:35Z"
        generation: 3
        name: default
        resourceVersion: "263319"
        uid: f42fe855-fe89-4ddb-8172-956ef911deb4
      spec:
        advertisements:
        - PodNetwork
        networkSelector:
          matchLabels:
            k8s.ovn.org/default-network: ""
      status:
        conditions:
        - lastTransitionTime: "2025-04-10T21:51:51Z"
          message: ovn-kubernetes cluster-manager validated the resource and requested the
            necessary configuration changes
          observedGeneration: 3
          reason: Accepted
          status: "True"
          type: Accepted
        status: Accepted
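For step 1, the advertisement can be created from a minimal manifest matching the last-applied-configuration annotation above (the file name ra.yaml is arbitrary; apply with `oc apply -f ra.yaml`):

```yaml
# Minimal RouteAdvertisements manifest, taken from the
# last-applied-configuration annotation in the output above.
apiVersion: k8s.ovn.org/v1
kind: RouteAdvertisements
metadata:
  name: default
spec:
  advertisements:
  - PodNetwork
  networkSelector:
    matchLabels:
      k8s.ovn.org/default-network: ""
```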

      2. Create two pods, one per node, each opening host port 30003 on the node where it runs.

      Use the JSON below to create the httpserver pods.

      {
          "kind": "List",
          "apiVersion": "v1",
          "metadata": {},
          "items": [
              {
                  "apiVersion": "v1",
                  "kind": "Pod",
                  "metadata": {
                      "annotations": {
                          "openshift.io/scc": "privileged"
                      },
                      "labels": {
                          "app.kubernetes.io/name": "httpserver"
                      },
                      "name": "httpserverpod-73454-0",
                      "namespace": "test"
                  },
                  "spec": {
                      "containers": [
                          {
                              "command": [
                                  "python",
                                  "-m",
                                  "http.server",
                                  "-b",
                                  "::",
                                  "30001"
                              ],
                              "image": "image-registry.openshift-image-registry.svc:5000/openshift/tools:latest",
                              "imagePullPolicy": "IfNotPresent",
                              "name": "httpserver",
                              "ports": [
                                  {
                                      "containerPort": 30001,
                                      "hostPort": 30003,
                                      "name": "httpport",
                                      "protocol": "TCP"
                                  }
                              ],
                              "resources": {
                                  "limits": {
                                      "cpu": "100m",
                                      "memory": "128Mi"
                                  },
                                  "requests": {
                                      "cpu": "1m",
                                      "memory": "10Mi"
                                  }
                              },
                              "securityContext": {
                                  "privileged": true
                              }
                          }
                      ],
                      "dnsPolicy": "ClusterFirst",
                      "nodeName": "worker-0",
                      "securityContext": {}
                  }
              }
          ]
      }

      Test Pod

      {
          "kind": "List",
          "apiVersion": "v1",
          "metadata": {},
          "items": [
              {
                  "apiVersion": "v1",
                  "kind": "Pod",
                  "metadata": {
                      "labels": {
                          "name": "hello-pod"
                      },
                      "name": "test-pod-73454-0",
                      "namespace": "e2e-test-networking-adminnetworkpolicy-swk5l"
                  },
                  "spec": {
                      "containers": [
                          {
                              "image": "quay.io/openshifttest/hello-sdn@sha256:c89445416459e7adea9a5a416b3365ed3d74f2491beb904d61dc8d1eb89a72a4",
                              "name": "hello-pod",
                              "securityContext": {
                                  "allowPrivilegeEscalation": false,
                                  "capabilities": {
                                      "drop": [
                                          "ALL"
                                      ]
                                  }
                              }
                          }
                      ],
                      "nodeName": "worker-0",
                      "securityContext": {
                          "runAsNonRoot": true,
                          "seccompProfile": {
                              "type": "RuntimeDefault"
                          }
                      }
                  }
              }
          ]
      }

      oc get pods -owide

      NAME                    READY   STATUS      RESTARTS   AGE     IP               NODE       NOMINATED NODE   READINESS GATES
      httpserverpod-73454-0   1/1     Running     0          2m13s   10.128.2.56      worker-0   <none>           <none>
      httpserverpod-73454-1   1/1     Running     0          9s      10.131.0.71      worker-1   <none>           <none>
      test-pod-73454-0        1/1     Running     0          23h     10.128.2.67      worker-0   <none>           <none>

      3. Access the host port on the nodes from the test pod.

      master-0 {"ipv4":"192.168.111.20/24"}
      master-1 {"ipv4":"192.168.111.21/24"}
      master-2 {"ipv4":"192.168.111.22/24"}
      worker-0 {"ipv4":"192.168.111.23/24"}
      worker-1 {"ipv4":"192.168.111.24/24"}
      worker-2 {"ipv4":"192.168.111.25/24"}

      Test pod and httpserver pod on the same node:

      oc exec -it test-pod-73454-0 -- curl -I 192.168.111.23:30003 --connect-timeout 5

      curl: (28) Connection timeout after 5000 ms
      command terminated with exit code 28

      Test pod and httpserver pod on different nodes:

      oc exec -it test-pod-73454-0 -- curl -I 192.168.111.24:30003 --connect-timeout 5

      curl: (28) Connection timeout after 5000 ms
      command terminated with exit code 28

      Actual results:

      curl fails with a connection timeout (exit code 28).

      Expected results:

      curl should succeed.

      Additional info:

      Delete the route advertisement (oc delete ra default) and the same curl requests pass:

      oc exec -it test-pod-73454-0 -- curl -I 192.168.111.23:30003 --connect-timeout 5

      HTTP/1.0 200 OK
      Server: SimpleHTTP/0.6 Python/3.9.18
      Date: Fri, 11 Apr 2025 16:29:33 GMT
      Content-type: text/html; charset=utf-8
      Content-Length: 1024

      oc exec -it test-pod-73454-0 -- curl -I 192.168.111.24:30003 --connect-timeout 5

      HTTP/1.0 200 OK
      Server: SimpleHTTP/0.6 Python/3.9.18
      Date: Fri, 11 Apr 2025 16:29:42 GMT
      Content-type: text/html; charset=utf-8
      Content-Length: 1024

      Please fill in the following template while reporting a bug and provide as much relevant information as possible. Doing so will give us the best chance to find a prompt resolution.

      Affected Platforms:

      Is it an

      1. internal CI failure
      2. customer issue / SD
      3. internal RedHat testing failure

      If it is an internal RedHat testing failure:

      • Please share a kubeconfig or creds to a live cluster for the assignee to debug/troubleshoot along with reproducer steps (specially if it's a telco use case like ICNI, secondary bridges or BM+kubevirt).

      If it is a CI failure:

      • Did it happen in different CI lanes? If so please provide links to multiple failures with the same error instance
      • Did it happen in both sdn and ovn jobs? If so please provide links to multiple failures with the same error instance
      • Did it happen in other platforms (e.g. aws, azure, gcp, baremetal etc) ? If so please provide links to multiple failures with the same error instance
      • When did the failure start happening? Please provide the UTC timestamp of the networking outage window from a sample failure run
      • If it's a connectivity issue,
      • What is the srcNode, srcIP and srcNamespace and srcPodName?
      • What is the dstNode, dstIP and dstNamespace and dstPodName?
      • What is the traffic path? (examples: pod2pod? pod2external?, pod2svc? pod2Node? etc)

      If it is a customer / SD issue:

      • Provide enough information in the bug description that Engineering doesn’t need to read the entire case history.
      • Don’t presume that Engineering has access to Salesforce.
      • Do presume that Engineering will access attachments through supportshell.
      • Describe what each relevant attachment is intended to demonstrate (failed pods, log errors, OVS issues, etc).
      • Referring to the attached must-gather, sosreport or other attachment, please provide the following details:
        • If the issue is in a customer namespace then provide a namespace inspect.
        • If it is a connectivity issue:
          • What is the srcNode, srcNamespace, srcPodName and srcPodIP?
          • What is the dstNode, dstNamespace, dstPodName and dstPodIP?
          • What is the traffic path? (examples: pod2pod? pod2external?, pod2svc? pod2Node? etc)
          • Please provide the UTC timestamp networking outage window from must-gather
          • Please provide tcpdump pcaps taken during the outage filtered based on the above provided src/dst IPs
        • If it is not a connectivity issue:
          • Describe the steps taken so far to analyze the logs from networking components (cluster-network-operator, OVNK, SDN, openvswitch, ovs-configure etc) and the actual component where the issue was seen based on the attached must-gather. Please attach snippets of relevant logs around the window when problem has happened if any.
      • When showing the results from commands, include the entire command in the output.  
      • For OCPBUGS in which the issue has been identified, label with “sbr-triaged”
      • For OCPBUGS in which the issue has not been identified and needs Engineering help for root cause, label with “sbr-untriaged”
      • Do not set the priority, that is owned by Engineering and will be set when the bug is evaluated
      • Note: bugs that do not meet these minimum standards will be closed with label “SDN-Jira-template”
      • For guidance on using this template please see OCPBUGS Template Training for Networking components

              People: sdn-team bot (sdn-team-bot), Arti Sood (rhn-support-asood), Jason Boxman