OCPBUGS-9196: 'execHostnameTest' method should consider situations where a node name is an IP address


      Description of problem:

      This test is currently failing with IBM Cloud's OpenShift offering.

      [sig-network] Services should respect internalTrafficPolicy=Local Pod and Node, to Pod (hostNetwork: true) [Feature:ServiceInternalTrafficPolicy] [Skipped:Network/OVNKubernetes] [Suite:openshift/conformance/parallel] [Suite:k8s]

      The error message is as follows:

      [BeforeEach] [Top Level]
      github.com/openshift/origin/test/extended/util/framework.go:1489
      [BeforeEach] [Top Level]
      github.com/openshift/origin/test/extended/util/framework.go:1489
      [BeforeEach] [Top Level]
      github.com/openshift/origin/test/extended/util/test.go:61
      [BeforeEach] [sig-network] Services
      k8s.io/kubernetes@v1.23.0/test/e2e/framework/framework.go:185
      STEP: Creating a kubernetes client
      STEP: Building a namespace api object, basename services
      Mar 29 17:51:32.839: INFO: About to run a Kube e2e test, ensuring namespace is privileged
      W0329 17:51:33.862239 252 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
      Mar 29 17:51:33.862: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
      STEP: Waiting for a default service account to be provisioned in namespace
      [BeforeEach] [sig-network] Services
      k8s.io/kubernetes@v1.23.0/test/e2e/network/service.go:749
      [It] should respect internalTrafficPolicy=Local Pod and Node, to Pod (hostNetwork: true) [Feature:ServiceInternalTrafficPolicy] [Skipped:Network/OVNKubernetes] [Suite:openshift/conformance/parallel] [Suite:k8s]
      k8s.io/kubernetes@v1.23.0/test/e2e/network/service.go:2194
      Mar 29 17:51:34.214: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
      Mar 29 17:51:36.302: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true)
      Mar 29 17:51:36.388: INFO: Running '/usr/bin/kubectl --server=https://c100-e.containers.test.cloud.ibm.com:30979 --kubeconfig=/tmp/kubeconfig --namespace=e2e-services-2011 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode'
      Mar 29 17:51:38.062: INFO: rc: 7
      Mar 29 17:51:38.216: INFO: Waiting for pod kube-proxy-mode-detector to disappear
      Mar 29 17:51:38.295: INFO: Pod kube-proxy-mode-detector no longer exists
      Mar 29 17:51:38.295: INFO: Couldn't detect KubeProxy mode - test failure may be expected: error running /usr/bin/kubectl --server=https://c100-e.containers.test.cloud.ibm.com:30979 --kubeconfig=/tmp/kubeconfig --namespace=e2e-services-2011 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode:
      Command stdout:

      stderr:
      + curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode
      command terminated with exit code 7

      error:
      exit status 7
      STEP: creating a TCP service svc-itp with type=ClusterIP and internalTrafficPolicy=Local in namespace e2e-services-2011
      STEP: Creating 1 webserver pod to be part of the TCP service
      Mar 29 17:51:38.660: INFO: The status of Pod echo-hostname-0 is Pending, waiting for it to be Running (with Ready = true)
      Mar 29 17:51:40.747: INFO: The status of Pod echo-hostname-0 is Running (Ready = true)
      STEP: waiting up to 3m0s for service svc-itp in namespace e2e-services-2011 to expose endpoints map[echo-hostname-0:[10180]]
      Mar 29 17:51:41.070: INFO: successfully validated that service svc-itp in namespace e2e-services-2011 exposes endpoints map[echo-hostname-0:[10180]]
      STEP: Creating 2 pause pods that will try to connect to the webserver
      Mar 29 17:51:41.235: INFO: The status of Pod pause-pod-0 is Pending, waiting for it to be Running (with Ready = true)
      Mar 29 17:51:43.315: INFO: The status of Pod pause-pod-0 is Running (Ready = true)
      Mar 29 17:51:43.523: INFO: The status of Pod pause-pod-1 is Pending, waiting for it to be Running (with Ready = true)
      Mar 29 17:51:45.603: INFO: The status of Pod pause-pod-1 is Pending, waiting for it to be Running (with Ready = true)
      Mar 29 17:51:47.621: INFO: The status of Pod pause-pod-1 is Running (Ready = true)
      Mar 29 17:51:47.621: INFO: Waiting up to 2m0s to get response from 172.21.181.166:80
      Mar 29 17:51:47.621: INFO: Running '/usr/bin/kubectl --server=https://c100-e.containers.test.cloud.ibm.com:30979 --kubeconfig=/tmp/kubeconfig --namespace=e2e-services-2011 exec pause-pod-0 -- /bin/sh -x -c curl -q -s --connect-timeout 30 172.21.181.166:80/hostname'
      Mar 29 17:51:48.588: INFO: stderr: "+ curl -q -s --connect-timeout 30 172.21.181.166:80/hostname\n"
      Mar 29 17:51:48.588: INFO: stdout: "test-c90relb20u89kutb9q8g-ocptest410-default-0000019d.iks.ibm"
      [AfterEach] [sig-network] Services
      k8s.io/kubernetes@v1.23.0/test/e2e/framework/framework.go:186
      STEP: Collecting events from namespace "e2e-services-2011".
      STEP: Found 19 events.
      Mar 29 17:51:48.668: INFO: At 2022-03-29 17:51:34 +0000 UTC - event for kube-proxy-mode-detector: {default-scheduler } Scheduled: Successfully assigned e2e-services-2011/kube-proxy-mode-detector to 10.177.176.117
      Mar 29 17:51:48.668: INFO: At 2022-03-29 17:51:34 +0000 UTC - event for kube-proxy-mode-detector: {kubelet 10.177.176.117} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.33" already present on machine
      Mar 29 17:51:48.668: INFO: At 2022-03-29 17:51:35 +0000 UTC - event for kube-proxy-mode-detector: {kubelet 10.177.176.117} Created: Created container agnhost-container
      Mar 29 17:51:48.668: INFO: At 2022-03-29 17:51:35 +0000 UTC - event for kube-proxy-mode-detector: {kubelet 10.177.176.117} Started: Started container agnhost-container
      Mar 29 17:51:48.669: INFO: At 2022-03-29 17:51:38 +0000 UTC - event for echo-hostname-0: {default-scheduler } Scheduled: Successfully assigned e2e-services-2011/echo-hostname-0 to 10.177.176.76
      Mar 29 17:51:48.669: INFO: At 2022-03-29 17:51:38 +0000 UTC - event for kube-proxy-mode-detector: {kubelet 10.177.176.117} Killing: Stopping container agnhost-container
      Mar 29 17:51:48.669: INFO: At 2022-03-29 17:51:39 +0000 UTC - event for echo-hostname-0: {kubelet 10.177.176.76} Started: Started container agnhost-container
      Mar 29 17:51:48.669: INFO: At 2022-03-29 17:51:39 +0000 UTC - event for echo-hostname-0: {kubelet 10.177.176.76} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.33" already present on machine
      Mar 29 17:51:48.669: INFO: At 2022-03-29 17:51:39 +0000 UTC - event for echo-hostname-0: {kubelet 10.177.176.76} Created: Created container agnhost-container
      Mar 29 17:51:48.669: INFO: At 2022-03-29 17:51:41 +0000 UTC - event for pause-pod-0: {default-scheduler } Scheduled: Successfully assigned e2e-services-2011/pause-pod-0 to 10.177.176.76
      Mar 29 17:51:48.669: INFO: At 2022-03-29 17:51:42 +0000 UTC - event for pause-pod-0: {multus } AddedInterface: Add eth0 [172.30.227.253/32] from k8s-pod-network
      Mar 29 17:51:48.669: INFO: At 2022-03-29 17:51:42 +0000 UTC - event for pause-pod-0: {kubelet 10.177.176.76} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.33" already present on machine
      Mar 29 17:51:48.670: INFO: At 2022-03-29 17:51:42 +0000 UTC - event for pause-pod-0: {kubelet 10.177.176.76} Created: Created container agnhost-container
      Mar 29 17:51:48.670: INFO: At 2022-03-29 17:51:42 +0000 UTC - event for pause-pod-0: {kubelet 10.177.176.76} Started: Started container agnhost-container
      Mar 29 17:51:48.670: INFO: At 2022-03-29 17:51:43 +0000 UTC - event for pause-pod-1: {default-scheduler } Scheduled: Successfully assigned e2e-services-2011/pause-pod-1 to 10.177.176.117
      Mar 29 17:51:48.670: INFO: At 2022-03-29 17:51:44 +0000 UTC - event for pause-pod-1: {multus } AddedInterface: Add eth0 [172.30.210.22/32] from k8s-pod-network
      Mar 29 17:51:48.670: INFO: At 2022-03-29 17:51:45 +0000 UTC - event for pause-pod-1: {kubelet 10.177.176.117} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.33" already present on machine
      Mar 29 17:51:48.670: INFO: At 2022-03-29 17:51:45 +0000 UTC - event for pause-pod-1: {kubelet 10.177.176.117} Created: Created container agnhost-container
      Mar 29 17:51:48.670: INFO: At 2022-03-29 17:51:45 +0000 UTC - event for pause-pod-1: {kubelet 10.177.176.117} Started: Started container agnhost-container
      Mar 29 17:51:48.755: INFO: POD NODE PHASE GRACE CONDITIONS
      Mar 29 17:51:48.755: INFO: echo-hostname-0 10.177.176.76 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-03-29 17:51:38 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-03-29 17:51:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-03-29 17:51:40 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-03-29 17:51:38 +0000 UTC }]
      Mar 29 17:51:48.755: INFO: pause-pod-0 10.177.176.76 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-03-29 17:51:41 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-03-29 17:51:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-03-29 17:51:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-03-29 17:51:41 +0000 UTC }]
      Mar 29 17:51:48.755: INFO: pause-pod-1 10.177.176.117 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-03-29 17:51:43 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-03-29 17:51:46 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-03-29 17:51:46 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-03-29 17:51:43 +0000 UTC }]
      Mar 29 17:51:48.755: INFO:
      Mar 29 17:51:48.852: INFO: skipping dumping cluster info - cluster too large
      STEP: Destroying namespace "e2e-services-2011" for this suite.
      [AfterEach] [sig-network] Services
      k8s.io/kubernetes@v1.23.0/test/e2e/network/service.go:753
      fail [k8s.io/kubernetes@v1.23.0/test/e2e/network/util.go:176]: Expected
      <string>: test-c90relb20u89kutb9q8g-ocptest410-default-0000019d
      to equal
      <string>: 10

      The node name scheme on IBM Cloud is not what this test expects. The test incorrectly assumes that a node name is never an IP address, but on IBM Cloud the node name is the node's IP address (e.g. 10.177.176.76), so truncating it at the first "." produces "10", which can never match the hostname the pod reports.
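
      For illustration, below is a minimal, self-contained Go sketch of why the comparison in execHostnameTest (k8s.io/kubernetes/test/e2e/network/util.go) collapses the expected value to "10" when the node name is an IP address. This is not the verbatim upstream code; only the split-on-"." behaviour and the literal values are taken from the failure above, and the net.ParseIP guard at the end is just one possible way a fix could detect the situation.

      package main

      import (
          "fmt"
          "net"
          "strings"
      )

      // expectedFromNodeName reproduces the test's assumption: a node name is a
      // dot-separated FQDN, so everything after the first "." can be dropped
      // before comparing.
      func expectedFromNodeName(nodeName string) string {
          return strings.Split(nodeName, ".")[0]
      }

      func main() {
          // Hostname reported by the hostNetwork webserver pod's /hostname
          // endpoint (taken from the log above).
          reported := "test-c90relb20u89kutb9q8g-ocptest410-default-0000019d.iks.ibm"
          got := strings.TrimSpace(strings.Split(reported, ".")[0])

          // On IBM Cloud ROKS the node name is the node's IP address.
          nodeName := "10.177.176.76"
          want := expectedFromNodeName(nodeName) // yields "10", the bogus expectation in the failure

          fmt.Printf("got %q, want %q, equal=%v\n", got, want, got == want)

          // One possible guard (an assumption, not necessarily the eventual fix):
          // detect an IP-formatted node name up front and branch to different
          // handling, e.g. compare against the node's actual hostname label
          // instead of the split node name.
          if net.ParseIP(nodeName) != nil {
              fmt.Println("node name is an IP address; truncating at '.' is not meaningful")
          }
      }

      Splitting on "." only makes sense for dot-separated hostnames; for an IP-formatted node name the first label is just the first octet, which is why the assertion compares "test-c90relb20u89kutb9q8g-ocptest410-default-0000019d" against "10".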

      Version-Release number of selected component (if applicable):

      ROKS 4.10
