OCPBUGS-6860: service configured with a nodeport can't be reached until after restart of ovnkube-master

      Description of problem:

      After a clean install of MicroShift from an ISO, a service configured with a NodePort cannot be reached.

      Version-Release number of selected component (if applicable):

      [redhat@localhost ~]$ rpm -qi microshift
      Name        : microshift
      Version     : 4.12.0_20230119163440_3e0c7c14
      Release     : 1.el8
      Architecture: x86_64
      Install Date: Thu Jan 19 13:20:00 2023
      Group       : Unspecified
      Size        : 118331948
      License     : ASL 2.0
      Signature   : (none)
      Source RPM  : microshift-4.12.0_20230119163440_3e0c7c14-1.el8.src.rpm
      Build Date  : Thu Jan 19 13:13:51 2023
      Build Host  : microshift-dev
      Relocations : (not relocatable)
      URL         : https://github.com/openshift/microshift
      Summary     : MicroShift service
      Description :
      The microshift package provides an OpenShift Kubernetes distribution optimized for small form factor and edge computing.

      How reproducible:

      Always

      Steps to Reproduce:

      1. Create a new microshift cluster (https://mastern-jenkins-csb-openshift-qe.apps.ocp-c1.prod.psi.redhat.com/job/microshift/job/microshift-deploy-iso/build?delay=0sec)
      2. export KUBECONFIG=kubeconfig
      3. oc create ns test
      4. oc create -n test -f ./nodeport_test_pod.yaml
      5. oc create -n test -f ./nodeport_test_service.yaml (a sketch of both manifests follows these steps)
      6. oc get service -n test
      NAME        TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)           AGE
      hello-pod   NodePort   10.43.19.147   <none>        27017:32170/TCP   5m31s
      NOTE the node port number (e.g. 32170)
      7. oc get node
      NOTE the node name (e.g. dhcp-1-235-100.arm.eng.rdu2.redhat.com)
      8. curl NODE:PORT
      Expect a "Customer-Red Test Nodeport" response

      Actual results:

      $ oc get all -n test
      NAME                  READY   STATUS    RESTARTS   AGE
      pod/hello-pod-hkqpp   1/1     Running   0          12m
      
      NAME                              DESIRED   CURRENT   READY   AGE
      replicationcontroller/hello-pod   1         1         1       12m
      
      NAME                TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)           AGE
      service/hello-pod   NodePort   10.43.19.147   <none>        27017:32170/TCP   12m
      
      $ oc get node
      NAME                                     STATUS   ROLES                         AGE   VERSION
      dhcp-1-235-100.arm.eng.rdu2.redhat.com   Ready    control-plane,master,worker   8h    v1.25.0
      
      $ curl dhcp-1-235-100.arm.eng.rdu2.redhat.com:32170
      curl: (7) Failed to connect to dhcp-1-235-100.arm.eng.rdu2.redhat.com port 32170: No route to host
      
      $ sudo iptables-save | grep 32170
      -A OVN-KUBE-NODEPORT -p tcp -m addrtype --dst-type LOCAL -m tcp --dport 32170 -j DNAT --to-destination 10.43.19.147:27017   
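
      The DNAT rule itself is present, so the failure is likely elsewhere in the chain. A hedged check (not part of the original report) is to confirm that the OVN-KUBE-NODEPORT chain is actually jumped to from the nat table's built-in chains, and to diff the full dumps captured in the attached iptables-nodeport-31716-newvm-* files:

      $ sudo iptables-save -t nat | grep 'OVN-KUBE-NODEPORT'
      # expect jump rules into OVN-KUBE-NODEPORT in addition to the DNAT rule above;
      # if they are missing, the --dport 32170 rule can never match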

      Expected results:

      $ curl dhcp-1-235-100.arm.eng.rdu2.redhat.com:32170
      Customer-Red Test Nodeport

      Additional info:

      Workaround: restart the ovnkube-master pod
      (e.g. oc delete pod -n openshift-ovn-kubernetes ovnkube-master-d598f)
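
      A hedged verification sequence for the workaround (the pod name suffix varies per cluster, and the app=ovnkube-master label is an assumption here):

      $ oc get pods -n openshift-ovn-kubernetes
      $ oc delete pod -n openshift-ovn-kubernetes ovnkube-master-d598f    # substitute the name listed above
      $ oc wait --for=condition=Ready pod -l app=ovnkube-master -n openshift-ovn-kubernetes --timeout=120s
      $ curl dhcp-1-235-100.arm.eng.rdu2.redhat.com:32170
      Customer-Red Test Nodeport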

       

      Attachments:

        1. iptables-nodeport-31716-newvm-after-ovnkmaster-restart (4 kB, Miguel Angel Ajo Pelayo)
        2. iptables-nodeport-31716-newvm-does-not-work (2 kB, Miguel Angel Ajo Pelayo)
        3. nodeport-pod.yaml (0.5 kB, John George)
        4. nodeport-svc.yaml (0.2 kB, John George)
