MicroShift / USHIFT-6334

Priority handling fails when hosts file conflicts with cluster DNS entries


    • Bug
    • Resolution: Unresolved
    • Documentation
    • Important

      Description of problem:

      When a service is created and its DNS name is given a different IP in the custom hosts file, resolving the name currently returns the overridden IP from the hosts file. I expect it to resolve to the actual service IP, as served by the kubernetes plugin configured in the CoreDNS Corefile.
      

      Version-Release number of selected component (if applicable):

      4.21
      

      How reproducible:

      Always
      

      Steps to Reproduce:

      1. Install the latest MicroShift 4.21.
      2. Create the pod and service using the YAML below:
      apiVersion: v1
      kind: Service
      metadata:
        name: myservice
        namespace: test
      spec:
        selector:
          app: myservice
        ports:
        - protocol: TCP
          port: 80
          targetPort: 80
        type: ClusterIP
      ---
      apiVersion: v1
      kind: Pod
      metadata:
        name: myservice-pod
        namespace: test
        labels:
          app: myservice
      spec:
        containers:
        - name: simple
          image: busybox:1.36
          command: ['sh', '-c', 'echo "Service is running" && sleep 3600']
          ports:
          - containerPort: 80
      
      3. Create a custom hosts file with the content below:
      cat custom-hosts-conflict 
      # Conflicting DNS entry - this should take precedence if configured correctly
      # Actual service IP is: $SERVICE_IP
      192.168.1.100  myservice.test.svc.cluster.local
      
      # Additional test entries
      192.168.1.101  test1.local
      192.168.1.102  test2.local
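
      The hosts file above can also be generated with a short script. SERVICE_IP below is a hypothetical stand-in for the ClusterIP that `oc get svc myservice -n test -o jsonpath='{.spec.clusterIP}'` would report on a live cluster:

```shell
# Sketch: write the conflicting hosts file. SERVICE_IP is a hypothetical
# example value; on a real cluster it would come from the Service object.
SERVICE_IP="10.43.0.50"
cat > custom-hosts-conflict <<EOF
# Conflicting DNS entry - this should take precedence if configured correctly
# Actual service IP is: $SERVICE_IP
192.168.1.100  myservice.test.svc.cluster.local

# Additional test entries
192.168.1.101  test1.local
192.168.1.102  test2.local
EOF
```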
      
      4. Reference the hosts file in /etc/microshift/config.yaml:
      [redhat@el96-src-rpm-install-ostree-host1 ~]$ sudo cat /etc/microshift/config.yaml
      dns:
        hosts:
          status: Enabled
          file: /home/redhat/custom-hosts-conflict
      
      5. Restart MicroShift: `systemctl restart microshift`
      6. Check that the entries have been propagated: `oc get configmap hosts-file -n openshift-dns -o yaml`
      7. Create another pod using the definition below:
      [redhat@el96-src-rpm-install-ostree-host1 ~]$ cat /tmp/pod1.yaml 
      apiVersion: v1
      kind: Pod
      metadata:
        name: dns-test-pod
        namespace: test
      spec:
        containers:
        - name: test
          image: busybox:1.36
          command: ['sh', '-c', 'sleep 3600']
        restartPolicy: Never
      8. Run `oc exec -n test myservice-pod -- nslookup myservice.test.svc.cluster.local`
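
      The check in step 8 can be scripted so the resolved address is compared against the service ClusterIP. A minimal sketch; the `oc` calls (shown as comments) need a live cluster, so it parses a captured transcript of the failing lookup instead:

```shell
# On a live cluster the two values would come from:
#   RESOLVED=$(oc exec -n test myservice-pod -- nslookup myservice.test.svc.cluster.local \
#              | awk '/^Address: /{print $2; exit}')
#   SERVICE_IP=$(oc get svc myservice -n test -o jsonpath='{.spec.clusterIP}')
# Here we parse a captured transcript of the failing run instead.
NSLOOKUP_OUT="$(printf 'Server:\t\t10.43.0.10\nAddress:\t10.43.0.10:53\n\nName:\tmyservice.test.svc.cluster.local\nAddress: 192.168.1.100\n')"
# The server line uses "Address:<tab>", the answer line "Address: <ip>",
# so matching on "Address: " (with a space) picks out the answer only.
RESOLVED=$(printf '%s\n' "$NSLOOKUP_OUT" | awk '/^Address: /{print $2; exit}')
SERVICE_IP="10.43.0.50"   # hypothetical ClusterIP for illustration
echo "resolved: $RESOLVED  expected: $SERVICE_IP"
[ "$RESOLVED" = "$SERVICE_IP" ] && echo "PASS" || echo "FAIL: hosts-file entry won"
```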
      
      

      Actual results:

      [redhat@el96-src-rpm-install-ostree-host1 ~]$ oc exec -n test myservice-pod -- nslookup myservice.test.svc.cluster.local
      Server:		10.43.0.10
      Address:	10.43.0.10:53
      
      Name:	myservice.test.svc.cluster.local
      Address: 192.168.1.100
      
      
      The name is resolved to the IP added in the custom hosts file.
      

      Expected results:

      The name should resolve to the actual service ClusterIP, per the kubernetes plugin in the dns-default Corefile:
      [redhat@el96-src-rpm-install-ostree-host1 ~]$ oc get configmap dns-default -n openshift-dns -o jsonpath='{.data.Corefile}'
      .:5353 {
          bufsize 1232
          errors
          log . {
              class error
          }
          health {
              lameduck 20s
          }
          ready
          kubernetes cluster.local in-addr.arpa ip6.arpa {
              pods insecure
              fallthrough in-addr.arpa ip6.arpa
          }
          prometheus 127.0.0.1:9153
          forward . /etc/resolv.conf {
              policy sequential
          }
          cache 900 {
              denial 9984 30
          }
          hosts /tmp/hosts/hosts {
              fallthrough
          }        
      
          reload
      }
      hostname.bind:5353 {
          chaos
      }
      
      

      Additional info:
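
      This behaviour is consistent with CoreDNS plugin ordering: within a server block the hosts plugin runs before the kubernetes plugin (the order is fixed at build time in CoreDNS's plugin.cfg, not by the order in the Corefile), and `fallthrough` only applies to names the hosts file does not contain. If cluster names are meant to win, one possible direction is to scope the hosts plugin to the non-cluster zones so cluster.local queries bypass it entirely. An untested sketch of such a Corefile fragment (the zone list after the file path is the hosts plugin's own ZONES argument):

```
hosts /tmp/hosts/hosts test1.local test2.local {
    fallthrough
}
```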

      
      

              rhn-support-shdiaz Shauna Diaz
              knarra@redhat.com Rama Kasturi Narra