Red Hat OpenStack Services on OpenShift / OSPRH-5808

DB doesn't accept IPv4 and IPv6 connections simultaneously


    • Type: Bug
    • Resolution: Done-Errata
    • Priority: Normal
    • Fix Version: rhos-18.0.11
    • Affects Version: None
    • Component: ovn-operator
    • Labels: None
    • Severity: Moderate

      Currently, ovndb decides whether to use IPv4 or IPv6 based on an entry in /etc/hosts [0]. In practice this checks whether the OpenShift Cluster Network, i.e. the primary NIC on the OVN DB pod, is using IPv4 or IPv6.
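
      As a rough, illustrative paraphrase (not the actual contents of setup.sh [0]; the variable name POD_IP and the getent lookup are assumptions), the selection boils down to inspecting the address family of the pod's own /etc/hosts entry and picking a single-family listen address from it:

          # Illustrative sketch only; not the real setup.sh logic.
          POD_IP=$(getent hosts "$(hostname)" | awk '{print $1}')
          if [[ "${POD_IP}" == *:* ]]; then
              DB_ADDR="[::]"      # pod address is IPv6 -> IPv6-only listener
          else
              DB_ADDR="0.0.0.0"   # pod address is IPv4 -> IPv4-only listener
          fi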

      The problem is that the OpenShift ClusterNetwork can be deployed with IPv4 while the dataplane and RHOSO ctlplane networks use IPv6, so the traffic arriving over internalapi can be IPv6 even though the Cluster Network is IPv4. Since the DB listens only on 0.0.0.0 (IPv4) or ::1 (IPv6), when both families are used simultaneously the pod listens on just one of them, and some connections are refused.
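
      A quick way to confirm the single-family bind, offered here only as a hedged diagnostic, is to list the listening sockets inside the OVN DB pod; the NB/SB ports (6641/6642 by convention) should show up under only one address family:

          # Run inside the ovsdbserver pod; expect listeners for either
          # the IPv4 or the IPv6 wildcard, but not both families at once.
          ss -ltn '( sport = :6641 or sport = :6642 )'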

      Client pods (ovn-northd and ovn-controller) connect via pod DNS names over the primary NIC, while dataplane node clients (ovn-controller and ovn-metadata agent) connect via DNS names over the secondary NIC. If both resolve to the same address family, everything works; if the families are mixed, the scenario is broken.
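
      To reproduce the refusal from a client's point of view, one can probe the DB over each family explicitly; the service DNS name below is a placeholder, not necessarily the name the operator actually publishes:

          # Hypothetical SB DB DNS name; substitute the real one.
          nc -z -w2 -4 ovsdbserver-sb.openstack.svc 6642 && echo "IPv4 ok" || echo "IPv4 refused"
          nc -z -w2 -6 ovsdbserver-sb.openstack.svc 6642 && echo "IPv6 ok" || echo "IPv6 refused"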

      We should decide whether this mixed architecture is supported and handle it accordingly. It also looks like DB_ADDR="[::]" should work for both the IPv4 and the IPv6 case; this can be cross-checked and fixed.
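
      A minimal sketch of that direction, assuming setup.sh ends up handing DB_ADDR to ovn-ctl (option names as in upstream ovn-ctl; the exact wiring in the operator may differ): binding the IPv6 wildcard on a dual-stack kernel (net.ipv6.bindv6only=0) lets the same socket also accept IPv4-mapped connections.

          # Sketch under the assumptions above; NB DB shown, SB would be analogous.
          DB_ADDR="[::]"
          exec ovn-ctl run_nb_ovsdb \
              --db-nb-addr="${DB_ADDR}" \
              --db-nb-port=6641 \
              --db-nb-create-insecure-remote=yes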


       [0] https://github.com/openstack-k8s-operators/ovn-operator/blob/bd36630a5607668141eb5e99bf32e024a39f9296/templates/ovndbcluster/bin/setup.sh#L34

              Elvira Garcia (egarciar@redhat.com)
              Arnau Verdaguer Puigdollers (averdagu@redhat.com)
              Maor Blaustein
              rhos-dfg-networking-squad-neutron
              Votes: 0
              Watchers: 7
