OpenShift Bugs / OCPBUGS-54229

Bond-CNI pod failed to recover IPv6 address config after bond interface restart

    • Sprint: CNF Network Sprint 268, CNF Sprint 269

      Description of problem:

      After restarting (down/up) the bonded interface of a bond-CNI pod, the IPv6 configuration on net3 was lost.

      Version-Release number of selected component (if applicable):

          4.19.0-ec.3

      How reproducible:

          always

      Steps to Reproduce:

    1. Create a pod-level bond deployment (the pod must be privileged so that it can bring its own network interface down)

    2. Verify the original net3 configuration:
      [kni@registry ~]$ oc -n rds-bonded-sriov-wlkd rsh rdscore-pod-level-bond-two-7f67ddc8c8-6bjzg
      sh-5.1# ip a
      1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
          link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
          inet 127.0.0.1/8 scope host lo
             valid_lft forever preferred_lft forever
          inet6 ::1/128 scope host 
             valid_lft forever preferred_lft forever
      2: gre0@NONE: <NOARP> mtu 1476 qdisc noop state DOWN group default qlen 1000
          link/gre 0.0.0.0 brd 0.0.0.0
      3: gretap0@NONE: <BROADCAST,MULTICAST> mtu 1462 qdisc noop state DOWN group default qlen 1000
          link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
      4: erspan0@NONE: <BROADCAST,MULTICAST> mtu 1450 qdisc noop state DOWN group default qlen 1000
          link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
      5: eth0@if90: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8900 qdisc noqueue state UP group default 
          link/ether 0a:58:0a:82:02:2c brd ff:ff:ff:ff:ff:ff link-netnsid 0
          inet 10.130.2.44/23 brd 10.130.3.255 scope global eth0
             valid_lft forever preferred_lft forever
          inet6 fd01:0:0:7::2c/64 scope global 
             valid_lft forever preferred_lft forever
          inet6 fe80::858:aff:fe82:22c/64 scope link 
             valid_lft forever preferred_lft forever
      6: net3: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP group default qlen 1000
          link/ether 06:32:1c:9e:78:ed brd ff:ff:ff:ff:ff:ff
          inet 10.18.93.92/26 brd 10.18.93.127 scope global net3
             valid_lft forever preferred_lft forever
          inet6 2620:52:0:15d::62/122 scope global 
             valid_lft forever preferred_lft forever
          inet6 fe80::432:1cff:fe9e:78ed/64 scope link 
             valid_lft forever preferred_lft forever
      34: net1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master net3 state UP group default qlen 1000
          link/ether 06:32:1c:9e:78:ed brd ff:ff:ff:ff:ff:ff
          altname enp138s0f0v11
      48: net2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master net3 state UP group default qlen 1000
          link/ether ea:8c:14:19:fe:1e brd ff:ff:ff:ff:ff:ff
          altname enp138s0f1v10
      
     3. Take the net3 interface down:
      sh-5.1# ip link set dev net3 down
      
     4. Verify the net3 configuration; the global IPv6 address is already gone:
      sh-5.1# ip a
      1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
          link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
          inet 127.0.0.1/8 scope host lo
             valid_lft forever preferred_lft forever
          inet6 ::1/128 scope host 
             valid_lft forever preferred_lft forever
      2: gre0@NONE: <NOARP> mtu 1476 qdisc noop state DOWN group default qlen 1000
          link/gre 0.0.0.0 brd 0.0.0.0
      3: gretap0@NONE: <BROADCAST,MULTICAST> mtu 1462 qdisc noop state DOWN group default qlen 1000
          link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
      4: erspan0@NONE: <BROADCAST,MULTICAST> mtu 1450 qdisc noop state DOWN group default qlen 1000
          link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
      5: eth0@if90: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8900 qdisc noqueue state UP group default 
          link/ether 0a:58:0a:82:02:2c brd ff:ff:ff:ff:ff:ff link-netnsid 0
          inet 10.130.2.44/23 brd 10.130.3.255 scope global eth0
             valid_lft forever preferred_lft forever
          inet6 fd01:0:0:7::2c/64 scope global 
             valid_lft forever preferred_lft forever
          inet6 fe80::858:aff:fe82:22c/64 scope link 
             valid_lft forever preferred_lft forever
      6: net3: <BROADCAST,MULTICAST,MASTER> mtu 9000 qdisc noqueue state DOWN group default qlen 1000
          link/ether 06:32:1c:9e:78:ed brd ff:ff:ff:ff:ff:ff
          inet 10.18.93.92/26 brd 10.18.93.127 scope global net3
             valid_lft forever preferred_lft forever
      34: net1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master net3 state UP group default qlen 1000
          link/ether 06:32:1c:9e:78:ed brd ff:ff:ff:ff:ff:ff
          altname enp138s0f0v11
      48: net2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master net3 state UP group default qlen 1000
          link/ether ea:8c:14:19:fe:1e brd ff:ff:ff:ff:ff:ff
          altname enp138s0f1v10
      
    5. Bring net3 back up and verify; only the link-local address reappears (tentative), not the global IPv6 address:
      sh-5.1# ip link set dev net3 up  
      sh-5.1# ip a
      1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
          link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
          inet 127.0.0.1/8 scope host lo
             valid_lft forever preferred_lft forever
          inet6 ::1/128 scope host 
             valid_lft forever preferred_lft forever
      2: gre0@NONE: <NOARP> mtu 1476 qdisc noop state DOWN group default qlen 1000
          link/gre 0.0.0.0 brd 0.0.0.0
      3: gretap0@NONE: <BROADCAST,MULTICAST> mtu 1462 qdisc noop state DOWN group default qlen 1000
          link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
      4: erspan0@NONE: <BROADCAST,MULTICAST> mtu 1450 qdisc noop state DOWN group default qlen 1000
          link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
      5: eth0@if90: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8900 qdisc noqueue state UP group default 
          link/ether 0a:58:0a:82:02:2c brd ff:ff:ff:ff:ff:ff link-netnsid 0
          inet 10.130.2.44/23 brd 10.130.3.255 scope global eth0
             valid_lft forever preferred_lft forever
          inet6 fd01:0:0:7::2c/64 scope global 
             valid_lft forever preferred_lft forever
          inet6 fe80::858:aff:fe82:22c/64 scope link 
             valid_lft forever preferred_lft forever
      6: net3: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP group default qlen 1000
          link/ether 06:32:1c:9e:78:ed brd ff:ff:ff:ff:ff:ff
          inet 10.18.93.92/26 brd 10.18.93.127 scope global net3
             valid_lft forever preferred_lft forever
          inet6 fe80::432:1cff:fe9e:78ed/64 scope link tentative 
             valid_lft forever preferred_lft forever
      34: net1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master net3 state UP group default qlen 1000
          link/ether 06:32:1c:9e:78:ed brd ff:ff:ff:ff:ff:ff
          altname enp138s0f0v11
      48: net2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master net3 state UP group default qlen 1000
          link/ether ea:8c:14:19:fe:1e brd ff:ff:ff:ff:ff:ff
          altname enp138s0f1v10
      

      Actual results:

      The originally configured IPv6 address 2620:52:0:15d::62/122 is lost and is not restored when the interface comes back up.

      Expected results:

    The IPv6 address should be restored on the net3 interface after the interface restart.

      Additional info:

    The IPv6 address disappears as soon as the interface is taken down; it is already missing before the interface is returned to the 'up' state.


      Marcelo Guerrero Viveros added a comment:
      The upstream PR got merged yesterday.

      Are these tests intended for 4.14 as well? If so, let's do it for the bond-cni only. I will evaluate how easy it is to backport this for the regular plugins. We can discuss this later.

      Carlos Goncalves added a comment:
      Thank you both!

      Since this is a bug in the IPAM plugin that impacts all CNI plugins, and many are supported in OCP, it would be best to backport the fix to 4.14.z.
      rh-ee-marguerr, WDYT?

      Marcelo Guerrero Viveros added a comment:
      Please give quay.io/marguerr/bond-ocpbugs-54229 a try. The env var you need to modify now is BOND_CNI_PLUGIN_IMAGE.

      Marcelo Guerrero Viveros added a comment:
      No, that image was not for the bond-cni. I'll upload a custom one in a moment.

      Marcelo Guerrero Viveros added a comment:
      Upstream proposal: https://github.com/containernetworking/plugins/pull/1155

      elgerman, I have a custom image with the proposed fix: quay.io/marguerr/plugins-ocpbugs-54229

      You can test it by running:

      oc scale deploy -n openshift-cluster-version cluster-version-operator --replicas=0

      oc edit deploy -n openshift-network-operator network-operator

      In the network-operator deployment, modify the env var CNI_PLUGINS_IMAGE, then wait for the pods of the multus-additional-cni-plugins daemonset to be restarted:

      oc get pods -n openshift-multus

      Marcelo Guerrero Viveros added a comment:
      elgerman, I was able to reproduce this behavior with other CNI plugins (the vlan plugin). It is related to the sysctl parameter net.ipv6.conf.<ifname>.keep_addr_on_down (https://sysctl-explorer.net/net/ipv6/keep_addr_on_down/): when it is 0, the kernel's default, configured IPv6 addresses are removed from an interface when it goes administratively down.

      I think it makes sense to enable this parameter whenever the configuration is done through any of the IPAM CNIs. I will propose this in the upstream community.
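      The proposed direction amounts to writing 1 to the per-interface keep_addr_on_down sysctl right after IPAM assigns addresses. A minimal sketch (the upstream plugins are written in Go; this Python function, its name, and the procfs_root parameter are hypothetical, the parameter existing only so the logic can be exercised outside a real network namespace):

      ```python
      import os


      def enable_keep_addr_on_down(ifname: str, procfs_root: str = "/proc/sys") -> None:
          """Set net.ipv6.conf.<ifname>.keep_addr_on_down = 1 so the kernel keeps
          configured IPv6 addresses when the interface goes administratively down.

          Writing under the real /proc/sys requires privileges and must happen
          inside the pod's network namespace.
          """
          path = os.path.join(procfs_root, "net", "ipv6", "conf", ifname,
                              "keep_addr_on_down")
          with open(path, "w") as f:
              f.write("1")
      ```

      In a CNI plugin this would run once per configured interface, after address assignment and before the result is returned.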

      A temporary solution is to chain the tuning CNI to enable this sysctl parameter. I will keep you posted. Please confirm from your side that you are able to reproduce this with a different CNI.
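      The chained-tuning-CNI workaround could look roughly like the fragment below. This is a sketch, not a verified configuration: the network name, bond options, and IPAM type are illustrative placeholders; only the trailing tuning entry with its sysctl map is the workaround itself.

      ```json
      {
        "cniVersion": "0.4.0",
        "name": "bond-net3",
        "plugins": [
          {
            "type": "bond",
            "ifname": "net3",
            "mode": "active-backup",
            "linksInContainer": true,
            "links": [ { "name": "net1" }, { "name": "net2" } ],
            "ipam": { "type": "whereabouts" }
          },
          {
            "type": "tuning",
            "sysctl": {
              "net.ipv6.conf.net3.keep_addr_on_down": "1"
            }
          }
        ]
      }
      ```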


      Assignee: Marcelo Guerrero Viveros (rh-ee-marguerr)
      Reporter: Elena German (elgerman)
      QA Contact: Weibin Liang (weliang1)
      Blocks: CNF-15799