Fast Datapath Product / FDP-1220

[Pensando test block] balance-tcp/slb does not work with LACP channel group

      Acceptance criteria:

      Given a system administrator configuring OVS bonding with LACP on a Pensando NIC,

      When balance-tcp/slb mode is used and the bond is created,

      Then OVS should successfully negotiate LACP, enable the bond, and allow traffic to pass through.
    • rhel-9
    • rhel-net-ovs-dpdk
    • ssg_networking
    • OVS/DPDK - FDP-25.C

       Problem Description: Clearly explain the issue.

      Built an OVS bond port over a Pensando NIC, with the physical switch ports in an LACP channel group. The bond port never comes up. The switch is configured with LACP mode active, so the bond should negotiate regardless of whether the OVS side uses active or passive LACP.

       Impact Assessment: Describe the severity and impact (e.g., network down, availability of a workaround, etc.).

      No workaround is known; the Pensando NIC does not pass traffic with balance-slb/tcp plus an LACP channel group, so the bonded link is unusable.
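
      If degraded connectivity were acceptable while this is investigated, OVS can optionally fall back to active-backup when LACP negotiation fails. Whether that fallback behaves correctly on this NIC is untested here; a minimal sketch, using the bond port name from the reproduction steps:

      # Untested mitigation sketch: fall back to active-backup when LACP
      # negotiation fails (standard OVS Port other_config setting).
      ovs-vsctl set port balance-tcp other_config:lacp-fallback-ab=true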

       Software Versions: Specify the exact versions in use (e.g., openvswitch3.1-3.1.0-147.el8fdp).

      openvswitch3.5-3.5.0-32.el9fdp.x86_64
      kernel version: 5.14.0-570.el9.x86_64

      [root@hp-dl380g10-04 ~]# ethtool -i enp20s0np0
      driver: ionic
      version: 5.14.0-570.el9.x86_64
      firmware-version: 1.28.0-E-96
      expansion-rom-version: 
      bus-info: 0000:14:00.0
      supports-statistics: yes
      supports-test: no
      supports-eeprom-access: no
      supports-register-dump: yes
      supports-priv-flags: no
      [root@hp-dl380g10-04 ~]# ethtool -i enp21s0np0
      driver: ionic
      version: 5.14.0-570.el9.x86_64
      firmware-version: 1.28.0-E-96
      expansion-rom-version: 
      bus-info: 0000:15:00.0
      supports-statistics: yes
      supports-test: no
      supports-eeprom-access: no
      supports-register-dump: yes
      supports-priv-flags: no

        Issue Type: Indicate whether this is a new issue or a regression (if a regression, state the last known working version).

      New issue

       Reproducibility: Confirm if the issue can be reproduced consistently. If not, describe how often it occurs.

      100%

       Reproduction Steps: Provide detailed steps or scripts to replicate the issue.

      Run the steps below:

      # Remove any pre-existing bridges and reset the OVS database.
      ovs-vsctl list bridge 2>/dev/null | grep name | awk '{
          system("ovs-vsctl --if-exists del-br "$3" >/dev/null 2>&1")
      }'
      systemctl stop openvswitch &>/dev/null
      rm -rf /etc/openvswitch/*.db
      rm -rf /var/lib/openvswitch/*
      rm -rf /dev/hugepages/rtemap_*
      systemctl restart openvswitch &>/dev/null
      # Bring up the Pensando (ionic) ports and build the LACP bond.
      ip link set enp20s0np0 up
      ip link set enp21s0np0 up
      ovs-vsctl --may-exist add-br bondbridge
      ovs-vsctl add-bond bondbridge balance-tcp enp20s0np0 enp21s0np0 lacp=passive bond_mode=balance-tcp
      # Assign an address and test connectivity across the bond.
      ip link set bondbridge up
      ip addr add 172.30.42.2/24 dev bondbridge
      ping -I bondbridge 172.30.42.1
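
      To check whether any LACPDUs reach the wire at all, one can capture the Slow Protocols ethertype on a member port while the bond is configured (an optional check, not part of the original steps):

      # LACPDUs use the Slow Protocols ethertype 0x8809; an empty capture
      # here is consistent with the zero PDU counters in lacp/show-stats.
      tcpdump -i enp20s0np0 -nn -e ether proto 0x8809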

       Expected Behavior: Describe what should happen under normal circumstances.

      The ping succeeds.

       Observed Behavior: Explain what actually happens.

      The ping fails, and the OVS bond port is disabled:

       

      [root@hp-dl380g10-04 ~]# ip link show enp21s0np0
      7: enp21s0np0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP mode DEFAULT group default qlen 1000
          link/ether 00:ae:cd:09:a2:f1 brd ff:ff:ff:ff:ff:ff
      [root@hp-dl380g10-04 ~]# ip link show enp20s0np0
      4: enp20s0np0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP mode DEFAULT group default qlen 1000
          link/ether 00:ae:cd:09:a2:f0 brd ff:ff:ff:ff:ff:ff
      [root@hp-dl380g10-04 ~]# 
      
      [root@hp-dl380g10-04 ~]# ovs-appctl bond/show
      ---- balance-tcp ----
      bond_mode: balance-tcp
      bond may use recirculation: yes, Recirc-ID : 1
      bond-hash-basis: 0
      lb_output action: disabled, bond-id: -1
      updelay: 0 ms
      downdelay: 0 ms
      next rebalance: 8747 ms
      lacp_status: configured
      lacp_fallback_ab: false
      active-backup primary: <none>
      active member mac: 00:00:00:00:00:00(none)

      member enp20s0np0: disabled
        may_enable: false

      member enp21s0np0: disabled
        may_enable: false

      [root@hp-dl380g10-04 ~]# ovs-appctl lacp/show
      ---- balance-tcp ----
        status: passive
        sys_id: 00:ae:cd:09:a2:f0
        sys_priority: 65534
        aggregation key: 1
        lacp_time: slow

      member: enp20s0np0: defaulted detached
        port_id: 2
        port_priority: 65535
        may_enable: false

        actor sys_id: 00:ae:cd:09:a2:f0
        actor sys_priority: 65534
        actor port_id: 2
        actor port_priority: 65535
        actor key: 1
        actor state: aggregation collecting distributing defaulted

        partner sys_id: 00:00:00:00:00:00
        partner sys_priority: 0
        partner port_id: 0
        partner port_priority: 0
        partner key: 0
        partner state:

      member: enp21s0np0: defaulted detached
        port_id: 1
        port_priority: 65535
        may_enable: false

        actor sys_id: 00:ae:cd:09:a2:f0
        actor sys_priority: 65534
        actor port_id: 1
        actor port_priority: 65535
        actor key: 1
        actor state: aggregation collecting distributing defaulted

        partner sys_id: 00:00:00:00:00:00
        partner sys_priority: 0
        partner port_id: 0
        partner port_priority: 0
        partner key: 0
        partner state:
      [root@hp-dl380g10-04 ~]# ovs-appctl lacp/show-stats
      ---- balance-tcp statistics ----

      member: enp20s0np0:
        TX PDUs: 0
        RX PDUs: 0
        RX Bad PDUs: 0
        RX Marker Request PDUs: 0
        Link Expired: 0
        Link Defaulted: 0
        Carrier Status Changed: 0

      member: enp21s0np0:
        TX PDUs: 0
        RX PDUs: 0
        RX Bad PDUs: 0
        RX Marker Request PDUs: 0
        Link Expired: 0
        Link Defaulted: 0
        Carrier Status Changed: 0
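
      The zero counters show that OVS neither sent nor received a single LACPDU on either ionic port. For more detail, the OVS lacp and bond log modules can be raised to debug level (standard ovs-appctl vlog usage; output goes to /var/log/openvswitch/ovs-vswitchd.log):

      # Raise OVS logging for the LACP and bond state machines to debug.
      ovs-appctl vlog/set lacp:file:dbg
      ovs-appctl vlog/set bond:file:dbg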
       

       

       

       Troubleshooting Actions: Outline the steps taken to diagnose or resolve the issue so far.

      Created a kernel bond (802.3ad) over the same NICs, and it works:

       

      # Kernel bonding driver, mode 4 = IEEE 802.3ad (LACP).
      ip link add bond0 type bond mode 4
      # Members must be down before they can be enslaved.
      ip link set enp20s0np0 down
      ip link set enp21s0np0 down
      ip link set enp20s0np0 master bond0
      ip link set enp21s0np0 master bond0
      ip link set enp20s0np0 up
      ip link set enp21s0np0 up
      ip link set bond0 up
      ip addr add 172.30.42.2/24 dev bond0
      ping -I bond0 172.30.42.1
      
       64 bytes from 172.30.42.1: icmp_seq=4 ttl=64 time=1024 ms
      64 bytes from 172.30.42.1: icmp_seq=5 ttl=64 time=0.268 ms
      64 bytes from 172.30.42.1: icmp_seq=6 ttl=64 time=0.086 ms
      64 bytes from 172.30.42.1: icmp_seq=7 ttl=64 time=0.067 ms
      64 bytes from 172.30.42.1: icmp_seq=8 ttl=64 time=0.082 ms
      64 bytes from 172.30.42.1: icmp_seq=9 ttl=64 time=0.070 ms
      ^C
      --- 172.30.42.1 ping statistics ---
      9 packets transmitted, 6 received, +3 errors, 33.3333% packet loss, time 8176ms
      rtt min/avg/max/mdev = 0.067/170.785/1024.139/381.631 ms, pipe 3
      [root@hp-dl380g10-04 ~]# cat /proc/net/bonding/bond0 
      Ethernet Channel Bonding Driver: v5.14.0-570.el9.x86_64
      Bonding Mode: IEEE 802.3ad Dynamic link aggregation
      Transmit Hash Policy: layer2 (0)
      MII Status: up
      MII Polling Interval (ms): 100
      Up Delay (ms): 0
      Down Delay (ms): 0
      Peer Notification Delay (ms): 0
      802.3ad info
      LACP active: on
      LACP rate: slow
      Min links: 0
      Aggregator selection policy (ad_select): stable
      System priority: 65535
      System MAC address: 00:ae:cd:09:a2:f0
      Active Aggregator Info:
       Aggregator ID: 1
       Number of ports: 2
       Actor Key: 21
       Partner Key: 89
       Partner Mac Address: cc:6a:33:81:8c:07
      Slave Interface: enp20s0np0
      MII Status: up
      Speed: 25000 Mbps
      Duplex: full
      Link Failure Count: 0
      Permanent HW addr: 00:ae:cd:09:a2:f0
      Slave queue ID: 0
      Aggregator ID: 1
      Actor Churn State: monitoring
      Partner Churn State: monitoring
      Actor Churned Count: 0
      Partner Churned Count: 0
      details actor lacp pdu:
          system priority: 65535
          system mac address: 00:ae:cd:09:a2:f0
          port key: 21
          port priority: 255
          port number: 1
          port state: 61
      details partner lacp pdu:
          system priority: 32768
          system mac address: cc:6a:33:81:8c:07
          oper key: 89
          port priority: 32768
          port number: 337
          port state: 63
      Slave Interface: enp21s0np0
      MII Status: up
      Speed: 25000 Mbps
      Duplex: full
      Link Failure Count: 0
      Permanent HW addr: 00:ae:cd:09:a2:f1
      Slave queue ID: 0
      Aggregator ID: 1
      Actor Churn State: monitoring
      Partner Churn State: monitoring
      Actor Churned Count: 0
      Partner Churned Count: 0
      details actor lacp pdu:
          system priority: 65535
          system mac address: 00:ae:cd:09:a2:f0
          port key: 21
          port priority: 255
          port number: 2
          port state: 61
      details partner lacp pdu:
          system priority: 32768
          system mac address: cc:6a:33:81:8c:07
          oper key: 89
          port priority: 32768
          port number: 338
          port state: 63
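
      The kernel bond negotiates successfully: partner LACPDUs arrive from the switch (partner MAC cc:6a:33:81:8c:07, port state 63), so the switch side and the physical links are fine, and the failure appears specific to OVS userspace LACP on the ionic ports. One further check that might help isolate it, an untested suggestion using the bond port name from the reproduction steps:

      # Untested diagnostic: switch the OVS bond to active LACP to rule
      # out a passive-negotiation corner case, then re-check the state.
      ovs-vsctl set port balance-tcp lacp=active
      ovs-appctl lacp/show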

       

       

       Logs: If you collected logs, please provide them (e.g., sos report, /var/log/openvswitch/*, testpmd console).

      Assignee: Mike Pattrick (rh-ee-mpattric)
      Reporter: HOU MINXI (mhou@redhat.com)