Fast Datapath Product
FDP-1239

[performance degradation] on 4q8pmd test with rss_lacp + pmd_affinity enabled on balance_tcp port

    • Type: Bug
    • Resolution: Not a Bug
    • FDP-25.C
    • openvswitch3.5
    • rhel-9
    • rhel-net-ovs-dpdk
    • ssg_networking
    • OVS/DPDK - FDP-25.C

       Problem Description: Clearly explain the issue.

      Performance degrades when options:rx-steering=rss+lacp and PMD rx-queue affinity are enabled together:

      ovs-vsctl set Open_vSwitch . other_config:pmd-rxq-assign=group

      ovs-vsctl set Open_vSwitch . other_config:pmd-rxq-isolate=false

      ovs-vsctl add-bond ovsbr0 balance-tcp ens3f0 ens3f1 lacp=active bond_mode=balance-tcp -- set Interface ens3f0 type=dpdk options:dpdk-devargs=0000:13:00.0 options:n_rxq=4 options:n_rxq_desc=2048 options:n_txq_desc=2048 mtu_request=9200 options:dpdk-lsc-interrupt=true options:rx-steering=rss+lacp -- set Interface ens3f1 type=dpdk options:dpdk-devargs=0000:13:00.1 options:n_rxq=4 options:n_rxq_desc=2048 options:n_txq_desc=2048 mtu_request=9200 options:dpdk-lsc-interrupt=true options:rx-steering=rss+lacp -- set Port balance-tcp other_config:lb-output-action=true

      ovs-vsctl set Interface vhost0 other_config:pmd-rxq-affinity=0:10,1:11,2:12,3:13

      ovs-vsctl set Interface vhost1 other_config:pmd-rxq-affinity=0:14,1:15,2:16,3:17

      ovs-vsctl set Interface ens3f0 other_config:pmd-rxq-affinity=0:10,1:11,2:12,3:13,4:18

      ovs-vsctl set Interface ens3f1 other_config:pmd-rxq-affinity=0:14,1:15,2:16,3:17,4:18
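
      Once these settings are applied, the resulting rx-queue-to-PMD placement can be confirmed with standard ovs-appctl commands (run against the live vswitchd on the test host):

```shell
# List which PMD thread polls each rx queue; the pinned queues should land on
# the cores given in pmd-rxq-affinity (e.g. ens3f0/ens3f1 queue 4 on core 18).
ovs-appctl dpif-netdev/pmd-rxq-show

# Per-PMD busy/idle cycle counters, useful to spot an overloaded PMD.
ovs-appctl dpif-netdev/pmd-stats-show
```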

       Impact Assessment: Describe the severity and impact (e.g., network down, availability of a workaround, etc.).

      The balance-tcp 4q8pmd scenario shows very low throughput in the PvP test.

       Software Versions: Specify the exact versions in use (e.g., openvswitch3.1-3.1.0-147.el8fdp).

      openvswitch3.5-3.5.0-32.el9fdp

        Issue Type: Indicate whether this is a new issue or a regression (if a regression, state the last known working version).

      new issue

       Reproducibility: Confirm if the issue can be reproduced consistently. If not, describe how often it occurs.

      100%

       Reproduction Steps: Provide detailed steps or scripts to replicate the issue.

       

      ovs-vsctl set Open_vSwitch . other_config:dpdk-socket-mem=4096

      # rx-steering=rss+lacp needs one extra core to handle the LACP traffic.
      # In this scenario 9 PMDs are assigned, so PMD affinity is used to make
      # sure the last rx queue of each NIC interface goes to its own PMD and
      # does not interfere with the other PMDs.

      ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x7fc00
      ovs-vsctl set Open_vSwitch . other_config:dpdk-lcore-mask=0x80000
      ovs-vsctl set Open_vSwitch . other_config:vhost-iommu-support=false
      ovs-vsctl set Open_vSwitch . other_config:userspace-tso-enable=false
      ovs-vsctl set Open_vSwitch . other_config:pmd-rxq-assign=group
      ovs-vsctl set Open_vSwitch . other_config:pmd-rxq-isolate=false
      ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true
      ovs-vsctl --may-exist add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev
      ovs-vsctl add-bond ovsbr0 balance-tcp ens3f0 ens3f1 lacp=active bond_mode=balance-tcp -- set Interface ens3f0 type=dpdk options:dpdk-devargs=0000:13:00.0 options:n_rxq=4 options:n_rxq_desc=2048 options:n_txq_desc=2048 mtu_request=9200 options:dpdk-lsc-interrupt=true options:rx-steering=rss+lacp -- set Interface ens3f1 type=dpdk options:dpdk-devargs=0000:13:00.1 options:n_rxq=4 options:n_rxq_desc=2048 options:n_txq_desc=2048 mtu_request=9200 options:dpdk-lsc-interrupt=true options:rx-steering=rss+lacp -- set Port balance-tcp other_config:lb-output-action=true
      ovs-vsctl add-port ovsbr0 vhost0 tag=1000 -- set Interface vhost0 type=dpdkvhostuserclient options:vhost-server-path=/tmp/vhostuser/vhost0 mtu_request=9200
      ovs-vsctl add-port ovsbr0 vhost1 tag=1099 -- set Interface vhost1 type=dpdkvhostuserclient options:vhost-server-path=/tmp/vhostuser/vhost1 mtu_request=9200
      ovs-vsctl set Interface vhost0 other_config:pmd-rxq-affinity=0:10,1:11,2:12,3:13
      ovs-vsctl set Interface vhost1 other_config:pmd-rxq-affinity=0:14,1:15,2:16,3:17
      ovs-vsctl set Interface ens3f0 other_config:pmd-rxq-affinity=0:10,1:11,2:12,3:13,4:18
      ovs-vsctl set Interface ens3f1 other_config:pmd-rxq-affinity=0:14,1:15,2:16,3:17,4:18
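
      The CPU masks above can be sanity-checked by ORing 1 << core over the intended PMD core list; a minimal POSIX shell sketch (cores 10-18, i.e. the 8 regular PMD cores plus the extra LACP core, are taken from this setup):

```shell
#!/bin/sh
# Build pmd-cpu-mask from a core list. With rx-steering=rss+lacp one extra
# PMD core (18) is added to the 8 regular PMD cores (10-17).
mask=0
for core in 10 11 12 13 14 15 16 17 18; do
  mask=$(( mask | (1 << core) ))
done
printf '0x%x\n' "$mask"   # prints 0x7fc00, matching pmd-cpu-mask above
```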

       

       

       Expected Behavior: Describe what should happen under normal circumstances.

      If options:rx-steering=rss+lacp is not added when creating balance-tcp, but the PMD affinity settings on Open_vSwitch are kept (PMD CPUs: 0x7f800 --> 10,11,12,13,14,15,16,17):

      ovs-vsctl set Interface vhost0 other_config:pmd-rxq-affinity=0:10,1:11,2:12,3:13
      ovs-vsctl set Interface vhost1 other_config:pmd-rxq-affinity=0:14,1:15,2:16,3:17
      ovs-vsctl set Interface ens3f0 other_config:pmd-rxq-affinity=0:10,1:11,2:12,3:13
      ovs-vsctl set Interface ens3f1 other_config:pmd-rxq-affinity=0:14,1:15,2:16,3:17

      https://beaker.engineering.redhat.com/jobs/10789536

      https://beaker-archive.prod.engineering.redhat.com/beaker-logs/2025/03/107895/10789536/18307404/192316343/ice_dpdkbond_25g.html

      4q8pmd test result: 9810793

       

      If neither options:rx-steering=rss+lacp nor the PMD affinity settings are added on Open_vSwitch (PMD CPUs: 0x7f800 --> 10,11,12,13,14,15,16,17):

      https://beaker.engineering.redhat.com/jobs/10789444

      https://beaker-archive.prod.engineering.redhat.com/beaker-logs/2025/03/107894/10789444/18307163/192310806/ice_dpdkbond_25g.html

      4q8pmd test result: 9157658

       Observed Behavior: Explain what actually happens.

      If options:rx-steering=rss+lacp is added when creating balance-tcp and the PMD affinity settings on Open_vSwitch are kept (PMD CPUs: 0x7fc00 --> 10,11,12,13,14,15,16,17,18):

      ovs-vsctl set Interface vhost0 other_config:pmd-rxq-affinity=0:10,1:11,2:12,3:13
      ovs-vsctl set Interface vhost1 other_config:pmd-rxq-affinity=0:14,1:15,2:16,3:17
      ovs-vsctl set Interface ens3f0 other_config:pmd-rxq-affinity=0:10,1:11,2:12,3:13,4:18
      ovs-vsctl set Interface ens3f1 other_config:pmd-rxq-affinity=0:14,1:15,2:16,3:17,4:18

      https://beaker.engineering.redhat.com/jobs/10792287

      https://beaker-archive.prod.engineering.redhat.com/beaker-logs/2025/03/107922/10792287/18310687/192333197/ice_dpdkbond_25g.html

      4q8pmd test result: 3178440
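
      Put side by side with the affinity-only run above (9810793 vs 3178440), the relative drop works out to roughly two thirds:

```shell
# Relative throughput drop between the two pinned runs reported above.
awk 'BEGIN {
  base = 9810793    # rss+lacp disabled, PMD affinity kept
  rss  = 3178440    # rss+lacp enabled, PMD affinity kept
  printf "%.1f%% lower throughput\n", (1 - rss / base) * 100
}'
# prints: 67.6% lower throughput
```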

       Troubleshooting Actions: Outline the steps taken to diagnose or resolve the issue so far.

      Kept the same software version and test environment; toggling only rss_lacp and PMD affinity on/off consistently reproduces the performance degradation.

       Logs: If you collected logs please provide them (e.g. sos report, /var/log/openvswitch/* , testpmd console)

      https://beaker-archive.prod.engineering.redhat.com/beaker-logs/2025/03/107922/10792287/18310686/192333194/ovs-vswitchd.log

      full output log:

      https://beaker-archive.prod.engineering.redhat.com/beaker-logs/2025/03/107922/10792287/18310686/192333194/taskout.log

      You can grep for setup_ovs_dpdkbond_vhostuser_pvp_bond_type_balance-tcp_lacp_active_xnuma_no_queue4_pmds8_vcpus9 as the key string.

              rhn-support-ktraynor Kevin Traynor
              mhou@redhat.com HOU MINXI