Fast Datapath Product / FDP-1148

mlx dpdk port's duplex mode is "half" for "backplane" port

    • Type: Epic
    • Resolution: Unresolved
    • Component: ovs-dpdk

      Please mark each item below with ( / ) if completed or ( x ) if incomplete:

      ( ) The acceptance criteria defined below are met.


      ( ) The epic's work is available in a downstream build (nightly/Async or other)


      ( ) All cards under the epic have been moved to Done

    • rhel-9
    • rhel-net-ovs-dpdk
    • 100% To Do, 0% In Progress, 0% Done
    • ssg_networking

      This epic tracks all the effort needed to deliver the solution related to the bug described below.

      Problem Description: Clearly explain the issue.

      Added an mlx dpdk port to an OVS userspace bridge and checked its "duplex" mode, which shows "half":

      [root@rhos-nfv-04 ~]# ovs-vsctl list Interface dpdk1 <<<<<<<<<<<<<<<<<<<<<<<<<<<MLX
      _uuid               : 3497b0a0-52e5-4d66-a98f-c72fc354f0aa
      admin_state         : up
      bfd                 : {}
      bfd_status          : {}
      cfm_fault           : []
      cfm_fault_status    : []
      cfm_flap_count      : []
      cfm_health          : []
      cfm_mpid            : []
      cfm_remote_mpids    : []
      cfm_remote_opstate  : []
      duplex              : half <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
      error               : []
      external_ids        : {}
      ifindex             : 4575007
      ingress_policing_burst: 0
      ingress_policing_kpkts_burst: 0
      ingress_policing_kpkts_rate: 0
      ingress_policing_rate: 0
      lacp_current        : []
      link_resets         : 0
      link_speed          : 25000000000
      link_state          : up
      lldp                : {}
      mac                 : []
      mac_in_use          : "04:3f:72:d9:c0:49"
      mtu                 : 1500
      mtu_request         : []
      name                : dpdk1
      ofport              : 2
      ofport_request      : []
      options             : {dpdk-devargs="0000:04:00.1"}
      other_config        : {}
      statistics          : {ovs_rx_qos_drops=0, ovs_tx_failure_drops=0, ovs_tx_invalid_hwol_drops=0, ovs_tx_mtu_exceeded_drops=0, ovs_tx_qos_drops=0, rx_broadcast_packets=0, rx_bytes=1254, rx_dropped=0, rx_errors=0, rx_mbuf_allocation_errors=0, rx_missed_errors=0, rx_multicast_packets=3, rx_packets=3, rx_phy_crc_errors=0, rx_phy_in_range_len_errors=0, rx_phy_symbol_errors=0, rx_q0_bytes=1254, rx_q0_errors=0, rx_q0_packets=3, rx_wqe_errors=0, tx_broadcast_packets=0, tx_bytes=0, tx_dropped=0, tx_errors=0, tx_multicast_packets=0, tx_packets=0, tx_phy_errors=0, tx_pp_clock_queue_errors=0, tx_pp_missed_interrupt_errors=0, tx_pp_rearm_queue_errors=0, tx_pp_timestamp_future_errors=0, tx_pp_timestamp_order_errors=0, tx_pp_timestamp_past_errors=0, tx_q0_bytes=0, tx_q0_packets=0, tx_q1_bytes=0, tx_q1_packets=0, tx_q2_bytes=0, tx_q2_packets=0}
      status              : {bus_info="bus_name=pci, vendor_id=15b3, device_id=1017", driver_name=mlx5_pci, if_descr="DPDK 23.11.2 mlx5_pci", if_type="6", link_speed="25Gbps", max_hash_mac_addrs="0", max_mac_addrs="128", max_rx_pktlen="1518", max_rx_queues="1024", max_tx_queues="1024", max_vfs="0", max_vmdq_pools="0", min_rx_bufsize="32", n_rxq="1", n_txq="3", numa_id="0", port_no="1", rx-steering=rss, rx_csum_offload="true", tx_geneve_tso_offload="false", tx_ip_csum_offload="true", tx_out_ip_csum_offload="true", tx_out_udp_csum_offload="false", tx_sctp_csum_offload="false", tx_tcp_csum_offload="true", tx_tcp_seg_offload="false", tx_udp_csum_offload="true", tx_vxlan_tso_offload="false"}
      type                : dpdk

       

      The same port's duplex mode is "full" when checked with ethtool:
      [root@rhos-nfv-04 ~]# ethtool enp4s0f1np1  <<<<<<<<<<<<<<<<<<<<MLX
      Settings for enp4s0f1np1:
      Supported ports: [ Backplane ]
      Supported link modes:   1000baseKX/Full
                              10000baseKR/Full
                              25000baseCR/Full
                              25000baseKR/Full
                              25000baseSR/Full
      Supported pause frame use: Symmetric
      Supports auto-negotiation: Yes
      Supported FEC modes: None RS BASER
      Advertised link modes:  1000baseKX/Full
                              10000baseKR/Full
                              25000baseCR/Full
                              25000baseKR/Full
                              25000baseSR/Full
      Advertised pause frame use: No
      Advertised auto-negotiation: Yes
      Advertised FEC modes: BASER
      Speed: 25000Mb/s
      Duplex: Full  <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<Full
      Auto-negotiation: on
      Port: Direct Attach Copper
      PHYAD: 0
      Transceiver: internal
      Supports Wake-on: d
      Wake-on: d
              Current message level: 0x00000004 (4)
                                     link
      Link detected: yes
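For scripted comparison, the Duplex field can be pulled out of ethtool output. A minimal sketch, using text abridged from the capture above as sample input (the interface name `enp4s0f1np1` is from the report):

```shell
# Parse the "Duplex:" field from ethtool output.  The sample text is
# abridged from the capture above; in practice, pipe the live output of
# `ethtool enp4s0f1np1` into the same awk expression.
sample='Speed: 25000Mb/s
Duplex: Full
Auto-negotiation: on'
duplex=$(printf '%s\n' "$sample" | awk -F': ' '/Duplex:/ {print $2}')
echo "$duplex"   # prints "Full"
```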

      Impact Assessment: Describe the severity and impact (e.g., network down, availability of a workaround, etc.).

      • Severity would be low, as the port works functionally and the wrong duplex value does not impact traffic. It is also doubtful a customer would hit this setup (we have a 100 Gbps port on the uplink switch split into 25 Gbps ports so more machines can connect to the switch, with speed negotiated to 25 Gbps).

        Software Versions: Specify the exact versions in use (e.g., openvswitch3.1-3.1.0-147.el8fdp).

      [root@rhos-nfv-04 ~]# ovs-vswitchd --version
      ovs-vswitchd (Open vSwitch) 3.3.4-62.el9fdp
      DPDK 23.11.2

      Issue Type: Indicate whether this is a new issue or a regression (if a regression, state the last known working version).

      • I never checked this before, so I cannot say whether it is a regression or a new issue.

        Reproducibility: Confirm if the issue can be reproduced consistently. If not, describe how often it occurs.

      • Reproducible every time.

        Reproduction Steps: Provide detailed steps or scripts to replicate the issue.

      • Take a "backplane" mlx port, add it to an OVS-DPDK userspace bridge, and check its duplex mode.
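The reproduction step above can be sketched as a command sequence. This is a sketch, not a tested script: the bridge name `br-dpdk` is hypothetical, the PCI address and interface name are taken from the report, and a running ovs-vswitchd with DPDK initialized is assumed (the snippet skips itself when ovs-vsctl is absent):

```shell
# Hypothetical reproduction sketch: attach the ConnectX backplane port
# (0000:04:00.1, per the report) to a userspace bridge and read duplex.
if ! command -v ovs-vsctl >/dev/null 2>&1; then
    echo "ovs-vsctl not available; commands shown for reference only"
    exit 0
fi
ovs-vsctl add-br br-dpdk -- set bridge br-dpdk datapath_type=netdev
ovs-vsctl add-port br-dpdk dpdk1 -- \
    set Interface dpdk1 type=dpdk options:dpdk-devargs=0000:04:00.1
ovs-vsctl get Interface dpdk1 duplex   # "half" on affected builds
ethtool enp4s0f1np1 | grep Duplex      # "Full" for the same port
```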

        Expected Behavior: Describe what should happen under normal circumstances.

      • Duplex mode should be "full".

        Observed Behavior: Explain what actually happens.

      • "duplex" mode is "half"

        Troubleshooting Actions: Outline the steps taken to diagnose or resolve the issue so far.

      • None

        Logs: If you collected logs, please provide them (e.g., sos report, /var/log/openvswitch/*, testpmd console).

      • None

              ovsdpdk-bot ovsdpdk bot
              hakhande Haresh Khandelwal