OpenShift Bugs / OCPBUGS-60674

DPDK client traffic failing with Bond interface on Intel E810 card


    • Type: Bug
    • Resolution: Unresolved
    • Priority: Major
    • Affects Version: 4.20
    • Component: Networking / SR-IOV
    • Quality / Stability / Reliability
    • Severity: Important
    • Status: In Progress
    • Release Note Type: Release Note Not Required

      Description of problem: A DPDK pod using a vdev bond interface over two Intel E810 VFs cannot send TX traffic to a client DPDK pod running testpmd. The same testpmd commands work without issue when the bond pod uses two Mellanox (MLX) VFs.

      Version-Release number of selected component (if applicable): 4.20

      How reproducible: Easily reproducible.

      Steps to Reproduce:
      1. Deploy a PerformanceProfile with huge pages (illustrative sketches for steps 1-3 follow this list).
      2. Deploy an SriovNetworkNodePolicy, an SriovNetwork, and the resulting network-attachment-definition.
      3. Deploy a DPDK pod with a vdev bond interface and a client DPDK pod with a single secondary interface, then send traffic from the bond interface to the client.
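
      A minimal sketch for step 1, assuming 1G huge pages and a worker-cnf node pool; the profile name, CPU split, and page count are illustrative, not the exact values used in this reproducer:

      # performance-profile.yaml (hypothetical file name)
      apiVersion: performance.openshift.io/v2
      kind: PerformanceProfile
      metadata:
        name: dpdk-perf-profile            # hypothetical name
      spec:
        cpu:
          isolated: "4-31"                 # illustrative CPU split
          reserved: "0-3"
        hugepages:
          defaultHugepagesSize: 1G
          pages:
          - count: 16                      # illustrative page count
            size: 1G
        nodeSelector:
          node-role.kubernetes.io/worker-cnf: ""

      oc apply -f performance-profile.yaml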
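
      A sketch of the step 2 SR-IOV resources, assuming vfio-pci VFs carved from an E810 PF; the policy name, resource name, PF name, and VF count are assumptions:

      # sriov-resources.yaml (hypothetical file name)
      apiVersion: sriovnetwork.openshift.io/v1
      kind: SriovNetworkNodePolicy
      metadata:
        name: e810-dpdk-policy             # hypothetical name
        namespace: openshift-sriov-network-operator
      spec:
        resourceName: e810_dpdk            # hypothetical resource name
        deviceType: vfio-pci               # DPDK on Intel NICs uses vfio-pci
        numVfs: 8                          # illustrative
        nicSelector:
          vendor: "8086"                   # Intel vendor ID
          pfNames: ["ens2f0"]              # illustrative PF name
        nodeSelector:
          feature.node.kubernetes.io/network-sriov.capable: "true"
      ---
      apiVersion: sriovnetwork.openshift.io/v1
      kind: SriovNetwork
      metadata:
        name: e810-dpdk-network            # hypothetical name
        namespace: openshift-sriov-network-operator
      spec:
        resourceName: e810_dpdk
        networkNamespace: sriov-operator-tests   # namespace from the reproducer
        ipam: '{}'

      oc apply -f sriov-resources.yaml

      The SR-IOV operator generates the matching network-attachment-definition in the target namespace from the SriovNetwork.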
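
      For step 3, a sketch of the bond pod requesting two VFs from the same pool (the client pod has the same shape with a single network and one VF); the image, resource name, and sizes are assumptions:

      # client-dpdk-bond.yaml (hypothetical file name)
      apiVersion: v1
      kind: Pod
      metadata:
        name: client-dpdk-bond
        namespace: sriov-operator-tests
        annotations:
          k8s.v1.cni.cncf.io/networks: e810-dpdk-network, e810-dpdk-network
      spec:
        containers:
        - name: dpdk
          image: quay.io/example/dpdk-testpmd:latest   # placeholder image
          command: ["sleep", "infinity"]
          securityContext:
            capabilities:
              add: ["IPC_LOCK"]            # DPDK locks huge-page memory
          volumeMounts:
          - name: hugepages
            mountPath: /dev/hugepages
          resources:
            requests:
              openshift.io/e810_dpdk: "2"  # two VFs to bond
              hugepages-1Gi: 4Gi
              memory: 1Gi
              cpu: "4"
            limits:
              openshift.io/e810_dpdk: "2"
              hugepages-1Gi: 4Gi
              memory: 1Gi
              cpu: "4"
        volumes:
        - name: hugepages
          emptyDir:
            medium: HugePages

      oc apply -f client-dpdk-bond.yaml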
      Actual results: There is TX and RX traffic on all three ports.

      Expected results: Only TX traffic on the DPDK bond pod and only RX traffic on the client DPDK pod.

      Additional info:

      client-dpdk-bond pod sending traffic (ports 0 and 1 are the E810 VFs; port 2 is the net_bonding vdev, mode 1 = active-backup):
      testpmd --allow=0000:5e:01.0 --allow=0000:5e:01.7 --vdev=net_bonding0,mode=1 -n 4 -- -i --total-num-mbufs=2048 --disable-device-start

      set bonding mode 1 2
      port start 0
      port start 1
      add bonding slave 0 2
      add bonding slave 1 2
      port start 2
      set fwd txonly
      set txpkts 64
      set burst 32
      set eth-peer 2 60:00:00:00:00:11
      start

      testpmd> show bonding config 2
      Bonding mode: 1
      Slaves (2): [0 1]
      Active Slaves (2): [0 1]
      Primary: [0]

      client-dpdk pod receiving traffic:
      oc exec -it client-dpdk -n sriov-operator-tests -- bash
      testpmd --allow=0000:5e:XX.X -n 4 -- -i --total-num-mbufs=2048 --disable-device-start
      port start 0
      set fwd rxonly
      start

      The "show port stats all" output looks strange on both the server and the client side: the server's member port reports RX traffic even though it forwards txonly, and the client reports millions of TX-packets even though it forwards rxonly.

      Server:
        ######################## NIC statistics for port 1  ########################
        RX-packets: 577        RX-missed: 3174       RX-bytes:  1243024
        RX-errors: 0
        RX-nombuf:  0
        TX-packets: 1814       TX-errors: 0          TX-bytes:  116096
        Throughput (since last show)
        Rx-pps:            0          Rx-bps:            0
        Tx-pps:            0          Tx-bps:            0
        ############################################################################

      Client:
      testpmd> show port stats all
        ######################## NIC statistics for port 0  ########################
        RX-packets: 54         RX-missed: 28850      RX-bytes:  9538104
        RX-errors: 0
        RX-nombuf:  0
        TX-packets: 6083680    TX-errors: 0          TX-bytes:  389355520
        Throughput (since last show)
        Rx-pps:            0          Rx-bps:            0
        Tx-pps:            0          Tx-bps:            0
        ############################################################################

       

              Assignee: Balazs Nemeth (bnemeth@redhat.com)
              Reporter: Gregory Kopels (gkopels@redhat.com)
              QA Contact: Zhiqiang Fang