RHEL-6184

[RHEL8.7] all mvapich2 benchmarks fail with "create qp: failed on ibv_cmd_create_qp with 22" error on QEDR IW / ROCE device


    • Type: Bug
    • Resolution: Won't Do
    • Priority: Undefined
    • Affects Version: rhel-8.7.0
    • Component: mvapich2
    • Team: rhel-net-drivers (ssg_networking, Network Drivers 6)

      Description of problem:

      When tested on a QEDR iWARP or RoCE device, all mvapich2 benchmarks fail with the following error message:

      "[create_qp:2753]create qp: failed on ibv_cmd_create_qp with 22"

      Version-Release number of selected component (if applicable):

      Clients: rdma-dev-03
      Servers: rdma-dev-02

      DISTRO=RHEL-8.7.0-20220505.0

      + [22-05-06 09:59:11] cat /etc/redhat-release
      Red Hat Enterprise Linux release 8.7 Beta (Ootpa)

      + [22-05-06 09:59:11] uname -a
      Linux rdma-dev-03.rdma.lab.eng.rdu2.redhat.com 4.18.0-387.el8.x86_64 #1 SMP Thu Apr 28 02:53:03 EDT 2022 x86_64 x86_64 x86_64 GNU/Linux

      + [22-05-06 09:59:11] cat /proc/cmdline
      BOOT_IMAGE=(hd0,msdos1)/vmlinuz-4.18.0-387.el8.x86_64 root=UUID=6af6e8fd-80e1-4f7c-829f-4429f394874f ro console=tty0 rd_NO_PLYMOUTH intel_iommu=on iommu=on crashkernel=auto resume=UUID=cd8c4427-1709-4dde-9c33-5e6d445d3dbd console=ttyS1,115200

      + [22-05-06 09:59:11] rpm -q rdma-core linux-firmware
      rdma-core-37.2-1.el8.x86_64
      linux-firmware-20220210-107.git6342082c.el8.noarch

      + [22-05-06 09:59:11] tail /sys/class/infiniband/qedr0/fw_ver /sys/class/infiniband/qedr1/fw_ver
      ==> /sys/class/infiniband/qedr0/fw_ver <==
      8. 42. 2. 0

      ==> /sys/class/infiniband/qedr1/fw_ver <==
      8. 42. 2. 0

      + [22-05-06 09:59:11] lspci
      + [22-05-06 09:59:11] grep -i -e ethernet -e infiniband -e omni -e ConnectX
      02:00.0 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5720 Gigabit Ethernet PCIe
      02:00.1 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5720 Gigabit Ethernet PCIe
      08:00.0 Ethernet controller: QLogic Corp. FastLinQ QL45000 Series 25GbE Controller (rev 10)
      08:00.1 Ethernet controller: QLogic Corp. FastLinQ QL45000 Series 25GbE Controller (rev 10)

      Installed:
      mpitests-mvapich2-5.8-1.el8.x86_64 mvapich2-2.3.6-1.el8.x86_64

      How reproducible:

      100%

      Steps to Reproduce:
      1. bring up the RDMA hosts mentioned above with the RHEL 8.7 build
      2. set up the RDMA hosts for mvapich2 benchmark tests
      3. run one of the mvapich2 benchmarks with the "mpirun" command, as follows:

      timeout --preserve-status --kill-after=5m 3m mpirun -hostfile /root/hfile_one_core -np 2 mpitests-IMB-MPI1 PingPong -time 1.5
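
      The contents of /root/hfile_one_core are not included in the report; a hypothetical hostfile of the usual one-host-per-line format, built from the client and server hosts listed above, would look like:

      rdma-dev-02.rdma.lab.eng.rdu2.redhat.com
      rdma-dev-03.rdma.lab.eng.rdu2.redhat.com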

      Actual results:

      [rdma-dev-02.rdma.lab.eng.rdu2.redhat.com:mpi_rank_0][rdma_param_handle_heterogeneity] All nodes involved in the job were detected to be homogeneous in terms of processors and interconnects. Setting MV2_HOMOGENEOUS_CLUSTER=1 can improve job startup performance on such systems. The following link has more details on enhancing job startup performance. http://mvapich.cse.ohio-state.edu/performance/job-startup/.
      [rdma-dev-02.rdma.lab.eng.rdu2.redhat.com:mpi_rank_0][rdma_param_handle_heterogeneity] To suppress this warning, please set MV2_SUPPRESS_JOB_STARTUP_PERFORMANCE_WARNING to 1

      [create_qp:2753]create qp: failed on ibv_cmd_create_qp with 22
      [cli_1]: aborting job:
      Fatal error in PMPI_Init_thread:
      Other MPI error, error stack:
      MPIR_Init_thread(493)....:
      MPID_Init(419)...........: channel initialization failed
      MPIDI_CH3_Init(550)......:
      MPIDI_CH3I_RDMA_init(446):
      rdma_iba_hca_init(1748)..: Failed to create qp for rank 0

      + [22-05-06 10:09:58] __MPI_check_result 143 mpitests-mvapich2 IMB-MPI1 PingPong mpirun /root/hfile_one_core

      Expected results:

      Normal execution of the benchmark, with statistics output.

      Additional info:

      The same behavior was observed on RHEL 8.6 builds as well.
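
      One plausible line of investigation (an assumption, not a conclusion from the report): EINVAL from ibv_cmd_create_qp often means the requested QP type or capabilities exceed what the device advertises. A short sketch to dump the qedr device limits for comparison against what mvapich2 requests:

      /* Hypothetical helper: print the device limits that bound
       * ibv_create_qp() requests, for each RDMA device present. */
      #include <stdio.h>
      #include <infiniband/verbs.h>

      int main(void)
      {
          int num = 0;
          struct ibv_device **devs = ibv_get_device_list(&num);
          if (!devs || num == 0) {
              fprintf(stderr, "no RDMA devices found\n");
              return 1;
          }

          for (int i = 0; i < num; i++) {
              struct ibv_context *ctx = ibv_open_device(devs[i]);
              struct ibv_device_attr a;
              if (ctx && !ibv_query_device(ctx, &a))
                  printf("%s: max_qp=%d max_qp_wr=%d max_sge=%d max_cqe=%d\n",
                         ibv_get_device_name(devs[i]),
                         a.max_qp, a.max_qp_wr, a.max_sge, a.max_cqe);
              if (ctx)
                  ibv_close_device(ctx);
          }

          ibv_free_device_list(devs);
          return 0;
      }

      The same information is available from rdma-core's "ibv_devinfo -v".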

              Assignee: Kamal Heib (kheib)
              Reporter: Brian Chae (bchae) (Inactive)