RHEL-6196

[RHEL9.1] all mvapich2 benchmarks fail with "Unknown HCA type" error when tested on iRDMA device

      Description of problem:

      When tested on an iRDMA RoCE device, all mvapich2 benchmarks fail with the following error message:

      "[rdma-dev-31.rdma.lab.eng.rdu2.redhat.com:mpi_rank_1][rdma_open_hca] Unknown HCA type: this build of MVAPICH2 does notfully support the HCA found on the system (try with other build options)"

      Version-Release number of selected component (if applicable):

      Clients: rdma-dev-31
      Servers: rdma-dev-30

      DISTRO=RHEL-9.1.0-20220609.0

      + [22-06-10 06:52:38] cat /etc/redhat-release
      Red Hat Enterprise Linux release 9.1 Beta (Plow)

      + [22-06-10 06:52:38] uname -a
      Linux rdma-dev-31.rdma.lab.eng.rdu2.redhat.com 5.14.0-106.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Tue Jun 7 07:22:29 EDT 2022 x86_64 x86_64 x86_64 GNU/Linux

      + [22-06-10 06:52:38] cat /proc/cmdline
      BOOT_IMAGE=(hd0,gpt2)/vmlinuz-5.14.0-106.el9.x86_64 root=/dev/mapper/rhel_rdma-dev31-root ro crashkernel=1G-4G:192M,4G-64G:256M,64G:512M resume=/dev/mapper/rhel_rdma-dev-31-swap rd.lvm.lv=rhel_rdma-dev-31/root rd.lvm.lv=rhel_rdma-dev-31/swap console=ttyS0,115200n81

      + [22-06-10 06:52:38] rpm -q rdma-core linux-firmware
      rdma-core-37.2-1.el9.x86_64
      linux-firmware-20220509-126.el9.noarch

      + [22-06-10 06:52:38] tail /sys/class/infiniband/irdma0/fw_ver /sys/class/infiniband/irdma1/fw_ver
      ==> /sys/class/infiniband/irdma0/fw_ver <==
      1.52

      ==> /sys/class/infiniband/irdma1/fw_ver <==
      1.52

      + [22-06-10 06:52:38] lspci
      + [22-06-10 06:52:38] grep -i -e ethernet -e infiniband -e omni -e ConnectX
      04:00.0 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5719 Gigabit Ethernet PCIe (rev 01)
      04:00.1 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5719 Gigabit Ethernet PCIe (rev 01)
      04:00.2 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5719 Gigabit Ethernet PCIe (rev 01)
      04:00.3 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5719 Gigabit Ethernet PCIe (rev 01)
      44:00.0 Ethernet controller: Intel Corporation Ethernet Controller E810-C for QSFP (rev 02)
      44:00.1 Ethernet controller: Intel Corporation Ethernet Controller E810-C for QSFP (rev 02)
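
      To confirm that irdma0/irdma1 correspond to the two E810-C ports listed above and that the links are up, the rdma tool from iproute can be queried (a minimal sketch, assuming the iproute "rdma" utility is installed on the hosts):

      # Sketch only: map RDMA devices to their netdevs and show link state.
      rdma dev show
      rdma link show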

      Installed:
      mpitests-mvapich2-5.8-1.el9.x86_64 mvapich2-2.3.6-3.el9.x86_64

      How reproducible:

      100%

      Steps to Reproduce:
      1. Bring up the RDMA hosts mentioned above with the RHEL9.1 build
      2. Set up the RDMA hosts for mvapich2 benchmark tests (a sketch of the assumed hostfile layout follows these steps)
      3. Run one of the mvapich2 benchmarks with the "mpirun" command, as follows:
      timeout --preserve-status --kill-after=5m 3m mpirun -hostfile /root/hfile_one_core -np 2 mpitests-IMB-MPI1 PingPong -time 1.5
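
      For completeness, the layout assumed for /root/hfile_one_core on this pair of hosts is sketched below; the file actually used in the run was not captured, and the hostnames are taken from this report:

      # Illustrative only: assumed contents of /root/hfile_one_core
      # (one hostname per line; with -np 2, hydra places one rank on each host).
      rdma-dev-30.rdma.lab.eng.rdu2.redhat.com
      rdma-dev-31.rdma.lab.eng.rdu2.redhat.com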

      Actual results:

      [rdma-dev-31.rdma.lab.eng.rdu2.redhat.com:mpi_rank_1][rdma_open_hca] Unknown HCA type: this build of MVAPICH2 does not fully support the HCA found on the system (try with other build options) <<<===============================

      [rdma-dev-31.rdma.lab.eng.rdu2.redhat.com:mpi_rank_1][error_sighandler] Caught error: Segmentation fault (signal 11)

      ===================================================================================
      = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
      = PID 47813 RUNNING AT 172.31.45.131
      = EXIT CODE: 139
      = CLEANING UP REMAINING PROCESSES
      = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
      ===================================================================================
      [proxy:0:0@rdma-dev-30.rdma.lab.eng.rdu2.redhat.com] HYD_pmcd_pmip_control_cmd_cb (pm/pmiserv/pmip_cb.c:911): assert (!closed) failed
      [proxy:0:0@rdma-dev-30.rdma.lab.eng.rdu2.redhat.com] HYDT_dmxu_poll_wait_for_event (tools/demux/demux_poll.c:76): callback returned error status
      [proxy:0:0@rdma-dev-30.rdma.lab.eng.rdu2.redhat.com] main (pm/pmiserv/pmip.c:202): demux engine error waiting for event
      YOUR APPLICATION TERMINATED WITH THE EXIT STRING: Segmentation fault (signal 11)
      This typically refers to a problem with your application.
      Please see the FAQ page for debugging suggestions
      + [22-06-10 06:52:51] __MPI_check_result 139 mpitests-mvapich2 IMB-MPI1 PingPong mpirun /root/hfile_one_core
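
      To separate an MVAPICH2 device-detection problem from a general RDMA stack problem, a verbs-level ping-pong can be run between the same two hosts outside of MPI (a hedged sketch, assuming irdma0 is the port under test and that GID index 0 resolves to a reachable RoCE address; ibv_rc_pingpong ships with libibverbs-utils):

      # Sketch only: plain RC ping-pong over the same device, no MPI involved.
      # On rdma-dev-30 (server):
      ibv_rc_pingpong -d irdma0 -g 0
      # On rdma-dev-31 (client), pointing at the server:
      ibv_rc_pingpong -d irdma0 -g 0 rdma-dev-30.rdma.lab.eng.rdu2.redhat.com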

      Expected results:

      Normal execution of the benchmarks, with latency/bandwidth statistics output (see the illustrative layout below).
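
      For reference, a passing mpitests-IMB-MPI1 PingPong run prints a table roughly like the sketch below; the measured values are elided, since no passing run was captured on this hardware:

      # Illustrative layout only (values elided):
      # Benchmarking PingPong
      # #processes = 2
             #bytes #repetitions      t[usec]   Mbytes/sec
                  0          ...          ...          ...
                  1          ...          ...          ...
                ...          ...          ...          ...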

      Additional info:
