RHEL / RHEL-6144

[RHEL9.2] all mvapich2 benchmarks fail when run on QEDR ROCE with "QP Modify: Failed command. rc=22"


    • rhel-net-drivers
    • ssg_networking

      Description of problem:

      When tested on a QEDR RoCE device, all mvapich2 benchmarks fail with the following error messages.

      1. When the benchmarks are run with the "mpirun" command, the following errors occur:

      [qelr_poll_cq_req:2215]RDMA_CQE_REQ_STS_TRANSPORT_RETRY_CNT_ERR. QP icid=0x3
      [rdma-perf-06.rdma.lab.eng.rdu2.redhat.com:mpi_rank_0][handle_cqe] Send desc error in msg to 1, wc_opcode=0
      [rdma-perf-06.rdma.lab.eng.rdu2.redhat.com:mpi_rank_0][handle_cqe] Msg from 1: wc.status=12 (transport retry counter exceeded), wc.wr_id=0x5645dca6f040, wc.opcode=0, vbuf->phead->type=0 = MPIDI_CH3_PKT_EAGER_SEND
      [rdma-perf-06.rdma.lab.eng.rdu2.redhat.com:mpi_rank_0][mv2_print_wc_status_error] IBV_WC_RETRY_EXC_ERR: This event is generated when a sender is unable to receive feedback from the receiver. This means that either the receiver just never ACKs sender messages in a specified time period, or it has been disconnected or it is in a bad state which prevents it from responding. If this happens when sending the first message, usually it means that the QP connection attributes are wrong or the remote side is not in a state that it can respond to messages. If this happens after sending the first message, usually it means that the remote QP is not available anymore or that there is congestion in the network preventing the packets from reaching on time. Relevant to: RC or DC QPs.
      [rdma-perf-06.rdma.lab.eng.rdu2.redhat.com:mpi_rank_0][handle_cqe] src/mpid/ch3/channels/mrail/src/gen2/ibv_channel_manager.c:497: [] Got completion with error 12, vendor code=0x0, dest rank=1
      : Invalid argument (22)

      2. When the benchmarks are run with the "mpirun_rsh" command, the following errors occur:

      + [23-01-18 09:48:12] timeout --preserve-status --kill-after=5m 3m mpirun_rsh -hostfile /root/hfile_one_core -np 2 /usr/lib64/mvapich2/bin/mpitests-osu_reduce
      [rdma-dev-02.rdma.lab.eng.rdu2.redhat.com:mpispawn_1][report_error] connect() failed: Connection refused (111)
      [rdma-perf-06.rdma.lab.eng.rdu2.redhat.com:mpispawn_0][read_size] Unexpected End-Of-File on file descriptor 6. MPI process died?
      [rdma-perf-06.rdma.lab.eng.rdu2.redhat.com:mpispawn_0][read_size] Unexpected End-Of-File on file descriptor 6. MPI process died?
      [rdma-perf-06.rdma.lab.eng.rdu2.redhat.com:mpispawn_0][handle_mt_peer] Error while reading PMI socket. MPI process died?
      [qelr_modify_qp:1007]QP Modify: Failed command. rc=22
      [src/mpid/ch3/channels/mrail/src/gen2/rdma_iba_priv.c:2118] Could not modify qpto RTR
      [1 => 0]: post_nosrq_send(ibv_post_sr (post_send_desc)): ret=-22, errno=22: failed while avail wqe is 63, rail 0
      IBV_POST_SR err:: : Invalid argument
      [rdma-dev-02.rdma.lab.eng.rdu2.redhat.com:mpi_rank_1][post_nosrq_send] ./src/mpid/ch3/channels/mrail/src/gen2/ibv_send_inline.h:754: ibv_post_sr (post_send_desc): Invalid argument (22)
      [rdma-dev-02.rdma.lab.eng.rdu2.redhat.com:mpispawn_1][readline] Unexpected End-Of-File on file descriptor 6. MPI process died?
      [rdma-dev-02.rdma.lab.eng.rdu2.redhat.com:mpispawn_1][mtpmi_processops] Error while reading PMI socket. MPI process died?
      [rdma-dev-02.rdma.lab.eng.rdu2.redhat.com:mpispawn_1][child_handler] MPI process (rank: 1, pid: 57685) exited with status 255
      [rdma-perf-06.rdma.lab.eng.rdu2.redhat.com:mpispawn_0][report_error] connect() failed: Connection refused (111)

      Version-Release number of selected component (if applicable):

      Clients: rdma-dev-02
      Servers: rdma-perf-06

      DISTRO=RHEL-9.2.0-20230115.7

      + [23-01-18 08:43:36] cat /etc/redhat-release
      Red Hat Enterprise Linux release 9.2 Beta (Plow)

      + [23-01-18 08:43:36] uname -a
      Linux rdma-dev-02.rdma.lab.eng.rdu2.redhat.com 5.14.0-234.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Jan 12 15:00:24 EST 2023 x86_64 x86_64 x86_64 GNU/Linux

      + [23-01-18 08:43:36] cat /proc/cmdline
      BOOT_IMAGE=(hd0,msdos1)/vmlinuz-5.14.0-234.el9.x86_64 root=UUID=461524e6-5ab2-4693-b663-de82a9f66e4c ro console=tty0 rd_NO_PLYMOUTH intel_iommu=on iommu=on crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M resume=UUID=6b236054-ce67-4c0e-bfdd-f2533cf0097e console=ttyS1,115200

      + [23-01-18 08:43:36] rpm -q rdma-core linux-firmware
      rdma-core-41.0-3.el9.x86_64
      linux-firmware-20221214-129.el9.noarch

      + [23-01-18 08:43:36] tail /sys/class/infiniband/qedr0/fw_ver /sys/class/infiniband/qedr1/fw_ver
      ==> /sys/class/infiniband/qedr0/fw_ver <==
      8.59.1.0

      ==> /sys/class/infiniband/qedr1/fw_ver <==
      8.59.1.0

      + [23-01-18 08:43:36] lspci
      + [23-01-18 08:43:36] grep -i -e ethernet -e infiniband -e omni -e ConnectX
      02:00.0 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5720 Gigabit Ethernet PCIe
      02:00.1 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5720 Gigabit Ethernet PCIe
      08:00.0 Ethernet controller: QLogic Corp. FastLinQ QL45000 Series 25GbE Controller (rev 10)
      08:00.1 Ethernet controller: QLogic Corp. FastLinQ QL45000 Series 25GbE Controller (rev 10)

      How reproducible:
      100%

      Steps to Reproduce:
      1. On the client host, run any of the benchmarks:

         a. with the "mpirun" command:

      + [23-01-18 08:43:47] timeout --preserve-status --kill-after=5m 3m mpirun -hostfile /root/hfile_one_core -np 2 mpitests-IMB-MPI1 PingPong -time 1.5

         b. with the "mpirun_rsh" command:

      + [23-01-18 09:46:04] timeout --preserve-status --kill-after=5m 3m mpirun -hostfile /root/hfile_one_core -np 2 /usr/lib64/mvapich2/bin/mpitests-osu_scatterv

      Actual results:

      a. + [23-01-18 08:43:47] timeout --preserve-status --kill-after=5m 3m mpirun -hostfile /root/hfile_one_core -np 2 mpitests-IMB-MPI1 PingPong -time 1.5
      [qelr_modify_qp:1007]QP Modify: Failed command. rc=22
      [src/mpid/ch3/channels/mrail/src/gen2/rdma_iba_priv.c:2118] Could not modify qpto RTR
      [1 => 0]: post_nosrq_send(ibv_post_sr (post_send_desc)): ret=-22, errno=22: failed while avail wqe is 63, rail 0
      IBV_POST_SR err:: : Invalid argument
      [rdma-dev-02.rdma.lab.eng.rdu2.redhat.com:mpi_rank_1][post_nosrq_send] ./src/mpid/ch3/channels/mrail/src/gen2/ibv_send_inline.h:754: ibv_post_sr (post_send_desc): Invalid argument (22)
      #----------------------------------------------------------------
      #    Intel(R) MPI Benchmarks 2021.3, MPI-1 part
      #----------------------------------------------------------------
      # Date                  : Wed Jan 18 08:43:48 2023
      # Machine               : x86_64
      # System                : Linux
      # Release               : 5.14.0-234.el9.x86_64
      # Version               : #1 SMP PREEMPT_DYNAMIC Thu Jan 12 15:00:24 EST 2023
      # MPI Version           : 3.1
      # MPI Thread Environment:

      # Calling sequence was:

      # mpitests-IMB-MPI1 PingPong -time 1.5

      # Minimum message length in bytes:   0
      # Maximum message length in bytes:   4194304
      #
      # MPI_Datatype                   :   MPI_BYTE
      # MPI_Datatype for reductions    :   MPI_FLOAT
      # MPI_Op                         :   MPI_SUM
      #

      # List of Benchmarks to run:

      # PingPong
        [qelr_poll_cq_req:2215]RDMA_CQE_REQ_STS_TRANSPORT_RETRY_CNT_ERR. QP icid=0x3
        [rdma-perf-06.rdma.lab.eng.rdu2.redhat.com:mpi_rank_0][handle_cqe] Send desc error in msg to 1, wc_opcode=0
        [rdma-perf-06.rdma.lab.eng.rdu2.redhat.com:mpi_rank_0][handle_cqe] Msg from 1: wc.status=12 (transport retry counter exceeded), wc.wr_id=0x5645dca6f040, wc.opcode=0, vbuf->phead->type=0 = MPIDI_CH3_PKT_EAGER_SEND
        [rdma-perf-06.rdma.lab.eng.rdu2.redhat.com:mpi_rank_0][mv2_print_wc_status_error] IBV_WC_RETRY_EXC_ERR: This event is generated when a sender is unable to receive feedback from the receiver. This means that either the receiver just never ACKs sender messages in a specified time period, or it has been disconnected or it is in a bad state which prevents it from responding. If this happens when sending the first message, usually it means that the QP connection attributes are wrong or the remote side is not in a state that it can respond to messages. If this happens after sending the first message, usually it means that the remote QP is not available anymore or that there is congestion in the network preventing the packets from reaching on time. Relevant to: RC or DC QPs.
        [rdma-perf-06.rdma.lab.eng.rdu2.redhat.com:mpi_rank_0][handle_cqe] src/mpid/ch3/channels/mrail/src/gen2/ibv_channel_manager.c:497: [] Got completion with error 12, vendor code=0x0, dest rank=1
        : Invalid argument (22)

      b. + [23-01-18 09:46:04] timeout --preserve-status --kill-after=5m 3m mpirun -hostfile /root/hfile_one_core -np 2 /usr/lib64/mvapich2/bin/mpitests-osu_scatterv
      [qelr_modify_qp:1007]QP Modify: Failed command. rc=22
      [src/mpid/ch3/channels/mrail/src/gen2/rdma_iba_priv.c:2118] Could not modify qpto RTR
      [1 => 0]: post_nosrq_send(ibv_post_sr (post_send_desc)): ret=-22, errno=22: failed while avail wqe is 63, rail 0
      IBV_POST_SR err:: : Invalid argument
      [rdma-dev-02.rdma.lab.eng.rdu2.redhat.com:mpi_rank_1][post_nosrq_send] ./src/mpid/ch3/channels/mrail/src/gen2/ibv_send_inline.h:754: ibv_post_sr (post_send_desc): Invalid argument (22)
      [qelr_poll_cq_req:2215]RDMA_CQE_REQ_STS_TRANSPORT_RETRY_CNT_ERR. QP icid=0x3
      [rdma-perf-06.rdma.lab.eng.rdu2.redhat.com:mpi_rank_0][handle_cqe] Send desc error in msg to 1, wc_opcode=0
      [rdma-perf-06.rdma.lab.eng.rdu2.redhat.com:mpi_rank_0][handle_cqe] Msg from 1: wc.status=12 (transport retry counter exceeded), wc.wr_id=0x558eb6dc0040, wc.opcode=0, vbuf->phead->type=0 = MPIDI_CH3_PKT_EAGER_SEND
      [rdma-perf-06.rdma.lab.eng.rdu2.redhat.com:mpi_rank_0][mv2_print_wc_status_error] IBV_WC_RETRY_EXC_ERR: This event is generated when a sender is unable to receive feedback from the receiver. This means that either the receiver just never ACKs sender messages in a specified time period, or it has been disconnected or it is in a bad state which prevents it from responding. If this happens when sending the first message, usually it means that the QP connection attributes are wrong or the remote side is not in a state that it can respond to messages. If this happens after sending the first message, usually it means that the remote QP is not available anymore or that there is congestion in the network preventing the packets from reaching on time. Relevant to: RC or DC QPs.
      [rdma-perf-06.rdma.lab.eng.rdu2.redhat.com:mpi_rank_0][handle_cqe] src/mpid/ch3/channels/mrail/src/gen2/ibv_channel_manager.c:497: [] Got completion with error 12, vendor code=0x0, dest rank=1
      : Invalid argument (22)

      Expected results:

      All mvapich2 benchmarks complete successfully on the QEDR RoCE device, with no QP modify or completion errors.

      Additional info:

        Kamal Heib
        Brian Chae (Inactive)
        infiniband-qe