RHEL-6188

[RHEL9.1] Some IMB-RMA and OSU benchmarks fail due to "write error (Bad file descriptor)" when run on a BCM57508 device



      Description of problem:

The following mvapich2 benchmarks fail due to "write error (Bad file descriptor)" when run on a BCM57508 device.

      FAIL | 255 | mvapich2 IMB-RMA Get_accumulate mpirun one_core
      FAIL | 255 | mvapich2 IMB-RMA Fetch_and_op mpirun one_core
      FAIL | 255 | mvapich2 IMB-RMA Compare_and_swap mpirun one_core
      FAIL | 255 | mvapich2 IMB-RMA Get_local mpirun one_core
      FAIL | 255 | mvapich2 IMB-RMA Get_all_local mpirun one_core
      FAIL | 255 | mvapich2 OSU acc_latency mpirun one_core
      FAIL | 255 | mvapich2 OSU allgather mpirun one_core
      FAIL | 255 | mvapich2 OSU allgatherv mpirun one_core
      FAIL | 255 | mvapich2 OSU allreduce mpirun one_core
      FAIL | 255 | mvapich2 OSU alltoall mpirun one_core
      FAIL | 255 | mvapich2 OSU alltoallv mpirun one_core
      FAIL | 255 | mvapich2 OSU barrier mpirun one_core
      FAIL | 255 | mvapich2 OSU bcast mpirun one_core
      FAIL | 255 | mvapich2 OSU bibw mpirun one_core
      FAIL | 255 | mvapich2 OSU bw mpirun one_core
      FAIL | 255 | mvapich2 OSU cas_latency mpirun one_core
      FAIL | 255 | mvapich2 OSU fop_latency mpirun one_core
      FAIL | 255 | mvapich2 OSU gather mpirun one_core
      FAIL | 255 | mvapich2 OSU gatherv mpirun one_core
      FAIL | 255 | mvapich2 OSU get_acc_latency mpirun one_core
      FAIL | 255 | mvapich2 OSU get_bw mpirun one_core
      FAIL | 255 | mvapich2 OSU get_latency mpirun one_core
      FAIL | 255 | mvapich2 OSU hello mpirun one_core
      FAIL | 255 | mvapich2 OSU iallgather mpirun one_core
      FAIL | 255 | mvapich2 OSU iallgatherv mpirun one_core
      FAIL | 255 | mvapich2 OSU iallreduce mpirun one_core
      FAIL | 255 | mvapich2 OSU ialltoall mpirun one_core
      FAIL | 255 | mvapich2 OSU ialltoallv mpirun one_core
      FAIL | 255 | mvapich2 OSU ialltoallw mpirun one_core
      FAIL | 255 | mvapich2 OSU ibarrier mpirun one_core
      FAIL | 255 | mvapich2 OSU ibcast mpirun one_core
      FAIL | 255 | mvapich2 OSU igather mpirun one_core
      FAIL | 255 | mvapich2 OSU igatherv mpirun one_core
      FAIL | 255 | mvapich2 OSU init mpirun one_core
      FAIL | 255 | mvapich2 OSU ireduce mpirun one_core
      FAIL | 255 | mvapich2 OSU iscatter mpirun one_core
      FAIL | 255 | mvapich2 OSU iscatterv mpirun one_core
      FAIL | 255 | mvapich2 OSU latency mpirun one_core
      FAIL | 255 | mvapich2 OSU latency_mp mpirun one_core
      FAIL | 255 | mvapich2 OSU mbw_mr mpirun one_core
      FAIL | 255 | mvapich2 OSU multi_lat mpirun one_core
      FAIL | 255 | mvapich2 OSU put_bibw mpirun one_core
      FAIL | 255 | mvapich2 OSU put_bw mpirun one_core
      FAIL | 255 | mvapich2 OSU put_latency mpirun one_core
      FAIL | 255 | mvapich2 OSU reduce mpirun one_core
      FAIL | 255 | mvapich2 OSU reduce_scatter mpirun one_core
      FAIL | 255 | mvapich2 OSU scatter mpirun one_core
      FAIL | 255 | mvapich2 OSU scatterv mpirun one_core

      Version-Release number of selected component (if applicable):

      Clients: rdma-dev-26
      Servers: rdma-dev-25

      DISTRO=RHEL-9.1.0-20220509.3

      + [22-05-10 09:57:56] cat /etc/redhat-release
      Red Hat Enterprise Linux release 9.1 Beta (Plow)

      + [22-05-10 09:57:56] uname -a
      Linux rdma-dev-26.rdma.lab.eng.rdu2.redhat.com 5.14.0-86.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri May 6 09:23:00 EDT 2022 x86_64 x86_64 x86_64 GNU/Linux

      + [22-05-10 09:57:56] cat /proc/cmdline
      BOOT_IMAGE=(hd0,msdos1)/vmlinuz-5.14.0-86.el9.x86_64 root=/dev/mapper/rhel_rdma-dev26-root ro intel_idle.max_cstate=0 intremap=no_x2apic_optout processor.max_cstate=0 console=tty0 rd_NO_PLYMOUTH crashkernel=1G-4G:192M,4G-64G:256M,64G:512M resume=/dev/mapper/rhel_rdma-dev-26-swap rd.lvm.lv=rhel_rdma-dev-26/root rd.lvm.lv=rhel_rdma-dev-26/swap console=ttyS1,115200n81

      + [22-05-10 09:57:56] rpm -q rdma-core linux-firmware
      rdma-core-37.2-1.el9.x86_64
      linux-firmware-20220209-126.el9_0.noarch

      + [22-05-10 09:57:56] tail /sys/class/infiniband/bnxt_re0/fw_ver /sys/class/infiniband/bnxt_re1/fw_ver
      ==> /sys/class/infiniband/bnxt_re0/fw_ver <==
      219.0.112.0

      ==> /sys/class/infiniband/bnxt_re1/fw_ver <==
      219.0.112.0

      + [22-05-10 09:57:56] lspci
      + [22-05-10 09:57:56] grep -i -e ethernet -e infiniband -e omni -e ConnectX
      02:00.0 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5720 Gigabit Ethernet PCIe
      02:00.1 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5720 Gigabit Ethernet PCIe
      03:00.0 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5720 Gigabit Ethernet PCIe
      03:00.1 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5720 Gigabit Ethernet PCIe
      04:00.0 Ethernet controller: Broadcom Inc. and subsidiaries BCM57508 NetXtreme-E 10Gb/25Gb/40Gb/50Gb/100Gb/200Gb Ethernet (rev 11)
      04:00.1 Ethernet controller: Broadcom Inc. and subsidiaries BCM57508 NetXtreme-E 10Gb/25Gb/40Gb/50Gb/100Gb/200Gb Ethernet (rev 11)

      Installed:
      mpitests-mvapich2-5.8-1.el9.x86_64 mvapich2-2.3.6-3.el9.x86_64
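
      The sysfs and lspci output above identifies the RoCE devices as bnxt_re0/bnxt_re1 on the two BCM57508 ports. As a hedged sanity-check sketch (not part of the original test run), the devices can be inspected with standard rdma-core/iproute tools before launching the MPI jobs; the device names are taken from the output above, everything else is an assumption about the test hosts:

      # Sanity-check sketch (assumes libibverbs-utils and iproute are installed)
      ibv_devices                           # confirm bnxt_re0 and bnxt_re1 are enumerated
      ibv_devinfo -d bnxt_re0 | head -n 20  # port state and firmware as seen by verbs
      rdma link show                        # RDMA link state for the bnxt_re ports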

      How reproducible:

      100%

      Steps to Reproduce:
1. Bring up the RDMA hosts mentioned above with a RHEL9.1 build.
      2. Set up the RDMA hosts for the mvapich2 benchmark tests.
      3. Run one of the mvapich2 benchmarks with the "mpirun" command, as follows (a loop sketch over the failing IMB-RMA cases is given after the command):

      timeout --preserve-status --kill-after=5m 3m mpirun -hostfile /root/hfile_one_core -np 2 mpitests-IMB-RMA Get_accumulate -time 1.5
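
      Step 3 runs a single benchmark. The minimal loop sketch below drives the failing IMB-RMA cases in turn; only the timeout/mpirun invocation and the /root/hfile_one_core path are taken verbatim from this report, while the hostfile contents, the environment-modules setup, and the module name are assumptions about the test hosts:

      #!/bin/bash
      # Reproducer sketch, not the exact QE harness.
      # Assumed hostfile layout (one rank per host):
      #   rdma-dev-25.rdma.lab.eng.rdu2.redhat.com
      #   rdma-dev-26.rdma.lab.eng.rdu2.redhat.com
      HOSTFILE=/root/hfile_one_core

      # Assumption: mvapich2 is selected via environment modules, as packaged in RHEL.
      source /etc/profile.d/modules.sh
      module load mpi/mvapich2-x86_64

      # Loop over the IMB-RMA sub-benchmarks listed as failing above.
      for bench in Get_accumulate Fetch_and_op Compare_and_swap Get_local Get_all_local; do
          timeout --preserve-status --kill-after=5m 3m \
              mpirun -hostfile "$HOSTFILE" -np 2 mpitests-IMB-RMA "$bench" -time 1.5
          rc=$?
          [ "$rc" -ne 0 ] && echo "FAIL | $rc | mvapich2 IMB-RMA $bench mpirun one_core"
      done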

      Actual results:

      [mpiexec@rdma-dev-26.rdma.lab.eng.rdu2.redhat.com] HYDU_sock_write (utils/sock/sock.c:294): write error (Bad file descriptor)
      [mpiexec@rdma-dev-26.rdma.lab.eng.rdu2.redhat.com] HYD_pmcd_pmiserv_send_signal (pm/pmiserv/pmiserv_cb.c:177): unable to write data to proxy
      [mpiexec@rdma-dev-26.rdma.lab.eng.rdu2.redhat.com] ui_cmd_cb (pm/pmiserv/pmiserv_pmci.c:79): unable to send signal downstream
      [mpiexec@rdma-dev-26.rdma.lab.eng.rdu2.redhat.com] HYDT_dmxu_poll_wait_for_event (tools/demux/demux_poll.c:76): callback returned error status
      [mpiexec@rdma-dev-26.rdma.lab.eng.rdu2.redhat.com] HYD_pmci_wait_for_completion (pm/pmiserv/pmiserv_pmci.c:198): error waiting for event
      [mpiexec@rdma-dev-26.rdma.lab.eng.rdu2.redhat.com] main (ui/mpich/mpiexec.c:340): process manager error waiting for completion
      + [22-05-10 10:08:20] __MPI_check_result 255 mpitests-mvapich2 IMB-RMA Get_accumulate mpirun /root/hfile_one_core

      Expected results:

      Normal execution of benchmarks with stats output

      Additional info:
