RHEL / RHEL-38362

[RHEL10] Creating an MD array fails with two NVMe disks

    • Bug
    • Resolution: Duplicate
    • Priority: Critical
    • rhel-10.0
    • rhel-10.0.beta
    • mdadm
    • rhel-sst-logical-storage
    • ssg_filesystems_storage_and_HA
    • QE ack, Dev ack

      What were you trying to do that didn't work?

      Please provide the package NVR for which the bug is seen:

      How reproducible:

      Steps to reproduce

      1.  
      2.  
      3.  

      Expected results

      Actual results

       

       

      [2024-05-23 01:02:31]  INFO: free disk ['/dev/nvme0n1', '/dev/nvme3n1', '/dev/nvme2n1', '/dev/nvme1n1'] 
      INFO: [2024-05-23 01:02:31] Running: 'lsblk;pvs'...
      NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
      sda                            8:0    0 223.6G  0 disk 
      ├─sda1                       8:1    0   600M  0 part /boot/efi
      ├─sda2                       8:2    0     1G  0 part /boot
      └─sda3                       8:3    0   222G  0 part 
        ├─rhel_storageqe--101-root 253:0    0    70G  0 lvm  /
        ├─rhel_storageqe--101-swap 253:1    0  15.5G  0 lvm  [SWAP]
        └─rhel_storageqe--101-home 253:2    0 136.5G  0 lvm  /home
      sdb                            8:16   0 223.6G  0 disk 
      sdc                            8:32   0 223.6G  0 disk 
      sdd                            8:48   0 223.6G  0 disk 
      sde                            8:64   0 223.6G  0 disk 
      sdf                            8:80   0 223.6G  0 disk 
      nvme1n1                      259:0    0   1.8T  0 disk 
      nvme3n1                      259:1    0 931.5G  0 disk 
      nvme0n1                      259:3    0   1.5T  0 disk 
      nvme2n1                      259:5    0   1.5T  0 disk 
        PV         VG                 Fmt  Attr PSize   PFree
        /dev/sda3  rhel_storageqe-101 lvm2 a--  221.98g    0
      [2024-05-23 01:02:31]  INFO: setup the storage raid  
      [2024-05-23 01:02:31]  INFO: free disk is ['/dev/nvme0n1', '/dev/nvme3n1'] 
      Traceback (most recent call last):
        File "/home/cryptsetup_libblockdev/luks_main.py", line 140, in <module>
          target = obj.make_test_target(i)
                   ^^^^^^^^^^^^^^^^^^^^^^^
        File "/home/cryptsetup_libblockdev/luks.py", line 1543, in make_test_target
          dev = self.add_raid(self.raid_name, self.raid_mem, self.raid_level, size="3G")
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        File "/home/cryptsetup_libblockdev/luks.py", line 1234, in add_raid
          succ = bd.md_create(raid_name, raid_level,raid_mem, **kwrags)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        File "/usr/lib64/python3.12/site-packages/gi/overrides/BlockDev.py", line 1023, in md_create
          return _md_create(device_name, level, disks, spares, version, bitmap, chunk_size, extra)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      gi.repository.GLib.GError: g-bd-utils-exec-error-quark: Process killed with a signal (0)
      ...finished running python3, exit code=1
      [root@storageqe-101 cryptsetup_libblockdev]# uname -a
      Linux storageqe-101.rhts.eng.pek2.redhat.com 6.9.0-6.el10.x86_64 #1 SMP PREEMPT_DYNAMIC Tue May 14 13:50:59 EDT 2024 x86_64 GNU/Linux
      [root@storageqe-101 cryptsetup_libblockdev]# rpm -qa |grep libblockdev
      libblockdev-utils-3.1.0-3.el10.x86_64
      libblockdev-3.1.0-3.el10.x86_64
      libblockdev-fs-3.1.0-3.el10.x86_64
      libblockdev-loop-3.1.0-3.el10.x86_64
      libblockdev-nvme-3.1.0-3.el10.x86_64
      libblockdev-mdraid-3.1.0-3.el10.x86_64
      libblockdev-part-3.1.0-3.el10.x86_64
      libblockdev-swap-3.1.0-3.el10.x86_64
      libblockdev-crypto-3.1.0-3.el10.x86_64
      libblockdev-lvm-3.1.0-3.el10.x86_64
      libblockdev-lvm-dbus-3.1.0-3.el10.x86_64
      libblockdev-nvdimm-3.1.0-3.el10.x86_64
      libblockdev-mpath-3.1.0-3.el10.x86_64
      libblockdev-dm-3.1.0-3.el10.x86_64
      libblockdev-plugins-all-3.1.0-3.el10.x86_64
      libblockdev-tools-3.1.0-3.el10.x86_64
      [root@storageqe-101 cryptsetup_libblockdev]# 
        
      May 23 01:02:31 storageqe-101 audit[35678]: ANOM_ABEND auid=0 uid=0 gid=0 ses=2 pid=35678 comm="mdadm" exe="/usr/sbin/mdadm" sig=6 res=1
      May 23 01:02:31 storageqe-101 audit: BPF prog-id=201 op=LOAD
      May 23 01:02:31 storageqe-101 audit: BPF prog-id=202 op=LOAD
      May 23 01:02:31 storageqe-101 audit: BPF prog-id=203 op=LOAD
      May 23 01:02:31 storageqe-101 systemd[1]: Started systemd-coredump@1-35679-0.service - Process Core Dump (PID 35679/UID 0).
      May 23 01:02:31 storageqe-101 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=systemd-coredump@1-35679-0 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
      May 23 01:02:31 storageqe-101 systemd-coredump[35680]: Process 35678 (mdadm) of user 0 dumped core.
      Module libcap.so.2 from rpm libcap-2.69-3.el10.x86_64
      Module libudev.so.1 from rpm systemd-255.3-1.el10.x86_64
      Module mdadm from rpm mdadm-4.3-2.el10.x86_64
      Stack trace of thread 35678:
      #0  0x00007f45f44b827c __pthread_kill_implementation (libc.so.6 + 0x9527c)
      #1  0x00007f45f44633d6 raise (libc.so.6 + 0x403d6)
      #2  0x00007f45f444b8fa abort (libc.so.6 + 0x288fa)
      #3  0x00007f45f444c956 __libc_message_impl.cold (libc.so.6 + 0x29956)
      #4  0x00007f45f453ef7b __fortify_fail (libc.so.6 + 0x11bf7b)
      #5  0x00007f45f453e906 __chk_fail (libc.so.6 + 0x11b906)
      #6  0x00007f45f44a41b4 __vsprintf_internal (libc.so.6 + 0x811b4)
      #7  0x00007f45f45400b9 __sprintf_chk (libc.so.6 + 0x11d0b9)
      #8  0x000055607bbe7aef devt_to_devpath (mdadm + 0x6daef)
      #9  0x000055607bbca908 find_disk_attached_hba.lto_priv.0 (mdadm + 0x50908)
      #10 0x000055607bbd2f8e find_intel_hba_capability.lto_priv.0 (mdadm + 0x58f8e)
      #11 0x000055607bbd340b load_super_imsm.lto_priv.0 (mdadm + 0x5940b)
      #12 0x000055607bb8d727 guess_super_type (mdadm + 0x13727)
      #13 0x000055607bb9c04d Create (mdadm + 0x2204d)
      #14 0x000055607bb82494 main (mdadm + 0x8494)
      #15 0x00007f45f444d30e __libc_start_call_main (libc.so.6 + 0x2a30e)
      #16 0x00007f45f444d3c9 __libc_start_main@@GLIBC_2.34 (libc.so.6 + 0x2a3c9)
      #17 0x000055607bb84cd5 _start (mdadm + 0xacd5)
      ELF object binary architecture: AMD x86-64
      May 23 01:02:31 storageqe-101 systemd[1]: systemd-coredump@1-35679-0.service: Deactivated successfully.
      May 23 01:02:31 storageqe-101 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=systemd-coredump@1-35679-0 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
      May 23 01:02:31 storageqe-101 audit: BPF prog-id=203 op=UNLOAD
      May 23 01:02:31 storageqe-101 audit: BPF prog-id=202 op=UNLOAD
      May 23 01:02:31 storageqe-101 audit: BPF prog-id=201 op=UNLOAD
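      The stack trace points at devt_to_devpath formatting a device path through a fortified sprintf: glibc's __sprintf_chk detected a write past the end of a fixed-size buffer and went __chk_fail → abort, which is the SIGABRT (sig=6) in the ANOM_ABEND record above. A minimal sketch of that failure class and the bounded alternative follows; the buffer size, path string, and helper name are illustrative, not mdadm's actual code:

      ```c
      #include <assert.h>
      #include <stdio.h>
      #include <string.h>

      /* Bounded path formatting: returns the length the full string would
       * need, so the caller can detect truncation instead of overflowing. */
      static int format_devpath(char *dst, size_t dstlen, const char *src) {
          return snprintf(dst, dstlen, "%s", src);
      }

      int main(void) {
          /* Hypothetical sysfs-style device path; real NVMe paths get long. */
          const char *devpath = "/sys/dev/block/259:0/device/../../nvme/nvme0";
          char buf[32]; /* deliberately too small to illustrate the hazard */

          /* sprintf(buf, "%s", devpath) would write past buf here; compiled
           * with -D_FORTIFY_SOURCE, glibc routes it through __sprintf_chk,
           * which calls __chk_fail and abort() -- matching frames #4-#7 in
           * the coredump above. snprintf bounds the write instead: */
          int needed = format_devpath(buf, sizeof(buf), devpath);

          assert(needed == (int)strlen(devpath)); /* full length required */
          assert(needed >= (int)sizeof(buf));     /* truncation occurred */
          assert(buf[sizeof(buf) - 1] == '\0');   /* still NUL-terminated */
          printf("needed=%d have=%zu\n", needed, sizeof(buf));
          return 0;
      }
      ```

      An snprintf return value greater than or equal to the buffer size is how a caller detects that the destination was too small; with plain sprintf under -D_FORTIFY_SOURCE=2 the same oversized path aborts the process instead, as happened to mdadm here.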
       

        xni@redhat.com Xiao Ni
        guazhang@redhat.com Guangwu Zhang
        Nigel Croxon
        Fan Fan
        Votes: 0
        Watchers: 6