RHEL-65106: [rhel10] mdadm option --fail not valid in grow mode

    • Type: Bug
    • Resolution: Not a Bug
    • Priority: Undefined
    • Version: rhel-10.0
    • Component: libblockdev
    • Team: rhel-sst-storage-management
    • Sub-system: ssg_filesystems_storage_and_HA


[2024-10-28 22:30:55]  INFO: setup the storage raid1
[2024-10-28 22:30:55]  INFO: free disk is ['/dev/loop0', '/dev/loop1']
[2024-10-28 22:30:55]  INFO: raid parameters: raid_level raid1, raid_mem ['/dev/loop0', '/dev/loop1'], version None, bitmap internal, chunk_size 0,
    extra [BDExtraArg opt: --write-mostly, val: (empty); BDExtraArg opt: --bitmap-chunk, val: 128M],
    kwargs {'size': '3G'}
Sleeping, wait MD resync
Sleeping, wait MD resync
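
For context, the harness appears to create this array through the libblockdev GI bindings. A minimal standalone sketch of that setup, assuming libblockdev 3.x bindings where md_create takes the bitmap type as a string:

    import gi
    gi.require_version("GLib", "2.0")
    gi.require_version("BlockDev", "3.0")
    from gi.repository import BlockDev

    # Load only the mdraid plugin
    BlockDev.ensure_init([BlockDev.PluginSpec.new(BlockDev.Plugin.MDRAID, None)])

    # Extra mdadm options, as seen in the log above
    extra = [BlockDev.ExtraArg.new("--write-mostly", ""),
             BlockDev.ExtraArg.new("--bitmap-chunk", "128M")]

    # RAID1 over the two loop devices, internal bitmap, default chunk size
    BlockDev.md_create("TESTRAID", "raid1",
                       ["/dev/loop0", "/dev/loop1"],
                       0,           # spares
                       None,        # metadata version (mdadm default)
                       "internal",  # bitmap type
                       0,           # chunk size (default)
                       extra)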
       

       

      >>> d1 = "/dev/%s" % o.add_loop()
      /home/bd.gc_hfuw5-loop_test
      >>> d2 = "/dev/%s" % o.add_loop()
      /home/bd.2col18w5-loop_test
      >>> o.bd.md_add("TESTRAID", d1, 3, None)
      True
      >>> o.bd.md_add("TESTRAID", d1, 3, [o.bd.ExtraArg.new("--fail", o.raid_mem[1]), o.bd.ExtraArg.new("--remove", o.raid_mem[1])])
      Traceback (most recent call last):
        File "<stdin>", line 1, in <module>
        File "/usr/lib64/python3.12/site-packages/gi/overrides/BlockDev.py", line 1031, in md_add
          return _md_add(raid_spec, device, raid_devs, extra)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      gi.repository.GLib.GError: g-bd-utils-exec-error-quark: Process reported exit code 2: mdadm: :option --fail not valid in grow mode
       (0)
      >>> 
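
The failure is presumably because md_add with a non-zero raid_devs count runs mdadm in grow mode (roughly mdadm --grow --raid-devices=3 --add ...), where manage-mode options such as --fail passed via extra args are rejected. Failing and hot-removing a member has its own call in the mdraid plugin; a sketch, assuming the bd_md_remove signature (raid_spec, device, fail, extra):

    # Set the member faulty and hot-remove it in one call;
    # fail=True issues --fail before --remove
    o.bd.md_remove("TESTRAID", o.raid_mem[1], True, None)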
      
[root@storageqe-65 ]# mdadm /dev/md/TESTRAID --add /dev/nvme0n1p1 --fail /dev/loop0 --remove /dev/loop0
mdadm: added /dev/nvme0n1p1
mdadm: set /dev/loop0 faulty in /dev/md/TESTRAID
mdadm: hot removed /dev/loop0 from /dev/md/TESTRAID
[root@storageqe-65 00-raid0]# mdadm -D /dev/md/TESTRAID
/dev/md/TESTRAID:
           Version : 1.2
     Creation Time : Mon Oct 28 22:30:55 2024
        Raid Level : raid1
        Array Size : 3145728 (3.00 GiB 3.22 GB)
     Used Dev Size : 3145728 (3.00 GiB 3.22 GB)
      Raid Devices : 3
     Total Devices : 4
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Mon Oct 28 22:58:12 2024
             State : clean, degraded, recovering
    Active Devices : 2
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 2

Consistency Policy : bitmap

    Rebuild Status : 57% complete

              Name : TESTRAID
              UUID : cdf991ff:2bda37da:6cf16808:e09298b0
            Events : 58

    Number   Major   Minor   RaidDevice State
       2       7        2        0      spare rebuilding   /dev/loop2
       1       7        1        1      active sync   /dev/loop1
       3       7        3        2      active sync   /dev/loop3

       4     259        5        -      spare   /dev/nvme0n1p1
[root@storageqe-65 ]#
[root@storageqe-65 ]# mdadm -D /dev/md/TESTRAID
/dev/md/TESTRAID:
           Version : 1.2
     Creation Time : Mon Oct 28 22:30:55 2024
        Raid Level : raid1
        Array Size : 3145728 (3.00 GiB 3.22 GB)
     Used Dev Size : 3145728 (3.00 GiB 3.22 GB)
      Raid Devices : 3
     Total Devices : 4
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Mon Oct 28 22:58:19 2024
             State : clean
    Active Devices : 3
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 1

Consistency Policy : bitmap

              Name : TESTRAID
              UUID : cdf991ff:2bda37da:6cf16808:e09298b0
            Events : 68

    Number   Major   Minor   RaidDevice State
       2       7        2        0      active sync   /dev/loop2
       1       7        1        1      active sync   /dev/loop1
       3       7        3        2      active sync   /dev/loop3

       4     259        5        -      spare   /dev/nvme0n1p1
      
        

It looks like mdadm itself handles this fine when run directly:

 mdadm /dev/md/TESTRAID --add /dev/nvme0n1p1 --fail /dev/loop0 --remove /dev/loop0

but the libblockdev MD API hits the error above, so please have a look.
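
If the one-shot CLI behaviour is needed through libblockdev, splitting it into a grow-mode add and a manage-mode fail/remove should be equivalent (a sketch, reusing the session objects above):

    # Grow the array to 3 members with the new device...
    o.bd.md_add("TESTRAID", "/dev/nvme0n1p1", 3, None)
    # ...then fail and hot-remove the old member in manage mode
    o.bd.md_remove("TESTRAID", "/dev/loop0", True, None)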

Package versions:

mdadm-4.3-3.el10.x86_64
libblockdev-mdraid-3.1.0-8.el10.x86_64
libblockdev-3.1.0-8.el10.x86_64
kernel-6.11.0-25.el10.x86_64

       

       

Assignee: Vojtěch Trefný (vtrefny@redhat.com)
Reporter: Guangwu Zhang (guazhang@redhat.com)