RHEL-40724

[rhel9.7] While growing an md RAID0 array with a new device, the array is converted to RAID4 in a clean,degraded state.

    • Issue Type: Bug
    • Resolution: Unresolved
    • Priority: Normal
    • Affects Version/s: rhel-8.8.0, rhel-9.2.0
    • Component/s: mdadm
    • Severity: Moderate
    • Team: rhel-sst-logical-storage
    • Sub-Systems Group: ssg_filesystems_storage_and_HA
    • Product: Red Hat Enterprise Linux
    • Architecture: All

      What were you trying to do that didn't work?

While trying to grow an md RAID0 array with a new device, the md array was converted to RAID4 in a clean,degraded state.

Please provide the package NVR for which the bug is seen:

      [root@localhost ~]# cat /etc/redhat-release 
      Red Hat Enterprise Linux release 9.2 (Plow)

      [root@localhost ~]# uname -r
      5.14.0-427.16.1.el9_4.x86_64

      [root@localhost ~]# rpm -qa |grep -i mdadm
      mdadm-4.2-8.el9.x86_64

Note: the issue is reproducible on RHEL 8.8 as well.

      How reproducible:

      Steps to reproduce

1. Created an md RAID0 array with 3 disks.

      [root@localhost ~]# lsblk
      NAME          MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
      sda             8:0    0 102.4M  0 disk 
      sdb             8:16   0 102.4M  0 disk 
      sdc             8:32   0 102.4M  0 disk 
      sdd             8:48   0 102.4M  0 disk 
      sde             8:64   0 102.4M  0 disk 
      sr0            11:0    1  1024M  0 rom  
      vda           252:0    0    10G  0 disk 
      ├─vda1        252:1    0     1G  0 part /boot
      └─vda2        252:2    0     9G  0 part 
        ├─rhel-root 253:0    0     8G  0 lvm  /
        └─rhel-swap 253:1    0     1G  0 lvm  [SWAP]

      [root@localhost ~]# mdadm --create /dev/md101 --level=raid0 --raid-devices=3 /dev/sda /dev/sdb /dev/sdc
      mdadm: Defaulting to version 1.2 metadata
      mdadm: array /dev/md101 started.

[root@localhost ~]# cat /proc/mdstat 
Personalities : [raid0] 
md101 : active raid0 sdc[2] sdb[1] sda[0]
      307200 blocks super 1.2 512k chunks

unused devices: <none>
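
For anyone reproducing this without four spare disks, the same layout can be built on loop devices. A minimal sketch, assuming sparse backing files are acceptable (the /tmp/md-disk* names and the 110M size are illustrative, not from the original report):

[root@localhost ~]# for i in 1 2 3 4; do truncate -s 110M /tmp/md-disk$i; losetup -f --show /tmp/md-disk$i; done

losetup -f --show prints the loop device allocated for each backing file; those devices can be used in place of /dev/sda through /dev/sdd throughout the reproduction steps.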

2. The md array was created with the above 3 disks in the clean state.

      [root@localhost ~]# mdadm -D /dev/md101 
      /dev/md101:
                 Version : 1.2
           Creation Time : Tue Jun 11 15:11:09 2024
              Raid Level : raid0
              Array Size : 307200 (300.00 MiB 314.57 MB)
            Raid Devices : 3
           Total Devices : 3
             Persistence : Superblock is persistent

             Update Time : Tue Jun 11 15:11:09 2024
                   State : clean 
          Active Devices : 3
         Working Devices : 3
          Failed Devices : 0
           Spare Devices : 0

                  Layout : unknown
              Chunk Size : 512K

      Consistency Policy : none

                    Name : localhost.localdomain:101  (local to host localhost.localdomain)
                    UUID : 060f4768:8dfac2fc:e13d5108:f9cc2f0d
                  Events : 0

          Number   Major   Minor   RaidDevice State
             0       8        0        0      active sync   /dev/sda
             1       8       16        1      active sync   /dev/sdb
             2       8       32        2      active sync   /dev/sdc

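As a sanity check on the numbers: with striping and no parity, 3 devices × 102400 KiB usable per device = 307200 KiB (300 MiB), which matches the Array Size reported above.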

3. Added the empty disk /dev/sdd to grow the RAID0 array using the command below:

      [root@localhost ~]# mdadm --grow /dev/md101 --raid-devices=4 --add /dev/sdd
mdadm: level of /dev/md101 changed to raid4  -----> RAID level converted to RAID4
      mdadm: added /dev/sdd
      mdadm: Need to backup 6144K of critical section..
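
For background: the RAID0 personality in md has no reshape support, so mdadm moves the array to RAID4 (striping plus one parity slot) to reuse the RAID4/5 reshape code, and is expected to return it to RAID0 once the reshape completes. If the array is still RAID4 afterwards, one way to recover manually (a sketch; the second command is the same one used in step 5 below):

[root@localhost ~]# mdadm --wait /dev/md101             # block until the reshape finishes
[root@localhost ~]# mdadm --grow --level=0 /dev/md101   # drop the intermediate RAID4 level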

      [root@localhost ~]# mdadm -D /dev/md101 
      /dev/md101:
                 Version : 1.2
           Creation Time : Tue Jun 11 15:11:09 2024
              Raid Level : raid4
              Array Size : 409600 (400.00 MiB 419.43 MB)
           Used Dev Size : 102400 (100.00 MiB 104.86 MB)
            Raid Devices : 5
           Total Devices : 4
             Persistence : Superblock is persistent

             Update Time : Tue Jun 11 15:11:51 2024
                   State : clean, degraded  ====================> clean,degraded state
          Active Devices : 4
         Working Devices : 4
          Failed Devices : 0
           Spare Devices : 0

              Chunk Size : 512K

      Consistency Policy : resync

                    Name : localhost.localdomain:101  (local to host localhost.localdomain)
                    UUID : 060f4768:8dfac2fc:e13d5108:f9cc2f0d
                  Events : 23

          Number   Major   Minor   RaidDevice State
             0       8        0        0      active sync   /dev/sda
             1       8       16        1      active sync   /dev/sdb
             2       8       32        2      active sync   /dev/sdc
             4       8       48        3      active sync   /dev/sdd
-       0        0        4      removed  =======> array shows the device slot in removed state
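
The "removed" slot is the RAID4 parity device, which is never populated during this conversion; that is what makes the array report degraded. The transient state can also be read from sysfs (a sketch using the standard md sysfs attributes; the output shown is what would be expected in this state):

[root@localhost ~]# cat /sys/block/md101/md/level
raid4
[root@localhost ~]# cat /sys/block/md101/md/degraded
1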

4. But /proc/mdstat shows the array in an active state.

      [root@localhost ~]# cat /proc/mdstat 
      Personalities : [raid0] [raid6] [raid5] [raid4] 
      md101 : active raid4 sdd[4] sdc[2] sdb[1] sda[0]
            409600 blocks super 1.2 level 4, 512k chunk, algorithm 5 [5/4] [UUUU_]
            
      unused devices: <none>
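
Reading the status line: [5/4] means 5 device slots with 4 present, and the "_" in [UUUU_] is the empty parity slot, so the array is "active" (all data devices up) while mdadm -D reports it degraded. The size is also consistent: 4 data devices × 102400 blocks = 409600 blocks.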


5. To fix this issue, manually converted the array from RAID4 back to RAID0 as below:

      [root@localhost ~]# mdadm --grow --level=0 /dev/md101 --raid-devices=4
      mdadm: level of /dev/md101 changed to raid0

      [root@localhost ~]# mdadm -D /dev/md101 
      /dev/md101:
                 Version : 1.2
           Creation Time : Tue Jun 11 15:11:09 2024
              Raid Level : raid0
              Array Size : 409600 (400.00 MiB 419.43 MB)
            Raid Devices : 4
           Total Devices : 4
             Persistence : Superblock is persistent

             Update Time : Tue Jun 11 15:13:17 2024
                   State : clean =========================> clean state
          Active Devices : 4
         Working Devices : 4
          Failed Devices : 0
           Spare Devices : 0

              Chunk Size : 512K

      Consistency Policy : none

                    Name : localhost.localdomain:101  (local to host localhost.localdomain)
                    UUID : 060f4768:8dfac2fc:e13d5108:f9cc2f0d
                  Events : 24

          Number   Major   Minor   RaidDevice State
             0       8        0        0      active sync   /dev/sda
             1       8       16        1      active sync   /dev/sdb
             2       8       32        2      active sync   /dev/sdc
             4       8       48        3      active sync   /dev/sdd
      [root@localhost ~]
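
Since the member count changed, it may also be worth refreshing any stored array definition afterwards (optional, and only relevant if /etc/mdadm.conf is in use; the output below is reconstructed from the details above):

[root@localhost ~]# mdadm --detail --scan
ARRAY /dev/md101 metadata=1.2 name=localhost.localdomain:101 UUID=060f4768:8dfac2fc:e13d5108:f9cc2f0d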

Expected results:

md RAID0 should grow without manual intervention. Why does the array need to be manually converted back to RAID0?
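
One plausible explanation, offered as an assumption for the maintainers to confirm: mdadm normally leaves a background process (the mdadm-grow-continue mechanism) to monitor the reshape and restore the requested level when it finishes; if that process exits early, the array stays at the intermediate RAID4 level. While the grow is in progress its presence can be checked with:

[root@localhost ~]# ps -ef | grep '[m]dadm'    # the backgrounded mdadm should be visible during the reshape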

Actual results:

The md RAID0 array shows its status as clean,degraded, but cat /proc/mdstat shows it as active.

Assignee: Xiao Ni (xni@redhat.com)
Reporter: Pratapsingh Mahale (rhn-support-pmahale)
Contributors: Nigel Croxon, Fan Fan