RHEL-120978

should lvm suggest a no-op refresh post device failure when user vgreduced and vgextended


    • Type: Bug
    • Resolution: Unresolved
    • rhel-10.2
    • lvm2
    • rhel-storage-lvm
    • x86_64

      To the best of my knowledge, a raid refresh operation is really only useful for transient failures. The device fails, LVM marks the raid as needing a (r)efresh in the 9th 'volume health' position of its attributes, and if the device ends up coming back, a refresh will restore the metadata and it's once again a healthy raid (assuming no resync is needed, etc.).
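      For reference, the transient-failure happy path looks roughly like this sketch (reusing the LV/VG names from this report purely for illustration):

      # The PV drops and later comes back online on its own.
      # lvs flags the raid with 'r' in the 9th attr position (also reported via lv_health_status):
      [root@virt-484 ~]# lvs -a -o name,attr,lv_health_status raid_sanity
      # Once the device has returned, a plain refresh reloads the LV in the kernel
      # and the raid is healthy again (modulo any needed resync):
      [root@virt-484 ~]# lvchange --refresh raid_sanity/degraded_upconvert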

      However, if a user sees a failure and chooses to remove that device (vgreduce --removemissing) before it comes back online, and then uses vgck --updatemetadata; vgextend $dev to bring that device back into the VG for use again, a refresh will do nothing to help that raid volume. Yet that's exactly what LVM suggests. In this scenario it would take a repair or an upconvert operation to bring that raid volume back to its "original" state. Shouldn't we suggest one of those operations instead of the refresh?
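      Roughly, the sequence that gets a user into this state looks like the following sketch (using /dev/sdf1 to stand in for whichever PV failed and later returned):

      # The device fails and the user decides not to wait for it to come back:
      [root@virt-484 ~]# vgreduce --removemissing raid_sanity   # may need --force if LVs still have extents on the missing PV
      [root@virt-484 ~]# vgck --updatemetadata raid_sanity      # clear the stale MISSING flag from the VG metadata
      # The device later comes back online and is added as a brand new, empty PV:
      [root@virt-484 ~]# vgextend raid_sanity /dev/sdf1
      # The raid's failed leg is now permanently gone, so a refresh has nothing to restore.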

      [root@virt-484 ~]# lvs -a -o +devices
        WARNING: RaidLV raid_sanity/degraded_upconvert needs to be refreshed!  See character 'r' at position 9 in the RaidLV's attributes and its SubLV(s).
        LV                            VG            Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices                                                      
        degraded_upconvert            raid_sanity   rwi-a-r-r- 800.00m                                    100.00           degraded_upconvert_rimage_0(0),degraded_upconvert_rimage_1(0)
        [degraded_upconvert_rimage_0] raid_sanity   iwi-aor--- 800.00m                                                     /dev/sda1(1)                                                 
        [degraded_upconvert_rimage_1] raid_sanity   vwi-aor-r- 800.00m                                                                                                                  
        [degraded_upconvert_rmeta_0]  raid_sanity   ewi-aor---   4.00m                                                     /dev/sda1(0)                                                 
        [degraded_upconvert_rmeta_1]  raid_sanity   ewi-aor-r-   4.00m                                                                                                                  
      
      # This is going to pass, yet will do nothing to remedy the warning and suggestion that LVM is providing.
      [root@virt-484 ~]# lvchange --refresh raid_sanity/degraded_upconvert
      [root@virt-484 ~]# echo $?
      0
      
      [root@virt-484 ~]# lvs -a -o +devices
        WARNING: RaidLV raid_sanity/degraded_upconvert needs to be refreshed!  See character 'r' at position 9 in the RaidLV's attributes and its SubLV(s).
        LV                            VG            Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices                                                      
        degraded_upconvert            raid_sanity   rwi-a-r-r- 800.00m                                    100.00           degraded_upconvert_rimage_0(0),degraded_upconvert_rimage_1(0)
        [degraded_upconvert_rimage_0] raid_sanity   iwi-aor--- 800.00m                                                     /dev/sda1(1)                                                 
        [degraded_upconvert_rimage_1] raid_sanity   vwi-aor-r- 800.00m                                                                                                                  
        [degraded_upconvert_rmeta_0]  raid_sanity   ewi-aor---   4.00m                                                     /dev/sda1(0)                                                 
        [degraded_upconvert_rmeta_1]  raid_sanity   ewi-aor-r-   4.00m                                                                                                                  
      
      # A device that's back online and has been extended back into the VG can be put back into the raid with a "repair", but that's a replace operation, not just a simple refresh.
      [root@virt-484 ~]# lvconvert --yes --repair raid_sanity/degraded_upconvert /dev/sdf1
        Faulty devices in raid_sanity/degraded_upconvert successfully replaced.
      
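      For completeness, a sketch of the upconvert route mentioned above. It assumes the degraded raid had first been reduced to a single healthy image (e.g. linear on /dev/sda1) rather than repaired; the exact commands depend on the state the user left the LV in:

      # Rebuild the second leg on the returned device by converting back up to raid1:
      [root@virt-484 ~]# lvconvert --yes --type raid1 -m 1 raid_sanity/degraded_upconvert /dev/sdf1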

              Assignee: lvm-team
              Reporter: Corey Marthaler (cmarthal@redhat.com)
              QA Contact: Cluster QE
              Votes: 0
              Watchers: 7