RHEL-134418

Customer needs LVM raid5 LV conversion on full PVs


    • Type: Task
    • Resolution: Unresolved
    • Priority: Normal
    • Fix Version: rhel-9.7
    • Component: lvm2
    • Team: rhel-storage-lvm
    • Sprint: crs - Sprint 9, crs - Sprint 10

      case 04326419:
      Customer has a VG with 3 NVMes and one raid5 LV on it.
      They want to add another 4 NVMes and convert the existing
      raid5 LV from 3 to 7 stripes total.
      That conversion needs one free extent per PV, which is not available.
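
      As a quick way to confirm that constraint, the free space per PV can
      be listed with pvs (the VG name "vg" is hypothetical):

        pvs -o pv_name,pv_size,pv_free --select vg_name=vg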

      Proposed workaround:
      Add the 4 NVMes, which need to have at least the capacity to allocate a new raid5 LV with 4 stripes total of the same size as the existing one.  A temporary raid1 would need to be configured on top of the 2 raid5 LVs to resync the data from the existing, 3-legged raid5 LV across to the new 4-legged one (that resync is what lvm can't do natively).  We'd need to provide a programmed workaround for that constraint in LVM to make this happen.
      For that workaround, either a dmsetup based approach (no need to close the existing LV) or an "mdadm --build $MD -n2 -l1 ..." one (requiring a brief close of the existing LV and an open of the resulting MD device instead) is possible.  The latter, simpler one presumes the existing raid5 LV can be closed briefly; using "mdadm --build ..." avoids MD metadata on the devices altogether, which would otherwise corrupt part of the raid5 data.
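
      Expanded from the "$MD -n2 -l1" shorthand above, that invocation
      would look roughly as follows (MD node and LV names hypothetical;
      --build assembles the raid1 without writing MD superblocks to either
      leg, and the old LV is listed first so it is the resync source):

        mdadm --build /dev/md/tmpsync --level=1 --raid-devices=2 \
              /dev/vg/lv_old /dev/vg/lv_new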

      Steps of the MD utilizing transition (sketched as commands below the list) would be to

      a) create the new 4-legged raid5 (changing stripe size if required) on the 4 new PVs with a little larger LV size than the existing one (it is going to end up a little larger anyway due to stripe rounding differences)

      b) close the existing raid5 LV and set up the temporary MD raid1 on top of the two, ensuring the existing 3-legged one is the first leg so the new raid5 LV is resynced from it
      (the MD device can be opened directly afterwards if access is needed)

      c) optionally close and tear down the raid1 after it has finished synchronization

      d) validate the new one's contents

      e) remove the old one

      f) optionally rename the new one to the old one's name if required

      g) lvconvert to 7 stripes total.

      Obviously, have an actual, validated backup before starting.
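
      A minimal command-level sketch of steps a) through g), assuming
      (hypothetically) a VG named vg, the existing LV named lv_old, the new
      LV named lv_new, a placeholder size of 1T, and the 4 new PVs
      /dev/nvme3n1../dev/nvme6n1 already added to the VG via vgextend:

        # a) create the new 4-legged raid5 (3 data stripes + 1 parity),
        #    slightly larger than the old LV; -L and --stripesize are
        #    placeholders to adjust
        lvcreate --type raid5 --stripes 3 --stripesize 64k -L 1T -n lv_new \
                 vg /dev/nvme3n1 /dev/nvme4n1 /dev/nvme5n1 /dev/nvme6n1

        # b) with lv_old closed (unmounted, no opens), build the
        #    metadata-less raid1; lv_old is listed first so the resync
        #    copies its contents onto lv_new
        mdadm --build /dev/md/tmpsync --level=1 --raid-devices=2 \
              /dev/vg/lv_old /dev/vg/lv_new

        # c) wait for the resync to finish, then tear the raid1 down
        mdadm --wait /dev/md/tmpsync
        mdadm --stop /dev/md/tmpsync

        # d) validate the new LV's contents, e.g. by comparing both LVs
        #    over the old LV's size (lv_new is a bit larger)
        cmp -n "$(blockdev --getsize64 /dev/vg/lv_old)" \
            /dev/vg/lv_old /dev/vg/lv_new

        # e) remove the old LV, freeing all extents on the 3 original PVs
        lvremove vg/lv_old

        # f) optionally take over the old name
        lvrename vg lv_new lv_old

        # g) reshape to 7 stripes total (6 data + 1 parity); the freed
        #    original PVs now provide the per-PV free extents the reshape
        #    needs
        lvconvert --stripes 6 vg/lv_old

      If access to the data is needed during the resync, it has to go
      through the temporary MD device (/dev/md/tmpsync here), not through
      either LV directly.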

              Assignee: Heinz Mauelshagen (rhn-engineering-heinzm)
              Reporter: Heinz Mauelshagen (rhn-engineering-heinzm)
              Rick Greene