OpenShift Container Platform (OCP) Strategy
OCPSTRAT-728

LVM storage recover from existing disks/VG



      Feature Overview (aka. Goal Summary)  

      When a user re-installs LVMS, pointing it at disks that were used by a previous LVMS installation, at least the PVs should be re-created (best case, also the PVCs). Currently, LVMS considers the disks to be already in use and won't touch them, which makes recovery from a re-install without data loss very hard, if not impossible.

      Goals (aka. expected user outcomes)

      The goal is to be able to recover from a re-install (of LVMS or the whole OCP cluster) relatively easily. If LVMS is pointed at the same disks again (i.e. by re-creating the LVMCluster CR), it should reconcile against the existing disks and recover/re-create the PVs.
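      For illustration, a minimal sketch (Go, using the dynamic client) of re-creating such an LVMCluster CR pointing at the same disks. The group/version lvm.topolvm.io/v1alpha1, the openshift-storage namespace, the spec.storage.deviceClasses / deviceSelector.paths field layout, and the device paths are assumptions about the current LVMS API and may differ between releases:

      package recovery

      import (
          "context"

          metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
          "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
          "k8s.io/apimachinery/pkg/runtime/schema"
          "k8s.io/client-go/dynamic"
          "k8s.io/client-go/rest"
      )

      // recreateLVMCluster re-creates an LVMCluster CR that points at the disks a
      // previous installation managed, so the operator can reconcile against them.
      // GVR, namespace and field paths are assumptions, not a confirmed API.
      func recreateLVMCluster(ctx context.Context, cfg *rest.Config) error {
          gvr := schema.GroupVersionResource{
              Group: "lvm.topolvm.io", Version: "v1alpha1", Resource: "lvmclusters",
          }
          obj := &unstructured.Unstructured{Object: map[string]interface{}{
              "apiVersion": "lvm.topolvm.io/v1alpha1",
              "kind":       "LVMCluster",
              "metadata": map[string]interface{}{
                  "name": "my-lvmcluster", "namespace": "openshift-storage",
              },
              "spec": map[string]interface{}{
                  "storage": map[string]interface{}{
                      "deviceClasses": []interface{}{map[string]interface{}{
                          "name": "vg1",
                          // Same device paths the previous installation used.
                          "deviceSelector": map[string]interface{}{
                              "paths": []interface{}{"/dev/sdb", "/dev/sdc"},
                          },
                      }},
                  },
              },
          }}
          dyn, err := dynamic.NewForConfig(cfg)
          if err != nil {
              return err
          }
          _, err = dyn.Resource(gvr).Namespace("openshift-storage").
              Create(ctx, obj, metav1.CreateOptions{})
          return err
      }

      The only point the sketch is meant to make is that the deviceSelector lists the exact same paths as before; everything else would be up to the operator's reconcile loop.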

      Requirements (aka. Acceptance Criteria):

      For each VG/StorageClass: if the existing disks are found to be part of an old LVMS installation (see open questions), reconciliation against the VG is performed: for each existing LV, if no matching PV is found, the corresponding PV is created. If possible, a matching PVC is also created.
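      A minimal sketch of the PV half of that reconciliation, assuming the topolvm.io CSI driver name, an lvms-<vg> StorageClass naming convention, and the LV name as volume handle (the real handle is operator-generated, so this is purely illustrative):

      package recovery

      import (
          "context"

          corev1 "k8s.io/api/core/v1"
          "k8s.io/apimachinery/pkg/api/resource"
          metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
          "k8s.io/client-go/kubernetes"
      )

      // recreatePV creates a static PV for an existing LV that has no matching PV,
      // so the data can be re-attached to a pod. Driver name, StorageClass naming
      // and volume handle are assumptions.
      func recreatePV(ctx context.Context, kube kubernetes.Interface, lvName, vgName string, sizeGiB int64) error {
          pv := &corev1.PersistentVolume{
              ObjectMeta: metav1.ObjectMeta{Name: lvName},
              Spec: corev1.PersistentVolumeSpec{
                  Capacity: corev1.ResourceList{
                      corev1.ResourceStorage: *resource.NewQuantity(sizeGiB<<30, resource.BinarySI),
                  },
                  AccessModes:                   []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
                  PersistentVolumeReclaimPolicy: corev1.PersistentVolumeReclaimRetain,
                  StorageClassName:              "lvms-" + vgName, // assumed naming convention
                  PersistentVolumeSource: corev1.PersistentVolumeSource{
                      CSI: &corev1.CSIPersistentVolumeSource{
                          Driver:       "topolvm.io",
                          VolumeHandle: lvName, // hypothetical; the real handle is operator-defined
                      },
                  },
              },
          }
          _, err := kube.CoreV1().PersistentVolumes().Create(ctx, pv, metav1.CreateOptions{})
          return err
      }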

      Questions to Answer (Optional):


      How can a VG / set of LVs be detected as coming from an old LVMS installation? Maybe PV/VG labels can be used? Or a naming convention? Worst case, if none of that is possible, we should create a PV with the name of the LV so that it can be manually attached to a pod for recovery, or deleted if no longer needed.
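      One way the label idea could work is via LVM's own tags, which live in the on-disk VG/LV metadata and survive a re-install. A rough sketch, assuming a hypothetical lvms-managed tag was set by the previous installation:

      package recovery

      import (
          "os/exec"
          "strings"
      )

      // lvsFromOldInstall lists the LVs in a VG whose LVM tags mark them as created
      // by a previous LVMS installation. The "lvms-managed" tag is hypothetical.
      func lvsFromOldInstall(vgName string) ([]string, error) {
          out, err := exec.Command("lvs", "--noheadings", "--separator", ";",
              "-o", "lv_name,lv_tags", vgName).Output()
          if err != nil {
              return nil, err
          }
          var managed []string
          for _, line := range strings.Split(string(out), "\n") {
              fields := strings.Split(strings.TrimSpace(line), ";")
              if len(fields) == 2 && strings.Contains(fields[1], "lvms-managed") {
                  managed = append(managed, fields[0])
              }
          }
          return managed, nil
      }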

      Out of Scope

      A full-blown backup & recovery solution; this is the domain of 3rd-party k8s backup/restore solution providers.

      Background

      It turns out that the current LVMS implementation is not very resilient against config changes, so a re-installation might become necessary. That should not cause loss of customer data.

      Customer Considerations

      None.

      Documentation Considerations

      Behaviour needs to be documented as part of the recovery / troubleshooting docs.

      Interoperability Considerations

      None.

      Size

      Eng: M - We can use LV/VG/PV labeling to store the metadata on-disk, so that when an LVMCluster object is recreated it can automatically pull in all of the previously managed disks (see the sketch after these estimates).

      Docs: M - Informational note about the feature in the existing docs. No API changes so existing examples can stay as-is.

      QE: S - Standard feature level testing.
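      A sketch of the tagging half of the engineering estimate above, writing the same hypothetical tag used in the detection sketch when the VG is created or adopted, so a later reconcile can find it:

      package recovery

      import "os/exec"

      // tagVG adds an LVM tag to a VG so that a later LVMCluster reconcile can
      // re-discover it as previously managed. The tag name is hypothetical.
      func tagVG(vgName string) error {
          return exec.Command("vgchange", "--addtag", "lvms-managed", vgName).Run()
      }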
