OpenShift Container Platform (OCP) Strategy
OCPSTRAT-2897

LVMS supports software RAID configuration


      Feature Overview (aka. Goal Summary)  

      Allow users to configure and use software RAID capabilities provided by LVM in a declarative way using the LVMCluster CR.

      Goals (aka. expected user outcomes)

      Currently, LVMS does not directly support software RAID configuration. This is risky when multiple disks on a node are used, because the probability of a disk failure (and the resulting data loss) increases with each additional disk.

      The goal of this feature is to provide a simple way of configuring software RAID on the LVMCluster CR by introducing new attributes that allow configuration of the various RAID parameters (e.g. RAID level, number of stripes, etc.). This configuration is then used for all PVCs/PVs/LVs created through this device class.

      Requirements (aka. Acceptance Criteria):

      1. Introduce a new optional configuration option on the LVMCluster CR, per device class, that allows configuration of RAID levels and other options. Example sketch (exact details to be defined during design and implementation):
        spec:
          storage:
            deviceClasses:
              - name: localRaid5
                deviceSelector:
                  paths:
                  - /dev/sda
                  - /dev/sdb
                  - /dev/sdc
                raidConfig:
                  type: raid5 
                  stripes: 3
                  mirrors: 1
        
      2. RAID configuration must be provided at creation/addition of the device class and cannot be added or changed later. (Rationale: changing the already existing PVs/LVs on the device class / VG would be too complex and risky.)
      3. Important: RAID configuration is only supported with THICK volume provisioning. Trying to combine a RAID config with a thinPoolConfig must lead to a clear error message. Rationale: RAID with thin pools is tricky to operate on day 2 due to the way metadata is handled. Note: this means that volume snapshot/clone becomes rather slow, as physical copying of the data is required. This needs to be explained in the documentation.
      4. All RAID levels that are supported by LVM must be configurable. See the RHEL documentation here: https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_logical_volumes/configuring-raid-logical-volumes_configuring-and-managing-logical-volumes Please note: as the actual implementation is provided by well-proven and tested LVM, there is no need for QE to test all possible options. A simple smoke test with one or two configurations is good enough (see use cases).
      5. Inconsistent or wrong configuration must lead to a clear error message and a non-Ready device class. An example is a RAID5 configuration with only two disks (RAID5 requires at least three).
      6. The day-2 operation of replacing a failed disk that is part of the RAID array is documented and tested.
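To make the validation requirements concrete, the following is a hypothetical sketch of two device-class configurations the operator would have to reject (attribute names follow the illustrative example above and are not a final API):

```yaml
# Sketch of device classes that must be rejected (illustrative attribute names):
spec:
  storage:
    deviceClasses:
      # Rejected: raidConfig combined with thinPoolConfig (RAID is THICK-only)
      - name: invalidThinRaid
        thinPoolConfig:
          name: thin-pool-1
        raidConfig:
          type: raid1
          mirrors: 1
      # Rejected: raid5 needs at least three devices, but only two are selected
      - name: invalidRaid5
        deviceSelector:
          paths:
          - /dev/sda
          - /dev/sdb
        raidConfig:
          type: raid5
```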

      Use Cases:

      1. As an LVMS user, I want to be able to configure RAID1 across two disks so that I get maximum read performance with good redundancy for my mostly-read database.
      2. As an LVMS user, I want to be able to configure RAID5 across three disks so that I get a good balance of performance, capacity, and redundancy.
      3. As an LVMS user, I can replace a failed disk in a RAID array with redundancy (RAID level >= 1) without any interruption of the workload.
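The first use case might look like this on the LVMCluster CR (a sketch reusing the illustrative raidConfig attributes from the requirements; device paths are hypothetical):

```yaml
apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
  name: my-lvmcluster
spec:
  storage:
    deviceClasses:
      - name: localRaid1
        deviceSelector:
          paths:
          - /dev/sdb
          - /dev/sdc
        raidConfig:        # illustrative attribute, not a final API
          type: raid1
          mirrors: 1
```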

      Questions to Answer:

      1. Do we want to break the RAID configuration down into multiple attributes (e.g. level, stripes, stripeSize, mirrors, etc.), or simply have a single "raidOptions" string that is passed down to lvcreate? For example, from the "lvcreate --help" output:
        Create a raid LV (a specific raid level must be used, e.g. raid1).
        lvcreate    --type raid -L|--size Size[m|UNIT] VG
              [ -l|--extents Number[PERCENT] ]
              [ -m|--mirrors Number ]
              [ -i|--stripes Number ]
              [ -I|--stripesize Size[k|UNIT] ]
              [ -R|--regionsize Size[m|UNIT] ]
              [    --minrecoveryrate Size[k|UNIT] ]
              [    --maxrecoveryrate Size[k|UNIT] ]
              [    --raidintegrity y|n ]
              [    --raidintegritymode String ]
              [    --raidintegrityblocksize Number ]
              [    --integritysettings String ]
              [ COMMON_OPTIONS ]
              [ PV ... ] 
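Side by side, the two shapes could look like this on a device class (both attribute names are hypothetical sketches, not a final API):

```yaml
# Option A: structured attributes, individually validated by the operator
raidConfig:
  type: raid5
  stripes: 3
  stripeSize: 64k

# Option B: a single opaque string passed through to lvcreate
raidOptions: "--type raid5 --stripes 3 --stripesize 64k"
```

Option A enables schema validation and clear error messages in the CR status; Option B maximizes flexibility but pushes all validation down to lvcreate at provisioning time.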
        

       

      Out of Scope

      1. RAID levels in combination with THIN pools
      2. Full testing of all possible LVM configurations - this is already done at the RHEL level

      Background

      See linked customer RFE

      Customer Considerations

      See linked customer RFE

      Documentation Considerations

      1. Needs to be added as a new section “Configuring software RAID” to the LVMS documentation, explaining the way to configure it, the limitations and day2 operations.
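The day-2 disk replacement procedure would build on standard LVM commands; a rough sketch (the VG name vg1, LV name raidlv, and device names are hypothetical assumptions, and the exact steps must be validated against the LVMS-managed volume group during implementation):

```shell
# /dev/sdd = replacement for a failed disk; vg1/raidlv are assumed names
pvcreate /dev/sdd                 # initialize the replacement disk as a PV
vgextend vg1 /dev/sdd             # add it to the VG backing the device class
lvconvert --repair vg1/raidlv     # rebuild the RAID LV onto free PVs
vgreduce --removemissing vg1      # drop the failed PV's metadata from the VG
```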

      Interoperability Considerations

      none

              dfroehli42rh Daniel Fröhlich
              Geri Peterson
              Minal Pradeep Makwana
              Matthew Werner
              Eric Rich
              openshift-edge-enablement