-
Feature
-
Resolution: Done
-
Normal
-
None
-
BU Product Work
-
False
-
-
False
-
0% To Do, 0% In Progress, 100% Done
-
There is a documented and supported procedure for configuring software RAID via mdadm so that it can be used by LVMS.
-
Enhancement
-
Proposed
-
0
-
Program Call
Epic Goal
- Currently, LVMS allows multiple disks to back a VolumeGroup, but it does not support configuring LVM software RAID features (e.g. via lvcreate options).
Why is this important?
- Splitting a VG across multiple disks without RAID support creates a high risk of data loss: a single disk failure is enough to lose the whole VG. Protecting against that is straightforward with LVM software RAID features. I could do so by specifying the desired configuration in the LVMCluster CRD, e.g. by adding a field such as "raidConfig: raid1" (a hypothetical API sketch follows below).
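A minimal Go sketch of what such a field could look like in the LVMCluster API types. The names (RAIDConfig, Level, Stripes) are assumptions for illustration only and are not part of the shipped LVMS API; the idea is simply a per-device-class setting that the operator would translate into lvcreate options when creating logical volumes.

```go
// Hypothetical API sketch; field names are assumptions, not the LVMS API.
package v1alpha1

// RAIDConfig describes the desired LVM software RAID layout for a device class.
type RAIDConfig struct {
	// Level is the LVM RAID segment type, e.g. "raid1", "raid5", "raid10".
	Level string `json:"level"`
	// Stripes is the number of stripe devices for striped RAID levels (optional).
	Stripes int `json:"stripes,omitempty"`
}
```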
Scenarios
- As an LVM Storage user with multiple disks, I want to be able to configure software RAID levels to protect my data against single-disk outages.
Acceptance Criteria
- CI - MUST be running successfully with tests automated: set up a SNO cluster with 3 (virtual) disks and create an LVMS deployment with a RAID 5 configuration.
- QE - use the test from CI, but destroy a disk (e.g. using wipefs) and make sure there's no data loss (a sketch of such a check follows the Acceptance Criteria).
- Release Technical Enablement - Provide necessary release enablement details and documents.
- Docs:
- provide example documentation on how to configure this
- provide a procedure for replacing a failed disk and re-integrating it into the LVM VG (we can likely point to an existing RHEL doc procedure for this, e.g.: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/configuring_and_managing_logical_volumes/troubleshooting-lvm_configuring-and-managing-logical-volumes#removing-lost-lvm-physical-volumes-from-a-volume-group_troubleshooting-lvm)
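A hedged Go sketch of the QE resiliency check described above. The mount point, device path, and file name are assumptions for illustration: write a known payload onto an LVMS-provisioned RAID 5 volume, wipe one backing disk, then verify the payload is still readable.

```go
package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Assumed paths: a mount point of an LVMS-provisioned RAID 5 volume and
	// one of the three virtual disks backing the volume group.
	mount := "/mnt/lvms-raid5-pvc"
	victim := "/dev/vdc"

	// Write a known payload and remember its checksum.
	payload := bytes.Repeat([]byte("lvms-raid-test"), 1024)
	want := sha256.Sum256(payload)
	if err := os.WriteFile(mount+"/probe.bin", payload, 0o644); err != nil {
		panic(err)
	}

	// Simulate a single-disk failure by destroying the signatures on one PV.
	if out, err := exec.Command("wipefs", "-a", victim).CombinedOutput(); err != nil {
		panic(fmt.Sprintf("wipefs failed: %v: %s", err, out))
	}

	// Re-read the payload; a real test would also drop caches or remount so
	// the read cannot be served from the page cache.
	got, err := os.ReadFile(mount + "/probe.bin")
	if err != nil {
		panic(err)
	}
	if sha256.Sum256(got) != want {
		panic("data loss detected after wiping one disk")
	}
	fmt.Println("no data loss after single-disk failure")
}
```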
Dependencies (internal and external)
- ...
Previous Work (Optional):
- …
Open questions:
- …
Done Checklist
- CI - CI is running, tests are automated and merged.
- Release Enablement <link to Feature Enablement Presentation>
- DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
- DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
- DEV - Downstream build attached to advisory: <link to errata>
- QE - Test plans in Polarion: <link or reference to Polarion>
- QE - Automated tests merged: <link or reference to automated tests>
- DOC - Downstream documentation merged: <link to meaningful PR>
Size
Eng: M - Speculatively, this feature can be enabled using the existing code paths for passing lvcreate options to the CSI driver (a sketch of this mapping follows the Size estimates). LVM RAID functionality is also extensively documented by RHEL, so it should be a matter of translating existing LVM operations into the operator.
Docs: S - Feature needs another subsection in the existing docs with a configuration example.
QE: L - There will be multiple RAID levels to test, and I think we'll require testing with multiple disks, RAID resiliency across upgrades, as well as the effects of dropping/corrupting one disk in the RAID.
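A hedged sketch of the mapping mentioned in the Eng estimate. The type and function names are assumptions; --type and --stripes, however, are standard lvcreate flags for LVM software RAID, so the translation is mostly argument construction on the existing option-passing path.

```go
// Hedged sketch; names are assumptions, not existing LVMS code.
package lvm

import "strconv"

// RAIDConfig mirrors the hypothetical field sketched under "Why is this important?".
type RAIDConfig struct {
	Level   string // e.g. "raid1", "raid5", "raid10"
	Stripes int    // stripe count for striped levels; 0 lets LVM choose
}

// LVCreateArgs builds the extra lvcreate arguments for a RAID-backed logical volume.
func LVCreateArgs(cfg RAIDConfig) []string {
	args := []string{"--type", cfg.Level}
	if cfg.Stripes > 0 {
		args = append(args, "--stripes", strconv.Itoa(cfg.Stripes))
	}
	return args
}
```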
- is cloned by
-
OCPSTRAT-1121 LVM Storage improve metadata size handling
- Release Pending