Feature
Resolution: Unresolved
Priority: Major
Feature Overview (aka. Goal Summary)
Allow users to configure and use software RAID capabilities provided by LVM in a declarative way using the LVMCluster CR.
Goals (aka. expected user outcomes)
Currently, LVMS does not directly support software RAID configuration. This is risky when multiple disks on a node are used, because the probability of a disk failure (and with it, data loss) increases with each additional disk.
The goal of this feature is to provide a simple way of configuring software RAID on the LVMCluster CR by introducing new attributes that allow configuration of the various RAID parameters (e.g. RAID level, number of stripes, etc.). This configuration is then applied to every PVC->PV->LV created from this deviceClass.
Requirements (aka. Acceptance Criteria):
- Introduce a new optional configuration option on the LVMCluster CR for a device class that allows configuration of the RAID level and other options. Example sketch (exact details to be defined during design and implementation; see also the lvcreate mapping sketch after this requirements list):
  spec:
    storage:
      deviceClasses:
        - name: localRaid5
          deviceSelector:
            paths:
              - /dev/sda
              - /dev/sdb
              - /dev/sdc
          raidConfig:
            type: raid5
            stripes: 3
            mirrors: 1
- RAID configuration must be provided when the device class is created/added and cannot be added or changed later (Rationale: it would be too complex and risky to change the already existing PVs/LVs on the deviceClass / VG).
- Important: RAID configuration is only supported with THICK volume provisioning. Trying to create a RAID configuration in combination with a thinPoolConfig leads to a clear error message. Rationale: RAID with thin pools is tricky to operate on day 2 due to the way metadata is handled. Note: this means that volume snapshot/clone becomes rather slow, as physical copying of the data is required. This needs to be explained in the documentation.
- All RAID levels that are supported by LVM must be configurable. See the RHEL documentation here: https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_logical_volumes/configuring-raid-logical-volumes_configuring-and-managing-logical-volumes Please note: as the actual implementation is provided by well-proven and tested LVM, there is no need for QE to test all possible options. A simple smoke test with one or two configurations is sufficient (see use cases).
- Inconsistent or wrong configuration leads to a clear error message and a Non-Ready deviceClass. An example is a RAID5 configuration with only two disks (RAID5 requires at least three).
- The day-2 operation of replacing a failed disk that is part of the RAID array is documented and tested.
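For illustration only, the following sketch shows the kind of lvcreate invocations such a raidConfig could translate into on the node. The volume group and LV names (demo-vg, demo-lv) and the field-to-flag mapping are assumptions made for this sketch, not the final design; note that for raid5, lvcreate counts --stripes as data stripes only, so a 3-disk RAID5 corresponds to --stripes 2.
  # RAID1 across two disks: --mirrors 1 means one additional copy (two images in total)
  lvcreate --type raid1 --mirrors 1 --size 100G --name demo-lv demo-vg /dev/sda /dev/sdb

  # RAID5 across three disks: two data stripes plus one parity device
  lvcreate --type raid5 --stripes 2 --stripesize 64k --size 100G --name demo-lv demo-vg /dev/sda /dev/sdb /dev/sdc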
Use Cases:
- As an LVMS user, I want to be able to configure RAID1 across two disks so that I get maximum read performance for my mostly-read database with good redundancy.
- As an LVMS user, I want to be able to configure RAID5 across three disks so that I get a good balance of performance, capacity, and redundancy.
- As an LVMS user, I can replace a failed disk in my RAID array (redundancy level >= 1) without any interruption to the workload (a sketch of a possible replacement procedure follows this list).
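The disk replacement use case would roughly follow the standard LVM RAID repair flow. The commands below are only a sketch, assuming a volume group demo-vg, a RAID LV demo-lv, and /dev/sdd as the replacement disk; the exact supported procedure is to be worked out and documented as part of this feature (see Documentation Considerations).
  # after physically replacing the failed disk (replacement device assumed to be /dev/sdd):
  pvcreate /dev/sdd
  vgextend demo-vg /dev/sdd
  # rebuild the RAID LV onto the new physical volume
  lvconvert --repair demo-vg/demo-lv
  # remove the reference to the missing (failed) physical volume from the VG
  vgreduce --removemissing demo-vg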
Questions to Answer:
- Do we want to break down the RAID configuration details into multiple attributes (e.g. level, stripes, stripeSize, mirrors, etc.), or simply have a single "raidOptions" string that is passed down to lvcreate? Both options are sketched after the lvcreate excerpt below. E.g. from the "lvcreate --help" pages:
  Create a raid LV (a specific raid level must be used, e.g. raid1).
  lvcreate --type raid -L|--size Size[m|UNIT] VG
      [ -l|--extents Number[PERCENT] ]
      [ -m|--mirrors Number ]
      [ -i|--stripes Number ]
      [ -I|--stripesize Size[k|UNIT] ]
      [ -R|--regionsize Size[m|UNIT] ]
      [ --minrecoveryrate Size[k|UNIT] ]
      [ --maxrecoveryrate Size[k|UNIT] ]
      [ --raidintegrity y|n ]
      [ --raidintegritymode String ]
      [ --raidintegrityblocksize Number ]
      [ --integritysettings String ]
      [ COMMON_OPTIONS ]
      [ PV ... ]
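To make the question concrete, here is a sketch of both API shapes; all field names are hypothetical and only for discussion:
  # Option A: structured attributes, validated by the operator / API server
  raidConfig:
    type: raid5
    stripes: 2
    stripeSize: 64k

  # Option B: a single pass-through string handed to lvcreate
  raidConfig:
    raidOptions: "--type raid5 --stripes 2 --stripesize 64k"
Structured attributes allow validation at the API level (e.g. rejecting a RAID5 configuration with only two disks), while a pass-through string maximizes flexibility but delegates all validation to lvcreate at provisioning time.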
Out of Scope
- RAID level with THIN pools
- Full testing of all possible LVM configurations - this is already covered at the RHEL level
Background
See linked customer RFE
Customer Considerations
See linked customer RFE
Documentation Considerations
- Needs to be added as a new section “Configuring software RAID” to the LVMS documentation, explaining how to configure it, the limitations, and day-2 operations.
Interoperability Considerations
None
is triggered by: RFE-8380 LVM Storage operator - Ability to stripe local NVMe disks (Approved)