Type: Feature
Priority: Critical
Resolution: Done
Category: BU Product Work
Progress: 0% To Do, 0% In Progress, 100% Done
Goal
- Currently, LVM Storage (LVMS) is supported only on Single Node OpenShift (SNO) clusters. The goal of this feature is to fully support LVMS on compact and multi-node clusters, especially to provide storage for OpenShift Virtualization VM images.
Why is this important?
- OpenShift Virtualization (CNV) currently uses and supports the hostpath provisioner (HPP). While HPP offers maximum performance, it has drawbacks such as lack of PV isolation and no quota enforcement. Hence customers are asking for LVMS support.
Scenarios
- LVMS already works on SNO with additional worker nodes. This is mainly a QE/docs effort to ensure we can fully support this use case (a minimal readiness-check sketch follows below).
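As a quick verification on a compact or multi-node cluster, readiness of the LVMCluster resource can be scripted. The sketch below is illustrative only: it assumes LVMS is installed in the openshift-storage namespace and that the LVMCluster CRD is served under lvm.topolvm.io/v1alpha1; verify both against the installed operator version.
```python
# Illustrative readiness probe for LVMS on a multi-node cluster.
# Assumptions: LVMS installed in "openshift-storage", LVMCluster CRD served
# under lvm.topolvm.io/v1alpha1 -- check both against the installed operator.
from kubernetes import client, config

config.load_kube_config()
crd = client.CustomObjectsApi()

clusters = crd.list_namespaced_custom_object(
    group="lvm.topolvm.io",
    version="v1alpha1",
    namespace="openshift-storage",
    plural="lvmclusters",
)
for item in clusters.get("items", []):
    name = item["metadata"]["name"]
    state = item.get("status", {}).get("state", "<unknown>")
    print(f"LVMCluster {name}: {state}")  # expect "Ready" once nodes are set up
```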
Acceptance Criteria
- CI - MUST be running successfully with tests automated
- Release Technical Enablement - Provide necessary release enablement details and documents.
- Install LVMS and CNV on a compact cluster, then deploy and run a VM whose image resides on an LVMS-hosted PV (see the sketch below).
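A minimal smoke test for the last criterion could create a PVC against the LVMS storage class and confirm it binds once a consumer is scheduled. The sketch below is a hypothetical example: the storage class name lvms-vg1 and the namespace are assumptions (LVMS derives the class name from the configured device class), and the actual VM deployment via CNV is left out.
```python
# Hypothetical smoke test: create a block-mode PVC on the LVMS storage class.
# LVMS storage classes use WaitForFirstConsumer binding, so the PVC stays
# Pending until a pod/VM that mounts it is scheduled onto a node.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "vm-disk-smoke-test", "namespace": "default"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],  # LVMS volumes are node-local (RWO)
        "volumeMode": "Block",             # typical for CNV VM disk images
        "storageClassName": "lvms-vg1",    # assumed; derived from device class
        "resources": {"requests": {"storage": "30Gi"}},
    },
}

core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
print("PVC created; it binds once a consuming VM/pod is scheduled.")
```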
Out of scope
- High availability of storage: each node with local storage is a single point of failure. This limitation needs to be clearly highlighted in the documentation.
- SAN storage integration. While a SAN device might be used on a single node, accessing that SAN device from multiple nodes is not supported; the same holds for multipath access to SAN devices. No explicit test cases are needed for SAN integration.
Dependencies (internal and external)
- External: work with the CNV team to ensure they fully understand the limitations (e.g. no multi-node SAN support)
Previous Work (Optional):
- LVMS already working with SNO and additional worker nodes (TODO: find the docs/testcases)
Open questions:
- Multi-node snapshotting and volume cloning are risky and can fail because the scheduler is not aware of the volume topology reported by CSI, which can lead to errors while snapshotting. See https://github.com/kubernetes/kubernetes/issues/107479 for details (thanks to rhn-support-awels for pointing this out). This is potentially a hard blocker for supporting this feature. A rough pre-flight check sketch follows below.
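Until the upstream scheduler issue is resolved, one possible mitigation is to look up the node a source PV is pinned to before cloning or restoring a snapshot, and schedule the consumer there explicitly. The following is only a sketch under that assumption; the topology-key matching and the PV name are illustrative, not a confirmed workaround from this ticket.
```python
# Hypothetical pre-flight check for the scheduling pitfall above: find which
# node a topolvm/LVMS PV is constrained to, so the clone/snapshot consumer
# can be pinned to that node. Names and key matching are illustrative.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

def lvms_pv_nodes(pv_name: str) -> list[str]:
    """Return the node names an LVMS PV's nodeAffinity restricts it to."""
    pv = core.read_persistent_volume(pv_name)
    nodes = []
    affinity = pv.spec.node_affinity
    if affinity and affinity.required:
        for term in affinity.required.node_selector_terms:
            for expr in term.match_expressions or []:
                # topolvm publishes its topology under a topolvm.io key
                if "topolvm" in expr.key:
                    nodes.extend(expr.values or [])
    return nodes

print(lvms_pv_nodes("pvc-1234"))  # e.g. ["worker-0"]; schedule the consumer here
```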
Done Checklist
- CI - CI is running, tests are automated and merged.
- Release Enablement <link to Feature Enablement Presentation>
- DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
- DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
- DEV - Downstream build attached to advisory: <link to errata>
- QE - Test plans in Polarion: <link or reference to Polarion>
- QE - Automated tests merged: <link or reference to automated tests>
- DOC - Downstream documentation merged: <link to meaningful PR>
Size
Eng: S - Minimal engineering needed, if any.
Docs: S - Minimal docs needed. This likely requires an informational note and a warning about running LVMS on multi-node clusters.
QE: S - Regression testing run on a multi-node cluster to confirm functionality.
Is related to:
- CNV-38485 CNV + LVM Storage on multi node clusters [validation] (Closed)
- OCPBUGS-23181 topolvm-node crash loopback errors due to default device-class not detected properly (Closed)