- Bug
- Resolution: Done-Errata
- Critical
- 4.17.0
- Critical
- None
- 1
- OCPEDGE Sprint 259
- 1
- Proposed
- False
- Release Note Not Required
- In Progress
This is a clone of issue OCPBUGS-41632. The following is the description of the original issue:
—
This is an LVMS Bug Report:
Please make sure that you describe your storage configuration in detail. List all devices that you plan to use with LVMS, as well as any relevant machine configuration data, to make it easier for an engineer to help.
Description of problem:
The LVMCluster CR gets stuck in a 'Failed'/'Progressing' state when creating an LVMCluster with a RAID disk listed under deviceSelector.paths. The error reported in the LVMCluster CR status is:

  reason: |-
    failed to create/extend volume group vg1: failed to create volume group vg1: failed to create volume group "vg1". exit status 5: Physical volume '/dev/md1' is already in volume group 'vg1' PV /dev/md1 cannot be added to VG vg1.
  status: Failed
Version-Release number of selected component (if applicable):
4.17.0-38
Steps to Reproduce:
1. Install the latest 4.17 LVMS operator.
2. Create a RAID disk on a cluster node using mdadm.
3. Create an LVMCluster CR with the RAID disk under deviceSelector.paths.
4. Check the LVMCluster CR state.
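For reference, a minimal LVMCluster CR of the kind used in step 3 might look like the sketch below. The volume group name and device path match the error output above; the thin-pool settings and metadata names are illustrative assumptions, not taken from the report:

```yaml
apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
  name: test-lvmcluster
  namespace: openshift-storage
spec:
  storage:
    deviceClasses:
      - name: vg1                 # volume group from the error message
        deviceSelector:
          paths:
            - /dev/md1            # mdadm RAID device created in step 2
        thinPoolConfig:           # illustrative values
          name: thin-pool-1
          sizePercent: 90
          overprovisionRatio: 10
```

Applying a CR like this with an mdadm-backed device under deviceSelector.paths is what triggers the 'Failed'/'Progressing' cycle described below.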
Actual results:
LVMCluster CR is stuck cycling between 'Progressing' and 'Failed':

$ oc get lvmcluster test-lvmcluster -n openshift-storage -w
NAME              STATUS
test-lvmcluster   Progressing
test-lvmcluster   Failed
test-lvmcluster   Progressing
test-lvmcluster   Failed
Expected results:
The LVMCluster is successfully created and reaches the 'Ready' state.
Additional info:
- clones: OCPBUGS-41632 LVMCluster failing with RAID disk created via mdadm (Verified)
- is blocked by: OCPBUGS-41632 LVMCluster failing with RAID disk created via mdadm (Verified)
- links to: RHBA-2024:133135 LVMS 4.17 Bug Fix and Enhancement update