-
Bug
-
Resolution: Unresolved
-
Major
-
4.16, 4.17, 4.18
-
None
-
3
-
OCPEDGE Sprint 262
-
1
-
False
-
-
Before this fix, LVMS unintentionally wiped data during an upgrade from 4.16 to 4.18 if the forceWipeDevicesAndDestroyAllData option was enabled. With this fix, migration logic prevents the unintentional data wipe.
-
Bug Fix
-
In Progress
Description of problem:
Wiping data unintentionally during an upgrade on LVMS
Version-Release number of selected component (if applicable):
4.16
How reproducible:
100%
Steps to Reproduce:
1. Perform disk partitioning, then create an LVMS cluster with the force-wipe parameter enabled
2. Create workloads
3. Upgrade the cluster from 4.16 to 4.17
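Step 1 above can be sketched with an LVMCluster resource along these lines (a minimal sketch only: the device path and thin-pool values are illustrative assumptions, not taken from this report; forceWipeDevicesAndDestroyAllData is the force-wipe parameter referenced above):

apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
  name: test-lvmcluster
  namespace: openshift-storage
spec:
  storage:
    deviceClasses:
      - name: vg1
        deviceSelector:
          paths:
            - /dev/nvme1n1          # illustrative device path (assumption)
          # the option implicated in the unintentional wipe during upgrade
          forceWipeDevicesAndDestroyAllData: true
        thinPoolConfig:             # illustrative thin-pool settings (assumption)
          name: thin-pool-1
          sizePercent: 90
          overprovisionRatio: 10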
Actual results:
oc get lvmcluster -n openshift-storage
NAME              STATUS
test-lvmcluster   Progressing

oc get pods -n openshift-storage
NAME                             READY   STATUS    RESTARTS   AGE
lvms-operator-5cb9f698b5-qc666   1/1     Running   0          7m7s
mypod1                           1/1     Running   1          162m
vg-manager-xknhv                 1/1     Running   0          7m12s

Err: Generated from vg-manager 1417 times in the last 7 minutes
error on node openshift-storage/ip-10-0-24-18.us-east-2.compute.internal in volume group openshift-storage/vg1: failed to create/extend volume group vg1: failed to create volume group vg1: failed to create volume group "vg1". exit status 3: /dev/vg1: already exists in filesystem
Run `vgcreate --help' for more information.

No data in lvs:
sh-5.1# chroot /host
sh-5.1# lvs
sh-5.1# exit
Expected results:
Additional info:
- blocks
-
OCPBUGS-44440 [release-4.17] Unintentionally wiping data during an upgrade on LVMS
- ON_QA
- is cloned by
-
OCPBUGS-44440 [release-4.17] Unintentionally wiping data during an upgrade on LVMS
- ON_QA
-
OCPBUGS-44500 [release-4.16] Unintentionally wiping data during an upgrade on LVMS
- ON_QA
- links to