Bug
Resolution: Done
Normal
4.13, 4.12, 4.14, 4.15
Quality / Stability / Reliability
Moderate
Description of problem:
While running a fresh install, the topolvm-node pods fail to start and remain stuck in the Init stage.
Version-Release number of selected component (if applicable):
4.12.z / 4.13.z
How reproducible:
Always, when LVM configuration from a previous installation (LVs, VGs, etc.) still exists on the disks.
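A quick way to confirm the leftover LVM metadata on an affected node (a sketch; /dev/sdb is only an example device):

# pvs && vgs && lvs    # list leftover LVM PVs/VGs/LVs from the previous install
# lsblk -f /dev/sdb    # /dev/sdb is an example; shows remaining filesystem/LVM signatures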
Steps to Reproduce:
1. Run OpenShift + LVMS on private CI.
2. In the `install` CI steps, reuse the disks without running `wipefs -a` in the preceding `destroy cluster` step (a cleanup sketch follows these steps).
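A possible cleanup to run in the `destroy cluster` step before the disks are reused (a sketch, assuming the disk consumed by LVMS is /dev/sdb; the VG name and device path are placeholders to adjust for the environment):

# vgremove -ff example-vg   # "example-vg" is a placeholder for the leftover volume group (removes its LVs too)
# pvremove -ff /dev/sdb     # clear the LVM physical volume label; /dev/sdb is an example device
# wipefs -a /dev/sdb        # wipe any remaining filesystem/LVM signatures from the disk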
Actual results:
# oc get pods -n openshift-storage
NAME                                  READY   STATUS     RESTARTS   AGE
lvms-operator-88cfc4c9c-f6gg4         3/3     Running    0          13h
topolvm-controller-5496f5d4f4-c6dbb   5/5     Running    0          13h
topolvm-node-jc49v                    0/4     Init:0/1   0          13h
topolvm-node-n52vs                    0/4     Init:0/1   0          13h
topolvm-node-v8pvm                    0/4     Init:0/1   0          13h
vg-manager-m5gqx                      1/1     Running    0          13h
vg-manager-qcwq4                      1/1     Running    0          13h
vg-manager-r2df4                      1/1     Running    0          13h
Expected results:
# oc get pods -n openshift-storage
NAME                                  READY   STATUS    RESTARTS   AGE
lvms-operator-88cfc4c9c-f6gg4         3/3     Running   0          13h
topolvm-controller-5496f5d4f4-c6dbb   5/5     Running   0          13h
topolvm-node-jc49v                    4/4     Running   0          13h
topolvm-node-n52vs                    4/4     Running   0          13h
topolvm-node-v8pvm                    4/4     Running   0          13h
vg-manager-m5gqx                      1/1     Running   0          13h
vg-manager-qcwq4                      1/1     Running   0          13h
vg-manager-r2df4                      1/1     Running   0          13h
Additional info: