Type: Bug
Resolution: Duplicate
Priority: Normal
Affects Version/s: 4.13.z, 4.13.0, 4.13, 4.14, 4.14.0, 4.14.z
Impact: Quality / Stability / Reliability
Sprint: OCP VE Sprint 241, OCP VE Sprint 242
Description of problem:
After creating an LVMCluster, its state is reported as "Progressing" indefinitely, even though storage has been provisioned successfully.
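For illustration, the stuck state can be read straight off the resource (a sketch; the resource and namespace names match the spec in the steps below):
{code:none}
oc get lvmcluster lvmcluster -n openshift-storage -o jsonpath='{.status.state}'
# keeps printing "Progressing" even after every nodeStatus entry reports Ready
{code}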
Version-Release number of selected component (if applicable):
4.13.1
How reproducible:
Steps to Reproduce:
1. Install the LVM Operator.
2. Create an LVMCluster; in my case I used the spec below:
{code:yaml}
apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
  name: lvmcluster
  namespace: openshift-storage
spec:
  storage:
    deviceClasses:
      - name: standard
        default: true
        thinPoolConfig:
          name: thin-pool-1
          overprovisionRatio: 10
          sizePercent: 90
        deviceSelector:
          paths:
            - /dev/vdb
      - name: fast
        thinPoolConfig:
          name: thin-pool-2
          overprovisionRatio: 10
          sizePercent: 90
        deviceSelector:
          paths:
            - /dev/vdc
{code}
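After step 2, each device class should eventually be usable through its own StorageClass. A quick sanity check (a sketch; the lvms-<deviceClassName> naming is an assumption based on the operator's default behavior):
{code:none}
# StorageClass names assumed; list them all with `oc get storageclass` if they differ
oc get storageclass lvms-standard lvms-fast
oc get lvmcluster -n openshift-storage
{code}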
Actual results:
The LVMCluster provisions storage successfully, but the state is forever stuck in Progressing:
{code:yaml}
status:
  deviceClassStatuses:
    - name: fast
      nodeStatus:
        - devices:
            - /dev/vdc
          node: worker2
          status: Ready
        - devices:
            - /dev/vdc
          node: worker1
          status: Ready
    - name: standard
      nodeStatus:
        - devices:
            - /dev/vdb
          node: worker2
          status: Ready
        - devices:
            - /dev/vdb
          node: worker1
          status: Ready
  state: Progressing
{code}
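To see why the reconciler keeps reporting Progressing despite every node being Ready, the operator logs are the obvious next stop (a sketch; the deployment name is an assumption and may differ by release):
{code:none}
# Deployment name assumed; confirm with `oc get deploy -n openshift-storage` first
oc logs -n openshift-storage deploy/lvms-operator | grep -i progressing
{code}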
I can create PVs and write to them successfully with both storage classes on both workers; a minimal example PVC is sketched below. The node storage looks good as well (see the lsblk output after the sketch).
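A minimal PVC of the kind used for the write test (a sketch; the name is hypothetical and the StorageClass name assumes the default lvms-<deviceClassName> naming):
{code:yaml}
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: write-test-pvc   # hypothetical name
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: lvms-standard   # assumed naming; lvms-fast for the second class
{code}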
worker1
{code:none}
oc debug node/worker1 -- chroot /host lsblk
Temporary namespace openshift-debug-mdvh4 is created for debugging node...
Starting pod/worker1-debug ...
To use host binaries, run `chroot /host`
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
vda 252:0 0 100G 0 disk
|-vda1 252:1 0 1M 0 part
|-vda2 252:2 0 127M 0 part
|-vda3 252:3 0 384M 0 part /boot
`-vda4 252:4 0 99.5G 0 part /var/lib/containers/storage/overlay
/var
/sysroot/ostree/deploy/rhcos/var
/sysroot
/usr
/etc
/
vdb 252:16 0 690G 0 disk
|-standard-thin--pool--1_tmeta 253:3 0 312M 0 lvm
| `-standard-thin--pool--1-tpool 253:5 0 620.4G 0 lvm
| |-standard-thin--pool--1 253:6 0 620.4G 1 lvm
| |-standard-459af0ae--91d5--484f--8878--d906919e2999 253:7 0 30G 0 lvm
| |-standard-ddd0dc26--622d--4bef--93f5--1588c5c7f4e6 253:8 0 30G 0 lvm
| |-standard-49d1e522--b2ca--4bd8--b14e--56aaae9b7523 253:9 0 30G 0 lvm
| `-standard-93211e78--971c--4358--b23d--5e3cc40628ba 253:10 0 1G 0 lvm /var/lib/kubelet/pods/6ba87e10-d04b-4b73-beab-6ac0fe2ec511/volumes/kubernetes.io~csi/pvc-398a4a1a-962a-4061-b2a3-0233a8b5bcf0/mount
`-standard-thin--pool--1_tdata 253:4 0 620.4G 0 lvm
`-standard-thin--pool--1-tpool 253:5 0 620.4G 0 lvm
|-standard-thin--pool--1 253:6 0 620.4G 1 lvm
|-standard-459af0ae--91d5--484f--8878--d906919e2999 253:7 0 30G 0 lvm
|-standard-ddd0dc26--622d--4bef--93f5--1588c5c7f4e6 253:8 0 30G 0 lvm
|-standard-49d1e522--b2ca--4bd8--b14e--56aaae9b7523 253:9 0 30G 0 lvm
`-standard-93211e78--971c--4358--b23d--5e3cc40628ba 253:10 0 1G 0 lvm /var/lib/kubelet/pods/6ba87e10-d04b-4b73-beab-6ac0fe2ec511/volumes/kubernetes.io~csi/pvc-398a4a1a-962a-4061-b2a3-0233a8b5bcf0/mount
vdc 252:32 0 260.8G 0 disk
|-fast-thin--pool--2_tmeta 253:0 0 120M 0 lvm
| `-fast-thin--pool--2-tpool 253:2 0 234.5G 0 lvm
| |-fast-thin--pool--2 253:11 0 234.5G 1 lvm
| `-fast-31ab7030--56ad--4302--b6d8--244dbb68e157 253:12 0 1G 0 lvm /var/lib/kubelet/pods/6ba87e10-d04b-4b73-beab-6ac0fe2ec511/volumes/kubernetes.io~csi/pvc-889e8e12-ca83-4dad-a5e7-59f25cf37ff9/mount
`-fast-thin--pool--2_tdata 253:1 0 234.5G 0 lvm
`-fast-thin--pool--2-tpool 253:2 0 234.5G 0 lvm
|-fast-thin--pool--2 253:11 0 234.5G 1 lvm
`-fast-31ab7030--56ad--4302--b6d8--244dbb68e157 253:12 0 1G 0 lvm /var/lib/kubelet/pods/6ba87e10-d04b-4b73-beab-6ac0fe2ec511/volumes/kubernetes.io~csi/pvc-889e8e12-ca83-4dad-a5e7-59f25cf37ff9/mount
{code}
worker2
{code:none}
oc debug node/worker2 -- chroot /host lsblk
Temporary namespace openshift-debug-sc87v is created for debugging node...
Starting pod/worker2-debug ...
To use host binaries, run `chroot /host`
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
loop0 7:0 0 30G 0 loop
vda 252:0 0 100G 0 disk
|-vda1 252:1 0 1M 0 part
|-vda2 252:2 0 127M 0 part
|-vda3 252:3 0 384M 0 part /boot
`-vda4 252:4 0 99.5G 0 part /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/pvc-7ca3145a-2940-4170-ae2f-a86c7f385a8c/dev/a11a7405-80b9-4760-9229-aa9bbfcb5d4b
/var/lib/kubelet/pods/ed5974a8-c08f-41c7-a95a-8da772186991/volume-subpaths/nginx-conf/kubevirt-console-plugin/1
/var/lib/containers/storage/overlay
/var
/sysroot/ostree/deploy/rhcos/var
/sysroot
/usr
/etc
/
vdb 252:16 0 690G 0 disk
|-standard-thin--pool--1_tmeta 253:0 0 312M 0 lvm
| `-standard-thin--pool--1-tpool 253:2 0 620.4G 0 lvm
| |-standard-thin--pool--1 253:6 0 620.4G 1 lvm
| |-standard-12172f7e--6741--4555--a5cd--6959e882472d 253:7 0 30G 0 lvm
| |-standard-8fa0b27b--c394--4418--8318--050018bf02e4 253:8 0 30G 0 lvm
| |-standard-baf430fa--ef14--4b89--8562--0ab08cd294f6 253:9 0 30G 0 lvm
| `-standard-047df878--79c1--47b0--b6d4--68c426006761 253:10 0 30G 0 lvm
`-standard-thin--pool--1_tdata 253:1 0 620.4G 0 lvm
`-standard-thin--pool--1-tpool 253:2 0 620.4G 0 lvm
|-standard-thin--pool--1 253:6 0 620.4G 1 lvm
|-standard-12172f7e--6741--4555--a5cd--6959e882472d 253:7 0 30G 0 lvm
|-standard-8fa0b27b--c394--4418--8318--050018bf02e4 253:8 0 30G 0 lvm
|-standard-baf430fa--ef14--4b89--8562--0ab08cd294f6 253:9 0 30G 0 lvm
`-standard-047df878--79c1--47b0--b6d4--68c426006761 253:10 0 30G 0 lvm
vdc 252:32 0 260.8G 0 disk
|-fast-thin--pool--2_tmeta 253:3 0 120M 0 lvm
| `-fast-thin--pool--2 253:5 0 234.5G 0 lvm
`-fast-thin--pool--2_tdata 253:4 0 234.5G 0 lvm
`-fast-thin--pool--2 253:5 0 234.5G 0 lvm
{code}
Additional info:
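Since the linked cause below concerns an expectedVGCount / readyVGCount mismatch, comparing the volume groups actually present on each node with what the operator expects may help confirm the diagnosis (a sketch, reusing the debug-pod pattern from above):
{code:none}
# Both workers should show the "standard" and "fast" VGs created by the operator
oc debug node/worker1 -- chroot /host vgs
oc debug node/worker2 -- chroot /host vgs
{code}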
is caused by: OCPBUGS-17853 - expectedVGCount / readyVGCount mismatch in MultiNode Clusters fails Readiness (Closed)