- Bug
- Resolution: Done-Errata
- Normal
- 4.14
- No
- 2
- OCPEDGE Sprint 245, OCPEDGE Sprint 246
- 2
- False
- LVMCluster now properly respects tolerations against any taint in a node's NodeSpec instead of only checking for the NoSchedule taint.
- Bug Fix
- In Progress
Description of problem:
LVMCluster CR '.status.state' stuck in 'Progressing' even when all VGs are created and running fine
Version-Release number of selected component (if applicable):
4.14
How reproducible:
Always
Steps to Reproduce:
- Deployed LVMS 4.14 with the following LVMCluster spec and observed status:

  spec:
    storage:
      deviceClasses:
      - deviceSelector:
          paths:
          - '/dev/disk/by-path/pci-0000:61:00.0-nvme-1'
        fstype: xfs
        name: hcp-etcd
        nodeSelector:
          nodeSelectorTerms:
          - matchExpressions:
            - key: node-role.kubernetes.io/master
              operator: Exists
        thinPoolConfig:
          name: thin-pool-1
          overprovisionRatio: 10
          sizePercent: 90
        tolerations:
        - effect: NoSchedule
          key: node-role.kubernetes.io/master
  status:
    deviceClassStatuses:
    - name: hcp-etcd
      nodeStatus:
      - devices:
        - /dev/nvme0n1
        node: control-1-ru2xxxx
        status: Ready
      - devices:
        - /dev/nvme0n1
        node: control-1-ru3xxxx
        status: Ready
      - devices:
        - /dev/nvme0n1
        node: control-1-ru4xxxx
        status: Ready
    state: Progressing

- When the above is deployed, we are explicitly asking for the VGs to be created on the control-plane nodes. However, when the state is updated, the counting of VGs on those nodes is skipped (illustrated by the sketch below), see https://github.com/openshift/lvm-operator/blob/release-4.14/controllers/lvmcluster_controller.go#L315C1-L315C54
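The following is a minimal, illustrative sketch of what toleration-aware counting looks like: a node only counts toward the expected number of VGs if the device class tolerations cover every taint in that node's spec, instead of only checking for a NoSchedule taint. It is not the actual lvm-operator code; it only assumes the standard k8s.io/api/core/v1 types, and the function names (tolerateAllTaints, expectedVGNodeCount) are hypothetical.

package vgcount

import (
	corev1 "k8s.io/api/core/v1"
)

// tolerateAllTaints reports whether the given device-class tolerations cover
// every taint on the node. If they do, a VG is expected to exist on that node.
func tolerateAllTaints(tolerations []corev1.Toleration, node *corev1.Node) bool {
	for i := range node.Spec.Taints {
		taint := &node.Spec.Taints[i]
		tolerated := false
		for j := range tolerations {
			if tolerations[j].ToleratesTaint(taint) {
				tolerated = true
				break
			}
		}
		if !tolerated {
			return false
		}
	}
	return true
}

// expectedVGNodeCount counts the nodes on which a VG is expected, given the
// device-class tolerations. Comparing this number with the Ready entries in
// status.deviceClassStatuses is what should drive Ready vs. Progressing.
// (nodeSelector matching is omitted here for brevity.)
func expectedVGNodeCount(tolerations []corev1.Toleration, nodes []corev1.Node) int {
	count := 0
	for i := range nodes {
		if tolerateAllTaints(tolerations, &nodes[i]) {
			count++
		}
	}
	return count
}

With the spec above, tainted control-plane nodes are covered by the node-role.kubernetes.io/master:NoSchedule toleration, so all three nodes count toward the expected total and the state can move to Ready once their VGs report Ready.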
Actual results:
The state remains 'Progressing'.
Expected results:
The state should be 'Ready'.
Additional info:
The observable behavior for this bug is that the state never transitions to 'Ready'; there may be other side effects as well, which could not be fully checked.
- is cloned by: OCPBUGS-23782 LVM Controller not respecting Tolerations while counting VGs created (Closed)
- links to: RHBA-2024:126443 LVMS 4.15 Bug Fix and Enhancement update