- Bug
- Resolution: Unresolved
- Minor
- None
- odf-4.14
- None
Description of problem (please be as detailed as possible and provide log
snippets):
After replacing one local storage device on an openshift-storage node, the ODF "Overview" status shows an error, and the "Block and File" status of the ocs-storagecluster-storagesystem shows a warning: "1 daemons have recently crashed".
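For reference, the Ceph health warning behind this status can usually be inspected from the rook-ceph toolbox pod (pod/deployment name assumed here; verify it exists in your cluster first):

```shell
# Open a shell in the rook-ceph toolbox pod (deployment name assumed;
# confirm with `oc get pods -n openshift-storage`)
oc rsh -n openshift-storage deploy/rook-ceph-tools

# Show overall cluster health and the crash entries backing the
# "daemons have recently crashed" warning
ceph health detail
ceph crash ls
```

`ceph crash info <crash-id>` (with an ID taken from `ceph crash ls`) gives the details of an individual crash entry, which would be useful to attach to this bug.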
Version of all relevant components (if applicable):
version 4.14.24
CSVs:
NAME DISPLAY VERSION REPLACES PHASE
mcg-operator.v4.14.6-rhodf NooBaa Operator 4.14.6-rhodf mcg-operator.v4.14.5-rhodf Succeeded
ocs-operator.v4.14.6-rhodf OpenShift Container Storage 4.14.6-rhodf ocs-operator.v4.14.5-rhodf Succeeded
odf-csi-addons-operator.v4.14.6-rhodf CSI Addons 4.14.6-rhodf odf-csi-addons-operator.v4.14.5-rhodf Succeeded
odf-operator.v4.14.6-rhodf OpenShift Data Foundation 4.14.6-rhodf odf-operator.v4.14.5-rhodf Succeeded
servicemeshoperator.v2.5.1 Red Hat OpenShift Service Mesh 2.5.1-0 servicemeshoperator.v2.5.0 Succeeded
Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?
$ omc get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
ocs-storagecluster-ceph-rbd openshift-storage.rbd.csi.ceph.com Delete Immediate true 2d
ocs-storagecluster-ceph-rgw openshift-storage.ceph.rook.io/bucket Delete Immediate false 2d
ocs-storagecluster-cephfs openshift-storage.cephfs.csi.ceph.com Delete Immediate true 2d
openshift-storage.noobaa.io openshift-storage.noobaa.io/obc Delete Immediate false 2d
storageclass-odf kubernetes.io/no-provisioner Delete WaitForFirstConsumer false 2d
Is there any workaround available to the best of your knowledge?
https://access.redhat.com/solutions/5989901
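The linked solution amounts to archiving the crash reports once the cluster is confirmed healthy after the device replacement, which clears the warning. A minimal sketch, run from the rook-ceph toolbox pod (commands assume a standard Ceph toolbox environment):

```shell
# Archive one specific crash entry so it no longer triggers the
# RECENT_CRASH health warning (ID taken from `ceph crash ls`)
ceph crash archive <crash-id>

# Or archive all outstanding crash entries at once
ceph crash archive-all
```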
Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
3
Is this issue reproducible?
NA
Can this issue be reproduced from the UI?
NA
Additional info: