-
Bug
-
Resolution: Cannot Reproduce
-
Major
-
None
-
4.14.0
-
None
-
Important
-
No
-
False
-
-
The node works as expected, then during a reboot it does not come back up.
The console for the VM shows:
~~~
Redhat ... no media
EFI virtual disk (0.0) ... no media
EFI network 1 .
~~~
Booting the VM from an ISO and running lsblk shows that sda has no partitions; checking with fdisk confirms the same.
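For reference, a minimal sketch of the checks from the ISO's rescue shell (device names taken from the report):
~~~
# List the block device and its partitions; sda showing no children
# means the partition table is gone.
lsblk /dev/sda

# Cross-check with fdisk; an empty partition table prints no
# partition entries for /dev/sda.
fdisk -l /dev/sda
~~~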
The node role is infra and the node is labeled for storage:
~~~
node-role.kubernetes.io/infra: ""
cluster.ocs.openshift.io/openshift-storage: ""
~~~
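As a hedged sketch, the node labels can be confirmed from the cluster; both labels carry empty values, so a bare `=` selector matches them:
~~~
# List infra nodes that also carry the OCS storage label.
oc get nodes \
  -l node-role.kubernetes.io/infra=,cluster.ocs.openshift.io/openshift-storage= \
  --show-labels
~~~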
The node has two disks, sda and sdb:
~~~
sda 8:0 0 120G 0 disk
sdb 8:16 0 1.7T 0 disk
~~~
The PV annotation indicates sda for that node, and sda is the root disk.
oc get pv local-pv-b0acdaaa -o yaml
~~~
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/bound-by-controller: "yes"
    pv.kubernetes.io/provisioned-by: local-volume-provisioner-ocs1.rch3cdtnsrsocp.vzbi.com
    storage.openshift.com/device-id: wwn-0x600508b1001c09040a5b34781cd00445
    storage.openshift.com/device-name: sda
~~~
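One way to cross-check the annotation against the actual hardware, assuming shell access on the node (the WWN below is copied from the PV above):
~~~
# On the node: resolve the WWN recorded in the PV annotation to a
# kernel device name. If the symlink points at sda, the local-storage
# PV was provisioned on the root disk.
ls -l /dev/disk/by-id/wwn-0x600508b1001c09040a5b34781cd00445

# From the cluster: print the device-name annotation for the PV.
oc get pv local-pv-b0acdaaa \
  -o jsonpath='{.metadata.annotations.storage\.openshift\.com/device-name}'
~~~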
The issue does not happen when the cluster is at 4.12; it occurs only when the cluster is at 4.14.10.