Bug
Resolution: Unresolved
CNV v4.17.3
Quality / Stability / Reliability
CNV Storage 267
Important
Description of problem:
Mapped a new disk from an iSCSI target to the node:
# iscsiadm -m session -P 3 | grep sd
        Attached scsi disk sdb    State: running
[root@10 ~]# lsblk | grep sdb
sdb      8:16   0   30G  0 disk
├─sdb1   8:17   0   16M  0 part
└─sdb2   8:18   0    1K  0 part
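For reference, a disk like this is typically mapped with iscsiadm roughly as follows; the portal address and target IQN are placeholders, not the values from this setup:

# Discover targets on the iSCSI portal (portal IP is a placeholder)
iscsiadm -m discovery -t sendtargets -p 192.168.100.1
# Log in to the discovered target (IQN is a placeholder)
iscsiadm -m node -T iqn.2003-01.example:storage.target0 -p 192.168.100.1 --login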
Created a PV on this device and attached it to the VM with the reservation set to 'true'. Then ran a validation test for the Windows Failover Cluster, but the validation failed. An strace of qemu-pr-helper showed that it was unable to open sdb:
1455447 08:47:00.142969 read(14</sys/devices/platform/host13/session17/target13:0:0/13:0:0:0/state>, "running\n", 19) = 8
1455447 08:47:00.143010 close(14</sys/devices/platform/host13/session17/target13:0:0/13:0:0:0/state>) = 0
1455447 08:47:00.143048 openat(AT_FDCWD</>, "/dev/sdb", O_RDONLY) = -1 ENOENT (No such file or directory)
1455447 08:47:00.143107 close(12</dev/disk-chocolate-crane-84>) = 0
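The trace format above (pid prefix, microsecond timestamps, fd paths in angle brackets) matches an strace invocation along these lines; the process lookup is an assumption:

# Find the qemu-pr-helper PID on the node (process name assumed)
PID=$(pgrep -f qemu-pr-helper | head -n 1)
# -f follows forks, -tt prints microsecond timestamps, -y resolves fds to paths
strace -f -tt -y -p "$PID" -e trace=openat,read,close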
"pr-helper" container is unable to see sdb:
[core@10 ~]$ oc rsh -c pr-helper virt-handler-j5lff sh-5.1# ls /dev/sd* /dev/sda /dev/sda1 /dev/sda2 /dev/sda3 /dev/sda4
If I restart the virt-handler pod and then stop and start the VMs (required because of CNV-55559), the validation succeeds. So a restart of the pod is required for virt-handler to see the new devices.
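The workaround spelled out as commands; the namespace and VM name below are assumptions for a typical CNV environment:

# Delete the virt-handler pod so its DaemonSet recreates it
oc delete pod -n openshift-cnv virt-handler-j5lff
# Stop and start the VM so it picks up the restarted pr-helper (VM name is a placeholder)
virtctl stop wsfc-node1
virtctl start wsfc-node1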
Version-Release number of selected component (if applicable):
OpenShift Virtualization 4.17.3
How reproducible:
100%
Steps to Reproduce:
1. Map a new device on the node. Ensure that virt-handler is not restarted after mapping the new device.
2. Observe that the new device is not visible in the pr-helper container.
3. Attach the disk to the VM with "reservation: true".
4. Run a WSFC validation test from Windows, or run sg_persist from a RHEL VM (see the sketch below). It will fail.
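For step 4, the check from a RHEL guest can be done with sg_persist (from sg3_utils). A minimal sketch, assuming the shared LUN shows up as /dev/sdb in the guest and using a placeholder reservation key; while the bug is present, the registration fails:

# Read the registered reservation keys on the shared LUN
sg_persist --in --read-keys /dev/sdb
# Attempt to register a reservation key (placeholder key)
sg_persist --out --register --param-sark=0x1234 /dev/sdb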
Actual results:
qemu-pr-helper does not work with newly mapped SCSI devices; a restart of the virt-handler pod is required.
Expected results:
The pr-helper container sees newly mapped SCSI devices without a restart of the virt-handler pod, and the WSFC validation (or sg_persist) succeeds.
Additional info:
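For completeness, a minimal sketch of how the disk can be wired up, assuming a local block PV on the node and a KubeVirt LUN disk with the reservation flag (which also requires the persistent reservation feature gate to be enabled); all names, sizes, and the node hostname are illustrative:

# PV backed by the new device (name, path, and hostname are placeholders)
oc apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: iscsi-shared-pv
spec:
  capacity:
    storage: 30Gi
  accessModes:
    - ReadWriteOnce
  volumeMode: Block
  persistentVolumeReclaimPolicy: Retain
  local:
    path: /dev/sdb
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker-0
EOF
# In the VM spec, the disk is attached as a LUN with the reservation flag:
#   spec:
#     domain:
#       devices:
#         disks:
#           - name: shared-disk
#             lun:
#               reservation: true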