Bug | Resolution: Done | Major | rhel-9.5 | Important | CustomerScenariosInitiative | rhel-sst-virtualization-windows | ssg_virtualization | QE ack | Red Hat Enterprise Linux | Automated | x86_64 | Windows
What were you trying to do that didn't work?
During QE testing, an issue was identified where the Windows Hardware Lab Kit (HLK) fails to execute all "2 Machine" test cases, such as NDISTest 6.5 - [2 Machine] - LinkCheck. Jobs triggered from the HLK Server were unsuccessful, and the VM with the HLK Client installed showed only blank windows, with no HLK test suite appearing on the desktop.
Attempts to resolve the issue by uninstalling and reinstalling the HLK failed because the installation process got stuck. After disabling VBS (virtualization-based security), however, the jobs ran successfully.
Please provide the package NVRs for which the bug is seen:
- CPU=Intel(R) Xeon(R) Silver 4316 CPU @ 2.30GHz
- virtio-win-prewhql-0.1-262
- kernel-5.14.0-480.el9.x86_64
- edk2-ovmf-20240524-1.el9.noarch
- swtpm-0.8.0-2.el9_4.x86_64
- qemu-kvm-core-9.0.0-7.el9.x86_64
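The NVRs above can be collected on the host with a small loop over the relevant packages (a sketch; the package set is taken from the list above, and any package not installed is reported rather than silently skipped):

```shell
# Print the NVR of each package relevant to this bug; note missing ones.
for p in kernel qemu-kvm-core edk2-ovmf swtpm virtio-win-prewhql; do
    rpm -q "$p" 2>/dev/null || echo "$p: not installed"
done
```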
How reproducible:
100%
Steps to reproduce
1. Install the HLK Driver from our HLK SMB server (\\<HLK_Server_IP>\HLKInstall\Client).
2. Launch a VM on a host with an "Intel" CPU using the provided QEMU command line, including vmx=on.
3. Inside the VM, open the "Device security" application, select "Core isolation," go to "Core isolation details," and turn "Memory integrity" to "On."
4. Restart the VM.
5. Run cases on the HLK server.
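The VBS-relevant portion of the command line in step 2 can be sketched as follows (a minimal, illustrative fragment only; disk, network, and other devices are omitted). Memory integrity requires nested virtualization exposed to the guest (vmx=on on an Intel host) plus UEFI firmware; note that the full command line in the Additional Notes below instead uses an EPYC CPU model with svm=on.

```shell
# Minimal VBS-relevant fragment (illustrative; node names as in the
# full command line below). vmx=on exposes nested VT-x to the guest,
# which "Memory integrity" (HVCI) depends on.
/usr/libexec/qemu-kvm \
    -enable-kvm \
    -cpu host,vmx=on \
    -machine q35,pflash0=drive_ovmf_code,pflash1=drive_ovmf_vars
    # ... remaining -blockdev/-device options as in the full command line
```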
Expected results
WHQL cases should run successfully with VBS enabled.
Actual results
WHQL cases got stuck with VBS enabled but ran successfully with VBS disabled.
Additional notes: the full QEMU command line
/usr/libexec/qemu-kvm \
    -name xxxxx \
    -enable-kvm \
    -m 8G \
    -smp 8 \
    -uuid 0e5c38f1-a9fc-4ec0-9cee-7d50cdff5909 \
    -nodefaults \
    -cpu EPYC,hv_stimer,hv_synic,hv_time,hv_vpindex,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_frequencies,hv_runtime,hv_tlbflush,hv_reenlightenment,hv_stimer_direct,hv_ipi,svm=on \
    -chardev socket,id=charmonitor,path=/tmp/xxxxxx,server=on,wait=off \
    -mon chardev=charmonitor,id=monitor,mode=control \
    -rtc base=localtime,driftfix=slew \
    -boot order=cd,menu=on \
    -device piix3-usb-uhci,id=usb \
    -blockdev driver=file,cache.direct=off,cache.no-flush=on,filename=xxxxxx,node-name=my_file \
    -blockdev driver=raw,node-name=my,file=my_file \
    -device ide-hd,drive=my,id=ide0-0-0,bus=ide.0,unit=0,bootindex=1 \
    -blockdev driver=file,cache.direct=off,cache.no-flush=on,filename=/home/kvm_autotest_root/iso/ISO/Win2025/26100.1.240331-1435.ge_release_SERVER_OEMRET_x64FRE_en-us_redownload.iso,node-name=my_cd,read-only=on \
    -blockdev driver=raw,node-name=mycd,file=my_cd,read-only=on \
    -device ide-cd,drive=mycd,id=ide0-1-0,bus=ide.1,bootindex=2 \
    -cdrom 262NIC256435CI8.iso \
    -device usb-tablet,id=input0 \
    -vnc 0.0.0.0:0 \
    -blockdev driver=file,cache.direct=off,cache.no-flush=on,filename=/home/kvm_autotest_root/iso/windows/FOD.iso,node-name=my_iso,read-only=on \
    -blockdev driver=raw,node-name=myiso,file=my_iso,read-only=on \
    -device ide-cd,drive=myiso,id=ide0-1-1,bus=ide.4 \
    -blockdev node-name=file_ovmf_code,driver=file,filename=xxxxx_ovmf/OVMF_CODE.secboot.fd,auto-read-only=on,discard=unmap \
    -blockdev node-name=drive_ovmf_code,driver=raw,read-only=on,file=file_ovmf_code \
    -blockdev node-name=file_ovmf_vars,driver=file,filename=xxxxx_ovmf/OVMF_VARS.fd,auto-read-only=on,discard=unmap \
    -blockdev node-name=drive_ovmf_vars,driver=raw,read-only=off,file=file_ovmf_vars \
    -machine q35,pflash0=drive_ovmf_code,pflash1=drive_ovmf_vars \
    -device pcie-root-port,bus=pcie.0,id=root1.0,multifunction=on,port=0x10,chassis=1,addr=0x7 \
    -device pcie-root-port,bus=pcie.0,id=root2.0,port=0x11,chassis=2,addr=0x7.0x1 \
    -netdev tap,script=/etc/qemu-ifup1,downscript=no,id=hostnet0 \
    -device e1000e,bus=root1.0,netdev=hostnet0,id=net0,mac=xxxxxx \
    -vga std \
    -netdev tap,script=/etc/qemu-ifup-private,downscript=no,id=hostnet1,vhost=on \
    -device virtio-net-pci,netdev=hostnet1,bus=root2.0,id=net1,speed=1000,mac=xxxx