On AMD hosts, Windows 11 and Windows Server 2025 VMs with NVIDIA vGPUs or pGPUs may fail to detect the GPU ("Code 43" error) after the guest GPU driver is installed. This happens because of a conflict with the AMD HyperTransport reserved memory range (1012GiB - 1024GiB).
QEMU's current memory management logic, active in machine types `pc-q35-rhel9.2.0` and later, relocates all memory above 4GiB to start above 1TiB whenever the VM's memory configuration would otherwise reach into this range. The resulting fragmented memory layout triggers the NVIDIA driver failure.
Technical Details:
- HyperTransport Reserved Range: 0xFD00000000 to 0xFFFFFFFFFF (1012GiB to 1024GiB, a 12GiB segment)
- Current Logic: When the guest's maximum possible address reaches the HyperTransport range, QEMU moves the entire above_4g memory region above 1TiB
- Issue: This fragmented memory layout causes NVIDIA GPU driver failures ("Code 43" errors) in Windows 11/Windows Server 2025 VMs with >1TiB hotplug memory and Q35 machine type
The problematic code:
```c
if (IS_AMD_CPU(&cpu->env)) {
    /* Bail out if max possible address does not cross HT range */
    if (pc_max_used_gpa(pcms, pci_hole64_size) >= AMD_HT_START) {
        x86ms->above_4g_mem_start = AMD_ABOVE_1TB_START;
    }
}
```
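To illustrate why a configuration with more than 1TiB of hot-pluggable memory trips this check, here is a minimal standalone sketch (not QEMU code; the constants come from the hex values above, and the 512GiB + 1TiB memory split is a hypothetical example):

```c
#include <stdio.h>
#include <stdint.h>

#define GiB                 (1ULL << 30)
#define AMD_HT_START        0xFD00000000ULL   /* 1012GiB: start of the HT reserved range */
#define AMD_ABOVE_1TB_START 0x10000000000ULL  /* 1024GiB: where QEMU relocates above-4G RAM */

int main(void)
{
    /* Hypothetical guest: 512GiB of cold-plugged RAM plus a 1TiB hotplug region */
    uint64_t above_4g_start = 4 * GiB;
    uint64_t above_4g_size  = 512 * GiB + 1024 * GiB;
    /* Simplified: the real pc_max_used_gpa() also accounts for the 64-bit PCI hole */
    uint64_t max_used_gpa   = above_4g_start + above_4g_size;

    printf("max used GPA: %llu GiB\n", (unsigned long long)(max_used_gpa / GiB));

    if (max_used_gpa >= AMD_HT_START) {
        /* Mirrors the current behaviour: everything above 4GiB moves above 1TiB */
        above_4g_start = AMD_ABOVE_1TB_START;
    }
    printf("above-4G memory now starts at: %llu GiB\n",
           (unsigned long long)(above_4g_start / GiB));
    return 0;
}
```

In this example the maximum used GPA (1540GiB) crosses AMD_HT_START (1012GiB), so all memory above 4GiB is pushed to start at 1024GiB, leaving a 1020GiB gap between 4GiB and 1TiB.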
Regression Information:
- Working: pc-q35-rhel9.0.0 and earlier (enforce_amd_1tb_hole = false)
- Broken: pc-q35-rhel9.2.0+ (enforce_amd_1tb_hole = true)
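The following toy model (not QEMU code; it only restates the flag values listed above) shows how the machine-type default for `enforce_amd_1tb_hole` changes the resulting layout for the same guest:

```c
#include <stdbool.h>
#include <stdio.h>
#include <stdint.h>

#define GiB                 (1ULL << 30)
#define AMD_HT_START        0xFD00000000ULL
#define AMD_ABOVE_1TB_START 0x10000000000ULL

/* Toy model: pc-q35-rhel9.0.0 ships with enforce_amd_1tb_hole = false,
 * pc-q35-rhel9.2.0 and later ship with enforce_amd_1tb_hole = true. */
static uint64_t above_4g_start(bool enforce_amd_1tb_hole, uint64_t max_used_gpa)
{
    if (enforce_amd_1tb_hole && max_used_gpa >= AMD_HT_START) {
        return AMD_ABOVE_1TB_START;   /* relocated layout that breaks the NVIDIA driver */
    }
    return 4 * GiB;                   /* legacy layout used by the older machine types */
}

int main(void)
{
    uint64_t max_gpa = 1540 * GiB;    /* same hypothetical >1TiB configuration as above */
    printf("pc-q35-rhel9.0.0: above-4G start = %llu GiB\n",
           (unsigned long long)(above_4g_start(false, max_gpa) / GiB));
    printf("pc-q35-rhel9.2.0: above-4G start = %llu GiB\n",
           (unsigned long long)(above_4g_start(true, max_gpa) / GiB));
    return 0;
}
```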
This issue is specific to:
- AMD CPU hosts (Intel hosts don't have this issue)
- Windows 11/Windows Server 2025 guests (Windows 10 and Linux guests don't have this issue)
- Q35 machine types pc-q35-rhel9.2.0 and later (earlier Q35 machine types and the "PC" machine type don't have this issue)
- VMs whose memory configuration reaches into the HyperTransport hole, e.g. more than 1TiB of hot-pluggable memory
- Issue is present with both vGPU and pGPU VMs
Impact: Users have consistently experienced Windows 11/Windows Server 2025 vGPU and pGPU failures on AMD hosts with the Q35 machine type after guest driver installation.
Our Proposed Solution: Split the above_4g memory region into two segments, each internally contiguous:
- Region 1: 4GiB up to the start of the HyperTransport hole (0xFD00000000)
- Region 2: from the end of the HyperTransport hole (0x10000000000, i.e. 1024GiB) upward
This solution fixes the "Code 43" issue described above. It also respects the 12GiB HyperTransport reserved range exactly while maximizing the contiguous usable memory available to the guest; the address arithmetic is sketched below. The fix is intended for a new machine type (`pc-q35-rhel9.5.0`).
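A minimal sketch of that arithmetic, assuming a hypothetical helper split_above_4g() (this is not the actual patch, only the layout we have in mind):

```c
#include <stdio.h>
#include <stdint.h>

#define GiB                 (1ULL << 30)
#define AMD_HT_START        0xFD00000000ULL   /* 1012GiB */
#define AMD_ABOVE_1TB_START 0x10000000000ULL  /* 1024GiB */

/* Proposed layout: fill 4GiB..AMD_HT_START first, spill any remainder above 1TiB. */
struct above_4g_layout {
    uint64_t region1_start, region1_size;   /* 4GiB up to the HT hole */
    uint64_t region2_start, region2_size;   /* above the HT hole (size may be 0) */
};

static struct above_4g_layout split_above_4g(uint64_t above_4g_size)
{
    struct above_4g_layout l = { 4 * GiB, 0, AMD_ABOVE_1TB_START, 0 };
    uint64_t room_below_ht = AMD_HT_START - l.region1_start;   /* 1008GiB */

    if (above_4g_size <= room_below_ht) {
        l.region1_size = above_4g_size;                  /* everything fits below the hole */
    } else {
        l.region1_size = room_below_ht;
        l.region2_size = above_4g_size - room_below_ht;  /* remainder goes above 1TiB */
    }
    return l;
}

int main(void)
{
    /* Hypothetical guest with 1.5TiB of memory above 4GiB (RAM + hotplug region) */
    struct above_4g_layout l = split_above_4g(1536 * GiB);
    printf("region1: start %llu GiB, size %llu GiB\n",
           (unsigned long long)(l.region1_start / GiB),
           (unsigned long long)(l.region1_size / GiB));
    printf("region2: start %llu GiB, size %llu GiB\n",
           (unsigned long long)(l.region2_start / GiB),
           (unsigned long long)(l.region2_size / GiB));
    return 0;
}
```

For a guest with 1.5TiB above 4GiB, this yields region 1 covering 4GiB-1012GiB (1008GiB) and region 2 covering 528GiB starting at 1024GiB, so only the 12GiB reserved range is left unused.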
Questions for Red Hat:
- Design Rationale: Could you please provide insights into why the blunt memory relocation strategy (moving all high memory above 1TiB) was originally chosen in QEMU when addressing the AMD HyperTransport hole, instead of a more granular split approach? Were there other known downsides of memory holes or specific benefits of that blunt approach that we might be missing?
- Compatibility Concerns: Does the proposed refined memory splitting approach have any known or potential unforeseen side effects or cause regressions with other guest OS versions, non-NVIDIA devices, or specific hardware/software configurations? Our concern is whether this fix might inadvertently introduce new issues in scenarios not directly related to GPU or memory hotplug functionality.
Reference:
- AMD I/O Virtualization Technology (IOMMU) Specification #48882-PUB, Section 2.1.2, Table 3
- Red Hat Bugzilla Bug #1983208
Expected Outcome: Understanding the original design decision and validation that our memory splitting approach won't introduce unintended side effects beyond the NVIDIA GPU use case.
System information:
```
[root@Agastya41-1 ~]# virsh version --daemon
Compiled against library: libvirt 10.0.0
Using library: libvirt 10.0.0
Using API: QEMU 10.0.0
Running hypervisor: QEMU 8.2.0
Running against daemon: 10.0.0

[root@Agastya41-1 ~]# uname -a
Linux Agastya41-1 6.1.92-10.0.1s2c14r5.el8.x86_64 #1 SMP Wed Dec 11 12:00:00 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux

[root@Agastya41-1 ~]# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                96
On-line CPU(s) list:   0-95
Thread(s) per core:    2
Core(s) per socket:    24
Socket(s):             2
NUMA node(s):          2
Vendor ID:             AuthenticAMD
CPU family:            25
Model:                 17
Model name:            AMD EPYC 9254 24-Core Processor
Stepping:              1
CPU MHz:               2900.000
CPU max MHz:           4151.7568
CPU min MHz:           1500.0000
BogoMIPS:              5799.86
Virtualization:        AMD-V
L1d cache:             32K
L1i cache:             32K
L2 cache:              1024K
L3 cache:              32768K
NUMA node0 CPU(s):     0-23,48-71
NUMA node1 CPU(s):     24-47,72-95
Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif x2avic v_spec_ctrl avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid overflow_recov succor smca fsrm flush_l1d
```