Bug
Resolution: Unresolved
Minor
CNV v4.19.10
Quality / Stability / Reliability
Description of problem:
Customers are running into situations where, without being aware of it, their Guests are getting VMX instructions, allowing nested virtualization.
As seen in RHEL-100414, even if the extensions are never actually used from within the Guest, this can trigger other issues, as it is not officially supported except in a few very specific use cases.
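For reference, the exposure can be confirmed from inside a running Guest (a minimal sketch, assuming shell access to the Guest OS; a non-zero count means the vmx flag is visible to the Guest):
$ grep -c -w vmx /proc/cpuinfo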
Ideally CNV should have more control over this, so that it does not happen so easily when no default cluster CPU model is configured.
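For illustration, a cluster-wide default CPU model can be set on the HyperConverged CR (a sketch, assuming the spec.defaultCPUModel field and the SierraForest host model from the example below; the model value would need to match the cluster's hardware):
# oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type merge -p '{"spec":{"defaultCPUModel":"SierraForest"}}'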
See:
1. VM defined as:
# oc get vm rhel-ai -o yaml | yq '.spec.template.spec.domain.cpu'
cores: 8
sockets: 1
threads: 1
2. The XML generated by CNV defaults to plain host-model, without any specific feature policy:
# oc logs virt-launcher-rhel-ai-4vr6t | egrep -o "Base64 dump [A-Za-z0-9+]+" | tail -n 1 | cut -d ' ' -f3 | base64 -d | xmllint --format --xpath //domain//cpu -
<cpu mode="host-model">
  <topology sockets="4" cores="8" threads="1"/>
</cpu>
3. The guest ends up running with VMX enabled:
# oc rsh virt-launcher-rhel-ai-4vr6t virsh dumpxml 1 --xpath //domain//cpu
<cpu mode="custom" match="exact" check="full">
  <model fallback="forbid">SierraForest</model>
  <vendor>Intel</vendor>
  <topology sockets="4" dies="1" clusters="1" cores="8" threads="1"/>
  <feature policy="require" name="vmx"/>
  <feature policy="require" name="hypervisor"/>
  <feature policy="require" name="ss"/>
  <feature policy="require" name="tsc_adjust"/>
  <feature policy="require" name="waitpkg"/>
  <feature policy="require" name="movdiri"/>
  <feature policy="require" name="movdir64b"/>
  <feature policy="require" name="md-clear"/>
  <feature policy="require" name="stibp"/>
  <feature policy="require" name="flush-l1d"/>
  <feature policy="require" name="ibpb"/>
  <feature policy="require" name="ibrs"/>
  <feature policy="require" name="amd-stibp"/>
  <feature policy="require" name="amd-ssbd"/>
  <feature policy="require" name="gds-no"/>
  <feature policy="require" name="rfds-clear"/>
  <feature policy="require" name="vmx-activity-wait-sipi"/>
  <feature policy="require" name="vmx-tsc-scaling"/>
  <feature policy="require" name="vmx-enable-user-wait-pause"/>
  <feature policy="disable" name="bus-lock-detect"/>
  <feature policy="disable" name="cmpccxadd"/>
  <feature policy="disable" name="avx-ifma"/>
  <feature policy="disable" name="avx-vnni-int8"/>
  <feature policy="disable" name="avx-ne-convert"/>
  <feature policy="disable" name="mcdt-no"/>
  <feature policy="disable" name="wbnoinvd"/>
  <feature policy="disable" name="pbrsb-no"/>
  <feature policy="disable" name="vmx-exit-load-perf-global-ctrl"/>
  <feature policy="disable" name="vmx-entry-load-perf-global-ctrl"/>
  <numa>
    <cell id="0" cpus="0-31" memory="33554432" unit="KiB"/>
  </numa>
</cpu>
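As a per-VM workaround, the feature can be pinned off explicitly in the VM spec (a sketch, assuming KubeVirt's cpu.features API; this is expected to produce <feature policy="disable" name="vmx"/> in the generated domain XML, but has not been verified against this exact build):
spec:
  template:
    spec:
      domain:
        cpu:
          cores: 8
          sockets: 1
          threads: 1
          features:
            - name: vmx
              policy: disable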
Version-Release number of selected component (if applicable):
4.19.10
How reproducible:
100%
Steps to Reproduce:
1. Define and start a VM without an explicit CPU model, as in the description above.
Actual results:
VMX enabled inside the Guest
Expected results:
VMX always disabled inside the Guest unless specifically configured to be enabled.
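For the "specifically configured" case, opting in would then be a matter of requiring the feature on the VM that actually needs nested virtualization (a sketch, assuming the same cpu.features API as above):
spec:
  template:
    spec:
      domain:
        cpu:
          features:
            - name: vmx
              policy: require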
is related to: RHEL-100414 [cnv] the cpu is offline in guest after VCPU hotplug (after migration) (New)