Type: Bug
Resolution: Done-Errata
Priority: Major
Severity: Important
Status: CLOSED
Description of problem:
After adding autoCPULimitNamespaceLabelSelector to KubeVirt, all VMs get resources.limits.cpu set automatically. I created a VMMRQ to allocate additional resources during migration, but it does not help: the migration stays in the Pending state.
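For reference, a minimal sketch of the configuration described above; the field path under the HyperConverged CR and the namespace label key/value are assumptions from memory and may differ by version:

apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  resourceRequirements:
    # Namespaces whose labels match this selector get CPU limits injected
    # into their virt-launcher pods automatically.
    autoCPULimitNamespaceLabelSelector:
      matchLabels:
        autocpulimit: "true"    # hypothetical label; the VM's namespace must carry it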
The VM and VMI have resources.requests only:
> oc get vm vm-fedora-cpu-auto-lim -o json | jq .spec.template.spec.domain.resources
> {
> "requests":
> }
The pod has both requests and limits (because of the automatic CPU limit setting):
> $ oc get pod virt-launcher-vm-fedora-cpu-auto-lim-qdj5q -o json | jq .spec.containers[0].resources
> {
> "limits":
> ,
> "requests":
> }
The CPU usage is very close to the resource quota:
> $ oc get resourcequota
> NAME AGE REQUEST LIMIT
> quota-cpu 80m requests.cpu: 1001m/1100m limits.cpu: 1010m/1100m
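A ResourceQuota matching the output above would look roughly like this (a sketch reconstructed from the printed hard limits):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-cpu
spec:
  hard:
    requests.cpu: 1100m   # usage above is 1001m, so nearly exhausted
    limits.cpu: 1100m     # usage above is 1010m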
Created a VMMRQ with additional CPU:
> $ oc get vmmrq my-vmmrq-cpu-4 -o json | jq .spec
> {
> "additionalMigrationResources":
> }
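The actual resource values were trimmed from the output above; a VMMRQ of roughly this shape is what step 4 below refers to. The API group/version (from the MTQ operator) and the example quantities are assumptions, not the exact object used here:

apiVersion: mtq.kubevirt.io/v1alpha1
kind: VirtualMachineMigrationResourceQuota
metadata:
  name: my-vmmrq-cpu-4
spec:
  additionalMigrationResources:
    requests.cpu: "1"   # example quantities only; the real values were elided above
    limits.cpu: "1"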
However, the migration is still in the Pending state:
> $ oc get vmim
> NAME PHASE VMI
> kubevirt-migrate-vm-kd2rh Pending vm-fedora-cpu-auto-lim
Version-Release number of selected component (if applicable):
4.14
How reproducible:
100%
Steps to Reproduce:
1. Enable autoCPULimitNamespaceLabelSelector in the HCO
2. Create a ResourceQuota with CPU requests and limits
3. Create a VM with a CPU request but without limits
4. Create a VMMRQ with additional CPU requests and limits
5. Migrate the VM (see the manifest sketch after this list)
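A rough sketch of steps 3 and 5, assuming a containerdisk-based Fedora VM; the disk image, memory size, and CPU quantity are placeholders:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: vm-fedora-cpu-auto-lim
spec:
  running: true
  template:
    spec:
      domain:
        resources:
          requests:
            cpu: "1"        # CPU request only; no limits in the VM spec
            memory: 1Gi
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
      volumes:
        - name: containerdisk
          containerDisk:
            image: quay.io/containerdisks/fedora:latest   # placeholder image
---
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceMigration
metadata:
  generateName: kubevirt-migrate-vm-
spec:
  vmiName: vm-fedora-cpu-auto-lim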
Actual results:
The migration stays in the Pending state.
Expected results:
The migration completes successfully.
Additional info: