Bug
Resolution: Done
Major
CLOSED
High
+++ This bug was initially created as a clone of Bug #2152534 +++
Description of problem:
The namespace has a default CPU request of 100m configured via a limitrange:
~~~
oc describe limits
Name: resource-limits
Namespace: default
Type        Resource  Min  Max  Default Request  Default Limit  Max Limit/Request Ratio
----        --------  ---  ---  ---------------  -------------  -----------------------
Container   cpu       -    2    100m             2              -
Container   memory    -    5Gi  4Gi              4Gi            -
~~~
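For reference, a LimitRange manifest matching the output above would look roughly like the sketch below; it is reconstructed from the `oc describe limits` output, not the original manifest that was applied:
~~~
# Sketch of a LimitRange reconstructed from the describe output above
# (the original manifest was not attached here).
apiVersion: v1
kind: LimitRange
metadata:
  name: resource-limits
  namespace: default
spec:
  limits:
  - type: Container
    max:
      cpu: "2"
      memory: 5Gi
    default:           # default limit applied when none is set
      cpu: "2"
      memory: 4Gi
    defaultRequest:    # default request applied when none is set
      cpu: 100m
      memory: 4Gi
~~~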
Created a new VM with 10 vCPUs in this namespace from the OpenShift console using the RHEL 7 template.
~~~
oc get vm rhel7-anonymous-flea -o yaml |yq -y '.spec.template.spec.domain.cpu'
cores: 10
sockets: 1
threads: 1
~~~
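For context, the relevant excerpt of the console-generated VM spec presumably looks like the sketch below: the vCPU topology is set under domain.cpu, while (as far as I can tell) no explicit resources.requests.cpu is set, which is what allows the limitrange default to be applied:
~~~
# Sketch of the relevant excerpt of the VM spec (cpu values taken from the
# yq output above; the full console-generated manifest contains more fields).
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: rhel7-anonymous-flea
  namespace: default
spec:
  template:
    spec:
      domain:
        cpu:
          cores: 10
          sockets: 1
          threads: 1
        # presumably no explicit resources.requests.cpu here, so the
        # namespace limitrange default (100m) is injected instead
~~~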
With the default cpuAllocationRatio of 10, the virt-launcher pod should get a requests.cpu of 1. However, it gets 100m, the default request from the limitrange.
~~~
oc get vmi rhel7-anonymous-flea -o yaml |yq -y '.spec.domain.resources.requests'
cpu: 100m
memory: 2Gi
oc get pod virt-launcher-rhel7-anonymous-flea-d54ls -o yaml|yq -y '.spec.containers[0].resources.requests'
cpu: 100m
devices.kubevirt.io/kvm: '1'
devices.kubevirt.io/tun: '1'
devices.kubevirt.io/vhost-net: '1'
ephemeral-storage: 50M
memory: 2364Mi
~~~
So although the VM will see 10 vCPUs, that CPU is not requested in the pod, which can cause CPU starvation in the virtual machine.
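For reference, the expected request comes from dividing the vCPU count by the allocation ratio (10 vCPUs / 10 = 1 CPU). The ratio is configurable in the KubeVirt CR; the sketch below shows the field path as I understand it. In OpenShift Virtualization this CR is managed by the HyperConverged operator, and the object name and namespace shown are the usual defaults, so treat this as illustrative:
~~~
# Sketch: where cpuAllocationRatio is configured in the KubeVirt CR.
# With a ratio of 10, a 10-vCPU VM should yield requests.cpu = 10 / 10 = 1.
apiVersion: kubevirt.io/v1
kind: KubeVirt
metadata:
  name: kubevirt-kubevirt-hyperconverged   # typical name under OpenShift Virtualization
  namespace: openshift-cnv
spec:
  configuration:
    developerConfiguration:
      cpuAllocationRatio: 10
~~~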
If I delete the limitrange, it works as expected:
~~~
oc get pod virt-launcher-rhel7-anonymous-flea-cl85h -o yaml|yq -y '.spec.containers[0].resources.requests'
cpu: '1'
devices.kubevirt.io/kvm: '1'
devices.kubevirt.io/tun: '1'
devices.kubevirt.io/vhost-net: '1'
ephemeral-storage: 50M
memory: 2364Mi
~~~
Version-Release number of selected component (if applicable):
OpenShift Virtualization 4.11.1
How reproducible:
100%
Steps to Reproduce:
1. Configure limitrange in the namespace.
2. Create a virtual machine from the OpenShift console.
3. Start the virtual machine and check the requests.cpu of the VMI and the virt-launcher pod. It will always be the default requests.cpu configured in the limitrange.
Actual results:
The default CPU request in the namespace limitrange takes precedence over the VM's configured vCPUs.
Expected results:
The default CPU request in the limitrange should not apply when vCPUs are configured for the VM. As it stands, all VMs in the namespace get the same requests.cpu regardless of the number of vCPUs configured for each VM.
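As a possible workaround (a sketch, not a confirmed fix for this bug): explicitly setting requests.cpu on the VM template should keep the limitrange defaulter from injecting 100m, since LimitRange defaults only apply to containers that have no request of their own:
~~~
# Possible workaround (sketch): set an explicit CPU request on the VM so
# the limitrange default is never applied to the virt-launcher container.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: rhel7-anonymous-flea
  namespace: default
spec:
  template:
    spec:
      domain:
        cpu:
          cores: 10
          sockets: 1
          threads: 1
        resources:
          requests:
            cpu: "1"   # matches 10 vCPUs / cpuAllocationRatio of 10
~~~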
Additional info:
— Additional comment from on 2022-12-12 21:40:40 UTC —
Can you share the limitrange object you used as a reproducer?
— Additional comment from on 2022-12-13 18:29:01 UTC —
Nijin, can you please post manifests for the VMI, the limitrange object and the resulting pod?
— Additional comment from nijin ashok on 2022-12-14 02:27:08 UTC —
— Additional comment from Red Hat Bugzilla on 2022-12-15 08:29:24 UTC —
Account disabled by LDAP Audit for extended failure