Bug
Resolution: Not a Bug
Critical
CNV v4.16.0
Description of problem:
A VM that is using swap memory fails to live migrate: the target pod stays in Pending state with the following scheduling error and is eventually killed, leaving the VMIM in a Failed state:

  Warning  FailedScheduling  89s  default-scheduler  0/6 nodes are available: 3 Insufficient memory, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/6 nodes are available: 3 No preemption victims found for incoming pod, 3 Preemption is not helpful for scheduling.

This happens even though other nodes have sufficient memory.

Swap configuration on the node:

  sh-5.1# swapon --show
  NAME              TYPE SIZE USED PRIO
  /var/tmp/swapfile file   8G 2.8M   -2

Node allocatable resources:

  Allocatable:
    cpu:                           7500m
    devices.kubevirt.io/kvm:       1k
    devices.kubevirt.io/tun:       1k
    devices.kubevirt.io/vhost-net: 1k
    ephemeral-storage:             143336860229
    hugepages-1Gi:                 0
    hugepages-2Mi:                 0
    memory:                        15222912Ki
    pods:                          250
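The FailedScheduling message above comes down to request-based arithmetic: the Kubernetes scheduler compares the target pod's memory request against each node's allocatable memory minus the requests already placed on it; actual memory usage (including swap) is not considered. A minimal sketch of that check, using the allocatable figure from the output above (the pod-request and already-placed numbers are illustrative assumptions, not taken from this cluster):

```shell
# Rough model of the scheduler's memory fit check (illustrative numbers).
# The scheduler works on *requests*, not actual usage or swap.

ki=1024
allocatable=$((15222912 * ki))                   # from "memory: 15222912Ki" above
already_requested=$((9 * 1024 * 1024 * 1024))   # assumed: requests of pods already on the node
target_pod_request=$((9 * 1024 * 1024 * 1024))  # assumed: guest memory + virt overhead

free=$((allocatable - already_requested))
if [ "$target_pod_request" -le "$free" ]; then
  fits=yes
else
  fits=no                                        # node reported as "Insufficient memory"
fi
echo "$fits"
```

Because the comparison is on requests, the source VM pushing pages out to swap frees physical RAM but does not shrink any pod's memory request, so swapping alone cannot make the migration target pod schedulable.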
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. Create a VM with guest memory set high enough to overload the node's RAM.
2. Run stress inside the VM and verify that swap is being consumed by monitoring /sys/fs/cgroup/memory.swap.current != 0 in the launcher pod.
3. Live migrate the VM.
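Step 2 above can be scripted. A minimal sketch of the swap check, assuming cgroup v2: the path would be the launcher pod's cgroup directory on the node (the path used here is only a placeholder, and the helper falls back to 0 when the file is absent):

```shell
# Helper for step 2: report whether a cgroup is actually using swap.
# Assumes cgroup v2 (memory.swap.current). The cgroup path is a
# placeholder; on the node it would be the launcher pod's cgroup dir.

swap_current() {
  f="$1/memory.swap.current"
  if [ -r "$f" ]; then cat "$f"; else echo 0; fi
}

cg=/sys/fs/cgroup            # placeholder; substitute the launcher pod's cgroup
used=$(swap_current "$cg")
if [ "$used" -ne 0 ]; then
  echo "swap in use: $used bytes"
else
  echo "no swap use recorded"
fi
```

Running this in a loop while the stress workload is active shows the counter climbing away from 0 once the guest starts swapping.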
Actual results:
The VM fails to live migrate, and the VMIM ends up in a Failed state.
Expected results:
The VM live migrates successfully.
Additional info: