OpenShift Virtualization / CNV-26600

[2176031] Overhead of management layer in virt-launcher is not calculated accurately



      +++ This bug was initially created as a clone of Bug #2165618 +++

      Description of problem:
      Kubevirt computes [1] an estimate of the total memory needed for the domain to operate properly. This includes the memory needed for the guest plus memory for QEMU and OS overhead.

      When running I/O-intensive workloads, the memory consumed by the management layer (e.g. libvirt, QEMU, and other processes in the compute container) is larger than calculated. The difference between the expected and actual consumption is approximately 700M to 1.3G.

      [1] https://github.com/kubevirt/kubevirt/blob/v0.59.0-alpha.2/pkg/virt-controller/services/renderresources.go#L272
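
      For context, the shape of that estimate is roughly "guest memory + a fixed management-layer overhead + terms that scale with guest memory and vCPU count". The following is a minimal, illustrative sketch of that kind of calculation, not the actual renderresources.go code: the constants and the helper name estimateDomainOverhead are assumptions made for this example.

      package main

      import (
          "fmt"

          "k8s.io/apimachinery/pkg/api/resource"
      )

      // estimateDomainOverhead is a hypothetical, simplified version of the kind of
      // estimate KubeVirt makes for the virt-launcher compute container: headroom for
      // QEMU, libvirt and the other management processes on top of the guest memory
      // itself. The constants below are illustrative, not KubeVirt's actual values.
      func estimateDomainOverhead(guestMemory resource.Quantity, vcpus int64) resource.Quantity {
          overhead := resource.MustParse("0")

          // Fixed footprint of the management layer (libvirtd, virtlogd, QEMU, monitor, ...).
          overhead.Add(resource.MustParse("200Mi")) // illustrative constant

          // Pagetable-style overhead that scales with guest memory (roughly memory / 512).
          pagetables := guestMemory.DeepCopy()
          pagetables.Set(pagetables.Value() / 512)
          overhead.Add(pagetables)

          // Per-vCPU overhead (QEMU/KVM threads and state), ~8Mi per vCPU here.
          overhead.Add(*resource.NewQuantity(vcpus*8*1024*1024, resource.BinarySI))

          return overhead
      }

      func main() {
          guest := resource.MustParse("8Gi")
          overhead := estimateDomainOverhead(guest, 4)

          // The container memory request is guest memory + estimated overhead; this bug
          // is about I/O-heavy workloads pushing real usage above that estimate.
          request := guest.DeepCopy()
          request.Add(overhead)
          fmt.Printf("guest=%s overhead=%s request=%s\n", guest.String(), overhead.String(), request.String())
      }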

      Version-Release number of selected component (if applicable):
      4.10.1

      How reproducible:
      100%

      Steps to Reproduce:
      1. Create a VM with either local or Ceph disks (manifests are attached)
      2. Create an XFS filesystem on the disk(s)
      3. Mount the disks
      4. Run the vdbench test as in the example (via "vdbench -f test_file")

      Actual results:
      The memory consumed by the management layer exceeds what is calculated; the difference between the expected and actual consumption is approximately 700M to 1.3G.

      Expected results:
      Memory used by the virt-launcher pod is at or below what is calculated.

      Additional info:
      VM Manifest attached

      — Additional comment from Itamar Holder on 2023-01-30 15:10:49 UTC —

      — Additional comment from Itamar Holder on 2023-01-30 15:11:21 UTC —

      — Additional comment from Jenifer Abrams on 2023-02-02 15:32:22 UTC —

      Just FYI, a quick way I've found to get vdbench running:

      # Install podman and a JRE, then mount the vdbench container image and bind its /vdbench directory locally:
      yum install -y podman java-11-openjdk
      img=$(podman create quay.io/ebattat/centos-stream8-vdbench5.04.07-pod:v1.0.13)
      fs=$(podman mount $img)
      mkdir /home/vdbench
      mount -o bind $fs/vdbench/ /home/vdbench/
      cd /home/vdbench/

      1. Create the target directory (or directories) as desired, e.g.:
        mkdir /root/virtio
      2. Mount the disk you want to test against at this path, and make sure the vdbench paramfile uses the path in its 'anchor' param(s).

      ./vdbench -f your_paramfile
      # (see the example paramfile attachment '64kb.vd')


      Mem overhead notes from chat, two cases to consider for the I/O workloads tested:

      1) Cases without cache=none: the KubeVirt docs say "none cache mode is set as default if the file system supports direct I/O, otherwise, writethrough is used" (and hotplug volumes do not get cache=none by default until 4.12). Here we saw large growth in kernel memory (kmem) usage that caused OOMs when a memory limit was set (node pressure would otherwise reclaim it); a value of "up to ~1.7GB" was seen, but it is still unknown whether this is the upper bound. (A simplified sketch of the direct-I/O probe behind this defaulting is included after these notes.)

      2) When cache=none is used, the missing overhead is closer to ~100MB according to Boaz and Michey's latest findings (investigation is still ongoing for other small-growth cases where the I/O application is restarted many times).
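
      As a side note on how that cache default is decided: it amounts to probing whether the disk's backing file supports direct I/O. Below is a minimal, Linux-only sketch of such a probe; it is illustrative only and not KubeVirt's actual implementation (the function name supportsDirectIO and the example path are assumptions for this sketch).

      package main

      import (
          "fmt"
          "os"
          "syscall"
      )

      // supportsDirectIO is a hypothetical sketch: try opening the disk's backing file
      // with O_DIRECT. If the open succeeds, the filesystem supports direct I/O and
      // cache=none can be used; otherwise fall back to writethrough.
      // Linux-only: syscall.O_DIRECT is not defined on every platform.
      func supportsDirectIO(path string) bool {
          f, err := os.OpenFile(path, os.O_RDONLY|syscall.O_DIRECT, 0)
          if err != nil {
              return false
          }
          f.Close()
          return true
      }

      func main() {
          path := "/var/run/kubevirt-private/vmi-disks/disk0/disk.img" // illustrative path
          cache := "writethrough"
          if supportsDirectIO(path) {
              cache = "none"
          }
          fmt.Printf("selected cache mode for %s: %s\n", path, cache)
      }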

      Also want to mention this BZ, which is unrelated to these I/O cases but falls under the category of memory overhead calculations: Bug 2164593 - High memory request (Windows VM) hitting KubevirtVmHighMemoryUsage alert

      — Additional comment from Itamar Holder on 2023-02-27 09:10:47 UTC —

      PR is now out there: https://github.com/kubevirt/kubevirt/pull/9322
