OpenShift Virtualization / CNV-23525

[2155622] L1 Cache info not correct


    • CNV Virtualization Sprint 240, CNV Virtualization Sprint 241
    • Moderate

      Description of problem:

      The information about which CPUs and their hyperthread siblings share which cache appears to be wrong in the guest.

      Version-Release number of selected component (if applicable):

      oc version
      Client Version: 4.10.45
      Server Version: 4.10.45
      Kubernetes Version: v1.23.12+8a6bfe4

      How reproducible:

      Guest

      1. lscpu --all --extended

      CPU NODE SOCKET CORE L1d:L1i:L2:L3 ONLINE
      0 0 0 0 0:0:0:0 yes
      1 0 0 0 1:1:0:0 yes
      2 0 0 1 2:2:1:0 yes
      3 0 0 1 3:3:1:0 yes
      4 0 0 2 4:4:2:0 yes
      5 0 0 2 5:5:2:0 yes
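The cache columns lscpu prints come from sysfs, so the same mismatch can be cross-checked there directly inside the guest (a sketch; it assumes a Linux guest exposing the standard cacheinfo files, and that index0 is, as is conventional, the L1d cache):

```shell
# Print which CPUs share each CPU's L1d cache, as reported by sysfs.
# If the guest mirrored the host topology, hyperthread siblings such as
# cpu0/cpu1 would appear in the same shared_cpu_list; per the lscpu
# output above, each vcpu instead lists only itself.
for c in /sys/devices/system/cpu/cpu[0-9]*; do
  f="$c/cache/index0/shared_cpu_list"
  if [ -r "$f" ]; then
    printf '%s L1d shared_cpu_list: %s\n' "${c##*/}" "$(cat "$f")"
  fi
done
```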

      Dom XML

      <vcpupin vcpu='0' cpuset='1'/>
      <vcpupin vcpu='1' cpuset='113'/>
      <vcpupin vcpu='2' cpuset='2'/>
      <vcpupin vcpu='3' cpuset='114'/>
      <vcpupin vcpu='4' cpuset='3'/>
      <vcpupin vcpu='5' cpuset='115'/>
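For reference, plain libvirt can be asked to expose the host's cache topology to the guest via the `<cache>` sub-element of `<cpu>` in the domain XML. This is a sketch of stock libvirt configuration, not of what KubeVirt currently generates (how KubeVirt builds this element is exactly what this bug touches):

```xml
<cpu mode='host-passthrough'>
  <!-- sketch: pass the host CPU cache data through to the guest
       instead of QEMU's default synthetic cache layout -->
  <cache mode='passthrough'/>
  <!-- 1 socket x 3 cores x 2 threads = the 6 vcpus pinned above -->
  <topology sockets='1' cores='3' threads='2'/>
</cpu>
```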

      Bare metal host

      lscpu --all --extended
      CPU NODE SOCKET CORE L1d:L1i:L2:L3 ONLINE MAXMHZ MINMHZ
      0 0 0 0 0:0:0:0 yes 4300.0000 1000.0000
      1 0 0 1 1:1:1:0 yes 4300.0000 1000.0000
      2 0 0 2 2:2:2:0 yes 4300.0000 1000.0000
      3 0 0 3 3:3:3:0 yes 4300.0000 1000.0000
      ...
      112 0 0 0 0:0:0:0 yes 4300.0000 1000.0000
      113 0 0 1 1:1:1:0 yes 4300.0000 1000.0000
      114 0 0 2 2:2:2:0 yes 4300.0000 1000.0000
      115 0 0 3 3:3:3:0 yes 4300.0000 1000.0000

      Note that on the HW it's indicated that a CPU and its hyperthread share all caches (e.g. cpu1 and cpu113 both show 1:1:1:0).

      In the guest, the corresponding cpu0 and cpu1 have different entries there: 0:0:0:0 vs. 1:1:0:0.
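Putting the two outputs together: each vcpu pair is pinned to a host hyperthread-sibling pair, so each pair should share L1d/L1i/L2. A small sketch of that expected mapping (the `% 112` sibling rule is inferred from the host lscpu output above, where cpu N and cpu N+112 belong to the same core):

```shell
# vcpu:hostcpu pairs copied from the <vcpupin> entries above
pins="0:1 1:113 2:2 3:114 4:3 5:115"
for p in $pins; do
  vcpu=${p%%:*}
  pcpu=${p##*:}
  core=$((pcpu % 112))   # host siblings differ by 112 on this machine
  echo "vcpu$vcpu -> host cpu$pcpu (host core $core)"
done
# vcpu0/vcpu1 land on host core 1, vcpu2/vcpu3 on core 2, vcpu4/vcpu5 on
# core 3 -- so each vcpu pair should share L1d/L1i/L2, whereas the guest
# lscpu above gives every vcpu its own L1d:L1i pair.
```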

      Now I can't tell if that's a real issue or just a cosmetic thing, but it's definitely different and should be corrected.

              bmordeha@redhat.com Barak Mordehai
              nilskoenigrh Nils Koenig
