OpenShift Bugs / OCPBUGS-17537

Memory requests oversized for some ovnkube-node pod containers in multi-zone IC


Details

    • Sprint: SDN Sprint 240, SDN Sprint 241, SDN Sprint 242, SDN Sprint 243, SDN Sprint 244, SDN Sprint 245, SDN Sprint 246, SDN Sprint 247, SDN Sprint 248

    Description

      Description of problem:

      Using telemetry data while running an intense workload on a 51-node ROSA cluster, we observed that peak memory usage in some containers of the ovnkube-node pod stays below the configured requests. It may therefore make sense to reduce those requests to free additional capacity for workload pods on the workers.
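
      For reference, the configured requests can be compared with live usage using standard oc commands. A minimal sketch, assuming cluster-admin access; the app=ovnkube-node label selector is an assumption about how the ovnkube-node pods are labeled:

      # Per-container memory requests of one ovnkube-node pod
      oc -n openshift-ovn-kubernetes get pod -l app=ovnkube-node \
        -o jsonpath='{range .items[0].spec.containers[*]}{.name}{"\t"}{.resources.requests.memory}{"\n"}{end}'

      # Point-in-time per-container memory usage (sample it while the workload runs)
      oc adm top pod -n openshift-ovn-kubernetes -l app=ovnkube-node --containers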

      Version-Release number of selected component (if applicable):

       4.14.0-0.nightly-2023-08-08-094653

      How reproducible:

      100%

      Steps to Reproduce:

      1. Use kube-burner with this change to run the workload on a ROSA or self-managed OCP cluster: https://github.com/smalleni/kube-burner/commit/d2b4f20f1de20ca0e70d71070331ae61e15698a0
      2. kube-burner ocp cluster-density-v2 --iterations=650

      You can also reproduce this by running the regular cluster-density-v2 workload with churn=true.
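
      The peak figures under "Actual results" were taken from cluster telemetry; a rough sketch of one way to read them back from the in-cluster Thanos querier (the 1h window and the container list are illustrative):

      TOKEN=$(oc whoami -t)
      HOST=$(oc -n openshift-monitoring get route thanos-querier -o jsonpath='{.spec.host}')
      # max_over_time returns the peak working-set memory per ovnkube-node container over the window
      curl -sk -H "Authorization: Bearer $TOKEN" "https://$HOST/api/v1/query" \
        --data-urlencode 'query=max_over_time(container_memory_working_set_bytes{namespace="openshift-ovn-kubernetes",pod=~"ovnkube-node.*",container=~"northd|nbdb|sbdb|ovn-controller"}[1h])'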

      Actual results:

      Max container memory in MiB during the test:
      northd: 85
      ovn-controller: 95
      nbdb: 40
      sbdb: 55
      
      

      Expected results:

       

      Additional info:

       


          People

            Nadia Pinaeva (npinaeva@redhat.com)
            Sai Sindhur Malleni (smalleni@redhat.com)
            Anurag Saxena
