Openshift sandboxed containers: KATA-2980

controller-manager gets OOMKilled when kataconfig is created


    • Type: Bug
    • Resolution: Duplicate
    • Priority: Medium
    • Affects Version/s: OSC 1.6.0, OSC 1.5.3
    • Component: Operator

      Description

      <What were you trying to do that didn't work?>

      When I created a kataconfig, I noticed that the controller-manager pod restarted twice because it was OOMKilled.
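
      The restart reason can be confirmed from the container's last state. A minimal check, assuming the operator runs in the usual openshift-sandboxed-containers-operator namespace:

        oc describe pod <controller-manager-pod> -n openshift-sandboxed-containers-operator
        # Look for: Last State: Terminated, Reason: OOMKilled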

      Steps to reproduce

      <What actions did you take to hit the bug?>
      1. Install OSC.
      2. Create a kataconfig with peer pods enabled (see the sample manifest after these steps).
      3. Watch the controller-manager pod.
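
      A minimal manifest for step 2, assuming the standard kataconfiguration.openshift.io/v1 KataConfig API (the object name is illustrative):

        apiVersion: kataconfiguration.openshift.io/v1
        kind: KataConfig
        metadata:
          name: example-kataconfig
        spec:
          enablePeerPods: true

      For step 3, watching with oc get pods -n openshift-sandboxed-containers-operator -w shows the restart count climbing as the pod is killed.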

      Expected result

      <What did you expect to happen?>

      The controller-manager pod does not get OOMKilled.

      Actual result

      <What actually happened?>

      The controller-manager pod gets OOMKilled and restarts.

      Impact

      <How badly does this interfere with using the software?>

      We lose information about what happened while the killed instances were running, because their previous logs are no longer available. There are no noticeable end-user effects.
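
      Note that only the immediately previous container instance's logs can be retrieved, so after the second kill the first crash's logs are gone. A sketch, assuming the container is named manager:

        oc logs --previous <controller-manager-pod> -c manager -n openshift-sandboxed-containers-operator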

      Env

      <Where was the bug found, i.e. OCP build, operator build, kata-containers build, cluster infra, test case id>

      OSC 1.6.0-41 on OCP 4.15.9 and OCP 4.15.10

      Additional helpful info

      <logs, screenshot, doc links, etc.>

      When the pod stays up and running, pod metrics show memory usage sitting right at 100Mi. The memory limit is set to 100Mi and the request to 40Mi. I think these need to be updated; I would suggest a request of 100Mi and a limit of 150Mi to be safe.
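
      A sketch of the suggested change, assuming it lands in the manager container's resources stanza of the controller-manager Deployment (exact object and container names vary by OSC build):

        resources:
          requests:
            memory: "100Mi"
          limits:
            memory: "150Mi"

      Current usage can be spot-checked with oc adm top pod -n openshift-sandboxed-containers-operator.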

            Assignee: Unassigned
            Reporter: Cameron Meadors (cmeadors@redhat.com)