OpenShift Monitoring / MON-3394

obo-prometheus-operator pod stuck in crash loop because of memory limits


    • Type: Bug
    • Resolution: Done
    • Priority: Major
    • Component: observability-operator
    • Status: NEW
    • Sprint: MON Sprint 243, MON Sprint 244

      The obo-prometheus-operator pod is repeatedly OOMKilled because its memory usage exceeds the container's memory limit (1536Mi), leaving it in a crash loop:

      $ oc get pods -n openshift-observability-operator
      NAME                                                              READY   STATUS      RESTARTS         AGE
      726dcc8c24520f8eed5ba4eecce2cd291ba6934ddb924c4be152b9f51b2bwgr   0/1     Completed   0                38m
      obo-prometheus-operator-6455845dbf-ks5g6                          0/1     OOMKilled   10 (6m37s ago)   37m
      obo-prometheus-operator-admission-webhook-79d75d74d8-8tnzw        1/1     Running     0                37m
      obo-prometheus-operator-admission-webhook-79d75d74d8-96ck4        1/1     Running     0                37m
      observability-operator-854dbbc8b7-d8jcn                           1/1     Running     0                37m
      observability-operator-catalog-rhdbl                              1/1     Running     0      
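
      To confirm the container is being terminated by the OOM killer rather than failing a probe, the last termination reason can be read from the pod status; a minimal check, reusing the pod name from the listing above:

      $ oc -n openshift-observability-operator get pod obo-prometheus-operator-6455845dbf-ks5g6 \
          -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'
      OOMKilled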

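      One possible mitigation is to raise the memory limit on the deployment; the sketch below is an assumption, not the confirmed fix, and it assumes the limit is set on the first container and that 2Gi is sufficient. Since the observability-operator reconciles this deployment, a manual patch may be reverted, so a lasting fix would need to land in the operator's resource defaults:

      $ oc -n openshift-observability-operator patch deployment obo-prometheus-operator \
          --type=json \
          -p '[{"op": "replace", "path": "/spec/template/spec/containers/0/resources/limits/memory", "value": "2Gi"}]'
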
            People: Simon Pasquier (spasquie@redhat.com), Zakaria Mird (zmird.openshift), Jan Fajerski, Hongyan Li
            Votes: 0
            Watchers: 5
